Tuesday, 2017-07-11

00:01 <johnsom> from sqlalchemy.orm import joinedload
00:07 <rm_work> yeah i got it
00:07 <rm_work> it's ...
00:07 <rm_work> i dunno
00:07 <rm_work> i don't have many LBs here
00:07 <rm_work> so
00:08 *** cpuga has quit IRC
00:08 <rm_work> hmm so what I have noticed is, when it has to refresh policy, it does take a bit of extra time
00:10 <rm_work> but only like 1.5s
00:11 <rm_work> johnsom: it doesn't appear to make things ... worse?
00:11 <rm_work> not sure if better though
00:11 <rm_work> testing in staging and i only have like 1 LB
00:11 <johnsom> Your times didn't drop?
00:12 <rm_work> I can TRY to push it to prod
00:12 <rm_work> but it'd be easier if you had a patch up
00:12 <johnsom> Yeah, no, I just wanted you to try it out.
00:13 <johnsom> It's a trade-off.  So, sqlalchemy (my fav) is the root of our slow get_all
00:13 <johnsom> That and our model stuff.
00:13 <rm_work> it doesn't break or make the times LONGER :P
00:13 <johnsom> So, without that, sqlalchemy makes like 10 calls to the DB for EACH LB.
00:14 <johnsom> Basically because we are building an Octavia data model of the LB to then use for the API type object.
00:14 <johnsom> That change tells sqlalchemy, for that query, to pull it all over at once.
00:15 <johnsom> My 400 LB system went from ~3 seconds to 0.43 with that change
00:15 <johnsom> The trade-off is we are pulling more data over to the API process at one time, so the memory usage will be higher.
00:16 <johnsom> I looked at the query; SA will page it in 1000-row chunks.
00:16 <johnsom> Hmm, actually I should check what happens with over 1000 LBs
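[Editor's note: the joinedload change being discussed can be sketched roughly like this. The LoadBalancer/Listener models below are a minimal hypothetical illustration, not Octavia's actual repository code; they just show how the option collapses the per-LB lazy loads (the "10 calls to the DB for EACH LB") into one JOINed query.]

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class LoadBalancer(Base):
    __tablename__ = "load_balancer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    listeners = relationship("Listener", back_populates="load_balancer")

class Listener(Base):
    __tablename__ = "listener"
    id = Column(Integer, primary_key=True)
    lb_id = Column(Integer, ForeignKey("load_balancer.id"))
    load_balancer = relationship("LoadBalancer", back_populates="listeners")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(LoadBalancer(name="lb1", listeners=[Listener(), Listener()]))
    session.commit()

with Session(engine) as session:
    # One JOINed SELECT pulls the LBs and their listeners together,
    # instead of 1 query for the LBs plus a lazy-load query per
    # relationship per LB when each one is touched later.
    lbs = (
        session.query(LoadBalancer)
        .options(joinedload(LoadBalancer.listeners))
        .all()
    )
    print([len(lb.listeners) for lb in lbs])  # → [2]
```

Without the `.options(joinedload(...))` line the same loop over `lb.listeners` would emit a separate SELECT per load balancer, which is the N+1 pattern behind the slow get_all.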
00:17 *** gongysh has quit IRC
00:18 <rm_work> hmmm
00:18 <rm_work> shouldn't that be OK?
00:18 <rm_work> it isn't going to be THAT MUCH memory, is it?
00:19 <rm_work> I mean ALL of the data for 1000 LBs should be what, like 1mb? lol
00:19 <rm_work> (of course it is never that simple...)
00:19 <rm_work> such objects, very overhead, wow
00:21 <johnsom> Oh, that is interesting....
00:21 <johnsom> SA is limiting it to 1000 LBs
00:21 <johnsom> Period
00:22 <johnsom> I have 1,202 LBs, but I only get 1000 in a get_all
00:23 <johnsom> Maybe that is the pagination filter...
00:24 <johnsom> Yeah, ok, that was our limit.  Cool
00:32 *** aojea has joined #openstack-lbaas
00:33 <rm_work> yeah
00:36 *** aojea has quit IRC
00:44 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Force get_all to only make one DB call  https://review.openstack.org/482363
00:44 <johnsom> rm_work ^^^^ There is your patch
00:45 <rm_work> kk
00:52 <rm_work> i'll have it deployed to prod soonish
00:52 <rm_work> turnaround is about ... an hour
00:52 <rm_work> eh maybe 30m
00:53 <johnsom> Well, it's mostly to help you out, so I have no rush.  It dropped my 1000 LB query from 3.5 seconds to 0.5, so I'm happy with it
00:55 <rm_work> k
00:55 <rm_work> yeah it seems fine
00:56 <rm_work> soon i'll have testing data for prod
00:56 <rm_work> though possibly delayed by a trip to the store
00:56 <rm_work> so I can review it tomorrow
01:12 <rm_work> but preliminarily, i'm seeing average request times ~0.9-1.0
01:12 <rm_work> which is less than the previous minimum of 1.35
01:12 <rm_work> and according to API logs, 0.6 of that was network latency anyway
01:12 <rm_work> so yes
01:12 <rm_work> seems to be an improvement
01:12 <rm_work> I'm going to +2
01:12 <rm_work> as soon as tests finish
01:17 <rm_work> seems in general more consistent
01:17 <rm_work> johnsom: haven't seen any long calls yet either
01:17 <rm_work> err i lied
01:17 *** tongl has quit IRC
01:17 <rm_work> first few calls were long
01:17 <rm_work> i assume loading up policy? :/
01:18 <johnsom> And/or connecting to the DB
01:18 <rm_work> ahhh yeah
01:24 <rm_work> johnsom: out of 5900 (6000, with the first 100 removed because they were weird):
01:24 <rm_work>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
01:24 <rm_work>  0.7330  0.8550  0.9240  0.9994  1.1090  6.9770
01:25 <rm_work> same thing, but for the previous 13000 datapoints:
01:25 <rm_work>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
01:25 <rm_work>   1.319   1.854   2.104   2.219   2.343  34.861
01:26 <johnsom> Hmm,  I don't see those max pops
01:26 <rm_work> so yeah, marked improvement
01:26 <rm_work> i assume it's environmental
01:26 <rm_work> like DB slowness
01:26 <rm_work> err, network slowness to the DB
01:27 <rm_work> transient network weirdness
01:27 <rm_work> which was multiplied by the huge number of queries happening
01:27 <johnsom> The one downer is we are going direct to the SA module and not through oslo.db, but I didn't see a way to do it
01:29 <rm_work> err
01:29 <rm_work> but you only touched like one line i thought
01:29 <rm_work> and it was just to add a thing?
01:30 <rm_work> oh
01:30 <rm_work> you just mean for specifying this option
01:30 <rm_work> meh
01:30 <rm_work> we're still *using* oslo.db
01:30 <johnsom> Yeah
01:31 <johnsom> A quick search for that in openstack showed it used, but everyone was native SA.  So, whatever
01:35 <rm_work> ugh
01:35 <rm_work> ok 12k queries finished
01:35 <rm_work> got some more bad data
01:35 <rm_work>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
01:35 <rm_work>   0.729   0.856   0.920   1.018   1.110  33.305
01:35 <rm_work> still BETTER but
01:35 <rm_work> hold on
01:37 <rm_work>       50%       90%       99%     99.9%    99.99%
01:37 <rm_work>  0.920000  1.271000  1.632010  6.730674 31.973880
01:38 <rm_work> percentiles
01:38 <rm_work> that didn't paste great, but
01:42 <rm_work> so yeah that was 12k, going to do *120k* and go to the store
01:42 <rm_work> will post results later tonight / tomorrow
01:43 <rm_work> WOW what happened to zuul
01:43 <rm_work> your change was queued until JUST NOW
01:43 <rm_work> actually mostly still queued
01:43 <rm_work> looks like the nodes all dropped off and then just got SLAMMED
01:46 <rm_work> ah yep looks like infra noticed nothing had been building for a while and just fixed it recently :)
01:46 <rm_work> probably we'll end up needing to recheck later
01:47 <rm_work> once this calms down
02:32 *** aojea has joined #openstack-lbaas
02:37 *** aojea has quit IRC
02:49 *** sanfern has joined #openstack-lbaas
02:58 *** yamamoto has joined #openstack-lbaas
03:41 *** links has joined #openstack-lbaas
04:14 *** wasmum has quit IRC
04:28 *** wasmum has joined #openstack-lbaas
04:32 <korean101> hi guys.
04:33 <korean101> i use the LBv2 agent (not octavia) in newton (CentOS 7)
04:33 <korean101> but i got this ERROR (WARNING neutron_lbaas.drivers.haproxy.namespace_driver [-] Stats socket not found for loadbalancer 9329ba09-b017-422f-bca8-1c3ca044d66a)
04:33 <korean101> periodically
04:33 <korean101> any clues?
04:36 <johnsom> korean101 That is normal.  During some states the haproxy process isn't running.
04:37 <korean101> johnsom: but the lbaas namespace doesn't show
04:38 <korean101> lbaas-loadbalancer-create is OK
04:38 <korean101> johnsom: | 71514b72-01f5-4403-8d87-23ed6a806730 | test-lb | 10.0.0.12   | ACTIVE              | haproxy  |
04:39 <korean101> johnsom: but nothing about lbaas on the network node
04:39 <johnsom> Hmm, well the warning message above means the agent could not connect to the haproxy stats socket.  Usually that is because haproxy is not running, reloading, or a listener has not yet been deployed
04:42 <korean101> johnsom: OH! listener... 1 minute...
04:45 <korean101> johnsom: sorry about my quick temper
04:51 *** cpuga has joined #openstack-lbaas
04:52 *** cpuga has quit IRC
04:53 *** cpuga has joined #openstack-lbaas
04:58 *** cpuga has quit IRC
05:07 <rm_work> > summary(times_new)
05:07 <rm_work>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
05:07 <rm_work>   0.722   0.854   0.913   1.056   1.090  42.244
05:07 <rm_work> > quantile(times_new, c(.5,.9,.99,.999,.9999))
05:07 <rm_work>      50%      90%      99%    99.9%   99.99%
05:07 <rm_work>  0.91300  1.24700  1.79804 12.56131 31.18927
05:07 <rm_work> johnsom: ^^ 120000 requests, 12 parallel threads (10000 each)
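[Editor's note: the figures pasted above come from R's summary() and quantile(). For reference, the same kind of five-number summary and tail percentiles can be computed in plain Python; the times_new below is synthetic stand-in data, not the real measurements from the log.]

```python
import random
import statistics

random.seed(42)
# Stand-in for the real request-time samples (120000 measurements).
times_new = [random.lognormvariate(0, 0.25) for _ in range(120000)]

def quantile(data, q):
    """Linear-interpolation quantile, similar to R's default (type 7)."""
    s = sorted(data)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

print("min / mean / max:", min(times_new), statistics.fmean(times_new), max(times_new))
for q in (0.5, 0.9, 0.99, 0.999, 0.9999):
    print(f"{q:>7.2%}: {quantile(times_new, q):.5f}")
```

The high 99.9%/99.99% percentiles relative to the median are what the log calls "max pops": a handful of outliers dominating the tail while the bulk of requests stay tight around the median.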
05:12 <johnsom> Good, bad, otherwise?
05:15 *** sanfern has quit IRC
05:16 *** sanfern has joined #openstack-lbaas
05:33 *** aojea has joined #openstack-lbaas
05:35 *** amotoki_away is now known as amotoki
05:39 *** armax has quit IRC
05:40 *** dlundquist has quit IRC
05:40 *** dlundquist has joined #openstack-lbaas
05:41 *** rcernin has joined #openstack-lbaas
05:51 *** dlundquist has quit IRC
05:51 *** dlundquist has joined #openstack-lbaas
06:00 *** dlundquist has quit IRC
06:00 *** sbalukoff_ has quit IRC
06:02 *** sbalukoff_ has joined #openstack-lbaas
06:02 *** dlundquist has joined #openstack-lbaas
06:04 *** pcaruana has joined #openstack-lbaas
06:07 *** aojea has quit IRC
06:07 *** aojea has joined #openstack-lbaas
06:12 *** aojea has quit IRC
06:12 *** aojea has joined #openstack-lbaas
06:16 *** aojea has quit IRC
06:17 <rm_work> johnsom: i mean, it's ... similar to the previous data :P  meaning, my dataset is probably big enough to be reliably accurate
06:18 <rm_work> but i still get really bad queries sometimes :/
06:18 <rm_work> and this is with only 20 LBs
06:18 <rm_work> i wonder if it sometimes has to renegotiate a connection to the DB? and that takes a while?
06:22 <openstackgerrit> huangshan proposed openstack/neutron-lbaas master: Repair using neutron CLI Gets the list of lbaas-l7policy Closes-Bug: #1702589  https://review.openstack.org/482416
06:22 <openstack> bug 1702589 in octavia "Failed to get list of neutron-l7policy neutronclient CLI" [Undecided,Incomplete] https://launchpad.net/bugs/1702589 - Assigned to huangshan (huangshan)
06:23 *** basilAB has quit IRC
06:24 *** vaishali has quit IRC
06:25 *** gtrxcb has joined #openstack-lbaas
06:28 *** basilAB has joined #openstack-lbaas
06:29 *** vaishali has joined #openstack-lbaas
06:40 *** gcheresh_ has joined #openstack-lbaas
06:40 *** kobis has joined #openstack-lbaas
06:48 *** sanfern has quit IRC
06:51 *** kobis has quit IRC
06:53 *** isantosp has joined #openstack-lbaas
07:02 *** tesseract has joined #openstack-lbaas
07:05 *** fnaval has quit IRC
07:06 *** fnaval has joined #openstack-lbaas
07:07 <korean101> is there any config to remove empty qlbaas-e0b32195-cbad-4e89-bb9f-b45d11354a6c namespaces?
07:08 <korean101> lots of empty lbaas namespaces remain
07:15 *** aojea has joined #openstack-lbaas
07:26 *** kobis has joined #openstack-lbaas
07:33 *** openstackgerrit has quit IRC
07:46 *** gtrxcb has quit IRC
07:59 *** fnaval has quit IRC
08:06 *** diltram has quit IRC
08:17 *** diltram has joined #openstack-lbaas
09:03 *** sanfern has joined #openstack-lbaas
09:37 *** links has quit IRC
09:39 *** links has joined #openstack-lbaas
09:44 *** links has quit IRC
09:49 *** links has joined #openstack-lbaas
10:03 *** yamamoto has quit IRC
10:13 *** links has quit IRC
11:04 *** yamamoto has joined #openstack-lbaas
11:11 *** yamamoto has quit IRC
11:30 *** yamamoto has joined #openstack-lbaas
11:35 *** openstackgerrit has joined #openstack-lbaas
11:35 <openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: Option to enable provisioning status to be sync with neutron db  https://review.openstack.org/478385
11:45 *** atoth has joined #openstack-lbaas
11:46 *** links has joined #openstack-lbaas
11:59 *** links has quit IRC
12:11 *** slaweq has joined #openstack-lbaas
12:11 <slaweq> hello, I have a question about neutron-lbaas and the Ubuntu package
12:12 <slaweq> when I'm installing it on a host with neutron-server it creates a new file /etc/neutron/neutron_lbaas.conf which should be added to the neutron-server process with the "--config-file" option, am I right?
12:13 <slaweq> and if yes, why is it not added?
12:38 *** yamamoto has quit IRC
12:47 *** openstackgerrit has quit IRC
13:06 *** openstackgerrit has joined #openstack-lbaas
13:06 <openstackgerrit> German Eichberger proposed openstack/octavia master: ACTIVE-ACTIVE Topology: Initial Distributor Driver Mixin  https://review.openstack.org/313006
13:25 *** yamamoto has joined #openstack-lbaas
13:31 *** cpuga has joined #openstack-lbaas
13:38 *** cpuga has quit IRC
13:38 *** cpuga has joined #openstack-lbaas
13:54 *** kobis has quit IRC
14:05 *** reedip_ has joined #openstack-lbaas
14:06 *** fnaval has joined #openstack-lbaas
14:24 *** yamamoto has quit IRC
14:28 *** fnaval has quit IRC
14:33 *** armax has joined #openstack-lbaas
14:34 *** gcheresh_ has quit IRC
14:37 *** armax has quit IRC
14:38 *** rcernin has quit IRC
14:40 *** yamamoto has joined #openstack-lbaas
14:40 *** rcernin has joined #openstack-lbaas
14:44 <openstackgerrit> Bernard Cafarelli proposed openstack/octavia master: DIB: drop custom mirror elements  https://review.openstack.org/482587
14:45 *** armax has joined #openstack-lbaas
14:48 *** openstackgerrit has quit IRC
14:53 *** slaweq has quit IRC
14:56 *** fnaval has joined #openstack-lbaas
14:58 *** openstackgerrit has joined #openstack-lbaas
14:58 <openstackgerrit> sumitjami proposed openstack/neutron-lbaas master: fixed health monitor setting during tempest test,  https://review.openstack.org/480233
15:16 <xgerman_> https://review.openstack.org/313006 passed - ship it!
15:18 *** rcernin has quit IRC
15:28 *** catintheroof has joined #openstack-lbaas
15:29 <openstackgerrit> Jason Niesz proposed openstack/octavia master: blueprint: l3-active-active  https://review.openstack.org/453005
15:58 *** yamamoto has quit IRC
15:59 *** kobis has joined #openstack-lbaas
16:03 *** aojea has quit IRC
16:14 *** amotoki is now known as amotoki_away
16:17 *** armax has quit IRC
16:27 *** sshank has joined #openstack-lbaas
16:45 *** blogan has quit IRC
16:59 *** yamamoto has joined #openstack-lbaas
17:01 *** fnaval_ has joined #openstack-lbaas
17:02 *** sshank has quit IRC
17:04 *** fnaval has quit IRC
17:04 *** yamamoto has quit IRC
17:17 *** armax has joined #openstack-lbaas
17:34 *** tongl has joined #openstack-lbaas
17:39 *** cpuga has quit IRC
17:39 *** cpuga has joined #openstack-lbaas
17:44 *** aojea has joined #openstack-lbaas
17:49 *** aojea has quit IRC
18:02 *** yamamoto has joined #openstack-lbaas
18:07 *** yamamoto has quit IRC
18:10 *** kobis has quit IRC
18:11 *** sshank has joined #openstack-lbaas
18:25 *** tesseract has quit IRC
18:44 *** sshank has quit IRC
18:45 *** aojea has joined #openstack-lbaas
18:49 *** reedip_ has quit IRC
19:04 *** aojea has quit IRC
19:04 *** sanfern has quit IRC
19:07 *** sanfern has joined #openstack-lbaas
19:18 <rm_work> o/
19:20 *** kobis has joined #openstack-lbaas
19:20 <xgerman_> o/
19:26 *** gcheresh_ has joined #openstack-lbaas
19:28 <xgerman_> rm_work how do you feel about us releasing an amphora image for others to download: https://governance.openstack.org/tc/resolutions/20170530-binary-artifacts.html
19:28 <rm_work> hmmmm
19:28 <rm_work> actually that's interesting
19:28 <rm_work> I think it might be useful
19:28 <rm_work> like "here's the Pike image"
19:28 <xgerman_> so do I
19:28 <rm_work> probably would save a lot of confusion
19:28 <xgerman_> yep
19:29 <xgerman_> since yesterday I am on docker-hub
19:30 <xgerman_> johnsom is worried that if we don't automate it, like with a weekly job, they become stale, but we could have some addition to the diskimage script to publish after build
19:31 <rm_work> ^^ yeah i would worry too
19:31 <rm_work> but, this would only be for releases right?
19:31 <rm_work> or for changes to stable/*
19:31 *** aojea has joined #openstack-lbaas
19:31 <rm_work> otherwise, they SHOULD become stale -- you want the one for your version of octavia
19:32 <rm_work> if you're running master, you should build your own
19:32 <xgerman_> yeah, we should limit it to the release
19:32 <xgerman_> agree, on master you are on your own
19:35 <johnsom> I am just thinking of security fixes to the base OS.  I mean, what is our goal here?  A reference image?  That could be built daily and should work with the stables, right?
19:36 <xgerman_> ideally a reference image, but I can see us just providing demo images to get you off the ground
19:36 <xgerman_> maybe something for tomorrow's agenda
19:38 *** armax has quit IRC
19:39 *** aojea has quit IRC
19:40 *** slaweq has joined #openstack-lbaas
19:45 *** kobis has quit IRC
19:52 <rm_work> johnsom: I was assuming when we cut a release for Pike and it moves to stable/pike we also build an image based on that
19:52 <rm_work> since ... that won't change frequently
19:53 <rm_work> and we could be nice and go back and make an image for ocata
19:53 <rm_work> and newton
19:53 <rm_work> based on the stable/*
19:53 <rm_work> doesn't that make sense?
19:53 *** jniesz has joined #openstack-lbaas
19:53 <johnsom> Well, I'm mostly thinking about the OS security patches.  Do we want to have images out there that have kernel issues, etc.?
19:54 <johnsom> The agent should be compatible with an older controller, right?
20:12 *** slaweq has quit IRC
20:13 *** armax has joined #openstack-lbaas
20:17 *** krypto has joined #openstack-lbaas
20:29 *** cpuga has quit IRC
20:29 *** cpuga_ has joined #openstack-lbaas
20:34 *** cpuga_ has quit IRC
20:55 *** gcheresh_ has quit IRC
20:57 <rm_work> ah true
20:57 <rm_work> theoretically we have kept it compatible I believe
20:57 <rm_work> and yeah I guess we'd have to refresh it for ubuntu changes
20:57 <rm_work> so yeah maybe a weekly build
20:58 <rm_work> of just the latest stable?
20:58 <rm_work> and say "use this for any version"?
20:58 <rm_work> then we REALLY commit to being fully backwards compatible at all times
21:04 *** catintheroof has quit IRC
21:23 *** pcaruana has quit IRC
21:26 *** aojea has quit IRC
21:33 *** aojea has joined #openstack-lbaas
21:33 *** aojea has quit IRC
21:35 *** aojea has joined #openstack-lbaas
21:42 *** fnaval_ has quit IRC
21:51 *** fnaval has joined #openstack-lbaas
21:52 *** fnaval has quit IRC
21:52 *** fnaval has joined #openstack-lbaas
21:53 *** fnaval has quit IRC
21:56 *** fnaval has joined #openstack-lbaas
21:56 *** sshank has joined #openstack-lbaas
22:04 *** jniesz has quit IRC
22:07 *** krypto has quit IRC
22:12 *** aojea has quit IRC
22:44 <johnsom> rm_work FYI, the new OSC client plugin has been released.  1.1.0 (had to bump the middle due to a g-r update)
22:45 <rm_work> k
23:02 *** cpuga has joined #openstack-lbaas
23:12 *** cpuga has quit IRC
23:39 <rm_work> johnsom: i'm trying to look into whether our indexes are bad
23:40 <rm_work> johnsom: but i'm not sure with sqlalchemy we explicitly set indexes?
23:40 <johnsom> I don't think we have added any indexes beyond those that are automatically generated for the key columns
23:41 <johnsom> All of the primary key columns are automatically indexed.
23:42 <johnsom> We are probably missing an index on the project_id column.  Let me look.
23:43 <johnsom> Yeah, we might at some point need to add them for the project_id column.
23:46 <johnsom> Do you think you have had enough churn to need to run an optimize on the mysql table????
23:54 *** sshank has quit IRC
23:58 <rm_work> no
23:58 <rm_work> i'm looking at slow query logs right now to see if that will shed any light
23:58 <rm_work> johnsom: yeah i saw the same thing, i think project_id might be universally worth indexing
23:58 <rm_work> especially given it's used when doing LIST queries
23:58 <rm_work> right?
23:59 <johnsom> Not with admin, no
23:59 <rm_work> like
23:59 <rm_work> select * from <object_table> where project_id = <myproject>
23:59 <rm_work> is basically the query
23:59 <rm_work> ah yes but with user accounts it is
23:59 <rm_work> right?
23:59 <johnsom> Yes
23:59 <rm_work> i would imagine it's worth an index <_<
23:59 <rm_work> honestly i'm tempted to index basically everything we could ever query on
23:59 <johnsom> Don't
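[Editor's note: a sketch of what adding the index under discussion would look like in SQLAlchemy. The table and column definitions below are hypothetical illustrations, not Octavia's actual models; the point is that index=True on project_id makes create_all emit a secondary CREATE INDEX, so the per-tenant list query (WHERE project_id = ...) can use it instead of a full table scan.]

```python
from sqlalchemy import Column, String, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class LoadBalancer(Base):
    __tablename__ = "load_balancer"
    id = Column(String(36), primary_key=True)    # PK columns are indexed automatically
    project_id = Column(String(36), index=True)  # secondary index for LIST queries
    name = Column(String(255))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Verify the secondary index actually exists on the created table.
indexes = inspect(engine).get_indexes("load_balancer")
print([ix["column_names"] for ix in indexes])
```

The closing "Don't" is sound advice: each secondary index speeds up the reads that match it but adds write overhead to every INSERT/UPDATE, so indexing "everything we could ever query on" trades away write performance.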

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!