Wednesday, 2019-02-27

*** yamamoto has quit IRC00:00
*** abaindur has quit IRC00:02
*** abaindur has joined #openstack-lbaas00:04
*** henriqueof has quit IRC00:15
*** abaindur has quit IRC00:49
*** abaindur has joined #openstack-lbaas00:55
*** Swami has quit IRC01:08
*** yamamoto has joined #openstack-lbaas01:09
*** yamamoto has quit IRC01:13
*** takamatsu_ has quit IRC01:39
*** takamatsu_ has joined #openstack-lbaas01:45
*** Dinesh_Bhor has joined #openstack-lbaas02:30
*** rcernin has quit IRC02:32
*** hongbin has joined #openstack-lbaas02:38
*** hongbin has quit IRC02:43
*** takamatsu_ has quit IRC02:53
*** yamamoto has joined #openstack-lbaas02:57
*** psachin has joined #openstack-lbaas03:01
*** abaindur has quit IRC03:01
*** yamamoto has quit IRC03:02
*** yamamoto has joined #openstack-lbaas03:15
*** yamamoto has quit IRC03:25
*** yamamoto has joined #openstack-lbaas03:25
*** ramishra has joined #openstack-lbaas03:57
*** abaindur has joined #openstack-lbaas05:34
*** ramishra has quit IRC05:43
*** ramishra has joined #openstack-lbaas05:45
*** ramishra has quit IRC05:55
*** ramishra has joined #openstack-lbaas06:03
*** ivve has joined #openstack-lbaas06:08
openstackgerritomkar telee proposed openstack/neutron-lbaas master: Feature correction: L7Policy/Rule for A10Networks  https://review.openstack.org/63957106:20
*** Dinesh_Bhor has quit IRC06:58
*** ramishra has quit IRC06:58
*** Dinesh_Bhor has joined #openstack-lbaas07:01
*** ramishra has joined #openstack-lbaas07:01
*** Dinesh_Bhor has quit IRC07:12
*** ivve has quit IRC07:17
*** takamatsu_ has joined #openstack-lbaas07:21
*** yamamoto has quit IRC07:25
openstackgerritVlad Gusev proposed openstack/octavia stable/rocky: Fix grenade job to clone Octavia from base branch  https://review.openstack.org/63934907:28
*** ivve has joined #openstack-lbaas07:34
*** gcheresh has joined #openstack-lbaas07:43
*** ccamposr has joined #openstack-lbaas07:45
*** yamamoto has joined #openstack-lbaas07:59
*** Dinesh_Bhor has joined #openstack-lbaas08:10
*** yboaron_ has joined #openstack-lbaas08:13
*** pcaruana has joined #openstack-lbaas08:13
*** Dinesh_Bhor has quit IRC08:14
*** yboaron_ has quit IRC08:18
*** yamamoto has quit IRC08:23
*** pcaruana has quit IRC08:28
*** takamatsu_ has quit IRC08:31
*** pcaruana has joined #openstack-lbaas08:42
*** abaindur has quit IRC08:46
*** pcaruana has quit IRC08:51
openstackgerritVlad Gusev proposed openstack/octavia stable/rocky: Enable debug for Octavia services in grenade job  https://review.openstack.org/63959908:54
*** pcaruana has joined #openstack-lbaas08:58
*** pcaruana|afk| has joined #openstack-lbaas09:01
*** pcaruana has quit IRC09:03
*** ivve has quit IRC09:11
*** takamatsu has joined #openstack-lbaas09:15
*** ivve has joined #openstack-lbaas09:26
*** sapd1 has quit IRC09:33
*** takamatsu has quit IRC09:44
*** yamamoto has joined #openstack-lbaas10:09
*** yamamoto has quit IRC10:14
*** yamamoto has joined #openstack-lbaas10:16
Adri2000johnsom: redeploying horizon from scratch (made easy by openstack-ansible, as horizon is running in a dedicated lxc container I could just drop/recreate) fixed my issue :) thanks again10:18
*** salmankhan has joined #openstack-lbaas10:32
*** Dinesh_Bhor has joined #openstack-lbaas10:57
*** yamamoto has quit IRC10:58
*** yamamoto has joined #openstack-lbaas11:00
*** Dinesh_Bhor has quit IRC11:00
*** gcheresh_ has joined #openstack-lbaas11:10
*** gcheresh has quit IRC11:10
*** sapd1 has joined #openstack-lbaas11:11
*** gcheresh_ has quit IRC11:22
*** takamatsu has joined #openstack-lbaas11:24
*** ivve has quit IRC12:00
*** ramishra has quit IRC12:04
*** ramishra has joined #openstack-lbaas12:26
*** gcheresh_ has joined #openstack-lbaas12:54
*** ivve has joined #openstack-lbaas12:57
*** yamamoto has quit IRC13:00
*** pcaruana|afk| has quit IRC13:09
*** celebdor has joined #openstack-lbaas13:18
*** yamamoto has joined #openstack-lbaas13:36
*** yamamoto has quit IRC13:41
*** yamamoto has joined #openstack-lbaas13:42
*** henriqueof has joined #openstack-lbaas13:45
*** yamamoto has quit IRC14:00
*** yamamoto has joined #openstack-lbaas14:03
*** yamamoto has quit IRC14:03
*** fnaval has quit IRC14:20
*** psachin has quit IRC14:27
*** yamamoto has joined #openstack-lbaas14:38
openstackgerritVlad Gusev proposed openstack/octavia stable/rocky: Fix grenade job to clone Octavia from base branch  https://review.openstack.org/63934914:38
*** Adri2000 has quit IRC14:43
*** yamamoto has quit IRC14:43
openstackgerritVlad Gusev proposed openstack/octavia stable/rocky: Fix grenade job to clone Octavia from base branch  https://review.openstack.org/63934914:46
*** fnaval has joined #openstack-lbaas14:47
*** pcaruana has joined #openstack-lbaas14:57
*** gcheresh_ has quit IRC15:02
*** yamamoto has joined #openstack-lbaas15:05
*** yamamoto has quit IRC15:09
*** cbrumm_ has quit IRC15:30
*** cbrumm_ has joined #openstack-lbaas15:32
*** dmellado has quit IRC15:42
*** dmellado has joined #openstack-lbaas15:43
*** sapd1 has quit IRC15:46
*** gcheresh_ has joined #openstack-lbaas15:47
openstackgerritVlad Gusev proposed openstack/octavia stable/rocky: Fix grenade job to clone Octavia from base branch  https://review.openstack.org/63934915:58
*** s10 has joined #openstack-lbaas15:59
*** ivve has quit IRC16:01
s10Should the octavia-grenade job in Octavia stable/rocky become non-voting? It has been failing for 2.5 months because of something in stable/queens.16:03
johnsomI think it is being worked on and it needs to get fixed. I am inclined to leave it for now.16:05
*** takamatsu has quit IRC16:09
*** gcheresh_ has quit IRC16:09
*** s10 has quit IRC16:16
*** ramishra has quit IRC16:25
*** dmellado has quit IRC16:32
*** yamamoto has joined #openstack-lbaas16:32
*** dmellado has joined #openstack-lbaas16:34
openstackgerritVlad Gusev proposed openstack/octavia master: WIP: Add support for the oslo_middleware.http_proxy_to_wsgi  https://review.openstack.org/63973616:34
*** yamamoto has quit IRC16:37
cgoncalvesI still need to test it but likely the patch that broke stable/queens grenade job was https://review.openstack.org/#/c/624804/16:43
cgoncalveshttp://logs.openstack.org/49/639349/5/check/octavia-grenade/461ebf7/logs/screen-o-cw.txt.gz?level=WARNING#_Feb_27_08_32_43_98667416:44
*** pcaruana has quit IRC16:57
*** rtjure has quit IRC17:00
*** ccamposr has quit IRC17:00
*** ccamposr has joined #openstack-lbaas17:01
*** celebdor has quit IRC17:23
openstackgerritMichael Johnson proposed openstack/octavia master: Add 2 new fields into Pool API for support re-encryption  https://review.openstack.org/61444717:25
johnsomcgoncalves Someone in the horizon meeting today said you were working on this: https://storyboard.openstack.org/#!/story/200510117:26
johnsomI created a story for you17:26
*** ccamposr has quit IRC17:27
*** trown is now known as trown|lunch17:28
*** dims has quit IRC17:35
*** ivve has joined #openstack-lbaas17:36
openstackgerritMichael Johnson proposed openstack/octavia master: Pool support sni cert for backend re-encryption  https://review.openstack.org/61443217:38
openstackgerritMichael Johnson proposed openstack/octavia master: Add 2 new fields into Pool API for support re-encryption  https://review.openstack.org/61444717:39
*** takamatsu has joined #openstack-lbaas17:41
*** yamamoto has joined #openstack-lbaas17:41
*** yamamoto has quit IRC17:46
*** dims has joined #openstack-lbaas17:48
*** takamatsu has quit IRC18:01
*** yamamoto has joined #openstack-lbaas18:07
*** yamamoto has quit IRC18:12
*** trown|lunch is now known as trown18:25
openstackgerritMichael Johnson proposed openstack/octavia master: Add 2 new fields into Pool API for support re-encryption  https://review.openstack.org/61444718:34
*** salmankhan has quit IRC18:40
*** ivve has quit IRC18:43
*** takamatsu has joined #openstack-lbaas18:52
*** yamamoto has joined #openstack-lbaas19:32
*** yamamoto has quit IRC19:37
openstackgerritGerman Eichberger proposed openstack/octavia master: Fix parallel plug vip  https://review.openstack.org/63899219:42
*** celebdor has joined #openstack-lbaas19:54
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed Feb 27 20:00:03 2019 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstackThe meeting name has been set to 'octavia'20:00
johnsomHi folks20:00
colin-hi20:00
nmagnezio/20:00
johnsom#topic Announcements20:00
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:00
cgoncalveshi20:01
johnsomThe TC elections are on. You should have received an e-mail with your link to the ballot.20:01
*** henriqueof has quit IRC20:01
johnsomThe octavia-lib feature freeze is now in effect.20:01
johnsomI have also released version 1.1.0 for Stein with our recent updates.20:01
colin-nice20:02
johnsomAnd the most important, NEXT WEEK IS FEATURE FREEZE FOR EVERYTHING ELSE20:02
johnsomAs usual, we are working against the priority list:20:03
johnsom#link https://etherpad.openstack.org/p/octavia-priority-reviews20:03
johnsomAny other announcements today?20:03
johnsom#topic Brief progress reports / bugs needing review20:04
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:04
johnsomI have mostly been focused on the TLS patch chains.  The TLS client authentication patches have now merged. They work well in my testing.20:04
johnsomI'm currently working on the backend re-encryption chain. I hope I can finish that up today, give it a test, and we can get that merged too.20:05
johnsomIf all goes well, I might try to help the volume backed storage patch and see if we can get it working for Stein. I created a test gate, but the patch fails...20:06
johnsomAny other updates?20:07
cgoncalvesI have been working on multiple fronts20:07
xgermano/20:07
cgoncalves1. RHEL 8 DIB and amphora support (tempest tests passing)20:07
cgoncalves#link https://review.openstack.org/#/c/623137/20:07
colin-appreciate the oslo merge, rebuilt and running at that point in master now20:07
cgoncalves#link https://review.openstack.org/#/c/638581/20:07
cgoncalves2. Allow ERROR'd load balancers to be failed over20:08
cgoncalves#link https://review.openstack.org/#/c/638790/20:08
cgoncalves3. iptables-based active-standby tempest test20:08
cgoncalves#link https://review.openstack.org/#/c/637073/20:08
cgoncalves4. general bug fix backports20:08
johnsomCool, thank you for working on the backports!20:09
xgerman+120:09
cgoncalvesstable/rocky grenade job is sadly still broken. I apologize for not having invested much time on it yet20:09
johnsomThat is next on the agenda, I wanted to check in on that issue.20:10
johnsom#topic Status of the Rocky grenade gate20:10
*** openstack changes topic to "Status of the Rocky grenade gate (Meeting topic: Octavia)"20:10
johnsomI just wanted to get an update on that. I saw your note earlier about a potential cause.20:11
cgoncalvesright20:11
cgoncalves#link https://review.openstack.org/#/c/639395/20:11
johnsomAre you actively working on that or is it an open item?20:11
cgoncalves^ this backport now allows us to see what's going wrong when creating a member20:11
cgoncalvesthat is what the grenade job is failing on20:11
cgoncalvesthe error is: http://logs.openstack.org/49/639349/5/check/octavia-grenade/461ebf7/logs/screen-o-cw.txt.gz?level=WARNING#_Feb_27_08_32_43_98667420:11
cgoncalvesthe rocky grenade job started failing between Dec 14-17 if I got that right20:12
cgoncalvesso I'm wondering whether https://review.openstack.org/#/c/624804/ is what introduced the regression20:13
cgoncalvesthe member create call still fails on queens, not rocky20:13
xgermanwith all those regressions it looks like we are lacking gates20:13
cgoncalvesxgerman, speaking of that, your VIP refactor patch partially broke active-standby in master :P20:14
xgermanI put up a fix20:14
johnsomYeah, not sure how the scenario tests pased but grenade is not.20:14
cgoncalvesxgerman, I don't see it. we can chat about that after grenade20:14
johnsomxgerman It looks like in my rush I forgot to switch it off of amphorae....20:15
johnsomlol20:15
xgermanyeah, two small changes and it came up on my devstack20:15
cgoncalvesxgerman, ah, I see it now. you submitted a new PS to Michael's change20:16
xgermanyep20:16
johnsomCool, I just rechecked my act/stdby patch which is set up to test that20:16
cgoncalves#link https://review.openstack.org/#/c/638992/20:16
johnsom#link https://review.openstack.org/#/c/58468120:16
johnsomOk, so cgoncalves you are actively working on the grenade issue?20:17
cgoncalvesjohnsom, I will start actively on it tomorrow, yes20:17
johnsomOk, cool. Thanks.  Just wanted to make sure we didn't think each other was looking at it, when in reality none of us were....20:18
johnsom#topic Open Discussion20:18
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:18
johnsomI have one open discussion topic, but will open the floor up first to other discussions20:18
cgoncalvesI'm sure you'll be looking at it too, at least reviewing ;)20:18
johnsomOther topics today?20:19
johnsomOk, then I will go.20:19
colin-would like to solicit guidance20:19
colin-very briefly20:19
johnsomSure, go ahead colin-20:19
colin-an increasing number of internal customers are asking about the performance capabilities of the VIPs we create with octavia, and we're going to endeavor to measure that really carefully in terms of average latency, connection concurrency, and throughput (as these all vary dramatically based on cloud hw)20:20
johnsomYes, I did a similar exercise last year.20:21
colin-so, aside from economies of scale with multiple tcp/udp/http listeners, does anyone have advice on how to capture this information really effectively with octavia and its amphorae?20:21
colin-and i'm hoping to use this same approach to measure the benefits of various nova flavors and haproxy configurations later in stein20:21
openstackgerritVlad Gusev proposed openstack/octavia master: Add support for the oslo_middleware.http_proxy_to_wsgi  https://review.openstack.org/63973620:23
johnsomYeah, so I set up a lab: three hosts for traffic generation, three for content serving, one for the amp20:23
johnsomI used iperf3 for the TCP (L4) tests and tsung for the HTTP tests20:23
openstackgerritVlad Gusev proposed openstack/octavia master: Add support for the oslo_middleware.http_proxy_to_wsgi  https://review.openstack.org/63973620:23
johnsomI wrote a custom module for nginx (ugh, but it was easy) that returned static buffers.20:23
colin-did you add any monitoring/observability tools for visualizing?20:24
openstackgerritVlad Gusev proposed openstack/octavia master: Add support for the oslo_middleware http_proxy_to_wsgi  https://review.openstack.org/63973620:24
colin-or was shell output sufficient for your purposes20:24
johnsomI did one series where traffic crossed hosts, one with everything on one host (eliminates the neutron issues).20:24
johnsomtsung comes with reporting tools20:24
colin-oh ok20:25
johnsomI also did some crossing a neutron router vs. all L220:25
johnsomThen it's just a bunch of time tweaking all of the knobs20:25
colin-good feedback, thank you20:25
johnsomFor the same-host tests, iperf3 with 20 parallel flows, 1vcpu, 1GB ram, 2GB disk did ~14gbps20:26
johnsomBut of course your hardware, cloud config, butterflies flapping their wings in Tahiti, all impact what you get.20:27
johnsomcaveat, caveat, caveat.....20:27
colin-yeah indeed. if anyone else has done this differently or tested different hardware NICs this way please lmk! that's all i had20:28
johnsomYeah, get ready to add a ton of ******20:28
johnsomfor all the caveats20:28
johnsomI can share the nginx hack code too if you decide you want it.20:29
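For anyone reproducing this kind of measurement, a minimal sketch of driving iperf3 against a VIP and pulling throughput out of its JSON report could look like the following. The VIP address, port, flow count, and duration are placeholder assumptions, and an iperf3 server is assumed to be listening behind the VIP.

    import json
    import subprocess

    def measure_throughput(vip="203.0.113.10", port=5201, flows=20, seconds=30):
        """Run iperf3 against the VIP and return receive-side throughput in Gbit/s."""
        result = subprocess.run(
            ["iperf3", "-c", vip, "-p", str(port), "-P", str(flows),
             "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True)
        report = json.loads(result.stdout)
        # Sum of all parallel flows as seen by the receiving side.
        return report["end"]["sum_received"]["bits_per_second"] / 1e9

    if __name__ == "__main__":
        print("~%.1f Gbit/s through the VIP" % measure_throughput())

Running that from each traffic-generation host and averaging a few runs gives a rough L4 number comparable to the ~14 Gbit/s figure quoted above.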
johnsomOk, so we have this issue where if people kill -9 the controller processes we can leave objects in PENDING_*20:30
xgermanalso are you running the vip on an overlay? Or dedicated vlan, etc.20:30
johnsomI have an idea for an interim solution until we do jobboard/resumption.20:31
xgermanjohnsom: that type of thing was supposed to get fixed when we adopt job-board20:31
johnsomlol, yeah, that20:31
xgermanour task-(flow) engine should have a way to deal with that20:31
xgermanthat’s why we went with an engine20:31
johnsomIt does, in fact multiple ways, but that will take some development time to address IMO20:32
johnsomSo, as a short-term, interim fix I was thinking that we could have each process create a UUID unique to its instance, write that out to a file somewhere, then check it on startup and mark anything it "owned" as ERROR.20:33
johnsomThoughts?  Comments?20:33
cgoncalvesif $time.now() > $last_updated_time+$timeout -> ERROR?20:33
johnsomThe hardest part is where to write the file....20:33
johnsomIt would require a DB schema change, which we would want to get in before feature freeze (just to be nice for upgrades, etc.). So thought I would throw the idea out now.20:34
johnsomI think the per-process UUID would be more reliable than trying to do a timeout.20:36
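A rough sketch of that interim idea as described, assuming a hypothetical owner_id column on the load balancer table and a hypothetical file location; nothing here is the actual Octavia schema.

    import os
    import uuid

    OWNER_FILE = "/var/lib/octavia/worker_owner_id"  # hypothetical location

    def recover_and_register(session):
        """Mark resources orphaned by this process's previous incarnation,
        then record a fresh per-process UUID for the new one."""
        if os.path.exists(OWNER_FILE):
            with open(OWNER_FILE) as f:
                previous = f.read().strip()
            # Anything still PENDING_* and stamped with the old owner id was
            # abandoned by a kill -9 or host crash; flip it to ERROR.
            (session.query(LoadBalancer)  # hypothetical SQLAlchemy model
                    .filter(LoadBalancer.owner_id == previous,
                            LoadBalancer.provisioning_status.like("PENDING_%"))
                    .update({"provisioning_status": "ERROR", "owner_id": None},
                            synchronize_session=False))
            session.commit()
        my_id = str(uuid.uuid4())
        with open(OWNER_FILE, "w") as f:
            f.write(my_id)
        return my_id  # stamped onto resources as the worker sets them PENDING_*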
openstackgerritVlad Gusev proposed openstack/octavia master: Add support for the oslo_middleware http_proxy_to_wsgi  https://review.openstack.org/63973620:36
cgoncalveshmmm20:37
cgoncalveswhat about then flipping the status to PENDING_UPDATE? maybe only valid for certain resources20:37
johnsomThe only downside is we don't have a /var/lib/octavia on the controllers today, so it's an upgrade/packaging issue20:38
cgoncalvesand not backportable20:38
johnsomRight, the "don't do that" still applies to older releases20:39
johnsomI didn't follow the PENDING_UPDATE comment20:39
cgoncalvesnah, never mind. it prolly doesn't make any sense anyway xD (I was thinking along the same lines of allowing ERROR'd LBs to be failed over)20:40
johnsomIt would have to flip them to ERROR because we don't know where in the flow they killed it20:40
johnsomYeah, maybe a follow on could attempt to "fix" it, but that is again logic to identify where it died. Which is starting the work on jobboard/resumption.20:41
cgoncalvesthinking of a backportable solution, wouldn't timeouts suffice?20:41
johnsomI don't like that approach for a few reasons. We seem to have widely varying performance in the field, so picking the right number would be hard, short of making it an hour or something, which defeats the purpose of a timely cleanup20:42
xgermanmmh, people would likely be happy if we just flip PENDING to ERROR with the housekeeper after a while20:43
johnsomI mean we already have flows that timeout after 25 minutes due to some deployments, so it would have to be longer than that.20:43
xgermansome operators tend to trade resources for less work… so there’s that20:43
johnsomYeah, the nice thing about the UUID too is it shames the operator for kill -920:44
johnsomWe know exactly what happened20:44
xgermanor for having servers explode20:44
xgermanor power switch mistakes20:44
cgoncalvesalso more and more clouds run services in containers, so docker restart would basically mean kill -920:44
xgermanyep20:44
johnsomYep, k8s is horrible20:45
cgoncalvesyou don't need k8s to run services in containers ;)20:45
colin-stop, my eyes will roll out of my head20:45
cgoncalvesI mean openstack services!20:45
xgermanyeah, we should rewrite octavia as a function-as-a-service20:45
johnsomI know, but running the openstack control plane in k8s means lots of random kills20:46
colin-indeed20:46
xgermanso how difficult is job board? did we ever look into the effort?20:46
johnsomAnyway, this is an option, yes, may not solve all of the ills.20:46
johnsomYeah, we did, it's going to probably be a cycles worth of effort to go full job board.20:47
johnsomThere might be a not-so-full job board that would meet our needs too, but that again is going to take some time.20:47
xgermanI would rather start on the "right" solution than do kludges20:48
cgoncalvesI was unaware of jobboards until now. does it sync state across multiple controller nodes?20:48
johnsomnot really, but accomplishes the same thing.20:48
cgoncalvesasking because if octavia worker N on node X goes down, worker N+1 on node X+1 takes over20:49
johnsomSo first it enables persistence of the flow data. It uses a set of "worker" processes. The main jobboard assigns and monitors the workers' completion of each task20:49
johnsomRight, effectively that is what happens.20:50
cgoncalveswithout a syncing mechanism, how would octavia know which pending resources to ERROR?20:50
xgermando we need a zookeeper for jobboard. Yuck!20:50
johnsomMuch of the state is stored in the DB20:50
cgoncalvesok20:50
colin-jobboard = ?, for the uninitiated20:50
johnsomYeah, so there was a locking requirement I remember from the analysis. I don't think zookeeper was the only option, but maybe20:50
colin-is this a work tracking tool?20:50
colin-ah, disregard20:51
xgermanhttps://docs.openstack.org/taskflow/ocata/jobs.html20:51
johnsom#link https://docs.openstack.org/taskflow/latest/user/jobs.html20:52
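For the uninitiated, the jobboard pattern works roughly like this: a producer posts jobs to a shared board (ZooKeeper-backed in the classic setup) and any worker can claim, run, and consume them, so a crashed worker's jobs can be picked up by another. A heavily hedged sketch follows; the backend conf keys and method signatures are from memory of the taskflow docs and should be treated as assumptions, not Octavia code.

    import contextlib

    from taskflow.jobs import backends as job_backends

    BOARD_CONF = {
        "board": "zookeeper",               # assumed backend name
        "hosts": ["127.0.0.1:2181"],        # assumed conf keys
        "path": "/taskflow/octavia-jobs",
    }

    def post_job(details):
        # Producer side: record the work to be done on the shared board.
        board = job_backends.fetch("octavia", BOARD_CONF)
        board.connect()
        with contextlib.closing(board):
            board.post("create-load-balancer", details=details)

    def run_one_job(worker_name="worker-1"):
        # Worker side: claim an unowned job, do the work, then consume it so it
        # is not redelivered. If this worker dies mid-job, the claim lapses and
        # another worker can pick the job up, which is the resumption property
        # being discussed here.
        board = job_backends.fetch("octavia", BOARD_CONF)
        board.connect()
        with contextlib.closing(board):
            for job in board.iterjobs(only_unclaimed=True):
                board.claim(job, worker_name)
                try:
                    pass  # resume/execute the flow described in job.details here
                finally:
                    board.consume(job, worker_name)
                break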
johnsomAnyway, I didn't want to go deep on the future solution.20:52
johnsomWhat I am hearing is we would prefer to leave this issue until we have resources to work on the full solution and that an interim solution is not valuable20:52
xgerman#vote?20:53
cgoncalvesI still didn't get why timeouts wouldn't be a good interim (and backportable) solution20:53
johnsomWhat would you pick as a timeout?20:53
cgoncalveswhat ever is in the config file20:54
johnsomWe know some clouds complete tasks in less than a minute; for others it takes over 2020:54
cgoncalvesif load balancer creation: build timeout + heartbeat timeout20:54
cgoncalvesotherwise, just heartbeat timeout. no?20:54
johnsomSo 26 minutes?20:54
cgoncalvesbetter than forever and ever20:55
cgoncalvesand not being able to delete/error20:55
johnsomI don't think we can backport this even if it has a timeout really20:55
johnsomThe timeout would be a new feature to the housekeeping process20:56
cgoncalvesno API or DB schema changes. no new config option20:56
johnsomThe other thing that worries me about timeouts is folks setting it and not understanding the ramifications20:56
colin-yeah that's tricky, i too don't want to leave them (forever) in the state where they can't be deleted20:56
cgoncalvesit would be a new periodic in housekeeping20:56
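A rough sketch of the kind of housekeeping periodic being floated here, with an illustrative model name and a hard-coded cutoff standing in for the config-driven timeouts; the objection that follows is about how hard it is to pick that cutoff safely across widely varying clouds.

    import datetime

    STALE_AFTER = datetime.timedelta(minutes=26)  # e.g. build timeout + heartbeat timeout

    def error_stale_pending(session):
        """Flip anything stuck in a PENDING_* state past the cutoff to ERROR."""
        cutoff = datetime.datetime.utcnow() - STALE_AFTER
        stale = (session.query(LoadBalancer)  # hypothetical SQLAlchemy model
                 .filter(LoadBalancer.provisioning_status.like("PENDING_%"),
                         LoadBalancer.updated_at < cutoff))
        for lb in stale:
            lb.provisioning_status = "ERROR"
        session.commit()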
*** henriqueof has joined #openstack-lbaas20:57
xgermanyeah, I am haunted by untuned timeouts almost every day20:57
colin-xgerman: thanks for the link20:58
johnsomYep. I think it breaks the "risk of regression" and "self-contained" backport rules20:58
johnsomAnd certainly the "New feature" rule20:58
johnsomWell, we are about out of time.  Thanks folks.20:59
cgoncalves"Fix an issue where resources could eternally be left in a transient state" ;)20:59
johnsomIf you all want to talk about job board more, let me know and I can put it on the agenda.20:59
cgoncalvesI will certainly read more about it20:59
johnsomI just think it's a super dangerous thing in our model to change the state out from under other processes21:00
johnsom#endmeeting21:00
*** openstack changes topic to "Discussions for Octavia | Stein priority review list: https://etherpad.openstack.org/p/octavia-priority-reviews"21:00
openstackMeeting ended Wed Feb 27 21:00:25 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-02-27-20.00.html21:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-02-27-20.00.txt21:00
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-02-27-20.00.log.html21:00
johnsomMaybe I will propose the code and you all can vote on the patch.21:01
*** ivve has joined #openstack-lbaas21:03
cgoncalvesnice21:04
cgoncalvesI am not against jobboard at all. I was just considering of a quick and interim fix we could also backport21:05
* cgoncalves disconnects21:05
johnsomYeah, jobboard is the end goal, but a lot of work. I was going for an interim solution that reliably improves the situation for Stein forward21:05
*** abaindur has joined #openstack-lbaas21:12
openstackgerritBrian Haley proposed openstack/neutron-lbaas master: Update neutron quota_driver path  https://review.openstack.org/63982921:13
*** ivve has quit IRC21:14
openstackgerritVlad Gusev proposed openstack/octavia master: Add support for the oslo_middleware http_proxy_to_wsgi  https://review.openstack.org/63973621:18
*** yamamoto has joined #openstack-lbaas21:20
*** yamamoto has quit IRC21:25
*** abaindur has quit IRC21:37
*** celebdor has quit IRC21:58
rm_workAh right, meeting :/22:09
rm_workHad an internal meeting exactly overlap22:09
*** trown is now known as trown|outtypewww22:15
openstackgerritGerman Eichberger proposed openstack/octavia master: Fix parallel plug vip  https://review.openstack.org/63899222:56
*** celebdor has joined #openstack-lbaas22:58
*** yamamoto has joined #openstack-lbaas23:00
*** rcernin has joined #openstack-lbaas23:06
*** sapd1 has joined #openstack-lbaas23:06
*** celebdor has quit IRC23:53
