Wednesday, 2018-12-05

johnsomBy allowing them to specify the subnet it implies the network. It's just more specific.00:00
johnsomnetwork means octavia has to *GUESS* which subnet the user really wants00:00
rm_workok, but by not allowing them to specify a network-only, it breaks cases involving provider-networks00:00
johnsomHmmm, is that true?  Interesting use case00:01
rm_workyes00:01
johnsomI mean I know technically we have to have a subnet of some sort on the network00:01
rm_workyeah but it can be detected00:01
johnsomI don't think you can plug ports without it00:01
johnsomHmmm, we might have some bugs for a VIP without a neutron subnet defined on it.00:02
johnsomI think there are definitely some assumptions in the code around that.00:03
johnsomWhich are probably bugs....00:03
johnsomrm_work: Like this: https://github.com/openstack/octavia/blob/master/octavia/api/v2/controllers/load_balancer.py#L12200:04
johnsomAh, I guess that will work as long as someone doesn't specify the IP, which...  Would kind of seem valid even if they did in the provider network idea00:05
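The ambiguity johnsom describes can be sketched in a few lines. This is not Octavia's actual code (the real validation lives in the load_balancer.py controller linked above); `pick_vip_subnet` and the dict shape are invented for illustration:

```python
import ipaddress

# Hypothetical helper: given the subnets neutron reports for a network,
# choose one for the VIP. With an IP the choice is unambiguous; without
# one we can only guess, which is the problem with network-only input.
def pick_vip_subnet(subnets, vip_address=None):
    if vip_address is not None:
        ip = ipaddress.ip_address(vip_address)
        for subnet in subnets:
            if ip in ipaddress.ip_network(subnet["cidr"]):
                return subnet
        raise ValueError("VIP address not in any subnet on this network")
    return subnets[0]  # arbitrary guess -- the *GUESS* above
```

Specifying a VIP address narrows the choice even on a provider network, which is why the "subnet implies network" argument still mostly holds.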
rm_worknewer amp image with older octavia controlplane -- that SHOULD work, right?00:09
rm_workspecifically looking at rocky amp with queens controller00:09
johnsomyes00:10
rm_workthe issue though is i'm worried about the 1.5->1.8 change00:10
rm_workbecause we normally would check and generate the appropriate config00:10
rm_workbut i think we ADDED that in the newer controller code00:10
rm_workdidn't we?00:10
johnsomThat should be "hidden" in the amphora and abstracted by the agent api00:10
rm_workwell, the controller asks the amp API for the version00:10
rm_workahhh yeah and the queens controller still knows how to make a 1.8 config00:11
rm_workyeah k00:11
johnsomWell, we actually hack the config file as we write it out inside the amp00:11
rm_workerr, do we?00:12
rm_worki didn't think the agent touched it00:12
johnsomHmm, maybe that was a different issue, as that doesn't make a ton of sense. But in general, it should work00:12
rm_workthought we just laid it down00:12
rm_workregarding VIP bugs, yeah, one of the guys here has a patch for one of those bugs I think00:13
rm_workhe was here a while back and posted it (because i remember reading it in-channel previously)00:13
johnsomhttps://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/listener.py#L12700:14
*** aojea has quit IRC00:15
rm_workalso, i think senlin might be using our statuses incorrectly00:16
johnsomAre they polling?00:16
rm_workneed to look at it, but from what peeps are saying, they might be using our operating_status in a way they shouldn't00:16
rm_worklike "operating status ERROR/DEGRADED -> failed! -> delete LB and try again"00:16
rm_workwhich happens if any one member didn't respond quick enough00:17
rm_workon spin-up00:17
rm_workI need to dig into their stuff for a bit first, but just letting you know it's on my radar00:17
rm_worki'll let you know if i find it's actually a problem00:17
rm_workbut poke me if you see anyone else mention anything :P00:17
johnsomOh geez, yeah, that is very wrong00:18
rm_workneed to make sure that's an upstream thing and not downstream or just user-error00:18
rm_workbut yeah i'll get back to you00:18
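For reference, the polling pattern rm_work is contrasting with would wait on provisioning_status and tolerate a transiently DEGRADED operating_status while members come up. A minimal sketch (function and field names are illustrative, not a real senlin or octaviaclient API):

```python
import time

def wait_for_lb_active(get_lb, timeout=300, interval=5):
    """Poll provisioning_status, not operating_status.

    operating_status (ONLINE/DEGRADED/ERROR) reflects member health and
    can dip to DEGRADED on spin-up; deleting the LB on that signal is
    the misuse described above.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        lb = get_lb()
        if lb["provisioning_status"] == "ACTIVE":
            return lb  # created successfully; member health may still settle
        if lb["provisioning_status"] == "ERROR":
            raise RuntimeError("load balancer provisioning failed")
        time.sleep(interval)
    raise TimeoutError("load balancer did not go ACTIVE in time")
```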
openstackgerritMichael Johnson proposed openstack/octavia-tempest-plugin master: Add v2 two-node scenario test  https://review.openstack.org/60516300:21
rm_workone more thing00:23
rm_workwhen we boot amps, we don't really have a way for an operator to provide some custom metadata to nova during the boot, AFAICT00:23
rm_workthat seems like something we could do, right? just allow an operator to specify some key-value pairs for us to pass through to nova as metadata?00:24
johnsomWe can easily add that with flavors00:24
rm_workah, flavors would cover that?00:24
johnsomYeah, well, we would add a default in the config file, then allow override in the flavor00:24
rm_workdidn't know flavors would dive that deep into the nova stuff00:24
rm_workbut makes sense00:24
rm_workright now we just... don't pass custom metadata, IIRC?00:25
johnsomIt doesn't yet (I have only implemented topology so far), but this is the type of thing00:25
rm_workso, yeah, ok00:25
johnsomCorrect, no nova "metadata" today00:25
*** blake has quit IRC00:30
*** pbandark has quit IRC00:38
rm_workah yeah, just traced it through, we can EITHER pass metadata, or use personality files for the agent config00:45
rm_workthere's no doing both00:45
rm_workwas hoping we could just edit the user data template file to include the metadata we wanted00:45
rm_workbut it's an either/or00:45
johnsomWell, user_data is tiny, like 16k or something. Also, I think the nova client calls things "metadata" that aren't their metadata, so be careful on that.00:49
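As a sketch of what the pass-through could look like once wired up (this is not Octavia's compute driver; `build_boot_kwargs` is invented, though python-novaclient's `servers.create()` does accept a `meta` dict of string key-value pairs):

```python
# Assemble nova boot kwargs, folding in optional operator metadata.
def build_boot_kwargs(name, image, flavor, operator_metadata=None):
    kwargs = {"name": name, "image": image, "flavor": flavor}
    if operator_metadata:
        # nova requires string keys and values for instance metadata
        kwargs["meta"] = {str(k): str(v) for k, v in operator_metadata.items()}
    return kwargs
```

Per the flavors discussion above, the `operator_metadata` dict would come from a config-file default overridable by the Octavia flavor.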
*** aojea has joined #openstack-lbaas00:56
*** aojea has quit IRC01:00
*** aojea has joined #openstack-lbaas01:08
*** aojea has quit IRC01:13
*** hongbin has joined #openstack-lbaas01:36
openstackgerritsapd proposed openstack/python-octaviaclient master: Support REDIRECT_PREFIX for openstack client  https://review.openstack.org/60591401:56
*** aojea has joined #openstack-lbaas01:56
*** witek has quit IRC02:00
*** witek has joined #openstack-lbaas02:00
openstackgerritMichael Johnson proposed openstack/octavia-tempest-plugin master: Add v2 two-node scenario test  https://review.openstack.org/60516302:06
*** abaindur has quit IRC02:07
*** abaindur has joined #openstack-lbaas02:08
*** aojea has quit IRC02:19
openstackgerritsapd proposed openstack/python-octaviaclient master: Support REDIRECT_PREFIX for openstack client  https://review.openstack.org/60591402:20
*** abaindur has quit IRC02:20
*** hongbin has quit IRC03:07
openstackgerritsapd proposed openstack/python-octaviaclient master: Support REDIRECT_PREFIX for openstack client  https://review.openstack.org/60591403:07
*** aojea has joined #openstack-lbaas03:11
*** ramishra has joined #openstack-lbaas03:23
*** aojea has quit IRC03:37
*** yamamoto has joined #openstack-lbaas03:38
*** yamamoto has quit IRC03:42
*** hongbin has joined #openstack-lbaas04:24
*** yamamoto has joined #openstack-lbaas04:32
openstackgerritMichael Johnson proposed openstack/octavia master: Fix devstack plugin for multi-node jobs  https://review.openstack.org/62167704:33
openstackgerritMichael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver  https://review.openstack.org/62132304:45
*** hongbin has quit IRC05:19
*** aojea has joined #openstack-lbaas05:22
*** aojea has quit IRC05:27
*** yamamoto has quit IRC05:34
*** zufar has joined #openstack-lbaas05:42
zufarHi all, I have question about octavia in openstack queens05:42
johnsomHi, what is your question?05:43
zufardo all instances that need the load balancer service have to connect to the `lb-mgmt-net`, or to another network we define in `amp_network` in /etc/octavia/octavia.conf?05:44
*** dmellado has quit IRC05:46
johnsomThe lb-mgmt-net is used for the controllers (worker, health manager, housekeeping) to communicate with the amphora (and the other way as well). No tenant traffic goes over this network, it is only for management.  The lb-mgmt-net you create in neutron is set in the octavia.conf "amp_boot_network_list" setting.05:47
*** dmellado has joined #openstack-lbaas05:48
zufarHi johnsom, thank you for the explanation. So when we create a load balancer service, for example with the command `openstack loadbalancer create --name test-lb --project admin --vip-subnet-id internal --enable`, the LB instance automatically attaches a network from the internal subnet and from the lb-mgmt-net, right?05:51
zufarso the LB instance automatically has 2 IPs, 1 from the internal network and 1 from the lb-mgmt-net?05:51
johnsomYes, lb-mgmt-net is attached when the amphora is booted, then, when a user specifies the "vip-subnet-id" we hot-plug that network into the amphora. This vip-subnet is inside a network namespace inside the amphora, so isolated from the lb-mgmt-net05:52
johnsomYes, at the completion of that command, the amphora instance will have two ports plugged and two IP addresses assigned. Just note that they are in different namespaces inside the amphora.05:53
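The wiring johnsom describes comes down to one octavia.conf option. A hedged sketch (the UUID is a placeholder; in recent releases the option sits under [controller_worker], but verify against your deployment):

```ini
[controller_worker]
# Neutron network ID(s) for the lb-mgmt-net: carries only
# controller <-> amphora management traffic, never tenant traffic.
# The VIP port is hot-plugged separately and lands in a network
# namespace inside the amphora, isolated from the lb-mgmt-net.
amp_boot_network_list = <lb-mgmt-net-uuid>
```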
*** yamamoto has joined #openstack-lbaas05:54
zufarthank you johnsom, its clear now05:54
johnsomGreat! Happy to help!05:54
*** dmellado has quit IRC05:55
*** dmellado has joined #openstack-lbaas05:57
*** dmellado has quit IRC06:02
*** dmellado has joined #openstack-lbaas06:14
*** pcaruana has joined #openstack-lbaas07:10
*** zufar has quit IRC07:15
*** takamatsu has joined #openstack-lbaas07:21
*** yboaron has joined #openstack-lbaas07:33
*** ccamposr has joined #openstack-lbaas07:39
nmagnezijohnsom, cgoncalves, fyi: https://github.com/CentOS/sig-cloud-instance-images/issues/13307:40
openstackgerritCarlos Goncalves proposed openstack/octavia master: Fix centos7 gate for centos 7.6 release  https://review.openstack.org/62171908:48
openstackgerritMerged openstack/octavia-dashboard master: Change openstack-dev to openstack-discuss  https://review.openstack.org/62202108:50
openstackgerritMerged openstack/octavia-lib master: Change openstack-dev to openstack-discuss  https://review.openstack.org/62244208:52
*** celebdor has joined #openstack-lbaas08:55
*** aojea_ has joined #openstack-lbaas09:03
*** aojea_ has quit IRC09:07
*** rcernin has quit IRC09:38
cgoncalveswe might not actually need https://review.openstack.org/#/c/622590/09:43
cgoncalveshttps://review.rdoproject.org/r/#/c/17673/ added golang to the RDO dep repo and RDO repo is enabled in devstack already09:43
*** witek has quit IRC09:50
openstackgerritYing Wang proposed openstack/neutron-lbaas-dashboard master: Display error when entering a non-integer  https://review.openstack.org/62114110:10
openstackgerritYing Wang proposed openstack/neutron-lbaas-dashboard master: Display error when entering a non-integer  https://review.openstack.org/62114110:14
openstackgerritCarlos Goncalves proposed openstack/octavia-tempest-plugin master: Update default provider to amphora  https://review.openstack.org/62291710:16
*** celebdor has quit IRC10:20
*** salmankhan has joined #openstack-lbaas10:27
*** salmankhan1 has joined #openstack-lbaas10:31
*** salmankhan has quit IRC10:32
*** salmankhan1 is now known as salmankhan10:32
*** celebdor has joined #openstack-lbaas10:36
openstackgerritCarlos Goncalves proposed openstack/octavia master: Enable debug for Octavia services in grenade job  https://review.openstack.org/62231911:13
*** salmankhan1 has joined #openstack-lbaas11:27
*** salmankhan has quit IRC11:28
*** salmankhan1 is now known as salmankhan11:28
velizarxHi folks. I'm seeing a lot of warning messages in the logs with the text: "Amphora <uuid> health message was processed too slowly: 10.7944991589s! The system may be overloaded or otherwise malfunctioning. This heartbeat has been ignored and no update was made to the amphora health entry. THIS IS NOT GOOD." But what is strange is that all of these messages are about one amphora. The other amphorae work fine without warnings. Does this message indicate11:29
velizarx overload on the amphora, or a problem on the health manager? I don't understand how the problem can concern only one amphora.11:29
cgoncalvesvelizarx, hi. the message regards the time that the health manager took to process the heartbeat message sent from the amphora11:32
cgoncalvesvelizarx, could it be that all other amps are sending messages to other health manager instances and that specific amp to a different health manager instance?11:33
cgoncalvesor that the health manager receives and processes first all other amps and by the time it processes that amp heartbeat it is overloaded11:34
velizarxcgoncalves, Hm, interesting idea, but it's really strange that this problem occurs only with one amphora across all health managers. And the other checks in the previous loop work fine. For example, see this part of the log: https://pastebin.com/xgqsaYQx11:43
velizarxcan it depend on the response body of this amphora?11:44
cgoncalvesI don't see how, unless you've changed the response body11:47
cgoncalvesis that amp running the same image as other amps?11:47
cgoncalvesI have no clue. I am just trying to help troubleshoot11:48
*** yamamoto has quit IRC11:53
*** yamamoto has joined #openstack-lbaas11:53
*** yamamoto has quit IRC11:57
velizarxcgoncalves, yes, the image is the same. Another question: can this message be a reason to run 'octavia-failover-amphora-flow' to try to restore the load balancer?12:00
cgoncalvesvelizarx, yes, since the health manager is not able to process the message under 10 seconds it doesn't update the record and after 6 total attempts it triggers amp failover12:03
velizarxcgoncalves, Ok. Thank you for your help. I will dig the logs further.12:03
cgoncalvessorry I can't help you much, maybe others may know12:04
*** salmankhan has quit IRC12:22
*** khomesh has joined #openstack-lbaas12:22
*** salmankhan has joined #openstack-lbaas12:30
*** ramishra has quit IRC12:51
*** yamamoto has joined #openstack-lbaas12:54
*** ramishra has joined #openstack-lbaas13:05
*** yamamoto has quit IRC13:06
*** aojea has joined #openstack-lbaas13:45
*** velizarx has quit IRC13:47
*** khomesh has quit IRC13:51
*** velizarx has joined #openstack-lbaas13:58
*** bzhao__ has quit IRC14:18
johnsomvelizarx We have found that the message you are seeing is tied closely to the database. It could be that the record for that amp is somehow slower to access in your database. You could attempt maintenance on your database to see if that resolves the issue.  Also note, there is a patch pending for Queens (already merged in Rocky and Stein) that will significantly improve the database performance and reduce the chance14:58
johnsomof that issue occurring.14:58
velizarxThank you johnsom. Yes, after analyzing the monitoring data I saw that at the same time there were problems in the database (locks in Octavia's database). I think it is not Octavia's problem, because I saw requests from PMM, and now you confirmed it.15:06
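The numbers in this thread map to the [health_manager] section of octavia.conf. A sketch with the defaults as I recall them (verify against your release; option names are real, values are upstream defaults, not recommendations):

```ini
[health_manager]
heartbeat_interval = 10   # each amphora sends a heartbeat every 10 seconds
heartbeat_timeout = 60    # no recorded heartbeat for 60 seconds marks the
                          # amphora failed and triggers the failover flow,
                          # i.e. roughly the 6 missed attempts noted above
```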
openstackgerritMichael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver  https://review.openstack.org/62132315:21
*** velizarx has quit IRC15:26
xgermanYeah the DB is our achilles' heel —15:32
openstackgerritMichael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver  https://review.openstack.org/62132315:40
johnsomWell, until the patch finally lands...15:41
johnsomHappy day, finally got a zuul v3 multi-node test to work!  Now to polish it up...15:43
openstackgerritMichael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver  https://review.openstack.org/62132316:05
*** aojea has quit IRC16:14
*** salmankhan has quit IRC16:14
*** pcaruana has quit IRC16:18
*** salmankhan has joined #openstack-lbaas16:19
*** aojea has joined #openstack-lbaas16:19
*** aojea has quit IRC16:20
*** aojea has joined #openstack-lbaas16:25
*** fnaval has joined #openstack-lbaas16:30
*** aojea has quit IRC16:30
*** aojea has joined #openstack-lbaas16:37
*** velizarx has joined #openstack-lbaas16:38
*** pbourke has joined #openstack-lbaas16:57
pbourkehi, in the guide it states16:58
pbourke"Add appropriate routing to / from the ‘lb-mgmt-net’ such that egress is allowed, and the controller (to be created later) can talk to hosts on this network."16:58
pbourkedont suppose anyone has an example of how to do this?16:58
johnsomHi, yes, I recently sent an e-mail pointing to some of those. Let me find a link16:58
johnsomhttp://lists.openstack.org/pipermail/openstack-discuss/2018-December/000584.html16:59
*** yamamoto has joined #openstack-lbaas17:04
pbourkejohnsom: thanks!17:07
*** yamamoto has quit IRC17:08
*** velizarx has quit IRC17:25
*** salmankhan has quit IRC17:26
*** salmankhan has joined #openstack-lbaas17:34
*** roukoswarf has joined #openstack-lbaas17:40
roukoswarfdoes anyone have any insight on how providers are doing on implementing the new drivers? i have some a10s and we will be doing a new deployment, wondering what the best route is.17:41
rm_workjohnsom: ohhhh nice, zuulv3 multinode!!! grats17:42
rm_workjohnsom: how did work on ipv6 tempest testing go while i was out?17:42
johnsomYeah, it's been an adventure, but I finally figured out the incantation required.17:43
rm_workdid we make any progress on that? IIRC we were going to split out whole new test runs17:43
johnsomThe IPv6 patches are all up for review. Both tests and code fixes.17:43
johnsomIf you missed it, we have a review backlog: https://etherpad.openstack.org/p/octavia-priority-reviews17:43
rm_worknoice17:43
rm_workyeah as soon as stuff settles down, i'm going to try to get back to reviewing17:43
johnsom+117:44
*** celebdor_ has joined #openstack-lbaas17:48
*** celebdor has quit IRC17:50
*** aojea has quit IRC17:53
*** aojea has joined #openstack-lbaas17:53
*** aojea has quit IRC17:59
*** salmankhan has quit IRC18:07
xgermansweet18:17
*** aojea has joined #openstack-lbaas18:35
*** aojea has quit IRC19:07
*** abaindur has joined #openstack-lbaas19:30
*** aojea has joined #openstack-lbaas19:52
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed Dec  5 20:00:03 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstackThe meeting name has been set to 'octavia'20:00
johnsomHi folks!20:00
xgermano/20:00
cgoncalveso/20:00
johnsom#topic Announcements20:00
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:00
johnsomThe only real announcement I have today is that the Berlin summit videos are now posted.20:01
nmagnezio/20:01
johnsom#link https://www.openstack.org/videos/summits/berlin-201820:01
xgermanyeah - so you all can see what cgoncalves and I were up to20:01
johnsomYou all did a pretty good job.  Thanks for presenting for us!20:02
johnsomAny other announcements today?20:02
cgoncalvesyou taught us well20:02
xgerman+120:02
johnsomThere is some discussion of moving projects to a different release model, but I don't think that impacts us for the most part.20:03
johnsomOk, moving on then...20:03
johnsom#topic Brief progress reports / bugs needing review20:03
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:03
colin-o/ sorry i'm late20:04
johnsomI have been working on flavors. My latest patch implements the core of flavors for the amphora driver. The first flavor "option" is for the topology.20:04
johnsomI have one more patch to add some API stuff, then it's down to the CLI, Tempest plugin, and finally adding additional flavor options.20:04
johnsomSo, good progress.20:05
johnsomI have also figured out the last zuulv3 incantation I needed to get zuulv3 native multi-node gates running. I should have that wrapped up this week.20:05
cgoncalvesgreat progress, thank you!20:06
colin-am still looking into nova-lxd integration for containerized amphora but am waiting on a nova upgrade before i proceed. expect to have something to report on that after kubecon next week20:06
johnsomShort summary, you can't assign IPs for the hosts via devstack any longer. There is an ansible variable that tells you which IP the hosts got....20:06
johnsomcolin- Excellent, excited that someone has time to look at that again.20:07
colin-still very early days but am hopeful for it. would be really cool20:07
johnsomYes!20:08
johnsomAny other updates?  I know xgerman updated our testing web server to support UDP. This will enable tempest test for the UDP work added in Rocky.  Thanks!20:08
xgermanYep.20:09
johnsom#topic CentOS 7 gate20:09
*** openstack changes topic to "CentOS 7 gate (Meeting topic: Octavia)"20:09
johnsomI put a few gate related topic items on the agenda today.  Starting with CentOS 720:10
johnsomTo be frank, this gate has been unstable for a while and with the recent release of CentOS 7.6 it is broken again.20:10
xgerman#vote non voting20:11
johnsomI will throw shame towards the person that pulled packages out in a dot release.....20:11
johnsomlol, xgerman is ahead of me, but yes, I would like to propose we drop the CentOS 7 gate back down to non-voting.20:11
johnsomComments?20:11
cgoncalvesafaik golang is still present in rhel 7.6 repo20:11
johnsomIf that is the case, why are the gate runs failing because it can't find it?20:12
cgoncalves+1 until it gets fixed20:12
cgoncalvesgate runs centos, not rhel20:12
nmagneziYup, stability should be a priority here. +1 from me as well20:13
cgoncalveshttp://paste.openstack.org/show/736720/20:13
johnsomOk, that is a majority of the folks that raised their hand as attending today, so I will post a patch to drop it to non-voting.20:14
cgoncalvesoh, rhelosp-ceph-3.0-tools provides it20:14
colin-sounds reasonable yeah20:14
johnsomReally the issues have been around the package repos and mirrors. This may be another one of those issues.20:14
colin-now that it's a safe majority i'll chime in ;)20:14
xgermanLol20:14
johnsom#topic Grenade gate20:15
*** openstack changes topic to "Grenade gate (Meeting topic: Octavia)"20:15
johnsomSo the fun continues...  The grenade gate mysteriously started failing recently.20:15
cgoncalvesI spent some time today checking what's going on there, no luck20:15
johnsomMy quick look at this was apache is doing something wrong and the API calls aren't making it to us.20:15
johnsomOk, thanks for the update. This is one I don't really want to make non-voting. At least until we can see why it is failing.20:16
cgoncalvesI see it in keystone but then it returns 40420:16
cgoncalveshttp://logs.openstack.org/19/622319/3/check/octavia-grenade/3ee64ee/logs/apache_config/octavia-wsgi.conf.txt.gz20:17
johnsomYeah, that is a pretty simple file...20:17
johnsomI will try to spend some time this afternoon debugging this. Any assistance is very welcome.20:18
johnsomIt is an important gate to show we can do upgrades...20:18
xgermanLet me know how I can help20:18
cgoncalvesfind the root cause :)20:19
johnsomxgerman Another set of eyes would be welcome20:19
johnsom#link http://logs.openstack.org/38/617838/5/check/octavia-grenade/3479905/20:19
johnsomThis is a simple patch that is failing the grenade gate (all of them are now, so not likely this patch)20:19
johnsomThanks folks for taking a look at that.20:20
johnsom#topic Multi-node gate(s) and scenario gates20:20
*** openstack changes topic to "Multi-node gate(s) and scenario gates (Meeting topic: Octavia)"20:20
johnsomOk, next on the agenda, since I have multi-node gates running I wanted to run an idea by you all.20:20
johnsomWe have a lot of gates..... (nlbaas retirement is coming!)20:21
johnsomSince these multi-node gates will run the scenario test suite, how do you feel about dropping our current two scenario gates and instead using the multi-node version as our primary scenario tests?20:21
johnsomOr do folks see value in the single node scenario gates?20:22
johnsomJust throwing it out as an idea20:23
cgoncalveswhich two scenario gates? octavia-v2-dsvm-scenario and octavia-dsvm-scenario?20:23
cgoncalvesor move octavia-v2-dsvm-scenario{,-py3} to multi-node?20:23
johnsomoctavia-v2-dsvm-scenario and octavia-v2-dsvm-py35-scenario20:24
nmagneziWell a multi-node is more true to life. The question is whether or not we have something to gain by keeping a single node alongside it20:24
johnsomYeah, the only downside I see is multi-node has more moving parts, so could be more likely to have failures not related to the patch being tested....20:25
cgoncalvesright. can we make it non-voting for a while and then decide based on success/failure rate?20:25
nmagneziI assume we'll start with it as non-voting so we can get an indication of stability20:25
nmagneziJust trying to think about the longer term20:25
johnsomYep20:26
johnsomI was just trying to reduce the number of duplicate runs of the same tests and save some precious nodepool instances.20:26
* johnsom Glares at triple-o's nodepool usage20:26
nmagnezihaha20:27
cgoncalvestripleo fine folks doing fine work20:28
johnsomOk, I will simply replace the current (broken) zuul v2 octavia-v1 gates with non-voting zuul v3 octavia-v2 gates for now and we can re-consider later.20:28
nmagneziSo I think everyone will agree that multi node tests bring a ton of value. Maybe we can just start and see how it plays out? (I know, we'll consume a lot more nodes in the meantime)20:28
johnsomI wasn't criticizing triple-o folks, just the heavy nodepool usage....20:28
cgoncalvesyes but we could also run tests faster and in parallel20:29
cgoncalvesjohnsom, I know ;)20:29
johnsomYeah, right now we have the tempest concurrency capped at 2. With the multi-node we might be able to raise that.20:30
cgoncalvesok, so we're dropping octavia-v1 jobs \o/20:30
xgermanof course, once n-lbaas gets removed20:30
cgoncalvesjohnsom, 2? I think it's at 120:30
johnsom#link https://github.com/openstack/octavia-tempest-plugin/blob/master/zuul.d/jobs.yaml#L13220:31
cgoncalvesoh, 2 yes20:31
johnsomWe could also be greedy and spin up a third instance with nova on it...20:31
xgermanI wouldn’t call that greedy… OSA is no lightfoot either20:32
* johnsom Hears maniacal laughter from somewhere....20:32
openstackgerritMerged openstack/neutron-lbaas master: use neutron-lib for _model_query  https://review.openstack.org/61778220:32
johnsomOh so very true...20:32
*** Swami has joined #openstack-lbaas20:32
cgoncalvesshould we also suffix py2-based jobs with -py2 and drop suffix -py3? make python3 officially first class citizen20:32
johnsomOk, I think I have the answer/path forward I was looking for.20:33
xgermank20:33
johnsomcgoncalves Yes. I can do that.20:33
johnsom#topic Open Discussion20:33
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:33
johnsomWe should also decide how/when we do bionic....20:34
johnsomDo we want parallel xenial and bionic for a bit, or just make the jump?20:34
cgoncalvesI was typing that but decided not to put more burn on you20:34
xgermanparallel with bionic non-voting20:34
cgoncalves*burden20:35
johnsomWe are not yet "bionic" native for the amp, but the compatibility mode seems to work fine.20:35
johnsomWell, that is why I get the lousy thankless title of PTL....  Grin. Don't even get the t-shirt.  lol20:36
cgoncalvesnoted20:36
johnsomOr put another way. We can decide we want to do it, and it will get done at some point in the future*.20:36
johnsom* Maybe20:36
xgermanjohnsom: could be worse… TC20:37
johnsomgrin20:37
johnsomTC gets catering20:37
johnsomlol20:37
xgermanevery time I hang with them they go to food trucks20:37
johnsomOh, I mean, great title with a bunch of perks. Please sign up20:37
johnsomSorry, any topics for open discussion today?20:38
xgermanbeen there, done that :-)20:38
cgoncalvesQ: why can't I find octavia-v2-dsvm-scenario-ubuntu-bionic in openstack health page?20:38
cgoncalvesI wanted to check failure rate20:38
johnsomProbably because we don't have one20:39
colin-oh i watched the berlin project update btw, nice job cgoncalves and xgerman (and slide composer johnsom )20:39
colin-looking forward to the stein stuff after seeing it bundled in that way20:39
cgoncalvesoh, maybe it only lists gate jobs20:39
johnsomoh, we do20:39
cgoncalvescolin-, thanks!20:40
johnsom#link http://zuul.openstack.org/builds?project=openstack%2Foctavia&job_name=octavia-v2-dsvm-scenario-ubuntu-bionic20:40
cgoncalvesI was looking at http://status.openstack.org/openstack-health/#/?searchProject=octavia-v2-dsvm-scenario&groupKey=build_name20:40
johnsomYou need to set the drop down to job instead of project20:41
cgoncalvesI have &groupKey=build_name which is job20:41
cgoncalvesanyway, bionic looks mostly green20:42
johnsomSeems ok-ish. I need to look at that and setup bionic-on-bionic. I think that is amp image only right now20:42
johnsomYeah, I have another patch I was playing with to figure out what all was needed for zuul to do a bionic nodepool instance20:43
johnsomI got it working at the PTG20:43
johnsomJust need to clean it up20:43
johnsomcgoncalves BTW, I saw your comment on the multi-node patch. I learned (the hard way) that if you override anything in a subsection, it *replaces* the whole section in zuul land, so I have to pull in a bunch of vars from parent jobs to make it work.20:45
cgoncalvesthere's no zuul_extra_vars or alike option? :(20:46
johnsomThe only part that technically isn't required in our repo right now is the nodeset, but I think there is value to just having it there for future layouts and clarity. I will probably create an octavia-controller group20:46
cgoncalvesok20:46
johnsomI thought it was supposed to just be additive and override if the same name matches, but that override is at the group level, not the variable level... sigh.20:47
johnsomSo "host-vars:      controller:        devstack_localrc:" ends up replacing all of the "devstack_localrc:" vars from the parent20:47
johnsomA bummer.20:48
johnsomHowever, that said, you are welcome to play around with it and find a better way. There might be one.  I'm just happy it works now.20:48
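johnsom's point about group-level replacement can be illustrated with a hypothetical pair of zuul job definitions (job and variable names are invented for the example):

```yaml
# Parent job defines a devstack variable on the controller node.
- job:
    name: parent-scenario
    host-vars:
      controller:
        devstack_localrc:
          EXISTING_VAR: foo
---
# Child job that only intends to add one more variable...
- job:
    name: child-multinode
    parent: parent-scenario
    host-vars:
      controller:
        devstack_localrc:
          NEW_VAR: bar
# ...but the override happens at the group level, so the child's
# devstack_localrc ends up as {NEW_VAR: bar} alone and EXISTING_VAR
# must be re-declared in the child job.
```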
johnsomOther items today?20:49
johnsomOh, I will be taking a work day off on Friday, so will be around with limited bandwidth.20:50
johnsomOk, thanks folks. Have a great week!20:50
nmagnezio/20:50
johnsom#endmeeting20:51
*** openstack changes topic to "Discussions for Octavia | Stein priority review list: https://etherpad.openstack.org/p/octavia-priority-reviews"20:51
openstackMeeting ended Wed Dec  5 20:51:03 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:51
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-12-05-20.00.html20:51
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-12-05-20.00.txt20:51
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-12-05-20.00.log.html20:51
colin-ttyl20:51
openstackgerritMichael Johnson proposed openstack/octavia master: Make the CentOS 7 scenario gate non-voting  https://review.openstack.org/62307021:27
johnsomCores ^^^^^ please vote to get the gates moving again.21:27
nmagnezijohnsom, +2, hopefully it'll get fixed soon21:32
xgerman+2/A21:36
*** yboaron has quit IRC21:36
rm_workoops, missed the meeting T_T21:44
xgermanyeah, was wondering if I should rm_work a few times21:46
rm_worki was at lunch21:49
rm_workbecause the meeting is at noon :P21:50
rm_workworst meeting time21:50
nmagnezirm_work, you can always join as rm_mobile.. :D21:52
*** rcernin has joined #openstack-lbaas22:03
rm_workhard to type when my fingers are covered with shawarma grease :P22:05
*** hyang has joined #openstack-lbaas22:11
xgermanyou are laying off the soylent?22:17
colin-tell me about it rm_work i almost always miss it for that reason22:17
colin-although i know others are the same timezone and don't complain, so i shouldn't22:18
colin-shawarma sounds good22:18
*** hyang has quit IRC22:18
xgermanalso most phones have voice input, “hey, siri…"22:28
rm_workcolin-: pfft, i will always complain. if everyone thinks the same way, everyone would just suffer in silence. my complaints are a form of public service. :P22:29
rm_workxgerman: well, i'm in sunnyvale, so i don't have access to my supply22:29
johnsomI just ate my hot-and-spicy pork during the meeting. It's not like it's a video meeting....22:31
johnsomMy favorite Korean place, that had to close because some stupid chain sandwich shop took their lease, re-opened as a food truck. I am happy.22:32
rm_workmmmmmmm I need some good Korean food22:40
rm_workreally want to get some 떡볶이22:40
rm_workhmmmm, might have to go here for dinner tho: https://www.octavia-sf.com/22:41
rm_workthen stop by here after? :P https://www.octaviawellness.com/22:42
johnsomHmm, not sure they have 떡볶이 at the cart22:42
rm_workshould ask for me :P if they do I'll be sure to stop by when I eventually make it over there22:43
johnsomLol, you can give them a sticker22:43
rm_workYeah! Though, I'm almost out! T_T22:43
rm_workneed to get another run printed22:43
johnsomShe would probably make it for you if we gave her a heads up22:43
johnsomI know they have squid and pork, plus some other less interesting things22:44
johnsomFor the hot-and-spicy plates22:44
rm_workmmmmmm22:45
*** rcernin has quit IRC22:52
*** rcernin has joined #openstack-lbaas22:52
*** aojea has quit IRC22:53
*** aojea has joined #openstack-lbaas22:54
*** aojea has quit IRC22:58
openstackgerritMichael Johnson proposed openstack/octavia master: Fix the grenade gate  https://review.openstack.org/62310223:21
johnsomThat *might* be the answer23:21
johnsomUgh, dual job failure race. hmmm23:23
*** fnaval has quit IRC23:26
