Wednesday, 2018-04-04

openstackgerritAdam Harwell proposed openstack/octavia master: Fix revert method for batch member update  https://review.openstack.org/55865500:00
rm_work^^00:00
rm_workthe testing around this is admittedly a little thin :(00:00
*** kberger has joined #openstack-lbaas00:08
*** KeithMnemonic has quit IRC00:12
*** kberger has quit IRC00:14
xgerman_ok, the lb delete flow is a mess and the allowed address pair driver needs to be rewritten — I am still confused about why we delete the vrrp ports (twice) before we delete the VMs, while on the failover flow/amp delete flow we delete the VM first and then deal with the ports.00:21
johnsomI disagree with you on the mess part00:22
xgerman_sure, there isn’t much the flow can do without rewriting the allowed address pairs… I just see too many loops over amphorae in too many places which are hard to parallelize00:24
johnsomAAP certainly is not good. As for the rest, I can probably explain anything that is confusing00:24
xgerman_so why do we delete the VMs last and not first like in failover00:25
johnsom"needs optimization" != "mess"00:25
xgerman_well, we also need to have the same state transitions as in delete amp, with throwing the amp into pending delete00:25
xgerman_something we don’t do in lb delete00:25
xgerman_yeah, but my main beef is the order of things00:26
xgerman_and I can’t see where we delete the plugged ports in the failover/amp delete flow00:26
*** yamamoto has joined #openstack-lbaas00:28
johnsomFirst question: failover has to do it early to free up a compute slot on resource-constrained clouds. For LB delete the order doesn't matter.00:29
xgerman_well, we have to call neutron/nova more often to deallocate ports instead of just killing the vm00:29
johnsomAlso, you don't delete the VIP ports in failover, you transfer the port as you need to not drop the VIP address.  Delete would be bad00:30
xgerman_yes, I saw that but I didn’t see what happens to the plugged member subnet ports00:31
johnsomAlso, killing the VM does not necessarily mean ports go away. In fact, only the lb-mgmt-net port should go away on a nova delete.00:31
xgerman_but we would get them deallocated in one swoop00:32
*** voelzmo has quit IRC00:32
*** yamamoto has quit IRC00:34
johnsomI'm really not following you. And I am super suspicious of re-ordering anything. Most of those flows are very intentional00:34
xgerman_ok, we have a certain order in failover for decommissioning an amp; I would expect that we can use an almost identical flow to decommission it on lb delete00:36
johnsomBut we keep stuff in failover, in delete we don't00:36
johnsomWe also prioritize traffic termination on delete.00:37
xgerman_yes, I have seen our many calls to disentangle the network from the vm00:38
xgerman_instead we can admin down the ports, which is likely faster00:39
xgerman_what I'd like to see is that we can delete amps (and their ports) in parallel, and the outcome of one amp delete doesn’t affect the other00:40
johnsomAnyway, bring a darn strong argument if you think order needs to change.00:40
johnsomYeah, optimizing is different than reordering things.00:41
xgerman_I'd like to reuse the same code paths — I don’t get why we need to decommission the amps differently on failover than on lb delete. We can admin down the ports if we want to stop traffic, but we don’t need to do this disentangle dance before amp delete00:43
johnsomOk, I am signing off for the evening.  Timeouts reviewed, it works, but agree we need more docs.00:43
xgerman_and I think it will be easier when the vm is gone00:44
xgerman_but signing off as well00:44
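
A minimal sketch of the parallel amphora teardown xgerman_ is asking for, using TaskFlow (the library Octavia's controller flows are built on). The task and flow names here are illustrative, not the real Octavia flow classes: each amphora gets its own linear delete subflow, and the subflows run under an unordered flow with a parallel engine so one amp's outcome doesn't block or affect the others.

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow
    from taskflow.patterns import unordered_flow

    class DeleteAmpVM(task.Task):
        def execute(self, amphora_id):
            print("deleting compute instance for", amphora_id)

    class DeleteAmpPorts(task.Task):
        def execute(self, amphora_id):
            print("deallocating ports for", amphora_id)

    def delete_amps_flow(amp_ids):
        # One independent subflow per amphora; unordered_flow lets the
        # parallel engine run the subflows concurrently.
        flow = unordered_flow.Flow('delete-all-amps')
        for amp_id in amp_ids:
            sub = linear_flow.Flow('delete-amp-%s' % amp_id)
            # VM first, then ports, mirroring the failover ordering:
            sub.add(DeleteAmpVM('vm-%s' % amp_id,
                                inject={'amphora_id': amp_id}))
            sub.add(DeleteAmpPorts('ports-%s' % amp_id,
                                   inject={'amphora_id': amp_id}))
            flow.add(sub)
        return flow

    engines.run(delete_amps_flow(['amp-1', 'amp-2']), engine='parallel')
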
*** atoth has quit IRC00:53
*** AlexeyAbashkin has joined #openstack-lbaas00:58
*** voelzmo has joined #openstack-lbaas00:59
*** KeithMnemonic has joined #openstack-lbaas01:01
*** AlexeyAbashkin has quit IRC01:03
*** KeithMnemonic has quit IRC01:06
*** harlowja has quit IRC01:19
*** longkb__ has joined #openstack-lbaas01:28
*** yamamoto has joined #openstack-lbaas01:30
*** voelzmo has quit IRC01:33
*** yamamoto has quit IRC01:35
*** voelzmo has joined #openstack-lbaas02:00
openstackgerritMerged openstack/octavia master: Allow members to be set as "backup"  https://review.openstack.org/55263202:14
*** annp has quit IRC02:24
*** annp has joined #openstack-lbaas02:25
*** yamamoto has joined #openstack-lbaas02:32
*** voelzmo has quit IRC02:34
*** yamamoto has quit IRC02:37
*** threestrands has quit IRC02:47
*** threestrands has joined #openstack-lbaas02:49
*** voelzmo has joined #openstack-lbaas02:58
*** voelzmo has quit IRC03:32
*** yamamoto has joined #openstack-lbaas03:33
eanderssonIn neutron-lbaas (and octavia) when you add for example a member03:36
eanderssonit starts at PENDING right? and when it's been created it moves into Active?03:37
johnsomRight, it is an async API03:38
eanderssonhttps://github.com/slinn0/terraform-provider-openstack/blob/master/openstack/resource_openstack_lb_member_v2.go#L26203:38
*** yamamoto has quit IRC03:38
eanderssonThe reason I am asking is because of that statement.03:39
eanderssonIt sounds like Terraform just assumes that as long as the initial request succeeded, we are fine?03:39
eanderssonWhich obviously isn't always true.03:39
johnsomMaybe it is old neutron-lbaas code that didn’t have status on objects, only via the status API.03:40
eanderssonIs that an Octavia only thing?03:40
eanderssonOr neutron-lbaas v1?03:40
johnsomYeah, the result code is a 201 rather than a synchronous 200 OK, so it was always an async call03:40
eanderssonprovisioning_status would be the interesting portion here right?03:41
eanderssonWe don't care if the member is up (the server may not have been created yet)03:41
johnsomOctavia always had the statuses, it was added to neutron-lbaas late.  For a while it only had the status API03:41
johnsomRight03:42
johnsomI am not at a computer, but pretty sure member provisioning status was added to neutron lbaas03:43
eanderssonLooks like I may have been looking at an old version of the terraform provider.03:44
johnsomAh, ok.  There had been work there, at least recently to get terraform updated03:45
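
For reference, the polling a client like the Terraform provider should do, given the async API described above: treat the initial 2xx as "accepted" and watch provisioning_status (not operating_status) until it leaves the PENDING states. A minimal sketch, with the endpoint and token as placeholders:

    import time
    import requests

    def wait_for_member(member_url, token, timeout=300, interval=5):
        # Poll the member until provisioning_status settles.
        deadline = time.time() + timeout
        while time.time() < deadline:
            resp = requests.get(member_url,
                                headers={'X-Auth-Token': token})
            member = resp.json()['member']
            status = member['provisioning_status']
            if status == 'ACTIVE':
                return member
            if status == 'ERROR':
                raise RuntimeError('member provisioning failed')
            time.sleep(interval)  # still PENDING_CREATE / PENDING_UPDATE
        raise RuntimeError('member still pending after %s seconds' % timeout)
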
rm_workeandersson: so when are you going to let JudeC out of his cage? :P04:01
rm_workI feel like you guys are hiding him away over there or something04:01
eanderssonhaha he is stuck in go land now :D04:01
rm_worklol04:01
rm_workafter i went to all that work to train him on Octavia... T_T04:01
eanderssontrying to get him back into python land again04:02
*** voelzmo has joined #openstack-lbaas04:02
rm_worki mean, Golang is cool04:02
rm_workjust that he left a ton of stuff over here pending :P04:02
rm_workjohnsom: not sure what to put for the timeouts <_< I'll noodle THAT.04:04
rm_worki mean, descriptively04:04
johnsomOk, if you ask nicely I might cook something up tomorrow04:05
rm_work:D04:05
rm_workjust a rebase for now04:08
openstackgerritAdam Harwell proposed openstack/octavia master: Expose timeout options  https://review.openstack.org/55545404:08
rm_workjohnsom: not sure what you want from me on https://review.openstack.org/#/c/555471/ T_T04:10
*** voelzmo has quit IRC04:12
*** pcaruana has joined #openstack-lbaas04:27
*** pcaruana has quit IRC04:30
*** yamamoto has joined #openstack-lbaas04:35
*** threestrands has quit IRC04:38
*** links has joined #openstack-lbaas04:39
*** links has quit IRC04:40
*** yamamoto has quit IRC04:40
*** links has joined #openstack-lbaas04:46
eanderssonjohnsom, actually operating_status is only in the status call for neutron-lbaas v2 :'(04:49
eanderssonHas anyone started working on the provider support for Octavia yet? :D04:50
*** yamamoto has joined #openstack-lbaas04:51
*** harlowja has joined #openstack-lbaas05:16
*** harlowja has quit IRC05:20
*** PagliaccisCloud has quit IRC05:40
*** PagliaccisCloud has joined #openstack-lbaas05:45
*** voelzmo has joined #openstack-lbaas06:31
*** velizarx has joined #openstack-lbaas06:31
*** voelzmo has quit IRC06:35
*** pcaruana has joined #openstack-lbaas07:01
*** imacdonn has quit IRC07:04
*** imacdonn has joined #openstack-lbaas07:04
*** voelzmo has joined #openstack-lbaas07:08
*** ivve has joined #openstack-lbaas07:11
*** velizarx has quit IRC07:17
*** ivve has quit IRC07:18
*** voelzmo has quit IRC07:22
*** rcernin has quit IRC07:23
*** sapd1 has joined #openstack-lbaas07:24
*** velizarx has joined #openstack-lbaas07:24
*** tesseract has joined #openstack-lbaas07:26
*** Alex_Staf has joined #openstack-lbaas07:28
longkb__o/ I am a newcomer to Octavia. I want to contribute code to the Octavia project, but I don't know how to restart the Octavia services07:31
longkb__Could anyone guide me on how to debug the Octavia project?07:32
longkb__There are a lot of services and I don't know which ones should be stopped: http://paste.openstack.org/show/718350/07:36
*** AlexeyAbashkin has joined #openstack-lbaas07:44
*** voelzmo has joined #openstack-lbaas07:59
*** bcafarel has joined #openstack-lbaas08:02
*** voelzmo has quit IRC08:31
*** voelzmo_ has joined #openstack-lbaas08:31
*** voelzmo_ has quit IRC08:49
*** voelzmo has joined #openstack-lbaas08:50
*** links has quit IRC08:53
*** links has joined #openstack-lbaas08:57
*** sapd1 has quit IRC09:03
*** salmankhan has joined #openstack-lbaas09:28
openstackgerritlidong proposed openstack/python-octaviaclient master: Update the outdated links  https://review.openstack.org/54811209:28
*** voelzmo has quit IRC09:33
*** voelzmo has joined #openstack-lbaas09:34
*** voelzmo has quit IRC09:38
*** HW_Peter has joined #openstack-lbaas10:06
*** yamamoto_ has joined #openstack-lbaas10:08
*** annp_ has joined #openstack-lbaas10:09
*** cgoncalv1s has joined #openstack-lbaas10:11
*** pcaruana|afk| has joined #openstack-lbaas10:12
*** gokhan__ has joined #openstack-lbaas10:12
*** dayou_ has joined #openstack-lbaas10:15
*** Alex_Staf has quit IRC10:15
*** yamamoto has quit IRC10:15
*** annp has quit IRC10:15
*** cgoncalves has quit IRC10:15
*** pcaruana has quit IRC10:15
*** sapd has quit IRC10:15
*** gokhan_ has quit IRC10:15
*** dayou has quit IRC10:15
*** HW-Peter has quit IRC10:15
*** sapd has joined #openstack-lbaas10:18
*** Alex_Staf has joined #openstack-lbaas10:22
*** cgoncalv1s is now known as cgoncalves10:23
*** salmankhan has quit IRC10:28
*** pcaruana|afk| has quit IRC10:28
*** pcaruana|afk| has joined #openstack-lbaas10:28
*** salmankhan has joined #openstack-lbaas10:32
*** rcernin has joined #openstack-lbaas10:44
*** annp_ has quit IRC10:46
*** longkb__ has quit IRC11:04
*** velizarx has quit IRC12:06
*** velizarx has joined #openstack-lbaas12:08
*** atoth has joined #openstack-lbaas12:13
*** velizarx has quit IRC12:35
*** velizarx has joined #openstack-lbaas12:37
cgoncalvesjohnsom: hey. do you have plans to get a new tag on stable/queens? we've backported a significant number of bugfixes recently12:52
*** salmankhan has quit IRC13:15
*** fnaval has joined #openstack-lbaas13:19
*** links has quit IRC13:35
*** mugsie_ is now known as mugsie13:35
*** mugsie has quit IRC13:36
*** mugsie has joined #openstack-lbaas13:36
*** salmankhan has joined #openstack-lbaas13:37
*** links has joined #openstack-lbaas13:49
*** rcernin has quit IRC14:32
*** dosaboy_ is now known as dosaboy14:47
xgerman_cgoncalves: usually we cut around the milestones14:48
cgoncalvesxgerman_: projects cannot tag asynchronously? can't find any stable release schedule14:57
xgerman_yeah, they can - just saying what we normally do. We can cut any time…14:58
cgoncalvesok, so I was asking if there's an ETA for next cut14:59
*** links has quit IRC15:00
dmelladoo/ hi folks15:01
dmelladocgoncalves: weren't you on pto? :P15:01
dmelladowanted to sync with you regarding the nightly image and the services for q- => neutron-15:02
johnsomeandersson yes, in fact I have been working on provider support in Octavia starting last week. Two patches up for review, more to come15:03
johnsomcgoncalves: yes, I am overdue. I was waiting for things to settle down on that branch, I will look at this today15:04
johnsomThere is no release schedule for stable15:04
johnsomlongkb_ On devstack our services start with o-, so to restart all of octavia the command is “sudo systemctl restart devstack@o-*”15:06
*** velizarx has quit IRC15:16
pratrikjohnsom: when running "poweroff" directly on an amphora, the instance gets deleted. Is this the expected behaviour?15:17
xgerman_pratrik: yes. If you'd like to stop the LB from processing load you can set the admin state to offline15:18
xgerman_if you power it down, the system thinks the VM is in error and replaces it with a new one15:18
xgerman_when using Octavia you should leave the individual vms alone and only control things through the Octavia API15:19
pratrikOh, hehe ok ok, I do understand.15:19
pratrikI was just surprised when my amp disappeared15:19
pratrikBut aside from that, no new instance was created either.. So I guess something is off anyway15:20
xgerman_mmh, that’s odd15:20
xgerman_you can look at the octavia-health-manager logs to see why that didn’t happen15:20
pratrikI powered off the master (if that has anything to do with it)15:21
xgerman_na, we should replace that, too15:21
pratrikah ok15:22
pratrikI think I got it15:22
pratrikI think my "internal patch" broke it :(15:22
pratrik(hides in shame)15:22
xgerman_no worries - we have all been there15:22
pratrik:)15:23
pratrikThanks for the explanation15:23
*** salmankhan has quit IRC15:25
*** salmankhan has joined #openstack-lbaas15:25
*** tesseract has quit IRC15:38
*** tesseract has joined #openstack-lbaas15:41
*** links has joined #openstack-lbaas15:43
*** Alex_Staf has quit IRC15:44
*** pcaruana|afk| has quit IRC15:45
*** ianychoi__ is now known as ianychoi15:50
*** salmankhan has quit IRC15:56
*** atoth has quit IRC16:00
*** salmankhan has joined #openstack-lbaas16:03
cgoncalvesdmellado: I was on PTO Monday and will be tomorrow :)16:05
cgoncalvesjohnsom: great, thanks!16:05
dmelladooh, so just coming back for one day then fleeing? too bad!16:05
*** links has quit IRC16:05
cgoncalvesdmellado: I was in yesterday (yesterday was tuesday in my calendar :P)16:05
dmelladojohnsom: cgoncalves btw, for now, and note that I highlight the now16:06
dmelladowe could revert the patch as it seems that there are some more issues under the hood with the neutron-* thing16:06
dmelladocgoncalves: well, but in spring two days become one almost xD16:06
johnsomdmellado It doesn't seem to be breaking our gates, do you know the timeline on getting neutron side fixed and back on track?16:07
cgoncalvesdmellado: meh :/ in case you're aware of bug trackers and/or reviews could you please link them in the story nmagnezi created?16:07
johnsomI know some folks use those scripts, but our gates don't16:08
cgoncalves... yet16:08
dmelladocgoncalves: well, basically what happens is that the main devstack tempest job16:08
dmelladouses q-*16:08
dmelladoso we're just overriding there16:08
dmelladoI'll add these findings on nmagnezi story, no worries16:09
dmelladojohnsom: I'm not sure about when upstream qa will be back there16:10
dmelladothere were still some issues on neutron-legacy and neutron-lib IIRC16:10
dmelladobut let's see :)16:11
johnsomYeah, I saw the revert over there16:11
johnsomOk, thanks16:11
cgoncalvesdmellado: I see. thanks!16:11
dmelladothanks to you, folks!16:11
cgoncalvesdmellado: and what about the nightly image?16:11
dmelladoxD just writing it, saved me the time16:11
dmelladoso, is that now being created?16:12
dmelladohow would I be able to use it as a part of my jobs? :P16:12
cgoncalves(that reminds me that it would probably make more sense to create a post job instead of periodic)16:12
cgoncalveswget http://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-centos-7.qcow2 :P16:12
dmelladooh, so it's there! :P16:13
dmelladothanks!16:13
dmelladojohnsom: will you be at Vancouver by any chance?16:13
johnsomYes, I will be16:14
dmelladoawesome, looking forward to seeing you there then16:14
johnsomWait, is RH finally inviting us to a party?  grin16:14
dmelladolol there is an RDO party if you're up to that xD16:14
* cgoncalves will not go, neither to the summit or RH party :(16:14
*** atoth has joined #openstack-lbaas16:15
dmelladocgoncalves: I don't pity you after seeing your pictures! :D16:15
cgoncalvesdmellado: blasphemy!16:16
dmelladoxD16:19
*** links has joined #openstack-lbaas16:20
*** velizarx has joined #openstack-lbaas16:37
*** AlexeyAbashkin has quit IRC16:38
*** shan5464_ is now known as shananigans17:05
*** links has quit IRC17:08
*** velizarx has quit IRC17:09
*** salmankhan has quit IRC17:13
*** atoth has quit IRC17:56
*** yamamoto_ has quit IRC17:58
*** yamamoto has joined #openstack-lbaas17:59
eanderssonjohnsom, awesome - let me know when they are ready for review18:03
eanderssonI got another question about good old legacy neutron-lbaas18:04
johnsomeandersson Comments are always welcome: https://review.openstack.org/558013 and https://review.openstack.org/55832018:04
johnsomSure, what is up?18:05
eanderssonIf a lb/pool/member api call fails... is it supposed to return a 500 error?18:05
johnsomNo18:05
eanderssonI would assume that it would be async, so 20X and then go into a failed state18:05
johnsomThat was easy18:05
eandersson:D18:05
johnsomYeah, it should never give you a 500. If it does, it's a bug of some sort18:06
eanderssonGuessing this is the code that handles that https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/brocade/driver_v2.py#L12618:06
eanderssonbut some 3rd party vendor drivers are now wrapping their exceptions18:06
*** yamamoto_ has joined #openstack-lbaas18:06
johnsomLet me blow the dust off the neutron-lbaas code and take a look here18:07
*** AlexeyAbashkin has joined #openstack-lbaas18:08
eanderssonThanks man appreciate it!18:08
eanderssonThe a10 driver for example does not catch exceptions in its own driver, so they bubble up https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/a10networks/driver_v2.py#L8618:08
johnsomIs it the brocade driver that gave you the 500? or A10?18:09
johnsomCan you pastebin the neutron service log for the 500?  There is likely a traceback18:09
*** yamamoto has quit IRC18:10
*** atoth has joined #openstack-lbaas18:10
rm_workpratrik: you still here?18:10
rm_workpratrik: i left you a lot of notes yesterday about what you are trying to do (which I have put a lot of work into already)18:11
*** AlexeyAbashkin has quit IRC18:12
johnsomYeah, I think under the neutron-lbaas model, the brocade looks right, where a10 is ...18:12
rm_worklolol yes that seems right18:12
rm_work"where a10 is..."18:12
rm_workjohnsom: today, https://review.openstack.org/#/c/557548/ ? :P18:14
johnsomThe octavia driver also uses the failed_completion exception.18:14
*** tesseract has quit IRC18:14
johnsomrm_work Yeah, was going to help with the docs first, but my morning went a bit sideways...  Had to help the OSA folks for a bit.18:14
johnsomOne of the things I found was they are also hitting the OVH/KVM issue18:15
rm_workwut18:15
rm_workyou mean, a kvm issue18:15
rm_workthey aren't using OVH right?18:15
johnsomYeah, tempest was booting cirros, they had enabled KVM accel, OVH hosts cause the same hardware error we see18:15
johnsomYeah, upstream gates18:15
rm_workhmm18:15
rm_workah.18:16
rm_worknot RAX OSA18:16
rm_workupstream OSA18:16
johnsomNo, upstream OSA18:16
*** voelzmo has joined #openstack-lbaas18:16
johnsomBut, yes, the reason I was looking over there was internal related...18:17
johnsomeandersson Does that answer your question about the 500?18:18
eanderssonI think so, yea18:18
johnsomThe fix would be to wrap those in the A10 driver.  Either the brocade way or with a decorator like the octavia driver.18:19
*** Swami has joined #openstack-lbaas18:21
eanderssonYep - that is what I was thinking as well18:22
eanderssonThanks johnsom !18:22
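
A sketch of the decorator approach johnsom suggests for vendor drivers: failed_completion is the real helper on the neutron-lbaas driver mixins, but the decorator and manager below are illustrative stand-ins, not code from the a10 or octavia drivers.

    import functools

    def fail_gracefully(func):
        # Catch anything the backend raises and flip the object to a failed
        # provisioning status instead of letting the exception bubble up
        # through the plugin as a 500.
        @functools.wraps(func)
        def wrapper(self, context, obj, *args, **kwargs):
            try:
                func(self, context, obj, *args, **kwargs)
            except Exception:
                self.failed_completion(context, obj)
        return wrapper

    class MemberManager(object):  # stand-in for a vendor driver manager
        def failed_completion(self, context, obj):
            # The real neutron-lbaas mixin marks the object ERROR in the DB.
            print('member -> ERROR')

        @fail_gracefully
        def create(self, context, member):
            raise RuntimeError('backend rejected the member')
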
cgoncalveseandersson: hi. I have a question for you :)18:26
cgoncalveseandersson: can you share how much was the performance increase with https://review.openstack.org/#/c/477698/ ?18:26
eanderssonIt depends heavily on the number of load balancers, pools, members etc you have18:27
cgoncalvesa LB show command can take ages depending on how many children objects it has18:27
eanderssonbut we saw a pretty significant increase18:27
cgoncalvesin one of my tests I saw improvements from ~12 seconds to ~9 seconds18:27
cgoncalvesbut still it fails poorly at scale18:28
cgoncalvesI mean neutron-lbaas, not your patch :)18:28
eanderssonYea - the problem is that it fetches everything ;'(18:29
eanderssonbut yea we saw about the same level of improvements18:29
cgoncalvesok, thanks for confirming18:30
eanderssonI noticed that upgrading to SQLAlchemy 1.2.x would drop another ~3s18:30
eanderssonbut never dared to roll that out to production18:30
cgoncalvesoh, good to know!18:32
cgoncalveshehe understandably :)18:32
* rm_work doesn't understand18:35
* rm_work runs master in prod18:36
cgoncalvesrm_work: master is yesterday for you. you run ahead of master18:41
johnsomYeah, a lot of the neutron-lbaas DB stuff is "less than optimal".18:42
johnsomAKA Use octavia instead18:42
rm_workyes18:44
rm_work(to both)18:44
cgoncalvesjust thinking that there are operators deploying neutron-lbaas in production *today*...18:45
rm_workyeah, that's unfortunate :(18:46
rm_workthough to be fair probably most of them don't have a choice because we haven't finished the provider stuff yet18:46
* cgoncalves knows of a couple in the process of going to prod with haproxy driver18:47
johnsomI would cast shame on any company assisting in that endeavor....18:50
* johnsom Notes we are a partner with a bunch of those and are probably doing the same18:50
* rm_work gets out his ShameCaster500018:51
*** velizarx has joined #openstack-lbaas18:53
cgoncalves'xD18:55
*** AlexeyAbashkin has joined #openstack-lbaas18:58
johnsomcgoncalves Have a minute to look at something for me?  Need an outsider (from the patch) opinion19:01
*** AlexeyAbashkin has quit IRC19:02
xgerman_johnsom so if we merged something to master before queens was cut, do I need to cherry-pick by hand or can I still use the tool?19:04
xgerman_to get that to Pike19:04
johnsomYou just cherrypick from master to queens, queens to pike.  Skipping a stable with a backport is not good.19:06
xgerman_but we never backported that to queens19:06
xgerman_it’s part of queens19:06
johnsomSo cherrypick from queens19:07
*** velizarx has quit IRC19:07
xgerman_but then it will have the same commit ID as in master - just confused?19:07
rm_workjust go to the patch in gerrit19:07
rm_workand click "cherry-pick"19:07
rm_workit will work19:08
rm_workdon't worry about what release it's in19:08
xgerman_I did that and johnsom gave me a -219:08
*** velizarx has joined #openstack-lbaas19:08
rm_work<_<19:08
rm_worklink19:08
*** velizarx has quit IRC19:08
xgerman_https://review.openstack.org/#/c/558892/19:08
johnsomThe back link showed you cherry picked from master so I assumed you skipped backporting it to queens.19:08
rm_workwas it IN queens or not19:08
xgerman_it is in Queens19:09
johnsomYeah, I don't know, that's why I blocked it19:09
rm_workyeah feb 819:09
rm_workthat was in queens19:09
rm_workso it should be fine19:09
xgerman_that’s what I thought19:09
johnsomI just looked at this: https://review.openstack.org/#/q/Ifcfdba1d28aa732d0f7b3b0a8bffb58cee0ab7cc19:09
rm_workyeah i mean ALL patches are in master at some point :P19:10
rm_workneed to check when they were merged19:10
xgerman_lol19:10
*** harlowja has joined #openstack-lbaas19:10
rm_workwho is on still?19:11
rm_workeandersson?19:11
johnsomWell, true...19:11
rm_worktrying to get kevin fox to comment on the timeout patch19:12
rm_worksince he was the issue reporter19:12
rm_workbut seems like i never catch him, lol19:12
johnsomYeah, I changed my vote with my reason. I think it is good.19:14
johnsomOk, bite to eat before the meeting.19:14
rm_workoh right19:15
rm_workmeeting19:15
rm_worki was gonna add an agenda item19:16
rm_workor two19:16
*** atoth has quit IRC19:16
*** jniesz has joined #openstack-lbaas19:19
xgerman_sure19:19
cgoncalvesjohnsom: sure19:27
cgoncalveshttps://review.openstack.org/#/c/541494/ is included in stable/queens19:29
johnsomcgoncalves On patch http://logs.openstack.org/54/555454/6/check/build-openstack-api-ref/f080dd7/html/v2/index.html#id34 do the four timeout descriptions at the bottom make sense to you? Are they clear enough?19:41
cgoncalvesjohnsom: scrolling to #id34 doesn't work. which one is it?19:44
johnsomcgoncalves Listener create, request box19:45
*** velizarx has joined #openstack-lbaas19:48
*** velizarx has quit IRC19:48
cgoncalvesjohnsom: 'frontend client' is user<->listener?19:49
johnsomYes19:49
cgoncalvesjohnsom: ok, then I understand the descriptions19:51
cgoncalvesand since I'm at newbie-level I think others will understand too19:51
johnsom+1 ok cool.  I changed my vote to +2 because as I looked at it to write something more it seemed pretty clear already for an api-reference19:52
johnsomThanks!19:52
cgoncalvesyou're welcome19:52
johnsomrm_work Let me know when you add your agenda items so I refresh. Sometimes I don't get the notifications.19:57
cgoncalvesare all those 4 timeouts expected to be supported by all/most providers? checking that it's not haproxy-only19:59
johnsomYes, from the providers I have used they all support those19:59
cgoncalvesexcellent19:59
johnsomWould like feedback of course19:59
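
For reference, here is what the four options look like in a listener create request, using the field names from the patch under review (values are in milliseconds; the endpoint and token are placeholders):

    import requests

    listener = {'listener': {
        'loadbalancer_id': 'LB_UUID',  # placeholder
        'protocol': 'HTTP',
        'protocol_port': 80,
        # frontend (client <-> listener) timeout:
        'timeout_client_data': 50000,
        # backend (amphora <-> member) timeouts:
        'timeout_member_connect': 5000,
        'timeout_member_data': 50000,
        'timeout_tcp_inspect': 0,
    }}
    resp = requests.post('http://OCTAVIA_API/v2.0/lbaas/listeners',
                         json=listener,
                         headers={'X-Auth-Token': 'TOKEN'})
    print(resp.status_code)
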
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed Apr  4 20:00:03 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstackThe meeting name has been set to 'octavia'20:00
johnsomHi folks20:00
cgoncalveshi20:00
johnsom#topic Announcements20:00
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:00
johnsomCheck your e-mail for a foundation survey about the PTGs20:00
nmagnezio/20:01
johnsomThe foundation sent out a survey about the PTGs that I know some of us are happy to give feedback on. So check your e-mail in case you missed your link to the survey20:01
johnsomAlso of note, The "S" release will be "Stein"20:01
xgerman_o/20:01
johnsomSolar won the vote, but had a legal conflict, so Stein it is...20:02
johnsomI have a number of other OpenStack activity type things later on the agenda, but other than those any announcements I missed?20:02
johnsomRocky MS-1 is in two weeks20:03
johnsom#topic Brief progress reports / bugs needing review20:03
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:03
johnsomOk, moving on. It's been a busy bit for me.  I did another spin of the tempest plugin, got more feedback I hope I can address today. After that I started on the provider driver work.  I have a base class and a noop driver up for review (marked WIP as I am still making some adjustments, but feedback is still welcome).20:05
johnsomLots-O-reviews too.  Reviewed a bunch of great dashboard stuff (L7 support!) (sorry it took this long to get some reviews on it) and some other recent patches with new features for Rocky.20:06
johnsomAnyone else have updates?20:06
johnsomSpring break vacations?20:06
rm_workI have a couple of things up still20:06
rm_worki think maybe timeouts will merge soon? but, Usage API could use some attention20:07
rm_worknot sure everyone is aware of the work i'm trying to do there20:07
cgoncalvesrm_work: you couldn't help yourself :D20:07
rm_workbut would like to get people's general approval on the concept and how it's organized20:07
johnsomThis is an open comment section for everyone to share what they are working on with the team20:07
rm_workcgoncalves: usually that's true ;)20:07
cgoncalvesjohnsom: thanks a lot (!) for the work on tempest and providers. really looking forward to having them merged20:07
eanderssonrm_work, sup?20:08
johnsomThere is still a bunch of work to do on tempest, so if we have volunteers...20:09
rm_workeandersson: lol20:09
eandersson:D20:09
johnsomHow is the grenade gate going BTW?20:09
rm_workeandersson: wanted you to review something, maybe it's fine now20:09
eanderssonI c - let me know =]20:10
cgoncalvesjohnsom: no updates from my side. I know you reviewed it. I've been busy with finalizing octavia integration in tripleo and containerizing neutron-lbaas20:10
cgoncalvesplus kolla20:10
johnsomOk, looking forward to that too!20:10
nmagneziwe are getting there :-)20:10
johnsomI would love to be able to declare an upgrade tag in Rocky20:11
johnsomMaybe even two....20:11
johnsom#topic Other OpenStack activities of note20:11
*** openstack changes topic to "Other OpenStack activities of note (Meeting topic: Octavia)"20:11
johnsomThese are just some things going on in the wider OpenStack I think you all might want to know about. Sorry if it's duplicate from the dev mailing list (let me know if this is of value or not).20:12
johnsomNo more stable Phases welcome Extended Maintenance20:12
johnsom#link https://review.openstack.org/54891620:12
cgoncalvesjohnsom: first 2 upgradability tags are achievable with grenade as-is, I think20:12
johnsom#link https://review.openstack.org/#/c/552733/20:12
johnsomcgoncalves I think there is still a bug or two there, thus the comments20:12
johnsomBut close!20:13
johnsomBased on packager feedback at the PTG and other forums, there is a change in how stable branches are going to be handled. It is still up for review if you want to comment.20:14
johnsomA quick note on recent IRC trolling/vandalism20:14
johnsom#link http://lists.openstack.org/pipermail/openstack-dev/2018-April/129024.html20:14
johnsomJust an FYI that work is being done to try to help with the IRC spammers20:14
johnsoma plan to stop syncing requirements into projects20:14
johnsom#link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html20:15
johnsomThe way global requirements are handled is changing. You have probably seen the lower constraint gates, but you will see fewer proposal bot GR updates.20:15
johnsomAnd finally, there are some upstream package changes coming we should be aware of in case they break things.20:16
johnsomPip and PBR are both doing major releases in the next few weeks.20:16
johnsomReplacing pbr's autodoc feature with sphinxcontrib-apidoc20:17
johnsom#link http://lists.openstack.org/pipermail/openstack-dev/2018-April/128986.html20:17
rm_workoh yeah, pip10 is a big one right?20:17
johnsomWe probably need to investigate that work to update our docs gates.  Looking for volunteers there too.20:17
johnsomYeah, the big pip 10 is out in like beta/rc or something. It bit the ansible project since they were pulling from git. It will hit the rest of us in a week or two I think. There have been some mailing list threads on that too.20:18
johnsomEven one of the pip 9 dot releases broke the Ryu project and neutron-agents already20:19
johnsomSo, FYIs. It might get bumpy here soon.20:19
cgoncalvescan we, if it makes sense at all, add non-voting/experimental pip10 jobs?20:19
johnsomProbably, yes. If someone has cycles to do that.  I think infra/qa is working on some experimental gates to see what is coming.20:20
johnsomSadly I don't think I can carve off that time right now. But would support others20:22
johnsom#topic Octavia deleted status vs. 40420:22
*** openstack changes topic to "Octavia deleted status vs. 404 (Meeting topic: Octavia)"20:22
cgoncalvesI'm not up to speed on pip10 and don't have much time this week, but would try next week if not too late20:22
johnsomOk, sure. I think the plan is to land it the week of the 14th. So, you can help with cleanup... grin  Maybe we will be fine. I just wanted to give a heads up of places to look in case things break.20:23
johnsomI know we don't import pip modules, so we are ahead of the game there!20:24
johnsomOk, 404 carry over from the last few meetings.20:24
johnsom#link https://review.openstack.org/#/c/545493/20:24
johnsomThis is still open20:24
johnsomargh, looks like it has a gate issue.20:25
johnsomAny other updates about the libraries or other comments on this?20:25
johnsomThough that failure, only 8 minutes in, must be a devstack failure. Our code wouldn't be running yet.20:26
johnsomOk, moving on then.20:27
johnsom#topic Open Discussion20:27
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:27
johnsomOther topics for today?20:27
nmagnezione small topic we started last week20:27
nmagnezi#link https://etherpad.openstack.org/p/storyboard-issues20:27
johnsomRight!20:27
nmagneziI plan to contact the storyboard team next week (we are in a holiday this week)20:28
johnsomThey just finished their meeting I think20:28
nmagneziso please, if you didn't add your stuff just yet, please do it.20:28
nmagneziI guess I can just ping ppl in the channel :)20:28
nmagnezisee how that goes..20:28
johnsomYeah, or next week when you are back in office plan to join the meeting20:29
johnsomI saw they either have or are in the process of moving a few more projects20:29
*** sapd has quit IRC20:30
johnsomOther topics today?20:30
rm_workoh i had a thng20:30
rm_work(a thing20:30
rm_work*a thing20:30
rm_work)20:30
*** sapd has joined #openstack-lbaas20:30
* rm_work dies20:31
johnsomI feel like I should...20:31
*** yamamoto_ has quit IRC20:31
rm_workumm yeah, so20:31
* johnsom revives rm_work with potion of health20:31
rm_workthanks ^_^20:31
*** sapd has quit IRC20:31
rm_workpratrik brought up the concept of AZs again20:31
rm_work(I wish they'd stick around or read scrollback so we could chat about it)20:32
*** sapd has joined #openstack-lbaas20:32
rm_workbut anyway, it seems more than just GD are doing multi-az stuff in the nova scheduler20:32
rm_workand it seems to be done in a compatible way20:32
xgerman_yep, we should add support20:33
rm_workso I'm wondering if I cleaned up and split out the work I did around multi-AZ / AZ-anti-affinity, if people think it would be possible to merge20:33
johnsomSo upstreaming it into nova?20:33
johnsomgrin20:33
rm_workthe reason i didn't do this in the past is that enabling it requires custom nova patches20:33
rm_worklol20:33
cgoncalvesGD?20:33
rm_workGoDaddy20:33
rm_workI would love to get nova to properly support this but i think that may be a losing battle that would stretch over multiple years20:33
xgerman_mmh, can’t we just make it so that you can supply a list of AZs when you create an LB and we put them there20:34
rm_workso what i'm talking about is octavia support, assuming people use approximately the same compatible custom nova scheduler stuff20:34
xgerman_mmh, how would we test that in the gate?20:34
rm_workthat is a good question20:34
rm_worki guess that might be the closest to a firm answer, actually20:34
johnsomYeah, and AZs would be octavia driver specific...20:35
rm_workuntil it's gate testable, it'd be hard to run it20:35
rm_workyes, that also20:35
rm_workit'd be specific to the amphora driver20:35
xgerman_well, as I said we could introduce an AZ parameter when you create an LB20:35
rm_work*amphora provider20:35
johnsomI'm open to the idea for sure, it make sense in some cases.20:35
xgerman_instead of having one per installation20:35
rm_workxgerman_: the sort of thing i do now is to have octavia transparently support multi-az, and handle anti-affinity transparently20:36
rm_workthe user shouldn't need to know anything about this20:36
johnsomI just worry about putting a bunch of code in to become a nova scheduler, that each implementation wants to do a different way (some have cross AZ subnets, some don't, etc.).20:36
rm_workright, the other complication is networking20:36
johnsomI mean our dream would be nova server groups that are AZ aware (I think)20:37
rm_workyes20:37
rm_workso, i honestly don't care that much20:37
xgerman_and the network spanning AZs20:37
xgerman_or we cop out and tell them to use kosmos20:37
rm_workbut we got some interest20:37
rm_workand i have the code ready20:37
rm_workit just needs to be split out from my monolith patch20:37
rm_workthis is a preliminary query about whether it's worth my time20:38
johnsomRight. So, I guess since you know what code you have: is it small enough, simple enough, and config-gated that we could add it, but have operators enable it at their own risk?20:38
xgerman_yep, like “unstable”20:39
johnsomI'm not a fan of adding it to the API for users to enter AZs, I think that gets a bit too driver specific / how does a user know the right answer?20:39
johnsomBut something via flavors or config...20:40
johnsomnetsplit?20:41
xgerman_approval20:42
johnsomIt got quiet20:42
xgerman_we need flavors for so many things - someone ought to write the code for it20:42
johnsomHa, trying.....20:42
rm_workyeah so20:42
johnsomWell, to be fair, there are parts of it in a patch already. I'm just adding the glue20:42
rm_workthat would essentially be it20:43
cgoncalvesrm_work: have you probed the nova team about such a feature? others may also want that, you never know20:43
rm_workjust an "experimental feature" that can be enabled by using config a certain way20:43
rm_workmay or may not be gate-able, other than making sure it doesn't break the normal paths20:43
* rm_work shrugs20:43
rm_workit would be neat to not have to carry this patch myself20:44
rm_workbut20:44
johnsomYeah, I think if it's not like 1000's of lines of code, lots-o-warnings, etc.20:44
rm_workalright, i'll look at splitting it out20:44
rm_worki think i can get that done effectively20:44
xgerman_+120:44
johnsomOk, but if we see a bunch of "add nova placement API to AZ scheduler patches" I'm going to be like...  Why isn't this just in nova....20:45
rm_worklol20:45
johnsomBTW, we can't really use that code from pastebin. We have no way to know that code was licensed for use in OpenStack, so I am assuming we are talking about your code and patch.....20:46
johnsomOk, other topics today?20:47
johnsomOk, thanks folks!  Have a good week (vacation)20:48
johnsom#endmeeting20:48
*** openstack changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews"20:48
openstackMeeting ended Wed Apr  4 20:48:54 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:48
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-04-04-20.00.html20:48
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-04-04-20.00.txt20:48
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-04-04-20.00.log.html20:49
nmagnezio/20:49
openstackgerritMichael Johnson proposed openstack/octavia master: Fix calls to "DELETED" items  https://review.openstack.org/54549320:50
rm_workyep i was referring to my own code20:51
rm_worki wouldn't need to touch that20:51
rm_workit's a kind of naive implementation of what i already did in a much more robust way20:51
*** rcernin has joined #openstack-lbaas20:59
*** yamamoto has joined #openstack-lbaas21:28
*** rcernin has quit IRC21:29
*** yamamoto has quit IRC21:33
*** voelzmo_ has joined #openstack-lbaas21:42
johnsomrm_work So, https://review.openstack.org/#/c/555471/21:45
johnsomThere was even a bug in that patch we later fixed....21:45
*** voelzmo has quit IRC21:46
*** voelzmo_ has quit IRC21:47
*** voelzmo has joined #openstack-lbaas21:47
*** voelzmo has quit IRC21:52
rm_workaahh yeah we need that too21:59
*** AlexeyAbashkin has joined #openstack-lbaas21:59
rm_workjohnsom: ok, added it to the chain22:02
johnsomBut xgerman_ also thought that was a bit much....22:03
*** voelzmo has joined #openstack-lbaas22:04
*** voelzmo has quit IRC22:04
*** AlexeyAbashkin has quit IRC22:04
rm_workhe thought the part2 thing is less bug more feature22:05
rm_worki disagree22:05
rm_worksince it still doesn't add any feature ;P22:05
rm_workjust brings us closer in line to what we claimed we wanted to support and never did22:05
rm_workand still don't22:05
rm_worklol22:05
*** voelzmo has joined #openstack-lbaas22:05
johnsomIt adds "support for more than one lb per amp"22:05
rm_workthat's what i'm talking about22:06
johnsomwell, yes22:06
rm_work"part2"22:06
rm_work"just brings us closer in line to what we claimed we wanted to support and never did, and still don't support"22:06
rm_workso it's hard to claim it really adds a feature22:06
rm_worksince... it doesn't actually add a feature22:06
*** voelzmo_ has joined #openstack-lbaas22:07
*** voelzmo has quit IRC22:07
xgerman_that is some double reverse logic22:08
xgerman_since we don’t support this feature, adding this feature doesn’t add a feature22:08
rm_workumm22:09
rm_workshow me how it added the feature22:09
rm_workcreate me an amp with two LBs :P22:09
rm_workif you can show me how a feature was added22:09
rm_workthen i will believe you22:09
rm_workIMO *your* logic is what seems like "double-reverse logic"22:09
rm_workwell, maybe single-reverse22:10
*** voelzmo_ has quit IRC22:11
rm_workhere's how I see it: the amp driver was always supposed to support that, so not supporting it properly at one place in the driver layer was a bug. But, Octavia itself doesn't support that feature, and still doesn't, so there is no change at the project level.22:12
rm_workSO, I see no way you could interpret this as violating backport policy22:13
rm_workeither way you look at it22:13
*** xgerman_ has quit IRC22:15
*** xgerman_ has joined #openstack-lbaas22:15
openstackgerritAdam Harwell proposed openstack/octavia master: Defend against neutron error response missing keys  https://review.openstack.org/55895122:20
rm_workjohnsom: any idea why I might have done this? lol https://review.openstack.org/#/c/435612/117/octavia/amphorae/backends/agent/api_server/keepalived.py22:21
rm_workand is that eligible for master?22:22
rm_workmy current goal is to get as much as possible out of my FLIP patch and into master22:24
rm_workI think that is good22:24
rm_worki mean, that section. going to propose it22:24
johnsomI'm not sure why you did that one. Seems like there might be other stuff with that change.22:25
rm_workyes22:26
rm_workit looks like it was necessary for making keepalived behave more sanely22:27
rm_worklike, the states go really wonky by default when you restart stuff22:27
rm_workthis makes it more predictable, which was required by my alerting stuff (so it didn't alert on invalid state changes)22:27
rm_workI think it technically is working upstream but only because we are lucky that keepalived is very resilient by nature :P22:28
rm_workhttps://review.openstack.org/#/c/435612/35..3622:28
rm_workit was introduced there22:28
rm_workalong with the rest of the alerting stuff22:28
*** yamamoto has joined #openstack-lbaas22:30
openstackgerritAdam Harwell proposed openstack/octavia master: Make keepalived initialization more predictable  https://review.openstack.org/55895422:32
*** yamamoto has quit IRC22:36
openstackgerritAdam Harwell proposed openstack/octavia master: Minor refactor of health_sender.py  https://review.openstack.org/55895522:39
*** voelzmo has joined #openstack-lbaas22:41
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/43561222:42
openstackgerritAdam Harwell proposed openstack/octavia master: Add debug timing logging to amphora health update  https://review.openstack.org/55895622:45
*** rcernin has joined #openstack-lbaas22:45
*** fnaval has quit IRC22:58
*** voelzmo has quit IRC23:14
*** jniesz has quit IRC23:18
*** yamamoto has joined #openstack-lbaas23:32
*** fnaval has joined #openstack-lbaas23:33
*** yamamoto has quit IRC23:38
rm_workomg i just realized, I think the multi-az code I have evolved to the point where like ... a one or two line change and it won't require anything23:44
rm_workon the nova side <_<23:44
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: Experimental multi-az support  https://review.openstack.org/55896223:44
rm_workpratrik: ^^23:45
rm_workpratrik: check out that patch and let me know how that works for you23:45
rm_workpratrik: should just be ... apply that, and set the config just like you did (comma-separated list of AZs)23:45
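
For anyone wanting to try it, the shape of the configuration rm_work describes would be roughly this; the option name, group, and selection logic below are illustrative, not lifted from the review:

    from oslo_config import cfg

    amp_az_opts = [
        cfg.ListOpt('availability_zones',
                    default=[],
                    help='Experimental: comma-separated list of nova AZs to '
                         'spread amphorae across; an empty list keeps the '
                         'current single-AZ behavior.'),
    ]
    cfg.CONF.register_opts(amp_az_opts, group='nova')

    def pick_az(sibling_azs):
        # Naive AZ anti-affinity: prefer an AZ no sibling amphora of this
        # LB is already booted in, else fall back to nova's default.
        for az in cfg.CONF.nova.availability_zones:
            if az not in sibling_azs:
                return az
        return None
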
johnsomUgh, looks like a new nodepool provider, limestone, but they aren't working well.  Gate timed out, didn't even start devstack23:51
johnsomSo, the stable release is stuck at least another recheck cycle23:51
rm_workah is it them23:52
rm_worki wondered wtf is wrong with the gates the past day or two23:52
rm_worka ton of rechecks for timeouts but didn't look too deep23:52
johnsomAt least here it was... http://logs.openstack.org/92/558892/1/gate/octavia-v1-dsvm-py3x-scenario-multinode/ca6c90a/23:54
