Tuesday, 2018-04-03

*** annp has quit IRC00:50
*** annp has joined #openstack-lbaas00:51
*** yamamoto has joined #openstack-lbaas00:53
*** yamamoto has quit IRC00:59
<openstackgerrit> Michael Johnson proposed openstack/octavia master: Creates provider driver base class and exceptions  https://review.openstack.org/558013  00:59
<openstackgerrit> Michael Johnson proposed openstack/octavia master: Create noop provider driver and data model  https://review.openstack.org/558320  01:02
*** annp has quit IRC01:12
*** annp has joined #openstack-lbaas01:13
*** sapd has joined #openstack-lbaas01:42
*** dayou has quit IRC01:44
*** harlowja has quit IRC01:50
<rm_work> johnsom: so someday soon you can get to reviews? :P  01:54
<rm_work> <3  01:54
<openstackgerrit> Jacky Hu proposed openstack/octavia-dashboard master: Use pool name as hint for selecting pool id  https://review.openstack.org/553124  01:54
*** yamamoto has joined #openstack-lbaas01:55
<openstackgerrit> Jacky Hu proposed openstack/octavia-dashboard master: Align model with v2 api  https://review.openstack.org/554198  01:56
<openstackgerrit> Jacky Hu proposed openstack/octavia-dashboard master: Being able to change insert headers of listener  https://review.openstack.org/549999  01:56
*** dayou has joined #openstack-lbaas01:58
*** yamamoto has quit IRC02:00
*** AlexeyAbashkin has joined #openstack-lbaas02:02
<johnsom> rm_work: ha, it doesn’t seem it was that long ago I did a bunch of reviews... I will try to take a look again tomorrow.  02:06
*** AlexeyAbashkin has quit IRC02:06
<johnsom> Hammering out the noop driver has me a bit fried tonight.  02:06
<johnsom> However there are other cores coming online tonight, so... grin  02:07
*** harlowja has joined #openstack-lbaas02:31
*** longkb__ has joined #openstack-lbaas02:41
*** wolsen has joined #openstack-lbaas02:53
*** yamamoto has joined #openstack-lbaas02:56
*** yamamoto has quit IRC03:02
*** AlexeyAbashkin has joined #openstack-lbaas03:04
*** AlexeyAbashkin has quit IRC03:09
*** links has joined #openstack-lbaas03:32
*** yamamoto has joined #openstack-lbaas03:33
<rm_work> johnsom: yeah true. ;) xgerman_ did review a few of mine but i am not sure i agree with any of his -1s lol  03:50
<xgerman_> Ha, you can argue and maybe other cores give you a +2  03:54
*** harlowja has quit IRC03:55
*** annp has quit IRC03:57
*** annp has joined #openstack-lbaas03:58
<openstackgerrit> wangqi proposed openstack/octavia master: Replace db function get_session by get_writer_session  https://review.openstack.org/558348  04:03
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Add usage admin resource  https://review.openstack.org/557548  04:25
*** yamamoto has quit IRC04:45
*** yamamoto has joined #openstack-lbaas04:46
*** ivve has quit IRC05:23
*** longkb_ has joined #openstack-lbaas05:38
*** longkb__ has quit IRC05:40
*** Alex_Staf has joined #openstack-lbaas05:54
*** longkb__ has joined #openstack-lbaas06:02
*** longkb__ has quit IRC06:06
*** longkb__ has joined #openstack-lbaas06:11
*** pcaruana has joined #openstack-lbaas06:53
*** voelzmo has joined #openstack-lbaas06:57
*** ivve has joined #openstack-lbaas06:57
*** tesseract has joined #openstack-lbaas07:00
*** velizarx has joined #openstack-lbaas07:02
*** imacdonn has quit IRC07:04
*** imacdonn has joined #openstack-lbaas07:04
*** links has quit IRC07:05
*** links has joined #openstack-lbaas07:05
*** ivve has quit IRC07:15
*** voelzmo has quit IRC07:16
*** velizarx has quit IRC07:17
*** ivve has joined #openstack-lbaas07:28
*** velizarx has joined #openstack-lbaas07:31
*** voelzmo has joined #openstack-lbaas07:38
*** ivve has quit IRC07:38
*** yamamoto_ has joined #openstack-lbaas07:42
*** yamamoto has quit IRC07:45
*** voelzmo has quit IRC07:50
*** AlexeyAbashkin has joined #openstack-lbaas07:53
*** links has quit IRC08:04
*** dmellado has quit IRC08:08
*** dmellado has joined #openstack-lbaas08:08
*** links has joined #openstack-lbaas08:14
*** voelzmo has joined #openstack-lbaas08:20
*** ivve has joined #openstack-lbaas08:23
*** voelzmo has quit IRC08:25
*** voelzmo has joined #openstack-lbaas08:31
*** gokhan_ has quit IRC09:00
*** annp has quit IRC09:04
*** annp has joined #openstack-lbaas09:05
*** gokhan_ has joined #openstack-lbaas09:05
*** salmankhan has joined #openstack-lbaas09:14
*** voelzmo has quit IRC09:16
*** voelzmo has joined #openstack-lbaas09:16
*** dmellado has quit IRC09:23
*** dmellado has joined #openstack-lbaas09:23
*** voelzmo has quit IRC09:32
*** voelzmo has joined #openstack-lbaas09:32
*** voelzmo has quit IRC09:33
*** voelzmo has joined #openstack-lbaas09:33
*** voelzmo has quit IRC09:33
*** voelzmo has joined #openstack-lbaas09:36
*** voelzmo has quit IRC09:37
*** voelzmo has joined #openstack-lbaas09:37
*** voelzmo has quit IRC09:37
*** salmankhan has quit IRC09:42
*** salmankhan has joined #openstack-lbaas09:52
*** voelzmo has joined #openstack-lbaas10:09
*** voelzmo has quit IRC10:14
*** voelzmo has joined #openstack-lbaas10:26
*** voelzmo has quit IRC10:55
*** voelzmo has joined #openstack-lbaas11:00
*** ianychoi_ has joined #openstack-lbaas11:11
*** ianychoi has quit IRC11:14
*** rcernin has quit IRC11:20
*** salmankhan1 has joined #openstack-lbaas11:27
*** salmankhan has quit IRC11:29
*** salmankhan1 is now known as salmankhan11:29
*** salmankhan has quit IRC11:31
*** velizarx has quit IRC11:31
*** salmankhan has joined #openstack-lbaas11:34
*** voelzmo has quit IRC11:36
*** salmankhan1 has joined #openstack-lbaas11:36
*** voelzmo has joined #openstack-lbaas11:37
*** voelzmo has quit IRC11:37
*** voelzmo has joined #openstack-lbaas11:37
*** salmankhan has quit IRC11:38
*** salmankhan1 is now known as salmankhan11:38
*** voelzmo has quit IRC11:38
*** voelzmo has joined #openstack-lbaas11:39
*** voelzmo has quit IRC11:39
*** yamamoto_ has quit IRC11:39
*** voelzmo has joined #openstack-lbaas11:39
*** voelzmo has quit IRC11:39
*** velizarx has joined #openstack-lbaas11:40
*** pratrik has joined #openstack-lbaas11:46
*** velizarx has quit IRC11:56
*** voelzmo has joined #openstack-lbaas12:12
*** voelzmo has quit IRC12:17
*** velizarx has joined #openstack-lbaas12:18
<pratrik> Hi guys, I just created a small patch that makes it possible to specify multiple AZs for Nova. The reasoning behind this is simple: if you have multiple AZs in your OpenStack cluster, you may want to put the amphorae in separate AZs (because of underlying storage, for example). The patch is pretty straightforward and shouldn't really break anything. I've tried it out on our cluster and it works as intended. It seemed like a "bi  12:18
<pratrik> was hoping that someone, if the patch makes sense, would review it and maybe apply it the "official way".  12:19
<pratrik> http://paste.openstack.org/show/718281/  12:19
*** longkb__ has quit IRC12:21
*** voelzmo has joined #openstack-lbaas12:25
*** voelzmo has quit IRC12:29
*** voelzmo has joined #openstack-lbaas12:29
*** salmankhan has quit IRC12:32
*** salmankhan has joined #openstack-lbaas12:34
*** yamamoto has joined #openstack-lbaas12:40
*** yamamoto has quit IRC12:45
*** yamamoto has joined #openstack-lbaas13:32
*** yamamoto has quit IRC13:36
*** fnaval has quit IRC13:37
*** fnaval has joined #openstack-lbaas13:37
*** fnaval has quit IRC13:42
*** fnaval has joined #openstack-lbaas13:50
*** yamamoto has joined #openstack-lbaas14:09
*** Alex_Staf has quit IRC14:13
*** voelzmo has quit IRC14:19
*** voelzmo has joined #openstack-lbaas14:19
*** voelzmo has quit IRC14:20
*** voelzmo has joined #openstack-lbaas14:20
*** voelzmo has quit IRC14:20
*** voelzmo has joined #openstack-lbaas14:21
*** voelzmo has quit IRC14:21
*** voelzmo has joined #openstack-lbaas14:21
*** voelzmo has quit IRC14:22
*** voelzmo has joined #openstack-lbaas14:22
*** voelzmo has quit IRC14:23
*** ivve has quit IRC14:37
*** links has quit IRC14:37
*** ptoohill has quit IRC14:47
*** ptoohill has joined #openstack-lbaas14:47
*** velizarx has quit IRC14:53
<cgoncalves> johnsom: hi. am I missing something, or are https://github.com/openstack/octavia/blob/master/octavia/cmd/agent.py#L75-L76 not being used anywhere?  15:03
<johnsom> They should be, let me have a quick look  15:04
<johnsom> I mean, I'm 99% sure they are being used, as I have looked at that endpoint over TLS recently  15:05
<johnsom> cgoncalves http://docs.gunicorn.org/en/latest/settings.html#certfile  15:08
<johnsom> https://github.com/openstack/octavia/blob/master/octavia/cmd/agent.py#L35  15:08
<johnsom> The "options" are passed through to the gunicorn base app  15:09
<cgoncalves> johnsom: ah, gunicorn consumes those options. sorry...  15:12
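For reference, this is the standard gunicorn custom-application pattern johnsom is pointing at: any entry in the options dict whose key matches a gunicorn setting ('certfile', 'keyfile', 'ca_certs', ...) is applied when the application loads its config, which is how the agent ends up serving over TLS. A minimal sketch of the pattern; the class name and option values are illustrative, not the exact agent code:

import gunicorn.app.base


class AgentApplication(gunicorn.app.base.BaseApplication):
    """Illustrative wrapper; the Octavia agent does something similar."""

    def __init__(self, wsgi_app, options=None):
        self.options = options or {}
        self.wsgi_app = wsgi_app
        super().__init__()

    def load_config(self):
        # Any option whose key matches a gunicorn setting (e.g. 'certfile',
        # 'keyfile', 'ca_certs', 'cert_reqs') is handed to gunicorn here.
        for key, value in self.options.items():
            if key in self.cfg.settings and value is not None:
                self.cfg.set(key.lower(), value)

    def load(self):
        return self.wsgi_app


# options = {'bind': '[::]:9443',
#            'certfile': '/etc/octavia/certs/server.pem',    # placeholder paths
#            'keyfile': '/etc/octavia/certs/server_key.pem'}
# AgentApplication(my_wsgi_app, options).run()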
<pratrik> Does anyone have an opinion about my patch regarding availability zones? Does it make sense?  15:16
<johnsom> pratrik: Hi, I haven't had time to look at it yet. We have some nova AZ support already, and I'm sure rm_work would be interested in the patch. I will take a look later today.  15:17
<pratrik> johnsom: ok no worries, I was just curious if I was missing something. But to me it makes sense to put the amphorae in different AZs.  15:18
<pratrik> Not really sure how / if you could solve that with affinity, but I think it's cleaner to use AZs.  15:18
<johnsom> pratrik: Yes. It is super sad nova does not do a good job with that and the AZ support.  15:18
<johnsom> pratrik: Do you have the same networks available across AZs? So that Active/Standby and Active/Active will work when split across AZs?  15:19
<pratrik> nova does what you tell it, I suppose, and we just either 1) not tell nova at all (which will make nova choose), or 2) specify one AZ that both amphorae will be put in. This small patch gives you the possibility to put one amphora in each defined AZ.  15:20
<pratrik> (and yes, the same network must exist in both AZs for it to work)  15:20
<johnsom> Ok, cool. Yeah, we have been shying away from becoming the new nova scheduler, but that may just have to happen if AZs don't improve in nova  15:21
<pratrik> But maybe that should be mentioned as a comment on the parameter (that the network part needs to be configured correctly)  15:22
<pratrik> I don't think you're the "new nova scheduler" just because you let the user define which AZs to use for their amphorae.  15:22
<johnsom> I was just commenting on our previous conversations about the state of AZs  15:23
<pratrik> You need to give some input to nova on how to place the VMs, otherwise it will just use the default scheduler, which is fine I guess in most cases - but here you actually want to give the user a choice.  15:23
<pratrik> At least that makes sense to me.  15:23
<pratrik> johnsom: ok ok  15:24
<pratrik> Anyway, the patch is simple but does the job for us :)  15:25
<johnsom> pratrik: Why not post a patch to gerrit? We can't really use that code due to licensing, etc.  15:25
<pratrik> johnsom: hehe well, because laziness, but yes, maybe I should do it the official way :)  15:26
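For context, the change being discussed boils down to handing nova an availability-zone hint when each amphora VM is booted. A minimal sketch with python-novaclient; the AZ list, image/flavor/network IDs, and credentials are placeholders, not actual Octavia configuration:

from keystoneauth1.identity import v3
from keystoneauth1 import session as ks_session
from novaclient import client as nova_client

# Placeholder credentials for illustration only.
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='octavia', password='secret',
                   project_name='service',
                   user_domain_name='Default',
                   project_domain_name='Default')
nova = nova_client.Client('2.1', session=ks_session.Session(auth=auth))

amp_azs = ['az-1', 'az-2']  # e.g. a config option listing the allowed AZs

# Boot one amphora per configured AZ so an active/standby pair never shares
# a zone (the same management/VIP network is assumed to exist in every AZ).
for index, az in enumerate(amp_azs):
    nova.servers.create(name='amphora-%d' % index,
                        image='amphora-image-id',
                        flavor='amphora-flavor-id',
                        availability_zone=az,
                        nics=[{'net-id': 'lb-mgmt-net-id'}])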
*** tesseract has quit IRC15:32
<openstackgerrit> Merged openstack/python-octaviaclient master: Updated from global requirements  https://review.openstack.org/555591  15:34
*** atoth has quit IRC15:41
<johnsom> FYI: The Vancouver summit early bird pricing ends April 4 at 11:59pm Pacific Time / April 5 6:59 UTC  15:50
*** atoth has joined #openstack-lbaas15:56
*** samccann has joined #openstack-lbaas15:57
<johnsom> For those of you with PTG passes for the summit, you MUST register with your unique code before May 11, 2018  15:57
<xgerman_> same for speaker  15:58
*** links has joined #openstack-lbaas16:26
*** AlexeyAbashkin has quit IRC16:31
*** bcafarel has quit IRC16:50
<openstackgerrit> Merged openstack/octavia master: Add deadlock retry wrapper for inital quota create  https://review.openstack.org/557094  16:56
<openstackgerrit> Merged openstack/octavia master: Update API-REF for x-forwarded-port is string  https://review.openstack.org/557781  16:56
*** links has quit IRC17:13
*** annp has quit IRC17:17
*** annp has joined #openstack-lbaas17:18
*** salmankhan has quit IRC17:20
<rm_work> pratrik: yes, I already have a patch up that handles multi-AZ (and az-antiaffinity)  17:25
<rm_work> but unfortunately it will never merge as-is  17:25
<rm_work> I'd been meaning to look at splitting that out  17:26
<rm_work> I'll take a look at what you are doing but it is probably similar  17:26
<rm_work> pratrik: it's this huge patch which is also for a custom network driver (because in our case, different AZs are in different L2s so we need to do network routing at L3)  17:27
<rm_work> https://review.openstack.org/#/c/435612/  17:27
<rm_work> this part looks particularly familiar :P https://review.openstack.org/#/c/435612/117/octavia/controller/worker/task_utils.py  17:29
<rm_work> and this you might find useful: https://review.openstack.org/#/c/435612/117/octavia/controller/worker/tasks/amphora_driver_tasks.py  17:30
<rm_work> pratrik: ^^  17:30
<rm_work> makes sure that your amps are split between zones always  17:30
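The "split between zones" behaviour described above amounts to picking, for each new or replacement amphora, an AZ its peers are not already using. A hypothetical helper sketching that idea; this is not code from the linked review:

def pick_amphora_az(existing_amp_azs, configured_azs):
    # Prefer a configured AZ that no existing amphora of this LB uses.
    unused = [az for az in configured_azs if az not in existing_amp_azs]
    if unused:
        return unused[0]
    # All AZs already occupied (more amps than zones): pick the least used.
    return min(configured_azs, key=existing_amp_azs.count)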
*** AlexeyAbashkin has joined #openstack-lbaas17:37
*** AlexeyAbashkin has quit IRC17:42
<rm_work> pratrik: and of course: https://review.openstack.org/#/c/435612/117/octavia/controller/worker/tasks/database_tasks.py  17:44
<rm_work> actually I really need to talk to other folks, maybe in the next meeting... about whether or not people think code like this can merge (because it requires a custom nova scheduler at the moment)  17:45
<rm_work> and also whether i want to be juggling yet more feature patches when I'm still waiting for reviews on the ones I have :(  17:48
<rm_work> xgerman_: responded to all points on https://review.openstack.org/#/c/549263/4 and IMO there is no issue here  17:56
*** sshank has joined #openstack-lbaas17:56
<rm_work> xgerman_: also https://review.openstack.org/#/c/555454/5  18:01
*** AlexeyAbashkin has joined #openstack-lbaas18:02
*** AlexeyAbashkin has quit IRC18:06
<rm_work> johnsom: you here today?  18:27
<johnsom> Yes, polishing up some unit tests, then going to weigh in on our patches  18:27
<rm_work> woo  18:27
<openstackgerrit> Michael Johnson proposed openstack/octavia master: Create noop provider driver and data model  https://review.openstack.org/558320  18:37
*** isssp has joined #openstack-lbaas18:38
*** mugsie_ has joined #openstack-lbaas18:41
*** pcaruana has quit IRC18:44
*** shan5464_ has joined #openstack-lbaas18:45
*** numans_ has joined #openstack-lbaas18:46
*** numans has quit IRC18:47
*** mordred has quit IRC18:47
*** shananigans has quit IRC18:47
*** ispp has quit IRC18:47
*** mugsie has quit IRC18:47
*** samccann has quit IRC18:48
*** mordred has joined #openstack-lbaas18:55
*** AlexeyAbashkin has joined #openstack-lbaas18:55
*** dmellado has quit IRC18:56
*** voelzmo has joined #openstack-lbaas18:58
*** dmellado has joined #openstack-lbaas18:58
*** AlexeyAbashkin has quit IRC19:00
*** sshank has quit IRC19:00
*** dmellado has quit IRC19:12
*** voelzmo has quit IRC19:17
*** voelzmo has joined #openstack-lbaas19:19
*** dmellado has joined #openstack-lbaas19:21
*** harlowja has joined #openstack-lbaas19:23
*** ivve has joined #openstack-lbaas19:28
*** yamamoto has quit IRC19:30
*** dmellado has quit IRC19:31
*** yamamoto has joined #openstack-lbaas19:36
*** ivve has quit IRC19:37
*** ianychoi__ has joined #openstack-lbaas19:38
*** pcaruana has joined #openstack-lbaas19:38
*** ianychoi_ has quit IRC19:41
*** yamamoto has quit IRC19:42
<openstackgerrit> German Eichberger proposed openstack/octavia master: Insert WaitForPortDetach in delete amphora flow  https://review.openstack.org/558611  19:47
*** pcaruana has quit IRC19:50
*** yamamoto has joined #openstack-lbaas19:52
*** dmellado has joined #openstack-lbaas19:55
*** yamamoto has quit IRC19:57
*** yamamoto has joined #openstack-lbaas19:59
*** yamamoto has quit IRC19:59
*** yamamoto has joined #openstack-lbaas20:01
<xgerman_> ^^ rm_work - this might be related to your troubles  20:04
*** velizarx has joined #openstack-lbaas20:05
*** yamamoto has quit IRC20:06
<rm_work> hmmmmmmm  20:10
*** velizarx has quit IRC20:10
<rm_work> i thought we did a wait at a deeper level too  20:10
<rm_work> but maybe this could be the issue  20:10
<rm_work> though my "try harder" patch really targets a different issue  20:10
<rm_work> which is stuff that has gotten into a broken state due to a failure partway through a flow  20:11
<xgerman_> yes, you would be able to delete even with a vip port that has gone crazy  20:11
<xgerman_> I am not opposed to the try harder, but let’s let it simmer for a bit… or at least until johnsom has weighed in  20:12
*** yamamoto has joined #openstack-lbaas20:16
*** yamamoto has quit IRC20:21
*** yamamoto has joined #openstack-lbaas20:23
*** yamamoto has quit IRC20:23
<rm_work> xgerman_: yes i think this patch is correct, the wait should be there IMO  20:30
<xgerman_> +1  20:30
*** sapd has quit IRC20:30
<rm_work> we probably want to backport this also  20:30
*** sapd has joined #openstack-lbaas20:30
*** sapd has quit IRC20:30
*** sapd has joined #openstack-lbaas20:31
<xgerman_> +1000  20:34
<johnsom> xgerman_ But you know that your issue is not this, it's that nova never actually does the delete.  This will just add more delay.... Plus, there is nothing later in that flow that cares if the port is there or not, so why wait there?  20:42
<xgerman_> because we do the same in the failover flow  20:43
<johnsom> But failover flow cares, this delete flow doesn't.  Failover needs to move that port it is waiting on, this delete doesn't  20:43
<xgerman_> It makes sense to me to wait for all of them to be detached cleanly  20:43
<xgerman_> but we need to have the ports deleted so we can delete the sec grp  20:44
<johnsom> The last two steps in that flow are just database "mark things deleted".  There is no security group delete after that....  20:45
*** velizarx has joined #openstack-lbaas20:45
<xgerman_> our flows are a mess - why would we not use the delete amphora flow in an unordered flow, instead of looping over amps in DeleteAmphoraeOnLoadBalancer  20:50
<xgerman_> let me fix that right away  20:51
<johnsom> I'm just saying, that is way too late in the flow. I think that wait needs to go after a detach  20:52
<johnsom> Not that we have really seen this issue anyway  20:52
*** velizarx has quit IRC20:53
*** velizarx has joined #openstack-lbaas20:55
<xgerman_> well, then what ports is Adam trying to delete?  20:55
<johnsom> Be super careful there too. The flows are not a mess and there is probably a good reason for this ordering.  The flow you selected is just for admin/spares cleanup  20:55
<xgerman_> sure, but we should reuse flows where we can  20:55
<xgerman_> ok, will run an errand and then rework that  20:57
<johnsom> xgerman_ Right, and we do  20:57
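For reference, a WaitForPortDetach-style task essentially polls neutron until the port is no longer bound to a device. A minimal sketch of that idea, assuming a python-neutronclient Client; the function name, timeout, and interval are illustrative, not the values in the proposed patch:

import time


def wait_for_port_detach(neutron, port_id, timeout=300, interval=5):
    # Poll the port until neutron reports it is no longer bound to a device.
    deadline = time.time() + timeout
    while True:
        port = neutron.show_port(port_id)['port']
        if not port.get('device_id'):
            return  # detached (or was never attached)
        if time.time() >= deadline:
            raise RuntimeError('Port %s still attached to %s after %ss'
                               % (port_id, port['device_id'], timeout))
        time.sleep(interval)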
*** sshank has joined #openstack-lbaas21:00
*** sshank has quit IRC21:00
<johnsom> I don't see any correlation between Adam's patch and needing to wait for a port to detach from an amp, frankly.  21:00
*** velizarx has quit IRC21:07
*** yamamoto has joined #openstack-lbaas21:24
*** yamamoto has quit IRC21:29
*** rcernin has joined #openstack-lbaas21:53
*** AlexeyAbashkin has joined #openstack-lbaas22:00
*** AlexeyAbashkin has quit IRC22:05
<xgerman_> johnsom: well, he is deleting ports which are left over; the only ports we shunt into that sec grp are the vip and frontend ports  22:16
<rm_work> hmm yeah i found the bit i was thinking of  22:16
<rm_work> and it actually doesn't require the delay  22:16
<johnsom> Yeah, but no detach  22:16
<rm_work> it's unrelated  22:17
<rm_work> because on a LB delete, the amp deletion actually happens well after the ports are cleaned up  22:17
<rm_work> the ports detaching is unrelated to the compute action  22:17
<johnsom> ^^^ right  22:17
<rm_work> had to dig through but that explains the nagging feeling i had that we DID handle this somehow  22:17
<xgerman_> so what ports are you then deleting?  22:18
<rm_work> when we do a failover and it breaks  22:18
<rm_work> it leaves stuff around  22:18
<rm_work> because it has started doing the amp delete but it doesn't finish completely, and rolls back  22:19
<rm_work> but then the amp is gone and we've got a port sitting around  22:19
<rm_work> and then the amp gets cleaned up (usually due to the OTHER bug I have a patch for, around deleted amp expiry)  22:19
<xgerman_> ok, so this is a side effect of the fail over? In my case I often see the vip port still being in the sec grp  22:19
<rm_work> yeah that would be a totally different fail i think  22:20
<rm_work> ah i think also it happens during a LB delete when the LB never went ACTIVE  22:20
<rm_work> so like it got partway through provisioning, usually due to the compute node never coming up right  22:20
<xgerman_> yeah, the LB delete looks very different from the amphora delete IMHO  22:20
<rm_work> and so it doesn't have all the entries hooked up correctly  22:20
<rm_work> yeah, well, it can be  22:21
<rm_work> since in an amp delete we are maintaining network state  22:21
<rm_work> and just removing a compute  22:21
<rm_work> and in LB delete we are cleaning up network, then compute  22:21
<xgerman_> yes and no - we delete the vm and then we move to delete the vrrp ports  22:23
<xgerman_> so IF nova is slow we would hit some issue  22:23
<xgerman_> ?  22:23
<johnsom> Let's just agree, no more upstream patches based off that bad lab where nova is broken and lies....  22:24
<xgerman_> I have trouble finding amps in pending delete but that’s me  22:24
<xgerman_> also nova being slow is a real possibility we should account for  22:25
<xgerman_> and we should delete the amps in an unordered flow instead of with all those loops  22:25
<johnsom> Right, because nova lies.  Do a nova show on one of those marked "DELETED" in our DB. At the top, the "special" X attributes. They are all in "deleting" even though status shows otherwise  22:26
*** yamamoto has joined #openstack-lbaas22:26
<johnsom> I'm not so sure myself.  22:26
<johnsom> I think the flows look pretty good.  22:26
<xgerman_> yeah, with one or two amps it hardly matters - if we do 50 like with Active-Active then this would greatly benefit from being parallel  22:28
<johnsom> I think we do need to move some serial actions up into the flow. The failover with two down is my highest interest. We just need to be super careful here.  22:28
<xgerman_> +1  22:29
*** yamamoto has quit IRC22:30
*** voelzmo has quit IRC22:31
*** voelzmo has joined #openstack-lbaas22:41
*** voelzmo has quit IRC22:46
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Allow members to be set as "backup"  https://review.openstack.org/552632  22:48
*** voelzmo has joined #openstack-lbaas22:48
<rm_work> I found something related yesterday actually  22:51
<rm_work> i totally forgot i even added this:  22:51
<rm_work> https://review.openstack.org/#/c/435612/117/octavia/controller/worker/tasks/compute_tasks.py@157  22:52
<rm_work> https://www.youtube.com/watch?v=Eyp4Sdq6i0s  22:53
<johnsom> Couldn't you do an "in"?  22:53
*** voelzmo has quit IRC22:53
<rm_work> i just copy-pasta'd from below  22:53
<rm_work> we could rewrite that  22:53
<rm_work> ah but no  22:54
<rm_work> because i changed the logic  22:54
<rm_work> on the re-delete, i change the log slightly, and don't raise on a failure  22:54
<rm_work> but, that was my attempt at zombie-hunting  22:55
<rm_work> (one of them)  22:55
<rm_work> because... ffff nova  22:55
<johnsom> Yeah, the nasty in that lab, you can keep deleting it, but it won't go away.  Thus how the hm was at 100% cpu and eating a ton of memory, it only had 10 threads configured and a bunch of zombies...  22:55
<johnsom> I guess that is adding additional round trips to nova even when things are fine....  22:56
<rm_work> yeah but on an async delete  22:59
<rm_work> so meh  22:59
<rm_work> and it's only once  22:59
<rm_work> (on an LB delete)  22:59
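The behaviour rm_work describes (re-issue the nova delete, tweak the log message, and don't raise if the second attempt fails) could look roughly like the hypothetical helper below; the novaclient calls are real, but the helper and its flag are illustrative rather than the code in the linked patch:

from novaclient import exceptions as nova_exc


def delete_compute(nova, compute_id, ignore_errors=False):
    # Ask nova to delete the amphora VM.
    try:
        nova.servers.delete(compute_id)
    except nova_exc.NotFound:
        # Already gone -- nothing left to do on either pass.
        return
    except Exception:
        if ignore_errors:
            # Second ("try again") pass: swallow the error so the rest of
            # the LB delete flow keeps going instead of reverting.
            return
        raise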
*** fnaval has quit IRC23:03
*** fnaval has joined #openstack-lbaas23:10
*** yamamoto has joined #openstack-lbaas23:26
<xgerman_> that LB delete flow makes less and less sense. We delete the vrrp port twice?  23:28
*** fnaval has quit IRC23:29
<xgerman_> also in the amp case we just delete the amp without detaching anything  23:29
<xgerman_> also we skip the pending delete on the amp in the LB delete flow  23:32
*** yamamoto has quit IRC23:32
<xgerman_> and the server show doesn't work on DELETED servers - though they still show up on list with --all  23:34
*** threestrands has joined #openstack-lbaas23:36
*** threestrands has quit IRC23:37
*** threestrands has joined #openstack-lbaas23:38
*** threestrands has quit IRC23:38
*** threestrands has joined #openstack-lbaas23:38
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Correctly validate member subnet_id in batches  https://review.openstack.org/558653  23:39
<rm_work> johnsom: ^^ fixed  23:39
*** threestrands has quit IRC23:39
*** threestrands has joined #openstack-lbaas23:39
*** threestrands has quit IRC23:39
*** threestrands has joined #openstack-lbaas23:39
<rm_work> oops  23:40
<rm_work> pep8 >_<  23:40
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Correctly validate member subnet_id in batches  https://review.openstack.org/558653  23:40
*** threestrands has quit IRC23:40
*** threestrands has joined #openstack-lbaas23:41
*** threestrands has quit IRC23:41
*** threestrands has joined #openstack-lbaas23:41
<johnsom> rm_work I think the status code is wrong  23:42
<rm_work> ?  23:42
<rm_work> 400?  23:42
<xgerman_> yep  23:42
<rm_work> that's what the other ones do  23:42
<rm_work> <_<  23:42
<xgerman_> 400 wrong parameter sounds right for me, too  23:42
<rm_work> yes  23:43
<johnsom> Hmm, do we translate that somewhere?  Because I would expect NotFound exception to be 404  23:43
<xgerman_> yes, that needs to be fixed  23:43
<rm_work> errr  23:43
<rm_work> the subnet not being found?  23:43
<rm_work> the subnet not existing is an invalid param  23:43
<rm_work> which is a 400  23:43
<xgerman_> making sure it returns a 400  23:43
<rm_work> it's not a 404 on "can't find a member create endpoint" lol  23:44
<rm_work> i stand by 400  23:44
<johnsom> I mean, 400 seems like what the user should get, but the exception raised is NotFound which I would expect to be 404  23:44
<rm_work> errr  23:44
<rm_work> but it's only used in THIS context  23:44
<rm_work> so it needs to be a 400  23:44
<rm_work> or by definition we'd have to pick a different one  23:44
<rm_work> that is a 400 <_<  23:44
<johnsom> Yeah, ok, I think I get it. We put in 400 because 404 is endpoint missing. Got it  23:44
<johnsom> Just odd  23:44
<rm_work> yes  23:45
<xgerman_> ok  23:45
<johnsom> right, ok we translate it  23:45
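In other words, the agreement is: an unknown subnet_id in a batch member request is a bad request parameter (HTTP 400) even though the lookup itself raises a "not found" style error, while 404 stays reserved for a missing endpoint or resource path. A rough, hypothetical sketch of that translation; the exception classes and the get_subnet callable are placeholders, not Octavia's real ones:

class SubnetNotFound(Exception):
    """Stand-in for the network driver's own 'not found' error."""


class InvalidParam(Exception):
    """Maps to HTTP 400: the request referenced something that does not
    exist, which is a bad parameter rather than a missing endpoint."""
    http_code = 400


def validate_member_subnets(members, get_subnet):
    # get_subnet is assumed to raise SubnetNotFound for unknown IDs.
    for member in members:
        subnet_id = member.get('subnet_id')
        if not subnet_id:
            continue
        try:
            get_subnet(subnet_id)
        except SubnetNotFound:
            # Translate the lookup failure into a 400-level API error.
            raise InvalidParam('Subnet %s not found.' % subnet_id)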
<rm_work> johnsom: should i include a fix for that lifecycle revert task too?  23:48
<johnsom> rm_work That would be great!  23:49
<rm_work> i mean in that CR or a different one  23:49
<johnsom> Oh, maybe another one since I already +2 this one....  depends on if others have issues with that first one.  23:50
<johnsom> They are two different issues, but ok if they are called out in the commit message. I think their backport situation would be the same....  23:51
<rm_work> kk. hmmm i'm not sure how this got a tuple...  23:53
<rm_work> ooooohhh k  23:54
<rm_work> not sure on the test for this one :/  23:56
<rm_work> i'll post the fix so you can see it  23:56
*** voelzmo has joined #openstack-lbaas23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!