openstackgerrit | Adam Harwell proposed openstack/octavia master: Fix revert method for batch member update https://review.openstack.org/558655 | 00:00 |
---|---|---|
rm_work | ^^ | 00:00 |
rm_work | the testing around this is admittedly a little thin :( | 00:00 |
*** kberger has joined #openstack-lbaas | 00:08 | |
*** KeithMnemonic has quit IRC | 00:12 | |
*** kberger has quit IRC | 00:14 | |
xgerman_ | ok, the lb delete flow is a mess and the allowed address pair driver needs to be rewritten — I am still confused why we delete the vrrp ports (twice) before we delete the VMs and on the failover flow/amp delete flow we delete the VM first and then deal with the ports. | 00:21 |
johnsom | I disagree with you on the mess part | 00:22 |
xgerman_ | sure, there isn’t much the flow can do without rewriting the allowed address pairs… I see just too many loops over amphorae in too many places which are hard to parallelize | 00:24 |
johnsom | AAP certainly is not good. The rest I can probably explain anything that is confusing | 00:24 |
xgerman_ | so why do we delete the VMs last and not first like in failover | 00:25 |
johnsom | "needs optimization" != "mess" | 00:25 |
xgerman_ | well, we also need to have the same state transitions as in amp delete, with throwing the amp into pending delete | 00:25 |
xgerman_ | something we don’t do in lb delete | 00:25 |
xgerman_ | yeah, but my main beef is the order of things | 00:26 |
xgerman_ | and I can’t see where we delete the plugged ports in the failover/amp delete flow | 00:26 |
*** yamamoto has joined #openstack-lbaas | 00:28 | |
johnsom | First question: failover has to do it early to free up a compute slot on resource constrained clouds. For LB delete it doesn't matter the order. | 00:29 |
xgerman_ | well, we have to call neutron/nova more often to deallocate ports instead of just killing the vm | 00:29 |
johnsom | Also, you don't delete the VIP ports in failover, you transfer the port as you need to not drop the VIP address. Delete would be bad | 00:30 |
xgerman_ | yes, I saw that but I didn’t see what happens to the plugged member subnet ports | 00:31 |
johnsom | Also, killing the VM does not necessarily mean ports go away. In fact, only the lb-mgmt-net port should go away on a nova delete. | 00:31 |
xgerman_ | but we would get them deallocated in one swoop | 00:32 |
*** voelzmo has quit IRC | 00:32 | |
*** yamamoto has quit IRC | 00:34 | |
johnsom | I'm really not following you. And I am super suspect of re-ordering anything. Most of those flows are very intentional | 00:34 |
xgerman_ | ok, we have a certain order in failover for decommissioning an amp; I would expect that we can use an almost identical flow to decommission it on lb delete | 00:36 |
johnsom | But we keep stuff in failover, in delete we don't | 00:36 |
johnsom | We also prioritize traffic termination on delete. | 00:37 |
xgerman_ | yes, I have seen our many calls to disentangle the network from the vm | 00:38 |
xgerman_ | instead we can admin down the ports which is likely faster | 00:39 |
xgerman_ | what I’d like to see is that we can delete amps (and their ports) in parallel and the outcome of one amp delete doesn’t affect the other | 00:40 |
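For context on the "admin down the ports" idea: administratively disabling a Neutron port is a single API update rather than a full unplug/delete. A minimal sketch with openstacksdk, not part of any existing Octavia flow; the cloud name and port ID are placeholders:

```python
import openstack

# Connect using a clouds.yaml entry; "devstack-admin" is just an example name.
conn = openstack.connect(cloud="devstack-admin")


def admin_down_port(port_id):
    """Administratively disable a Neutron port so it stops passing traffic."""
    # admin_state_up=False leaves the port (and its IP allocation) in place,
    # unlike a port delete, so it can be re-enabled or cleaned up later.
    return conn.network.update_port(port_id, admin_state_up=False)


admin_down_port("11111111-2222-3333-4444-555555555555")
```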
johnsom | Anyway, bring a darn strong argument if you think order needs to change. | 00:40 |
johnsom | Yeah, optimizing is different than reordering things. | 00:41 |
xgerman_ | I’d like to reuse the same code paths — I don’t get why we need to decommission the amps differently on failover than on lb delete. We can admin down the ports if we want to stop traffic but we don’t need to do this disentangle dance before amp delete | 00:43 |
johnsom | Ok, I am signing off for the evening. Timeouts reviewed, it works, but agree we need more docs. | 00:43 |
xgerman_ | and I think it will be easier when the vm is gone | 00:44 |
xgerman_ | but signing off as well | 00:44 |
*** atoth has quit IRC | 00:53 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 00:58 | |
*** voelzmo has joined #openstack-lbaas | 00:59 | |
*** KeithMnemonic has joined #openstack-lbaas | 01:01 | |
*** AlexeyAbashkin has quit IRC | 01:03 | |
*** KeithMnemonic has quit IRC | 01:06 | |
*** harlowja has quit IRC | 01:19 | |
*** longkb__ has joined #openstack-lbaas | 01:28 | |
*** yamamoto has joined #openstack-lbaas | 01:30 | |
*** voelzmo has quit IRC | 01:33 | |
*** yamamoto has quit IRC | 01:35 | |
*** voelzmo has joined #openstack-lbaas | 02:00 | |
openstackgerrit | Merged openstack/octavia master: Allow members to be set as "backup" https://review.openstack.org/552632 | 02:14 |
*** annp has quit IRC | 02:24 | |
*** annp has joined #openstack-lbaas | 02:25 | |
*** yamamoto has joined #openstack-lbaas | 02:32 | |
*** voelzmo has quit IRC | 02:34 | |
*** yamamoto has quit IRC | 02:37 | |
*** threestrands has quit IRC | 02:47 | |
*** threestrands has joined #openstack-lbaas | 02:47 | |
*** threestrands has quit IRC | 02:47 | |
*** threestrands has joined #openstack-lbaas | 02:47 | |
*** threestrands has quit IRC | 02:48 | |
*** threestrands has joined #openstack-lbaas | 02:49 | |
*** threestrands has quit IRC | 02:49 | |
*** threestrands has joined #openstack-lbaas | 02:49 | |
*** voelzmo has joined #openstack-lbaas | 02:58 | |
*** voelzmo has quit IRC | 03:32 | |
*** yamamoto has joined #openstack-lbaas | 03:33 | |
eandersson | In neutron-lbaas (and octavia) when you add for example a member | 03:36 |
eandersson | it starts in PENDING_CREATE, right? and when it's been created it moves into ACTIVE? | 03:37 |
johnsom | Right, it is an async API | 03:38 |
eandersson | https://github.com/slinn0/terraform-provider-openstack/blob/master/openstack/resource_openstack_lb_member_v2.go#L262 | 03:38 |
*** yamamoto has quit IRC | 03:38 | |
eandersson | The reason I am asking is because of that statement. | 03:39 |
eandersson | It sounds like Terraform just assumes that as long as the initial request succeeded, we are fine? | 03:39 |
eandersson | Which obviously isn't always true. | 03:39 |
johnsom | Maybe it is old neutron-lbaas code that didn’t have status on objects, only via the status API. | 03:40 |
eandersson | Is that an Octavia only thing? | 03:40 |
eandersson | Or neutron-lbaas v1? | 03:40 |
johnsom | Yeah, the result code is 201, so it has always been an async call | 03:40 |
eandersson | provisioning_status would be the interesting portion here right? | 03:41 |
eandersson | We don't care if the member is up (the server may not have been created yet) | 03:41 |
johnsom | Octavia always had the statuses, it was added to neutron-lbaas late. For a while it only had the status API | 03:41 |
johnsom | Right | 03:42 |
johnsom | I am not at a computer, but pretty sure member provisioning status was added to neutron lbaas | 03:43 |
eandersson | Looks like I may have been looking at an old version of the terraform provider. | 03:44 |
johnsom | Ah, ok. There had been work there, at least recently to get terraform updated | 03:45 |
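For readers following along: the usual pattern for an async LBaaS API is to poll the object's provisioning_status after the initial 2xx response, exactly because the create call returning does not mean the backend is done. A rough sketch against the Octavia v2 API using requests; the endpoint, token, and IDs below are placeholders:

```python
import time

import requests

OCTAVIA = "http://203.0.113.10:9876"  # placeholder endpoint
TOKEN = "gAAAA..."                    # placeholder keystone token


def wait_for_member_active(pool_id, member_id, timeout=300, interval=5):
    """Poll a member until provisioning_status leaves the PENDING_* states."""
    url = f"{OCTAVIA}/v2.0/lbaas/pools/{pool_id}/members/{member_id}"
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(url, headers={"X-Auth-Token": TOKEN})
        resp.raise_for_status()
        status = resp.json()["member"]["provisioning_status"]
        if status == "ACTIVE":
            return True
        if status == "ERROR":
            raise RuntimeError("member went into ERROR")
        # PENDING_CREATE / PENDING_UPDATE: keep waiting. operating_status is
        # deliberately ignored here, since the backend server may not exist yet.
        time.sleep(interval)
    raise TimeoutError("member did not become ACTIVE in time")
```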
rm_work | eandersson: so when are you going to let JudeC out of his cage? :P | 04:01 |
rm_work | I feel like you guys are hiding him away over there or something | 04:01 |
eandersson | haha he is stuck in go land now :D | 04:01 |
rm_work | lol | 04:01 |
rm_work | after i went to all that work to train him on Octavia... T_T | 04:01 |
eandersson | trying to get him back into python land again | 04:02 |
*** voelzmo has joined #openstack-lbaas | 04:02 | |
rm_work | i mean, Golang is cool | 04:02 |
rm_work | just that he left a ton of stuff over here pending :P | 04:02 |
rm_work | johnsom: not sure what to put for the timeouts <_< I'll noodle THAT. | 04:04 |
rm_work | i mean, descriptively | 04:04 |
johnsom | Ok, if you ask nicely I might cook something up tomorrow | 04:05 |
rm_work | :D | 04:05 |
rm_work | just a rebase for now | 04:08 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Expose timeout options https://review.openstack.org/555454 | 04:08 |
rm_work | johnsom: not sure what you want from me on https://review.openstack.org/#/c/555471/ T_T | 04:10 |
*** voelzmo has quit IRC | 04:12 | |
*** pcaruana has joined #openstack-lbaas | 04:27 | |
*** pcaruana has quit IRC | 04:30 | |
*** yamamoto has joined #openstack-lbaas | 04:35 | |
*** threestrands has quit IRC | 04:38 | |
*** links has joined #openstack-lbaas | 04:39 | |
*** links has quit IRC | 04:40 | |
*** yamamoto has quit IRC | 04:40 | |
*** links has joined #openstack-lbaas | 04:46 | |
eandersson | johnsom, actually operating_status is only in the status call for neutron-lbaas v2 :'( | 04:49 |
eandersson | Has anyone started working on the provider support for Octavia yet? :D | 04:50 |
*** yamamoto has joined #openstack-lbaas | 04:51 | |
*** harlowja has joined #openstack-lbaas | 05:16 | |
*** harlowja has quit IRC | 05:20 | |
*** PagliaccisCloud has quit IRC | 05:40 | |
*** PagliaccisCloud has joined #openstack-lbaas | 05:45 | |
*** voelzmo has joined #openstack-lbaas | 06:31 | |
*** velizarx has joined #openstack-lbaas | 06:31 | |
*** voelzmo has quit IRC | 06:35 | |
*** pcaruana has joined #openstack-lbaas | 07:01 | |
*** imacdonn has quit IRC | 07:04 | |
*** imacdonn has joined #openstack-lbaas | 07:04 | |
*** voelzmo has joined #openstack-lbaas | 07:08 | |
*** ivve has joined #openstack-lbaas | 07:11 | |
*** velizarx has quit IRC | 07:17 | |
*** ivve has quit IRC | 07:18 | |
*** voelzmo has quit IRC | 07:22 | |
*** rcernin has quit IRC | 07:23 | |
*** sapd1 has joined #openstack-lbaas | 07:24 | |
*** velizarx has joined #openstack-lbaas | 07:24 | |
*** tesseract has joined #openstack-lbaas | 07:26 | |
*** Alex_Staf has joined #openstack-lbaas | 07:28 | |
longkb__ | o/ I am a newcomer to Octavia. I want to contribute code to the Octavia project, but I don't know how to restart the Octavia services | 07:31 |
longkb__ | Could anyone guide me how to debug Octavia project? | 07:32 |
longkb__ | There are a lot of services and I don't know which ones should be stopped: http://paste.openstack.org/show/718350/ | 07:36 |
*** AlexeyAbashkin has joined #openstack-lbaas | 07:44 | |
*** voelzmo has joined #openstack-lbaas | 07:59 | |
*** bcafarel has joined #openstack-lbaas | 08:02 | |
*** voelzmo has quit IRC | 08:31 | |
*** voelzmo_ has joined #openstack-lbaas | 08:31 | |
*** voelzmo_ has quit IRC | 08:49 | |
*** voelzmo has joined #openstack-lbaas | 08:50 | |
*** links has quit IRC | 08:53 | |
*** links has joined #openstack-lbaas | 08:57 | |
*** sapd1 has quit IRC | 09:03 | |
*** salmankhan has joined #openstack-lbaas | 09:28 | |
openstackgerrit | lidong proposed openstack/python-octaviaclient master: Update the outdated links https://review.openstack.org/548112 | 09:28 |
*** voelzmo has quit IRC | 09:33 | |
*** voelzmo has joined #openstack-lbaas | 09:34 | |
*** voelzmo has quit IRC | 09:38 | |
*** HW_Peter has joined #openstack-lbaas | 10:06 | |
*** yamamoto_ has joined #openstack-lbaas | 10:08 | |
*** annp_ has joined #openstack-lbaas | 10:09 | |
*** cgoncalv1s has joined #openstack-lbaas | 10:11 | |
*** pcaruana|afk| has joined #openstack-lbaas | 10:12 | |
*** gokhan__ has joined #openstack-lbaas | 10:12 | |
*** dayou_ has joined #openstack-lbaas | 10:15 | |
*** Alex_Staf has quit IRC | 10:15 | |
*** yamamoto has quit IRC | 10:15 | |
*** annp has quit IRC | 10:15 | |
*** cgoncalves has quit IRC | 10:15 | |
*** pcaruana has quit IRC | 10:15 | |
*** sapd has quit IRC | 10:15 | |
*** gokhan_ has quit IRC | 10:15 | |
*** dayou has quit IRC | 10:15 | |
*** HW-Peter has quit IRC | 10:15 | |
*** sapd has joined #openstack-lbaas | 10:18 | |
*** Alex_Staf has joined #openstack-lbaas | 10:22 | |
*** cgoncalv1s is now known as cgoncalves | 10:23 | |
*** salmankhan has quit IRC | 10:28 | |
*** pcaruana|afk| has quit IRC | 10:28 | |
*** pcaruana|afk| has joined #openstack-lbaas | 10:28 | |
*** salmankhan has joined #openstack-lbaas | 10:32 | |
*** rcernin has joined #openstack-lbaas | 10:44 | |
*** annp_ has quit IRC | 10:46 | |
*** longkb__ has quit IRC | 11:04 | |
*** velizarx has quit IRC | 12:06 | |
*** velizarx has joined #openstack-lbaas | 12:08 | |
*** atoth has joined #openstack-lbaas | 12:13 | |
*** velizarx has quit IRC | 12:35 | |
*** velizarx has joined #openstack-lbaas | 12:37 | |
cgoncalves | johnsom: hey. do you have plans to get a new tag on stable/queens? we've backported a significant amount of bugfixes recently | 12:52 |
*** salmankhan has quit IRC | 13:15 | |
*** fnaval has joined #openstack-lbaas | 13:19 | |
*** links has quit IRC | 13:35 | |
*** mugsie_ is now known as mugsie | 13:35 | |
*** mugsie has quit IRC | 13:36 | |
*** mugsie has joined #openstack-lbaas | 13:36 | |
*** salmankhan has joined #openstack-lbaas | 13:37 | |
*** links has joined #openstack-lbaas | 13:49 | |
*** rcernin has quit IRC | 14:32 | |
*** dosaboy_ is now known as dosaboy | 14:47 | |
xgerman_ | cgoncalves: usually we cut around the milestones | 14:48 |
cgoncalves | xgerman_: projects cannot tag asynchronously? can't find any stable release schedule | 14:57 |
xgerman_ | yeah, they can - just saying what we normally do. We can cut any time… | 14:58 |
cgoncalves | ok, so I was asking if there's an ETA for next cut | 14:59 |
*** links has quit IRC | 15:00 | |
dmellado | o/ hi folks | 15:01 |
dmellado | cgoncalves: weren't you on pto? :P | 15:01 |
dmellado | wanted to sync with you regarding the nightly image and the services for q- => neutron- | 15:02 |
johnsom | eandersson yes, in fact I have been working on provider support in Octavia starting last week. Two patches up for review, more to come | 15:03 |
johnsom | cgoncalves: yes, I am overdue. I was waiting for things to settle down on that branch, I will look at this today | 15:04 |
johnsom | There is no release schedule for stable | 15:04 |
johnsom | longkb_ On devstack our services start with o-, so to restart all of octavia the command is “sudo systemctl restart devstack@o-*” | 15:06 |
*** velizarx has quit IRC | 15:16 | |
pratrik | johnsom: when running "poweroff" directly on amphora, the instance gets deleted. Is this the expected behaviour ? | 15:17 |
xgerman_ | pratrik: yes. If you’d like to stop the LB from processing load you can set the admin state to offline | 15:18 |
xgerman_ | if you power it down, the system thinks the VM is in error and replaces it with a new one | 15:18 |
xgerman_ | when using Octavia you should leave the individual vms alone and only control things through the Octavia API | 15:19 |
pratrik | Oh, hehe ok ok, I do understand. | 15:19 |
pratrik | I was just surprised when my amp disappeared | 15:19 |
pratrik | But aside from that, no new instance was created either... so I guess something is off anyway | 15:20 |
xgerman_ | mmh, that’s odd | 15:20 |
xgerman_ | you can look at the octavia-health-manager logs to see why that didn’t happen | 15:20 |
pratrik | I powered off the master (if that has anything to do with it) | 15:21 |
xgerman_ | na, we should replace that, too | 15:21 |
pratrik | ah ok | 15:22 |
pratrik | I think I got it | 15:22 |
pratrik | I think my "internal patch" broke it :( | 15:22 |
pratrik | (hides in shame) | 15:22 |
xgerman_ | no worries - we have all been there | 15:22 |
pratrik | :) | 15:23 |
pratrik | Thanks for the explanation | 15:23 |
*** salmankhan has quit IRC | 15:25 | |
*** salmankhan has joined #openstack-lbaas | 15:25 | |
*** tesseract has quit IRC | 15:38 | |
*** tesseract has joined #openstack-lbaas | 15:41 | |
*** links has joined #openstack-lbaas | 15:43 | |
*** Alex_Staf has quit IRC | 15:44 | |
*** pcaruana|afk| has quit IRC | 15:45 | |
*** ianychoi__ is now known as ianychoi | 15:50 | |
*** salmankhan has quit IRC | 15:56 | |
*** atoth has quit IRC | 16:00 | |
*** salmankhan has joined #openstack-lbaas | 16:03 | |
cgoncalves | dmellado: I was on PTO Monday and will be tomorrow :) | 16:05 |
cgoncalves | johnsom: great, thanks! | 16:05 |
dmellado | oh, so just coming back for one day then fleeing? too bad! | 16:05 |
*** links has quit IRC | 16:05 | |
cgoncalves | dmellado: I was in yesterday (yesterday was tuesday in my calendar :P) | 16:05 |
dmellado | johnsom: cgoncalves btw, for now, and note that I highlight the now | 16:06 |
dmellado | we could revert the patch as it seems that there are some more issues under the hood with the neutron-* thing | 16:06 |
dmellado | cgoncalves: well, but in spring two days become one almost xD | 16:06 |
johnsom | dmellado It doesn't seem to be breaking our gates, do you know the timeline on getting neutron side fixed and back on track? | 16:07 |
cgoncalves | dmellado: meh :/ in case you're aware of bug trackers and/or reviews could you please link them in the story nmagnezi created? | 16:07 |
johnsom | I know some folks use those scripts, but our gates don't | 16:08 |
cgoncalves | ... yet | 16:08 |
dmellado | cgoncalves: well, basically what happens is that the main devstack tempest job | 16:08 |
dmellado | uses q-* | 16:08 |
dmellado | so we're just overriding there | 16:08 |
dmellado | I'll add these findings to nmagnezi's story, no worries | 16:09 |
dmellado | johnsom: I'm not sure about when upstream qa will be back there | 16:10 |
dmellado | there were still some issues on neutron-legacy and neutron-lib IIRC | 16:10 |
dmellado | but let's see :) | 16:11 |
johnsom | Yeah, I saw the revert over there | 16:11 |
johnsom | Ok, thanks | 16:11 |
cgoncalves | dmellado: I see. thanks! | 16:11 |
dmellado | thanks to you, folks! | 16:11 |
cgoncalves | dmellado: and what about the nightly image? | 16:11 |
dmellado | xD just writing it, saved me the time | 16:11 |
dmellado | so, is that now being created? | 16:12 |
dmellado | how would I be able to use it as a part of my jobs? :P | 16:12 |
cgoncalves | (that reminds me that it would probably make more sense to create a post job instead of periodic) | 16:12 |
cgoncalves | wget http://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-centos-7.qcow2 :P | 16:12 |
dmellado | oh, so it's there! :P | 16:13 |
dmellado | thanks! | 16:13 |
dmellado | johnsom: will you be at Vancouver by any chance? | 16:13 |
johnsom | Yes, I will be | 16:14 |
dmellado | awesome, looking forward to see you there then | 16:14 |
johnsom | Wait, is RH finally inviting us to a party? grin | 16:14 |
dmellado | lol there is an RDO party if you're up to that xD | 16:14 |
* cgoncalves will not go, neither to the summit or RH party :( | 16:14 | |
*** atoth has joined #openstack-lbaas | 16:15 | |
dmellado | cgoncalves: I don't pity you after seeing your pictures! :D | 16:15 |
cgoncalves | dmellado: blasphemy! | 16:16 |
dmellado | xD | 16:19 |
*** links has joined #openstack-lbaas | 16:20 | |
*** velizarx has joined #openstack-lbaas | 16:37 | |
*** AlexeyAbashkin has quit IRC | 16:38 | |
*** shan5464_ is now known as shananigans | 17:05 | |
*** links has quit IRC | 17:08 | |
*** velizarx has quit IRC | 17:09 | |
*** salmankhan has quit IRC | 17:13 | |
*** atoth has quit IRC | 17:56 | |
*** yamamoto_ has quit IRC | 17:58 | |
*** yamamoto has joined #openstack-lbaas | 17:59 | |
eandersson | johnsom, awesome - let me know when they are ready for review | 18:03 |
eandersson | I got another question about good old legacy neutron-lbaas | 18:04 |
johnsom | eandersson Comments are always welcome: https://review.openstack.org/558013 and https://review.openstack.org/558320 | 18:04 |
johnsom | Sure, what is up? | 18:05 |
eandersson | If a lb/pool/member api call fails... is it supposed to return a 500 error? | 18:05 |
johnsom | No | 18:05 |
eandersson | I would assume that it would be async, so 20X and then go into a failed state | 18:05 |
johnsom | That was easy | 18:05 |
eandersson | :D | 18:05 |
johnsom | Yeah, it should never give you a 500. If it does, it's a bug of some sort | 18:06 |
eandersson | Guessing this is the code that handles that https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/brocade/driver_v2.py#L126 | 18:06 |
eandersson | but some 3rd party vendor drivers are now wrapping their exceptions | 18:06 |
*** yamamoto_ has joined #openstack-lbaas | 18:06 | |
johnsom | Let me blow the dust off the neutron-lbaas code and take a look here | 18:07 |
*** AlexeyAbashkin has joined #openstack-lbaas | 18:08 | |
eandersson | Thanks man appreciate it! | 18:08 |
eandersson | The a10 driver for example does not catch exceptions in its own driver and lets them bubble up https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/drivers/a10networks/driver_v2.py#L86 | 18:08 |
johnsom | Is it the brocade driver that gave you the 500? or A10? | 18:09 |
johnsom | Can you pastebin the neutron service log for the 500? There is likely a traceback | 18:09 |
*** yamamoto has quit IRC | 18:10 | |
*** atoth has joined #openstack-lbaas | 18:10 | |
rm_work | pratrik: you still here? | 18:10 |
rm_work | pratrik: i left you a lot of notes yesterday about what you are trying to do (which I have put a lot of work into already) | 18:11 |
*** AlexeyAbashkin has quit IRC | 18:12 | |
johnsom | Yeah, I think under the neutron-lbaas model, the brocade looks right, where a10 is ... | 18:12 |
rm_work | lolol yes that seems right | 18:12 |
rm_work | "where a10 is..." | 18:12 |
rm_work | johnsom: today, https://review.openstack.org/#/c/557548/ ? :P | 18:14 |
johnsom | The octavia driver also uses the failed_completion exception. | 18:14 |
*** tesseract has quit IRC | 18:14 | |
johnsom | rm_work Yeah, was going to help with the docs first, but my morning went a bit sideways... Had to help the OSA folks for a bit. | 18:14 |
johnsom | One of the things I found was they are also hitting the OVH/KVM issue | 18:15 |
rm_work | wut | 18:15 |
rm_work | you mean, a kvm issue | 18:15 |
rm_work | they aren't using OVH right? | 18:15 |
johnsom | Yeah, tempest was booting cirros, they had enabled KVM accel, OVH hosts cause the same hardware error we see | 18:15 |
johnsom | Yeah, upstream gates | 18:15 |
rm_work | hmm | 18:15 |
rm_work | ah. | 18:16 |
rm_work | not RAX OSA | 18:16 |
rm_work | upstream OSA | 18:16 |
johnsom | No, upstream OSA | 18:16 |
*** voelzmo has joined #openstack-lbaas | 18:16 | |
johnsom | But, yes, the reason I was looking over there was internal related... | 18:17 |
johnsom | eandersson Does that answer your question about the 500? | 18:18 |
eandersson | I think so, yeah | 18:18 |
johnsom | The fix would be to wrap those in the A10 driver. Either the brocade way or with a decorator like the octavia driver. | 18:19 |
*** Swami has joined #openstack-lbaas | 18:21 | |
eandersson | Yep - that is what I was thinking as well | 18:22 |
eandersson | Thanks johnsom ! | 18:22 |
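A rough sketch of the wrapping pattern being suggested here. The helper and class names are illustrative only, and it assumes the manager exposes failed_completion(context, obj) the way the brocade driver uses it, so a backend failure surfaces as a provisioning_status of ERROR instead of a 500 from the API:

```python
import functools

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


def fail_safe(func):
    """Catch any exception from a driver call and mark the object failed."""
    @functools.wraps(func)
    def wrapper(self, context, obj, *args, **kwargs):
        try:
            return func(self, context, obj, *args, **kwargs)
        except Exception:
            LOG.exception("Driver call %s failed", func.__name__)
            # Assumed available on the manager, as in the brocade driver.
            self.failed_completion(context, obj)
    return wrapper


class MemberManager(object):  # illustrative manager, not real driver code
    @fail_safe
    def create(self, context, member):
        # Hypothetical backend call that may raise a vendor exception.
        self.driver.client.create_member(member)
```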
cgoncalves | eandersson: hi. I have a question for you :) | 18:26 |
cgoncalves | eandersson: can you share how much was the performance increase with https://review.openstack.org/#/c/477698/ ? | 18:26 |
eandersson | It depends heavily on the number of load balancers, pools, members etc you have | 18:27 |
cgoncalves | a LB show command can take ages depending on how many children objects it has | 18:27 |
eandersson | but we saw a pretty significant increase | 18:27 |
cgoncalves | in one of my tests I saw improvements from ~12 seconds to ~9 seconds | 18:27 |
cgoncalves | but still it fails poorly at scale | 18:28 |
cgoncalves | I mean neutron-lbaas, not your patch :) | 18:28 |
eandersson | Yea - the problem is that it fetches everything ;'( | 18:29 |
eandersson | but yea we saw about the same level of improvements | 18:29 |
cgoncalves | ok, thanks for confirming | 18:30 |
eandersson | I noticed that upgrading to sqlalchemy 1.2.x it would drop another ~3s | 18:30 |
eandersson | but never dared to roll that out to production | 18:30 |
cgoncalves | oh, good to know! | 18:32 |
cgoncalves | hehe understandably :) | 18:32 |
* rm_work doesn't understand | 18:35 | |
* rm_work runs master in prod | 18:36 | |
cgoncalves | rm_work: master is yesterday for you. you run ahead of master | 18:41 |
johnsom | Yeah, a lot of the neutron-lbaas DB stuff is "less than optimal". | 18:42 |
johnsom | AKA Use octavia instead | 18:42 |
rm_work | yes | 18:44 |
rm_work | (to both) | 18:44 |
cgoncalves | just thinking that there are operators deploying neutron-lbaas in production *today*... | 18:45 |
rm_work | yeah, that's unfortunate :( | 18:46 |
rm_work | though to be fair probably most of them don't have a choice because we haven't finished the provider stuff yet | 18:46 |
* cgoncalves knows of a couple in the process of going to prod with haproxy driver | 18:47 | |
johnsom | I would cast shame on any company assisting in that endeavor.... | 18:50 |
* johnsom Notes we are a partner with a bunch of those and are probably doing the same | 18:50 | |
* rm_work gets out his ShameCaster5000 | 18:51 | |
*** velizarx has joined #openstack-lbaas | 18:53 | |
cgoncalves | 'xD | 18:55 |
*** AlexeyAbashkin has joined #openstack-lbaas | 18:58 | |
johnsom | cgoncalves Have a minute to look at something for me? Need an outsider (from the patch) opinion | 19:01 |
*** AlexeyAbashkin has quit IRC | 19:02 | |
xgerman_ | johnsom so if we merged something to master before queens was cut do I need to cherry-pick by hand or can I still use the tool? | 19:04 |
xgerman_ | to get that to Pike | 19:04 |
johnsom | You just cherrypick from master to queens, queens to pike. Skipping a stable with a backport is not good. | 19:06 |
xgerman_ | but we never backported that to queens | 19:06 |
xgerman_ | it’s part of queens | 19:06 |
johnsom | So cherrypick from queens | 19:07 |
*** velizarx has quit IRC | 19:07 | |
xgerman_ | but it will then have the same commit ID as in master - just confused? | 19:07 |
rm_work | just go to the patch in gerrit | 19:07 |
rm_work | and click "cherry-pick" | 19:07 |
rm_work | it will work | 19:08 |
rm_work | don't worry about what release it's in | 19:08 |
xgerman_ | I did that and johnsom gave me a -2 | 19:08 |
*** velizarx has joined #openstack-lbaas | 19:08 | |
rm_work | <_< | 19:08 |
rm_work | link | 19:08 |
*** velizarx has quit IRC | 19:08 | |
xgerman_ | https://review.openstack.org/#/c/558892/ | 19:08 |
johnsom | The back link showed you cherry picked from master so I assumed you skipped backporting it to queens. | 19:08 |
rm_work | was it IN queens or not | 19:08 |
xgerman_ | it is in Queens | 19:09 |
johnsom | Yeah, I don't know, that's why I blocked it | 19:09 |
rm_work | yeah feb 8 | 19:09 |
rm_work | that was in queens | 19:09 |
rm_work | so it should be fine | 19:09 |
xgerman_ | that’s what I thought | 19:09 |
johnsom | I just looked at this: https://review.openstack.org/#/q/Ifcfdba1d28aa732d0f7b3b0a8bffb58cee0ab7cc | 19:09 |
rm_work | yeah i mean ALL patches are in master at some point :P | 19:10 |
rm_work | need to check when they were merged | 19:10 |
xgerman_ | lol | 19:10 |
*** harlowja has joined #openstack-lbaas | 19:10 | |
rm_work | who is on still? | 19:11 |
rm_work | eandersson? | 19:11 |
johnsom | Well, true... | 19:11 |
rm_work | trying to get kevin fox to comment on the timeout patch | 19:12 |
rm_work | since he was the issue reporter | 19:12 |
rm_work | but seems like i never catch him, lol | 19:12 |
johnsom | Yeah, I changed my vote with my reason. I think it is good. | 19:14 |
johnsom | Ok, bite to eat before the meeting. | 19:14 |
rm_work | oh right | 19:15 |
rm_work | meeting | 19:15 |
rm_work | i was gonna add an agenda item | 19:16 |
rm_work | or two | 19:16 |
*** atoth has quit IRC | 19:16 | |
*** jniesz has joined #openstack-lbaas | 19:19 | |
xgerman_ | sure | 19:19 |
cgoncalves | johnsom: sure | 19:27 |
cgoncalves | https://review.openstack.org/#/c/541494/ is included in stable/queens | 19:29 |
johnsom | cgoncalves On patch http://logs.openstack.org/54/555454/6/check/build-openstack-api-ref/f080dd7/html/v2/index.html#id34 do the four timeout descriptions at the bottom make sense to you? Are they clear enough? | 19:41 |
cgoncalves | johnsom: scrolling to #id34 doesn't work. which one is it? | 19:44 |
johnsom | cgoncalves Listener create, request box | 19:45 |
*** velizarx has joined #openstack-lbaas | 19:48 | |
*** velizarx has quit IRC | 19:48 | |
cgoncalves | johnsom: 'frontend client' is user<->listener? | 19:49 |
johnsom | Yes | 19:49 |
cgoncalves | johnsom: ok, then I understand the descriptions | 19:51 |
cgoncalves | and since I'm at newbie-level I think others will understand too | 19:51 |
johnsom | +1 ok cool. I changed my vote to +2 because as I looked at it to write something more it seemed pretty clear already for an api-reference | 19:52 |
johnsom | Thanks! | 19:52 |
cgoncalves | you're welcome | 19:52 |
johnsom | rm_work Let me know when you add your agenda items so I refresh. Sometimes I don't get the notifications. | 19:57 |
cgoncalves | are all those 4 timeouts expected to be supported by all/most providers? checking if it's not haproxy-only | 19:59 |
johnsom | Yes, from the providers I have used they all support those | 19:59 |
cgoncalves | excellent | 19:59 |
johnsom | Would like feedback of course | 19:59 |
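For the curious, the four options under discussion are the listener timeout fields exposed by the patch linked above: timeout_client_data, timeout_member_connect, timeout_member_data, and timeout_tcp_inspect, all in milliseconds. A rough sketch of setting them on listener create via the Octavia v2 API; the endpoint, token, and load balancer ID are placeholders:

```python
import requests

OCTAVIA = "http://203.0.113.10:9876"  # placeholder endpoint
TOKEN = "gAAAA..."                    # placeholder keystone token

listener = {
    "listener": {
        "name": "web-listener",
        "protocol": "HTTP",
        "protocol_port": 80,
        "loadbalancer_id": "11111111-2222-3333-4444-555555555555",
        # All four timeouts are in milliseconds.
        "timeout_client_data": 50000,    # idle frontend client connections
        "timeout_member_connect": 5000,  # TCP connect attempts to members
        "timeout_member_data": 50000,    # idle backend member connections
        "timeout_tcp_inspect": 0,        # extra wait for TCP content inspection
    }
}

resp = requests.post(f"{OCTAVIA}/v2.0/lbaas/listeners",
                     json=listener,
                     headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
print(resp.json()["listener"]["id"])
```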
johnsom | #startmeeting Octavia | 20:00 |
openstack | Meeting started Wed Apr 4 20:00:03 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:00 |
*** openstack changes topic to " (Meeting topic: Octavia)" | 20:00 | |
openstack | The meeting name has been set to 'octavia' | 20:00 |
johnsom | Hi folks | 20:00 |
cgoncalves | hi | 20:00 |
johnsom | #topic Announcements | 20:00 |
*** openstack changes topic to "Announcements (Meeting topic: Octavia)" | 20:00 | |
johnsom | Check your e-mail for a foundation survey about the PTGs | 20:00 |
nmagnezi | o/ | 20:01 |
johnsom | The foundation sent out a survey about the PTGs that I know some of us are happy to give feedback. So check your e-mail in case you missed your link to the survey | 20:01 |
johnsom | Also of note, The "S" release will be "Stein" | 20:01 |
xgerman_ | o/ | 20:01 |
johnsom | Solar won the vote, but had a legal conflict, so Stein it is... | 20:02 |
johnsom | I have a number of other OpenStack activity type things later on the agenda, but other than those any announcements I missed? | 20:02 |
johnsom | Rocky MS-1 is in two weeks | 20:03 |
johnsom | #topic Brief progress reports / bugs needing review | 20:03 |
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)" | 20:03 | |
johnsom | Ok, moving on. It's been a busy bit for me. I did another spin of the tempest plugin, got more feedback I hope I can address today. After that I started on the provider driver work. I have a base class and a noop driver up for review (marked WIP as I am still making some adjustments, but feedback is still welcome). | 20:05 |
johnsom | Lots-O-reviews too. Reviewed a bunch of great dashboard stuff (L7 support!) (sorry it took this long to get some reviews on it) and some other recent patches with new features for Rocky. | 20:06 |
johnsom | Anyone else have updates? | 20:06 |
johnsom | Spring break vacations? | 20:06 |
rm_work | I have a couple of things up still | 20:06 |
rm_work | i think maybe timeouts will merge soon? but, Usage API could use some attention | 20:07 |
rm_work | not sure everyone is aware of the work i'm trying to do there | 20:07 |
cgoncalves | rm_work: you couldn't help yourself :D | 20:07 |
rm_work | but would like to get people's general approval on the concept and how it's organized | 20:07 |
johnsom | This is an open comment section for everyone to share what they are working on with the team | 20:07 |
rm_work | cgoncalves: usually that's true ;) | 20:07 |
cgoncalves | johnsom: thanks a lot (!) for the work on tempest and providers. really looking forward to having them merged | 20:07 |
eandersson | rm_work, sup? | 20:08 |
johnsom | There is still a bunch of work to do on tempest, so if we have volunteers... | 20:09 |
rm_work | eandersson: lol | 20:09 |
eandersson | :D | 20:09 |
johnsom | How is the grenade gate going BTW? | 20:09 |
rm_work | eandersson: wanted you to review something, maybe it's fine now | 20:09 |
eandersson | I c - let me know =] | 20:10 |
cgoncalves | johnsom: no updates from my side. I know you reviewed it. I've been busy with finalizing octavia integration in tripleo and containerizing neutron-lbaas | 20:10 |
cgoncalves | plus kolla | 20:10 |
johnsom | Ok, looking forward to that too! | 20:10 |
nmagnezi | we are getting there :-) | 20:10 |
johnsom | I would love to be able to declare an upgrade tag in Rocky | 20:11 |
johnsom | Maybe even two.... | 20:11 |
johnsom | #topic Other OpenStack activities of note | 20:11 |
*** openstack changes topic to "Other OpenStack activities of note (Meeting topic: Octavia)" | 20:11 | |
johnsom | These are just some things going on in the wider OpenStack I think you all might want to know about. Sorry if it's duplicate from the dev mailing list (let me know if this is of value or not). | 20:12 |
johnsom | No more stable Phases welcome Extended Maintenance | 20:12 |
johnsom | #link https://review.openstack.org/548916 | 20:12 |
cgoncalves | johnsom: first 2 upgradability tags are achievable with grenade as-is, I think | 20:12 |
johnsom | #link https://review.openstack.org/#/c/552733/ | 20:12 |
johnsom | cgoncalves I think there is still a bug or two there, thus the comments | 20:12 |
johnsom | But close! | 20:13 |
johnsom | Based on packager feedback at the PTG and other forums, there is a change in how stable branches are going to be handled. It is still up for review if you want to comment. | 20:14 |
johnsom | A quick note on recent IRC trolling/vandalism | 20:14 |
johnsom | #link http://lists.openstack.org/pipermail/openstack-dev/2018-April/129024.html | 20:14 |
johnsom | Just an FYI that work is being done to try to help with the IRC spammers | 20:14 |
johnsom | a plan to stop syncing requirements into projects | 20:14 |
johnsom | #link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html | 20:15 |
johnsom | The way global requirements are handled is changing. You have probably seen the lower constraint gates, but you will see less proposal bot GR updates. | 20:15 |
johnsom | And finally, there are some upstream package changes coming we should be aware of in case they break things. | 20:16 |
johnsom | Pip and PBR are both doing major releases in the next few weeks. | 20:16 |
johnsom | Replacing pbr's autodoc feature with sphinxcontrib-apidoc | 20:17 |
johnsom | #link http://lists.openstack.org/pipermail/openstack-dev/2018-April/128986.html | 20:17 |
rm_work | oh yeah, pip10 is a big one right? | 20:17 |
johnsom | We probably need to investigate that work to update our docs gates. Looking for volunteers there too. | 20:17 |
johnsom | Yeah, the big pip 10 is out in like beta/rc or something. It bit the ansible project since they were pulling from git. It will hit the rest of us in a week or two I think. There have been some mailing list threads on that too. | 20:18 |
johnsom | Even one of the pip 9 dot releases broke the Ryu project and neutron-agents already | 20:19 |
johnsom | So, FYIs. It might get bumpy here soon. | 20:19 |
cgoncalves | can we, if it makes sense at all, add non-voting/experimental pip10 jobs? | 20:19 |
johnsom | Probably, yes. If someone has cycles to do that. I think infra/qa is working on some experimental gates to see what is coming. | 20:20 |
johnsom | Sadly I don't think I can carve off that time right now. But would support others | 20:22 |
johnsom | #topic Octavia deleted status vs. 404 | 20:22 |
*** openstack changes topic to "Octavia deleted status vs. 404 (Meeting topic: Octavia)" | 20:22 | |
cgoncalves | I'm not up to speed on pip10 or have much time this week, but would try next week if not too late | 20:22 |
johnsom | Ok, sure. I think the plan is to land it the week of the 14th. So, you can help with cleanup... grin Maybe we will be fine. I just wanted to give a heads up of places to look in case things break. | 20:23 |
johnsom | I know we don't import pip modules, so we are ahead of the game there! | 20:24 |
johnsom | Ok, 404 carry over from the last few meetings. | 20:24 |
johnsom | #link https://review.openstack.org/#/c/545493/ | 20:24 |
johnsom | This is still open | 20:24 |
johnsom | argh, looks like it has a gate issue. | 20:25 |
johnsom | Any other updates about the libraries or other comments on this? | 20:25 |
johnsom | Though that failure, only 8 minutes in, must be a devstack failure. Our code wouldn't be running yet. | 20:26 |
johnsom | Ok, moving on then. | 20:27 |
johnsom | #topic Open Discussion | 20:27 |
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)" | 20:27 | |
johnsom | Other topics for today? | 20:27 |
nmagnezi | one small topic we started last week | 20:27 |
nmagnezi | #link https://etherpad.openstack.org/p/storyboard-issues | 20:27 |
johnsom | Right! | 20:27 |
nmagnezi | I plan to contact the storyboard team next week (we are on holiday this week) | 20:28 |
johnsom | They just finished their meeting I think | 20:28 |
nmagnezi | so please, if you didn't add your stuff just yet, please do it. | 20:28 |
nmagnezi | I guess I can just ping ppl in the channel :) | 20:28 |
nmagnezi | see how that goes.. | 20:28 |
johnsom | Yeah, or next week when you are back in office plan to join the meeting | 20:29 |
johnsom | I saw they either have or are in the process of moving a few more projects | 20:29 |
*** sapd has quit IRC | 20:30 | |
johnsom | Other topics today? | 20:30 |
rm_work | oh i had a thng | 20:30 |
rm_work | (a thing | 20:30 |
rm_work | *a thing | 20:30 |
rm_work | ) | 20:30 |
*** sapd has joined #openstack-lbaas | 20:30 | |
* rm_work dies | 20:31 | |
johnsom | I feel like I should... | 20:31 |
*** yamamoto_ has quit IRC | 20:31 | |
rm_work | umm yeah, so | 20:31 |
* johnsom revives rm_work with potion of health | 20:31 | |
rm_work | thanks ^_^ | 20:31 |
*** sapd has quit IRC | 20:31 | |
rm_work | pratrik brought up the concept of AZs again | 20:31 |
rm_work | (I wish they'd stick around or read scrollback so we could chat about it) | 20:32 |
*** sapd has joined #openstack-lbaas | 20:32 | |
rm_work | but anyway, it seems more than just GD are doing multi-az stuff in the nova scheduler | 20:32 |
rm_work | and it seems to be done in a compatible way | 20:32 |
xgerman_ | yep, we should add support | 20:33 |
rm_work | so I'm wondering if I cleaned up and split out the work I did around multi-AZ / AZ-anti-affinity, if people think it would be possible to merge | 20:33 |
johnsom | So upstreaming it into nova? | 20:33 |
johnsom | grin | 20:33 |
rm_work | the reason i didn't do this in the past is that enabling it requires custom nova patches | 20:33 |
rm_work | lol | 20:33 |
cgoncalves | GD? | 20:33 |
rm_work | GoDaddy | 20:33 |
rm_work | I would love to get nova to properly support this but i think that may be a losing battle that would stretch over multiple years | 20:33 |
xgerman_ | mmh, can’t we just make it so that you can supply a list of AZs when you create an LB and we put them there | 20:34 |
rm_work | so what i'm talking about is octavia support, assuming people use approximately the same compatible custom nova scheduler stuff | 20:34 |
xgerman_ | mmh, how would we test that in the gate? | 20:34 |
rm_work | that is a good question | 20:34 |
rm_work | i guess that might be the closest to a firm answer, actually | 20:34 |
johnsom | Yeah, and AZs would be octavia driver specific... | 20:35 |
rm_work | until it's gate testable, it'd be hard to run it | 20:35 |
rm_work | yes, that also | 20:35 |
rm_work | it'd be specific to the amphora driver | 20:35 |
xgerman_ | well, as I said we could introduce an AZ parameter when you create an LB | 20:35 |
rm_work | *amphora provider | 20:35 |
johnsom | I'm open to the idea for sure, it make sense in some cases. | 20:35 |
xgerman_ | instead of having one per installation | 20:35 |
rm_work | xgerman_: the sort of thing i do now is to have octavia transparently support multi-az, and handle anti-affinity transparently | 20:36 |
rm_work | the user shouldn't need to know anything about this | 20:36 |
johnsom | I just worry about putting a bunch of code in to become a nova scheduler, that each implementation wants to do a different way (some have cross AZ subnets, some don't, etc.). | 20:36 |
rm_work | right, the other complication is networking | 20:36 |
johnsom | I mean our dream would be nova server groups that are AZ aware (I think) | 20:37 |
rm_work | yes | 20:37 |
rm_work | so, i honestly don't care that much | 20:37 |
xgerman_ | and the network spanning AZs | 20:37 |
xgerman_ | or we cop out and tell them to use kosmos | 20:37 |
rm_work | but we got some interest | 20:37 |
rm_work | and i have the code ready | 20:37 |
rm_work | it just needs to be split out from my monolith patch | 20:37 |
rm_work | this is a preliminary query about whether it's worth my time | 20:38 |
johnsom | Right. So, I guess since you know what code you have, is it small enough, simple enough, and configuration enabled that we could add it, but at the operator's own risk enable it? | 20:38 |
xgerman_ | yep, like “unstable” | 20:39 |
johnsom | I'm not a fan of adding it to the API for users to enter AZs, I think that gets a bit too driver specific / how does a user know the right answer? | 20:39 |
johnsom | But something via flavors or config... | 20:40 |
johnsom | netsplit? | 20:41 |
xgerman_ | approval | 20:42 |
johnsom | It got quiet | 20:42 |
xgerman_ | we need flavors for so many things - someone ought to write the code for it | 20:42 |
johnsom | Ha, trying..... | 20:42 |
rm_work | yeah so | 20:42 |
johnsom | Well, to be fair, there are parts of it in a patch already. I'm just adding the glue | 20:42 |
rm_work | that would essentially be it | 20:43 |
cgoncalves | rm_work: have you probed the nova team about such feature? others may also want that, you never know | 20:43 |
rm_work | just an "experimental feature" that can be enabled by using config a certain way | 20:43 |
rm_work | may or may not be gate-able, other than making sure it doesn't break the normal paths | 20:43 |
* rm_work shrugs | 20:43 | |
rm_work | it would be neat to not have to carry this patch myself | 20:44 |
rm_work | but | 20:44 |
johnsom | Yeah, I think if it's not like 1000's of lines of code, lots-o-warnings, etc. | 20:44 |
rm_work | alright, i'll look at splitting it out | 20:44 |
rm_work | i think i can get that done effectively | 20:44 |
xgerman_ | +1 | 20:44 |
johnsom | Ok, but if we see a bunch of "add nova placement API to AZ scheduler patches" I'm going to be like... Why isn't this just in nova.... | 20:45 |
rm_work | lol | 20:45 |
johnsom | BTW, we can't really use that code from pastebin. That isn't a way that we know the code was licensed for use in OpenStack, so I am assuming we are talking about your code and patch..... | 20:46 |
johnsom | Ok, other topics today? | 20:47 |
johnsom | Ok, thanks folks! Have a good week (vacation) | 20:48 |
johnsom | #endmeeting | 20:48 |
*** openstack changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews" | 20:48 | |
openstack | Meeting ended Wed Apr 4 20:48:54 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 20:48 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-04-04-20.00.html | 20:48 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-04-04-20.00.txt | 20:48 |
openstack | Log: http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-04-04-20.00.log.html | 20:49 |
nmagnezi | o/ | 20:49 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix calls to "DELETED" items https://review.openstack.org/545493 | 20:50 |
rm_work | yep i was referring to my own code | 20:51 |
rm_work | i wouldn't need to touch that | 20:51 |
rm_work | it's a kind of naive implementation of what i already did in a much more robust way | 20:51 |
*** rcernin has joined #openstack-lbaas | 20:59 | |
*** yamamoto has joined #openstack-lbaas | 21:28 | |
*** rcernin has quit IRC | 21:29 | |
*** yamamoto has quit IRC | 21:33 | |
*** voelzmo_ has joined #openstack-lbaas | 21:42 | |
johnsom | rm_work So, https://review.openstack.org/#/c/555471/ | 21:45 |
johnsom | There was even a bug in that patch we later fixed.... | 21:45 |
*** voelzmo has quit IRC | 21:46 | |
*** voelzmo_ has quit IRC | 21:47 | |
*** voelzmo has joined #openstack-lbaas | 21:47 | |
*** voelzmo has quit IRC | 21:52 | |
rm_work | aahh yeah we need that too | 21:59 |
*** AlexeyAbashkin has joined #openstack-lbaas | 21:59 | |
rm_work | johnsom: ok, added it to the chain | 22:02 |
johnsom | But xgerman_ also thought that was a bit much.... | 22:03 |
*** voelzmo has joined #openstack-lbaas | 22:04 | |
*** voelzmo has quit IRC | 22:04 | |
*** AlexeyAbashkin has quit IRC | 22:04 | |
rm_work | he thought the part2 thing is less bug more feature | 22:05 |
rm_work | i disagree | 22:05 |
rm_work | since it still doesn't add any feature ;P | 22:05 |
rm_work | just brings us closer in line to what we claimed we wanted to support and never did | 22:05 |
rm_work | and still don't | 22:05 |
rm_work | lol | 22:05 |
*** voelzmo has joined #openstack-lbaas | 22:05 | |
johnsom | It adds "support for more than one lb per amp" | 22:05 |
rm_work | that's what i'm talking about | 22:06 |
johnsom | well, yes | 22:06 |
rm_work | "part2" | 22:06 |
rm_work | "just brings us closer in line to what we claimed we wanted to support and never did, and still don't support" | 22:06 |
rm_work | so it's hard to claim it really adds a feature | 22:06 |
rm_work | since... it doesn't actually add a feature | 22:06 |
*** voelzmo_ has joined #openstack-lbaas | 22:07 | |
*** voelzmo has quit IRC | 22:07 | |
xgerman_ | that is some double reverse logic | 22:08 |
xgerman_ | since we don’t support this feature, adding this feature doesn’t add a feature | 22:08 |
rm_work | umm | 22:09 |
rm_work | show me how it added the feature | 22:09 |
rm_work | create me an amp with two LBs :P | 22:09 |
rm_work | if you can show me how a feature was added | 22:09 |
rm_work | then i will believe you | 22:09 |
rm_work | IMO *your* logic is what seems like "double-reverse logic" | 22:09 |
rm_work | well, maybe single-reverse | 22:10 |
*** voelzmo_ has quit IRC | 22:11 | |
rm_work | here's how I see it: the amp driver was always supposed to support that, so not supporting it properly at one place in the driver layer was a bug. But, Octavia itself doesn't support that feature, and still doesn't, so there is no change at the project level. | 22:12 |
rm_work | SO, I see no way you could interpret this as violating backport policy | 22:13 |
rm_work | either way you look at it | 22:13 |
*** xgerman_ has quit IRC | 22:15 | |
*** xgerman_ has joined #openstack-lbaas | 22:15 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Defend against neutron error response missing keys https://review.openstack.org/558951 | 22:20 |
rm_work | johnsom: any idea why I might have done this? lol https://review.openstack.org/#/c/435612/117/octavia/amphorae/backends/agent/api_server/keepalived.py | 22:21 |
rm_work | and is that eligible for master? | 22:22 |
rm_work | my current goal is to get as much as possible out of my FLIP patch and into master | 22:24 |
rm_work | I think that is good | 22:24 |
rm_work | i mean, that section. going to propose it | 22:24 |
johnsom | I'm not sure why you did that one. Seems like there might be other stuff with that change. | 22:25 |
rm_work | yes | 22:26 |
rm_work | it looks like it was necessary for making keepalived behave more sanely | 22:27 |
rm_work | like, the states go really wonky by default when you restart stuff | 22:27 |
rm_work | this makes it more predictable, which was required by my alerting stuff (so it didn't alert on invalid state changes) | 22:27 |
rm_work | I think it technically is working upstream but only because we are lucky that keepalived is very resilient by nature :P | 22:28 |
rm_work | https://review.openstack.org/#/c/435612/35..36 | 22:28 |
rm_work | it was introduced there | 22:28 |
rm_work | along with the rest of the alerting stuff | 22:28 |
*** yamamoto has joined #openstack-lbaas | 22:30 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Make keepalived initialization more predictable https://review.openstack.org/558954 | 22:32 |
*** yamamoto has quit IRC | 22:36 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Minor refactor of health_sender.py https://review.openstack.org/558955 | 22:39 |
*** voelzmo has joined #openstack-lbaas | 22:41 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s) https://review.openstack.org/435612 | 22:42 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Add debug timing logging to amphora health update https://review.openstack.org/558956 | 22:45 |
*** rcernin has joined #openstack-lbaas | 22:45 | |
*** fnaval has quit IRC | 22:58 | |
*** voelzmo has quit IRC | 23:14 | |
*** jniesz has quit IRC | 23:18 | |
*** yamamoto has joined #openstack-lbaas | 23:32 | |
*** fnaval has joined #openstack-lbaas | 23:33 | |
*** yamamoto has quit IRC | 23:38 | |
rm_work | omg i just realized, I think the multi-az code I have evolved to the point where like ... a one or two line change and it won't require anything | 23:44 |
rm_work | on the nova side <_< | 23:44 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: Experimental multi-az support https://review.openstack.org/558962 | 23:44 |
rm_work | pratrik: ^^ | 23:45 |
rm_work | pratrik: check out that patch and let me know how that works for you | 23:45 |
rm_work | pratrik: should just be ... apply that, and set the config just like you did (comma-separated list of AZs) | 23:45 |
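The option name and group below are purely hypothetical (the real ones live in the WIP patch linked above); this only sketches how a comma-separated AZ list typically looks as an oslo.config ListOpt plus a naive spread across the configured zones:

```python
from oslo_config import cfg

# Hypothetical option; the WIP patch defines the real name and location.
nova_opts = [
    cfg.ListOpt('availability_zones',
                default=[],
                help='Candidate availability zones to spread amphorae across '
                     '(comma-separated in octavia.conf, e.g. "az1,az2,az3").'),
]

CONF = cfg.CONF
CONF.register_opts(nova_opts, group='nova')


def pick_zone(used_zones):
    """Naive AZ anti-affinity: prefer a zone not already used by this LB."""
    for zone in CONF.nova.availability_zones:
        if zone not in used_zones:
            return zone
    # Fall back to the first configured AZ, or None to let nova decide.
    zones = CONF.nova.availability_zones
    return zones[0] if zones else None
```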
johnsom | Ugh, looks like a new nodepool provider, limestone, but they aren't working well. Gate timed out, didn't even start devstack | 23:51 |
johnsom | So, the stable release is stuck at least another recheck cycle | 23:51 |
rm_work | ah is it them | 23:52 |
rm_work | i wondered wtf is wrong with the gates the past day or two | 23:52 |
rm_work | a ton of rechecks for timeouts but didn't look too deep | 23:52 |
johnsom | At least here it was... http://logs.openstack.org/92/558892/1/gate/octavia-v1-dsvm-py3x-scenario-multinode/ca6c90a/ | 23:54 |