*** Deific714 has joined #openstack-lbaas | 00:08 | |
*** Deific714 has quit IRC | 00:10 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Switch to ubuntu-minimal for default amphora image https://review.openstack.org/559416 | 00:47 |
openstackgerrit | Adam Harwell proposed openstack/neutron-lbaas master: WIP: Test l7 proxy to octavia https://review.openstack.org/561049 | 00:49 |
*** harlowja has quit IRC | 00:52 | |
*** ianychoi_ is now known as ianychoi | 01:01 | |
openstackgerrit | Adam Harwell proposed openstack/neutron-lbaas master: WIP: Test l7 proxy to octavia https://review.openstack.org/561049 | 01:28 |
johnsom | Ha, bionic passed the gate | 01:41 |
xgerman_ | Proxy is down to 12 fails… | 02:59 |
openstackgerrit | Merged openstack/neutron-lbaas master: Fix pep8 errors https://review.openstack.org/560691 | 03:00 |
*** imacdonn has quit IRC | 03:09 | |
*** imacdonn has joined #openstack-lbaas | 03:09 | |
*** annp has joined #openstack-lbaas | 03:21 | |
*** benj_UW23UQ has joined #openstack-lbaas | 03:23 | |
*** benj_UW23UQ has quit IRC | 03:25 | |
*** H48XK3b00kworm has joined #openstack-lbaas | 03:36 | |
*** H48XK3b00kworm has quit IRC | 03:38 | |
*** sanfern has joined #openstack-lbaas | 03:46 | |
*** pppktzMPTGBW has joined #openstack-lbaas | 03:51 | |
*** pppktzMPTGBW has quit IRC | 03:53 | |
*** sanfern has quit IRC | 04:02 | |
*** bbzhao has quit IRC | 04:09 | |
*** bbzhao has joined #openstack-lbaas | 04:09 | |
*** sapd has joined #openstack-lbaas | 04:11 | |
*** zuollien has joined #openstack-lbaas | 04:11 | |
*** zuollien has quit IRC | 04:12 | |
*** harlowja has joined #openstack-lbaas | 04:44 | |
*** sapd has quit IRC | 04:44 | |
*** sapd has joined #openstack-lbaas | 04:47 | |
*** links has joined #openstack-lbaas | 05:12 | |
*** mordred has quit IRC | 05:29 | |
*** harlowja has quit IRC | 05:39 | |
*** mordred has joined #openstack-lbaas | 05:42 | |
*** sapd has quit IRC | 05:45 | |
*** dayou has quit IRC | 06:08 | |
*** dayou has joined #openstack-lbaas | 06:23 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 06:37 | |
*** slaweq has joined #openstack-lbaas | 06:54 | |
*** AlexeyAbashkin has quit IRC | 06:58 | |
*** tesseract has joined #openstack-lbaas | 07:17 | |
*** rcernin has quit IRC | 07:33 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 07:41 | |
*** AlexeyAbashkin has quit IRC | 07:46 | |
*** dulek has joined #openstack-lbaas | 08:10 | |
dulek | Hey people! We've started seeing issues with Octavia on Kuryr gates. | 08:11 |
dulek | Basically, in some runs the amphorae don't answer: http://logs.openstack.org/51/560951/1/check/kuryr-kubernetes-tempest-octavia/4aa8cb2/controller/logs/screen-o-cw.txt.gz#_Apr_12_19_45_37_121513 | 08:11 |
dulek | Any ideas what might be the cause? I see your gates are healthy, so I'm pretty surprised. | 08:12 |
dmellado | hey dulek | 08:14 |
dmellado | so, calling cgoncalves_ | 08:14 |
dmellado | one two three | 08:14 |
dmellado | also if you say bcafarel three times it might help | 08:14 |
dmellado | bcafarel: bcafarel bcafarel | 08:14 |
dulek | :) | 08:14 |
dmellado | it's not beetlejuice but anyways | 08:14 |
openstackgerrit | Alberto Planas proposed openstack/octavia master: Update osutil support for SUSE distro https://review.openstack.org/541811 | 08:16 |
bcafarel | dmellado: nah you won't succeed planting the beetlejuice song in my head :p | 08:17 |
bcafarel | looks like the first amphora replies in the end? http://logs.openstack.org/51/560951/1/check/kuryr-kubernetes-tempest-octavia/4aa8cb2/controller/logs/screen-o-cw.txt.gz#_Apr_12_19_52_51_681103 | 08:17 |
bcafarel | oh ok failure on config upload later on | 08:19 |
bcafarel | may need confirmation from others here, but if you reach config upload stage, the amphora itself is up and replying | 08:20 |
bcafarel | but then the agent in the amphora replies 500 | 08:21 |
dulek | bcafarel: Any way to see its logs? | 08:23 |
bcafarel | dulek: not at the moment IIRC :/ gathering amphora logs is still in the todo list | 08:25 |
bcafarel | dulek: http://logs.openstack.org/51/560951/1/check/kuryr-kubernetes-tempest-octavia/4aa8cb2/controller/logs/screen-o-hm.txt.gz#_Apr_12_19_54_59_659855 | 08:25 |
bcafarel | 25s for health check, maybe the gate is missing nested kvm or something like that? | 08:26 |
dulek | bcafarel: You think it's infra VM being slow? | 08:26 |
dulek | bcafarel: I don't think it's missing that… Hm. | 08:27 |
dulek | bcafarel: If it was infra's fault you should observe that in your gate as well. | 08:27 |
dulek | Oh, but you might have longer timeouts than us. | 08:27 |
bcafarel | I admit I never checked the HM logs in gates :/ so not sure if replies that long happen often or not | 08:28 |
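For readers hitting the same wall: below is a rough sketch of the kind of probe the controller worker is doing while those "timeout" lines pile up in screen-o-cw.txt. The agent port (9443), the /0.5/info path, and the certificate locations are assumptions taken from a default devstack Octavia deployment of this era; adjust them to your environment.

```python
# Hypothetical probe mirroring the controller worker's retry loop.
# Port, path, and cert locations are assumptions (devstack defaults).
import time

import requests

AMP_IP = "192.168.0.10"  # lb-mgmt-net address of the amphora (example)
URL = "https://%s:9443/0.5/info" % AMP_IP
CLIENT_CERT = "/etc/octavia/certs/client.cert-and-key.pem"
SERVER_CA = "/etc/octavia/certs/server_ca.cert.pem"

deadline = time.time() + 15 * 60  # dulek's 15-minute connect timeout
while time.time() < deadline:
    try:
        resp = requests.get(URL, cert=CLIENT_CERT, verify=SERVER_CA,
                            timeout=5)
        print("agent up: %s %s" % (resp.status_code, resp.text))
        break
    except requests.exceptions.RequestException as exc:
        # Equivalent to the foreboding "timeout" lines in the o-cw log.
        print("no answer yet: %s" % exc)
        time.sleep(5)
```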
*** yamamoto has quit IRC | 08:33 | |
openstackgerrit | Merged openstack/neutron-lbaas master: Cap haproxy log level severity https://review.openstack.org/557647 | 08:33 |
*** bbzhao has quit IRC | 08:44 | |
*** bbzhao has joined #openstack-lbaas | 08:44 | |
*** yamamoto has joined #openstack-lbaas | 08:44 | |
*** yamamoto has quit IRC | 08:51 | |
openstackgerrit | Wei Li proposed openstack/octavia master: No need create vrrp port in TOPOLOGY_SINGLE https://review.openstack.org/559981 | 08:57 |
openstackgerrit | Wei Li proposed openstack/octavia master: No need create vrrp port in TOPOLOGY_SINGLE https://review.openstack.org/559981 | 09:08 |
*** pcaruana has joined #openstack-lbaas | 09:12 | |
*** salmankhan has joined #openstack-lbaas | 09:19 | |
*** sanfern has joined #openstack-lbaas | 09:22 | |
*** sanfern has quit IRC | 09:22 | |
*** fnaval_ has quit IRC | 09:32 | |
*** celebdor1 has joined #openstack-lbaas | 09:49 | |
*** celebdor1 is now known as apuimedo | 09:50 | |
*** salmankhan has quit IRC | 09:57 | |
*** salmankhan has joined #openstack-lbaas | 10:09 | |
*** annp has quit IRC | 10:29 | |
*** salmankhan has quit IRC | 10:31 | |
*** salmankhan has joined #openstack-lbaas | 10:31 | |
*** irenab has quit IRC | 10:42 | |
*** oanson has quit IRC | 10:44 | |
rm_work | dulek: our timeouts are very long, it can take up to like 6-8 minutes to boot one amphora in the gates without nested kvm | 11:30 |
rm_work | you will see a TON of the "timeout" messages, and they look very foreboding, but it usually eventually connects | 11:31 |
rm_work | but yeah, bcafarel's assessment seems correct... it got to creating the listener and that exploded. | 11:32 |
rm_work | do you use a custom amphora image at all, or just the same one devstack makes? | 11:32 |
*** atoth has joined #openstack-lbaas | 11:51 | |
dulek | rm_work: Currently it's the DevStack-made one. | 12:03 |
dulek | rm_work: We're planning to use the one from tarballs, but that commit isn't merged yet. | 12:04 |
dulek | One more data point - the issue is transient. | 12:04 |
cgoncalves | dulek, you mean http://tarballs.openstack.org/octavia/test-images/ ? | 12:07 |
dulek | cgoncalves: I think so. | 12:08 |
dulek | rm_work: Our timeout is 15 minutes. | 12:08 |
cgoncalves | patch was merged already; images being uploaded to tarballs.o.o | 12:09 |
dulek | cgoncalves: I mean Kuryr gate patch. | 12:10 |
dulek | cgoncalves: This one: https://review.openstack.org/#/c/560313/ | 12:11 |
rm_work | yeah it's not the timeout | 12:14 |
rm_work | not sure why but my guess is something is happening in your cloud such that for the first few seconds that the amphora VM is booted, it's a little unstable (like, maybe a cloud-init latency thing, or networking peculiarities) | 12:15 |
rm_work | and so it does finally respond, but then it is still processing stuff in the background or something, so when the agent goes to set up the config (and probably the netns) it breaks | 12:16 |
rm_work | really wish our 500s were more useful | 12:16 |
cgoncalves | dulek, oh, ok. I lost the backlog so I got the conversation halfway through | 12:18 |
*** sanfern has joined #openstack-lbaas | 12:49 | |
apuimedo | dulek: did you show cgoncalves the log from the gate? | 12:54 |
dulek | apuimedo: Not from your run, but the others. | 12:54 |
*** dayou has quit IRC | 12:55 | |
*** KeithMnemonic has joined #openstack-lbaas | 13:02 | |
cgoncalves | apuimedo, dulek: I lost backlog older than 1h30. what I see from https://review.openstack.org/#/c/560313/ is a TypeError: delete_namespaced_service() takes exactly 4 arguments (3 given) | 13:04 |
cgoncalves | http://logs.openstack.org/13/560313/6/experimental/kuryr-kubernetes-tempest-octavia-centos-7/5121aa3/job-output.txt.gz#_2018-04-12_11_37_12_539059 | 13:04 |
dulek | cgoncalves: Sorry for confusing you. We're currently investigating https://review.openstack.org/560433 . | 13:08 |
dulek | cgoncalves: But I have a feeling it's apuimedo's fault on that patch, not Octavia's. | 13:08 |
apuimedo | dulek: :'( | 13:08 |
dulek | cgoncalves: And the issue you've listed is fixed now in DevStack. It's always like this: we get 2 or 3 gate breakages at once. :D | 13:09 |
dulek | apuimedo: Hey, haven't I convinced you on #openstack-kuryr? | 13:09 |
cgoncalves | haha | 13:09 |
apuimedo | no | 13:19 |
apuimedo | cgoncalves: when does an LB go operating_status OFFLINE with an ACTIVE provisioning status | 13:19 |
apuimedo | ? | 13:19 |
cgoncalves | apuimedo, heartbeats not received | 13:20 |
apuimedo | I see nothing on the health manager log | 13:23 |
apuimedo | (deployed by tripleo) | 13:23 |
cgoncalves | apuimedo, glad you ask :) | 13:25 |
cgoncalves | https://review.openstack.org/#/c/557483/ | 13:25 |
cgoncalves | https://review.openstack.org/#/c/557731/ | 13:25 |
cgoncalves | backported to and merged in queens already | 13:25 |
cgoncalves | apuimedo, rhbz downstream https://bugzilla.redhat.com/show_bug.cgi?id=1506644 | 13:26 |
openstack | bugzilla.redhat.com bug 1506644 in openstack-tripleo-common "Add support for configuring 'in-overcloud' resources for octavia through workflows and ansible" [High,Post] - Assigned to cgoncalves | 13:26 |
apuimedo | cgoncalves: so... without this fix... Does anything ever go into good operation? | 13:27 |
cgoncalves | apuimedo, IIRC amphorae still serve traffic | 13:28 |
*** fnaval has joined #openstack-lbaas | 13:30 | |
*** samccann has joined #openstack-lbaas | 13:30 | |
apuimedo | cgoncalves: so what does it affect? Only reporting? | 13:31 |
*** tzumainn has joined #openstack-lbaas | 13:33 | |
cgoncalves | apuimedo, I'd expect octavia to trigger failover. 1-2 people have told me it was not; I haven't confirmed | 13:33 |
*** fnaval has quit IRC | 13:34 | |
cgoncalves | rm_work, shouldn't house keeping trigger failover on LBs in operating_status=OFFLINE? | 13:34 |
apuimedo | cgoncalves: what does triggering failover mean in a non-ha environment, respawning the amphora? | 13:34 |
cgoncalves | apuimedo, yes. create new vm, configure amphora, delete failed amphora | 13:35 |
apuimedo | cgoncalves: thanks | 13:36 |
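In outline, the failover sequence cgoncalves describes looks like the sketch below. The helper names are hypothetical stand-ins, not Octavia's actual controller-worker tasks.

```python
# Illustrative outline of single-topology amphora failover as described
# above: boot a replacement, configure it, then delete the failed VM.
def failover_amphora(failed_amp, compute, amp_driver):
    new_vm = compute.boot_amphora(image="amphora-x64-haproxy")
    amp_driver.wait_for_agent(new_vm)            # agent answers on lb-mgmt-net
    amp_driver.plug_vip(new_vm, failed_amp.vip)  # take over the VIP
    amp_driver.upload_config(new_vm)             # re-push haproxy/keepalived config
    compute.delete(failed_amp.compute_id)        # remove the failed VM last
```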
*** dayou has joined #openstack-lbaas | 13:38 | |
*** velizarx has joined #openstack-lbaas | 14:29 | |
*** fnaval has joined #openstack-lbaas | 14:42 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 14:45 | |
*** velizarx has quit IRC | 14:46 | |
*** velizarx has joined #openstack-lbaas | 14:49 | |
*** AlexeyAbashkin has quit IRC | 14:49 | |
*** velizarx has quit IRC | 14:49 | |
*** openstackgerrit has quit IRC | 14:50 | |
*** velizarx has joined #openstack-lbaas | 14:56 | |
*** links has quit IRC | 14:58 | |
*** velizarx has quit IRC | 14:59 | |
*** velizarx has joined #openstack-lbaas | 15:01 | |
*** velizarx has quit IRC | 15:06 | |
*** pcaruana has quit IRC | 15:12 | |
*** slaweq has quit IRC | 15:12 | |
*** gokhan_ has quit IRC | 15:13 | |
*** slaweq has joined #openstack-lbaas | 15:19 | |
*** openstackgerrit has joined #openstack-lbaas | 15:23 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Switch to ubuntu-minimal for default amphora image https://review.openstack.org/559416 | 15:23 |
*** dayou has quit IRC | 15:24 | |
*** slaweq has quit IRC | 15:30 | |
*** dlundquist has joined #openstack-lbaas | 15:31 | |
apuimedo | yay! | 15:32 |
apuimedo | minimal! | 15:32 |
johnsom | Yeah, working on the image a bit. Saved 100MB already | 15:32 |
apuimedo | great | 15:32 |
johnsom | Plus have a bionic image and gate in the works. | 15:32 |
apuimedo | I tried it a while ago with centos minimal and ubuntu minimal | 15:33 |
apuimedo | and shaved a lot | 15:33 |
apuimedo | I didn't have time to clean up and submit though :( | 15:33 |
johnsom | Yeah, I tried minimal a year or two ago and it was super broken, but Ian has been getting things in shape. | 15:33 |
*** qwebirc75161 has joined #openstack-lbaas | 15:34 | |
cgoncalves | want it really minimal? alpine is the answer :) | 15:37 |
johnsom | cgoncalves Great, make it happen! Thanks for volunteering. Grin | 15:37 |
johnsom | Wait, did you guys buy that too? | 15:38 |
cgoncalves | lol | 15:38 |
cgoncalves | we can't afford it atm. we're saving for next RH party | 15:40 |
johnsom | Nice | 15:40 |
johnsom | Where are our invites? | 15:40 |
xgerman_ | yep, by now I think we are on the black list for RH parties | 15:42 |
xgerman_ | it looks like they go out of their way not to invite us | 15:42 |
apuimedo | cgoncalves: I had the kuryr container based in minimal | 15:43 |
cgoncalves | fwiw i know nothing this time 'xD | 15:43 |
apuimedo | but I'd tell you this. if you want really minimal | 15:43 |
apuimedo | you should do like I do for the kuryr testing container | 15:43 |
apuimedo | busybox + static built binaries for extra tools | 15:43 |
apuimedo | xD | 15:43 |
apuimedo | I think we're at 4MiB | 15:44 |
apuimedo | or something | 15:44 |
apuimedo | https://hub.docker.com/r/kuryr/demo/tags/ | 15:44 |
apuimedo | 7 | 15:44 |
*** qwebirc75161 has quit IRC | 15:44 | |
apuimedo | the curl static build was a PITA to get right | 15:44 |
johnsom | Yeah, it would be nice, but it's a bunch of work | 15:46 |
apuimedo | johnsom: just for reference apart from python and haproxy, what does it use? | 15:51 |
johnsom | keepalived | 15:51 |
apuimedo | of course | 15:53 |
johnsom | blah, bionic seems broken this morning. | 15:55 |
johnsom | The fun of testing with pre-RC bits | 15:56 |
*** salmankhan has quit IRC | 15:59 | |
cgoncalves | johnsom, shouldn't the health manager trigger failover of amps with operating_status=offline? | 15:59 |
cgoncalves | and provisioning_status=ACTIVE | 15:59 |
johnsom | No, operating status is the observed status. This can be because all of the backend servers are down, or the LB is not yet fully configured. | 16:00 |
johnsom | It's not a fault of the LB. | 16:00 |
johnsom | It could also be that we never received a health heartbeat from the amp. I.e. bad network setup | 16:01 |
*** slaweq has joined #openstack-lbaas | 16:01 | |
cgoncalves | well, what if amp is created with no members *and* health manager does not receive heartbeats? | 16:01 |
cgoncalves | right | 16:01 |
*** salmankhan has joined #openstack-lbaas | 16:01 | |
johnsom | So, in summary, operating_status offline is not necessarily a failure of the LB | 16:01 |
cgoncalves | ok, so from the moment it received the first heartbeat if it doesn't receive again it should failover? | 16:02 |
johnsom | Yes | 16:02 |
cgoncalves | ok, good | 16:02 |
cgoncalves | that's what we're observing now in tripleo envs. there were 2 misconfigurations: 1) controller_ip_list and 2) firewall :5555 | 16:03 |
johnsom | There is a bug open to enable failover of an amp that never sends its initial heartbeat, but that is tricky, and if it never sends one, failover probably isn't going to fix it. | 16:03 |
cgoncalves | patches submitted and merged | 16:03 |
johnsom | Ok, cool | 16:03 |
cgoncalves | yeah, in that scenario it would enter a failover loop | 16:04 |
johnsom | Right, which is good or bad... It opens the non-linear back off can of worms | 16:04 |
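Put together, johnsom's rules amount to something like the check below: operating_status OFFLINE alone proves nothing, but a heartbeat that stops after at least one was received should trigger failover. The names and the 60s threshold are illustrative, not Octavia's actual configuration.

```python
# Sketch of the heartbeat-staleness rule discussed above.
import time

HEARTBEAT_TIMEOUT = 60  # seconds without a heartbeat before failover

def check_amphorae(amphorae, trigger_failover):
    now = time.time()
    for amp in amphorae:
        if amp.last_heartbeat is None:
            # Never reported (e.g. firewall drops UDP 5555): failing
            # over here could loop forever, hence the back-off can of
            # worms mentioned above.
            continue
        if now - amp.last_heartbeat > HEARTBEAT_TIMEOUT:
            trigger_failover(amp)
```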
*** dayou has joined #openstack-lbaas | 16:07 | |
*** pcaruana has joined #openstack-lbaas | 16:15 | |
*** apuimedo has quit IRC | 16:24 | |
*** links has joined #openstack-lbaas | 16:53 | |
*** bbzhao has quit IRC | 17:02 | |
*** bbzhao has joined #openstack-lbaas | 17:02 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Switch to ubuntu-minimal for default amphora image https://review.openstack.org/559416 | 17:07 |
*** slaweq has quit IRC | 17:19 | |
*** atoth has quit IRC | 17:30 | |
*** atoth has joined #openstack-lbaas | 17:32 | |
*** tesseract has quit IRC | 17:53 | |
*** links has quit IRC | 18:02 | |
*** salmankhan has quit IRC | 18:08 | |
*** atoth has quit IRC | 18:11 | |
mnaser | hi everyone | 18:12 |
mnaser | so i'm deploying octavia for a private cloud customer right now | 18:12 |
xgerman_ | yep | 18:13 |
johnsom | Nice | 18:13 |
mnaser | is there a fairly friendly way of plugging the octavia control plane to a neutron network | 18:13 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: DNM: Gate test https://review.openstack.org/561287 | 18:13 |
mnaser | the only network routable between their control plane and vms is 'public' but i'd rather avoid public ips if i could | 18:13 |
mnaser | i remember someone had a trick of manually creating a port in openvswitch or something | 18:13 |
mnaser | like creating a vxlan network and somehow getting the control plane to get an ip out of it | 18:14 |
johnsom | Well, we use the OVS trick in devstack: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L334 | 18:14 |
johnsom | But, you need to think about your HA strategy so you don't have a single pop out of neutron | 18:15 |
*** links has joined #openstack-lbaas | 18:15 | |
mnaser | single pop out of neutron? | 18:16 |
johnsom | Yeah, if you bridge out of OVS onto a control plane network | 18:17 |
johnsom | If you just create an interface on all of you control plane hosts, that is fine. It just means you have neutron on each of your octavia controllers | 18:18 |
mnaser | all control plane servers run neutron on them anyways | 18:18 |
johnsom | Well, then yeah, the trick we use for devstack will work for you. | 18:18 |
mnaser | so ill have an octavia network and 3 interfaces each one on a controller | 18:19 |
johnsom | Yeah, just create the lb-mgmt-net in neutron, exclude addresses for the controllers, then create the interface via ovs, bind it in the namespace for octavia or make sure the network you pick doesn't conflict with other controller stuff. | 18:21 |
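A sketch of the neutron half of that recipe using openstacksdk (the OVS interface half is what devstack's plugin.sh does around the line johnsom linked): create the lb-mgmt-net and carve the allocation pool so the low addresses stay free for the controllers' OVS-plugged interfaces. The CIDR and pool boundaries are examples only.

```python
# Neutron side of the lb-mgmt-net setup, sketched with openstacksdk.
import openstack

conn = openstack.connect(cloud="envvars")

net = conn.network.create_network(name="lb-mgmt-net")
conn.network.create_subnet(
    network_id=net.id,
    name="lb-mgmt-subnet",
    ip_version=4,
    cidr="172.16.0.0/24",
    # Addresses below .100 stay out of the pool so the three
    # controllers' interfaces can be assigned statically without
    # colliding with amphora ports.
    allocation_pools=[{"start": "172.16.0.100", "end": "172.16.0.254"}],
)
```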
*** links has quit IRC | 18:31 | |
*** apuimedo has joined #openstack-lbaas | 18:32 | |
mnaser | johnsom: yup thats the plan, okay cool | 18:35 |
mnaser | i'll keep you updated on how that rolls out | 18:35 |
mnaser | pretty excited but a little bit nervous about this being in the hands of the customers, they'll be giving it quite the workout | 18:35 |
mnaser | they dynamically reconfigure their lb's all the time, use SNI, etc | 18:35 |
*** numans has quit IRC | 18:50 | |
*** sanfern has quit IRC | 18:50 | |
*** numans has joined #openstack-lbaas | 18:52 | |
*** apuimedo has quit IRC | 18:57 | |
*** harlowja has joined #openstack-lbaas | 19:10 | |
*** numans has quit IRC | 19:25 | |
*** numans has joined #openstack-lbaas | 19:28 | |
*** slaweq has joined #openstack-lbaas | 19:43 | |
*** gokhan has joined #openstack-lbaas | 20:37 | |
johnsom | rm_work around? | 20:54 |
*** samccann has quit IRC | 21:09 | |
*** tzumainn has quit IRC | 21:15 | |
rm_work | johnsom: i am now | 21:19 |
xgerman_ | and I was about to alias myself | 21:19 |
johnsom | rm_work on your patch: https://review.openstack.org/#/c/549263/5/octavia/network/drivers/neutron/allowed_address_pairs.py | 21:19 |
johnsom | below that, line 297, I think that filter should go | 21:20 |
johnsom | thoughts? | 21:20 |
rm_work | cgoncalves / johnsom: my theory on the "initial heartbeat" thing, was that the second we first get an active connection to the amp for vip-plug (meaning the agent is up) we put in a fake heartbeat for like, 5m in the future (or something) and then if it doesn't get overwritten by real heartbeats starting, it'll failover after that | 21:22 |
rm_work | we're not in a hurry on new LBs IMO since they won't have any existing traffic | 21:23 |
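rm_work's proposal, as a sketch: when the agent first answers during vip-plug, record a synthetic heartbeat dated a few minutes into the future. Real heartbeats overwrite it; if none ever arrive, the normal staleness check (sketched earlier) fails the amphora over once the grace period plus the heartbeat timeout have elapsed. The repository helpers here are hypothetical.

```python
# Seeding a placeholder heartbeat on first agent contact (hypothetical).
import time

GRACE = 5 * 60  # "like, 5m in the future (or something)"

def on_agent_first_contact(amp, health_repo):
    if health_repo.get_last_heartbeat(amp.id) is None:
        # Seed only if no real heartbeat has been recorded yet.
        health_repo.set_last_heartbeat(amp.id, time.time() + GRACE)
```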
rm_work | johnsom: wait, what? | 21:23 |
rm_work | 197 | 21:23 |
rm_work | *297? | 21:23 |
rm_work | that's in a totally different function than i'm touching? | 21:24 |
johnsom | Right | 21:24 |
johnsom | This is in addition to your change | 21:24 |
rm_work | oh | 21:24 |
rm_work | err | 21:24 |
rm_work | so what should go there? | 21:24 |
xgerman_ | I still think we should explore more root causes before bringing out the hammer | 21:24 |
xgerman_ | (deleting all ports on the sec-grp) | 21:25 |
johnsom | Why can't I do a "sudo systemctl restart devstack@o-hm"? The old listener process hangs around and the new one does: error: [Errno 98] Address already in use | 21:25 |
johnsom | rm_work I think we should take out the filter for "allocated" amps. I think that is too narrow a scope. Maybe it should try all of them. | 21:26 |
cgoncalves | rm_work, yeah although there's at least one corner case: misconfiguration of controller ip list and/or firewall dropping heartbeats. with what you said, it would failover over and over again | 21:26 |
xgerman_ | johnsom: +1 | 21:26 |
rm_work | johnsom: ah so | 21:27 |
xgerman_ | but we should put this in an extra patch and test independently - my 2ct | 21:29 |
rm_work | johnsom: yeah i do kinda agree, i think | 21:30 |
rm_work | just ... | 21:30 |
rm_work | all amphora | 21:31 |
rm_work | we only call deallocate VIP at the end of a delete for a LB, right? | 21:31 |
xgerman_ | as part of it: we unplug_vip, deallocate VIP, and then remove the vm | 21:32 |
xgerman_ | unplug_vip and deallocate_vip share some code | 21:32 |
rm_work | but it's 100% only on LB deletion | 21:35 |
xgerman_ | yep | 21:35 |
rm_work | then yeah, we should just do it | 21:35 |
rm_work | it reminds me of https://review.openstack.org/#/c/435612/122/octavia/controller/worker/tasks/compute_tasks.py a little bit | 21:36 |
xgerman_ | yeah, we need a couple of fail-safes | 21:36 |
johnsom | Hmm, something is fishy here too. Stats isn't getting called for me | 21:37 |
rm_work | in which patch? | 21:38 |
*** slaweq has quit IRC | 21:38 | |
rm_work | cgoncalves: yep :/ | 21:38 |
johnsom | master | 21:38 |
rm_work | johnsom: well yeah but like... where isn't stats getting called? | 21:39 |
rm_work | a tempest test you're working on? | 21:39 |
*** slaweq has joined #openstack-lbaas | 21:39 | |
johnsom | No, I just have an LB created. It never updates the stats. Tracing it back now. | 21:39 |
*** vkceggzw has joined #openstack-lbaas | 21:39 | |
johnsom | Yeah, ok stats driver doesn't load and silently fails. | 21:40 |
rm_work | <_< | 21:42 |
rm_work | >_> | 21:42 |
rm_work | <_< | 21:42 |
johnsom | ImportError: No module named octavia_controller.healthmanager.health_drivers.update_db | 21:42 |
rm_work | ugh | 21:42 |
rm_work | missed a thing in a refactor | 21:42 |
rm_work | you got a fix or should I | 21:42 |
johnsom | I will push up a fix | 21:43 |
rm_work | k | 21:43 |
*** slaweq has quit IRC | 21:44 | |
*** slaweq has joined #openstack-lbaas | 21:44 | |
*** vkceggzw has quit IRC | 21:46 | |
*** numans has quit IRC | 21:48 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix statistics update typo https://review.openstack.org/561360 | 21:48 |
rm_work | when did that even happen | 21:48 |
johnsom | When you made the updates drivers | 21:49 |
johnsom | So new to rocky | 21:49 |
rm_work | so a while ago | 21:49 |
rm_work | ok that's good at least | 21:49 |
rm_work | not in queens | 21:49 |
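For context, the failure mode was a mistyped driver path that made the import fail and the stats driver silently never load. A loader that surfaces the error instead might look like the sketch below; it is illustrative only, not Octavia's actual driver-loading code.

```python
# Defensive driver loading: log the import failure instead of
# swallowing it silently.
import importlib
import logging

LOG = logging.getLogger(__name__)

def load_driver(path):
    """Instantiate a driver from a 'package.module.ClassName' path."""
    module_name, _, class_name = path.rpartition(".")
    try:
        module = importlib.import_module(module_name)
        return getattr(module, class_name)()
    except (ImportError, AttributeError):
        LOG.exception("Failed to load driver %s; stats will not update",
                      path)
        raise
```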
*** numans has joined #openstack-lbaas | 21:49 | |
johnsom | back to that filter, do you want to make that change? | 21:51 |
rm_work | yeah i can do that | 21:51 |
xgerman_ | can we have it in a separate patch? | 21:52 |
johnsom | I am warming to whack-a-mole | 21:52 |
rm_work | i think it's related | 21:52 |
xgerman_ | since one might cause the other | 21:52 |
rm_work | yeah basically IMO there are some places we can be more careful, maybe, but no matter how careful we are we will get screwed by nova/neutron being shitty sometimes | 21:52 |
rm_work | so we may as well just have our hammer out | 21:52 |
rm_work | xgerman_: i can do it as a second patch if you really have a problem with it being the same | 21:54 |
xgerman_ | I wouldn’t say problems — it’s just a preference to make backporting easier | 21:54 |
rm_work | but IMO it should be easier to just roll it in | 21:54 |
rm_work | even for backporting :/ | 21:54 |
rm_work | the change is related, it's all delete-flow | 21:54 |
johnsom | Yeah, I think it is related and I'm inclined to go ahead with the port delete patch | 21:55 |
xgerman_ | ok, I will review it with a fine comb so we have the appropriate logging | 21:55 |
rm_work | yeah honestly now that you point it out johnsom, i am not sure why we ever filtered that | 21:57 |
johnsom | My bad actually | 21:57 |
johnsom | I think I was trying to save time in the failover flow by not making calls for deleted stuff. Plus the AAP driver has been dumb in the past and blown up on deleting things that were already deleted. | 21:58 |
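The "hammer" being agreed on, sketched with openstacksdk: delete every port that is a member of the LB's security group, tolerating ports that are already gone so a partially-failed earlier delete cannot blow up the flow. The openstacksdk calls are real; the function itself is illustrative, not the actual patch.

```python
# Delete all ports owned by the LB's security group, ignoring 404s.
import openstack

conn = openstack.connect(cloud="envvars")

def delete_sec_grp_ports(sec_grp_id):
    for port in conn.network.ports():
        if sec_grp_id in (port.security_group_ids or []):
            # ignore_missing=True swallows the 404 if neutron raced us.
            conn.network.delete_port(port, ignore_missing=True)
```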
rm_work | anywho, running tests | 22:01 |
rm_work | johnsom: oh, did you see my question about the issue in my gate test | 22:01 |
rm_work | it isn't finding the method from the plugin.sh | 22:01 |
johnsom | No | 22:01 |
rm_work | are we not allowed to use those there? | 22:01 |
rm_work | ah it was in PM | 22:01 |
*** slaweq has quit IRC | 22:10 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: When SG delete fails on vip deallocate, try harder https://review.openstack.org/549263 | 22:14 |
rm_work | there you go | 22:14 |
johnsom | Thanks | 22:15 |
rm_work | also added a clarifying comment to the other bit | 22:16 |
rm_work | because no matter how many times i explain, people keep thinking it's deleting ALL the ports, but no, it really is JUST the ones we own, I swear | 22:17 |
openstackgerrit | Michael Johnson proposed openstack/neutron-lbaas master: Fix the double zuul project definition https://review.openstack.org/561026 | 22:19 |
*** fnaval has quit IRC | 22:27 | |
*** KeithMnemonic has quit IRC | 22:41 | |
*** fnaval has joined #openstack-lbaas | 23:04 | |
*** fnaval has quit IRC | 23:04 | |
openstackgerrit | Adam Harwell proposed openstack/neutron-lbaas master: WIP: Test l7 proxy to octavia https://review.openstack.org/561049 | 23:08 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Improve the error logging for zombie amphora https://review.openstack.org/561369 | 23:41 |