Tuesday, 2017-12-12

00:29 <johnsom> Hmm, IRCcloud is acting strange
00:29 <xgerman_> okey
00:29 <johnsom> Anyway, I just tested the QoS patch and it seems to work
00:30 <xgerman_> yeah, did we ever get around to installing QoS in devstack?
00:31 <johnsom> I just followed the networking guide to enable it in neutron, then since it messes with the ovs interface types I had to create a new subnet to use for testing.
00:32 <johnsom> The "don't I wish" virtual networking speed: 16.2 Gbits/sec; the "through an LB" speed: 7.79 Gbits/sec; the QoS-enabled speed: 3.07 Mbits/sec
00:32 <xgerman_> yeah, I think if we support QoS we should enable it by default in our devstack
00:32 <xgerman_> maybe I need to create a story
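Enabling QoS in neutron, per the networking guide johnsom mentions, comes down to roughly the following settings. A sketch: the file paths assume a standard devstack layout, and the devstack service name is an assumption:

    # /etc/neutron/neutron.conf
    [DEFAULT]
    service_plugins = router,qos

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    extension_drivers = port_security,qos

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [agent]
    extensions = qos

    # or, in a devstack local.conf
    enable_service q-qos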
00:32 <johnsom> The one downside I see is if you enable it, you can't un-enable it.
00:33 <xgerman_> mmh, true
00:33 <johnsom> Our validation requires a UUID
00:34 <xgerman_> it's not like we install BBQ (Barbican) every time so you can do SSL
00:34 <xgerman_> so that's probably fine
00:35 <johnsom> Ah, you can pass in null via the API and remove it
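Un-setting over the API is a PUT with an explicit null. A sketch against the v2 load balancer API; the endpoint, token handling, and exact field name (qos_policy_id vs. vip_qos_policy_id) were still in flux in the patch under review:

    curl -X PUT "http://<octavia-api>/v2.0/lbaas/loadbalancers/<lb_id>" \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"loadbalancer": {"vip_qos_policy_id": null}}'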
00:35 <xgerman_> I also would like to see that QoS featured in our cookbook
00:36 <johnsom> Comment away
00:36 <xgerman_> not sure if that is more me making tasks
00:37 <rm_work> actually our gates DO all install Barbican
00:37 <johnsom> Yeah, it might just be a story
00:37 <rm_work> so we *could* make the change to install QoS support in our gates...
00:37 <xgerman_> ok, so we should test QoS…
00:37 <rm_work> hey, we don't TEST it
00:37 <rm_work> but we do install it :P
00:37 <xgerman_> lol
00:37 <rm_work> we don't have scenarios for Barbican AFAIK
00:37 <rm_work> IIRC
00:37 <rm_work> BBQ
00:37 <johnsom> I think that is the case
00:38 <rm_work> but we DO plan to add some
00:38 <rm_work> at least, I do, and I talked to someone else who does as well
00:38 <johnsom> It has a functional test, but we don't test that neutron actually enables QoS
00:39 <xgerman_> https://storyboard.openstack.org/#!/story/2001402
00:40 <xgerman_> https://storyboard.openstack.org/#!/story/2001401
00:46 <bar_> johnsom, xgerman_, I'm still testing QoS. Can you hold off on the workflow?
00:47 <xgerman_> No intention from me to workflow
00:48 <johnsom> Sure, let me know
00:53 <bar_> johnsom, thx. btw, check out my client-side patch for qos, if you have the time.
00:54 <johnsom> Ah, good point, I still have the devstack
01:07 <rm_work> augh really
01:08 <rm_work> my patch timed out
01:08 <rm_work> on ... dsvm-api
01:08 <rm_work> rechecking, and now i get to wait like three hours for a merge T_T
01:11 <bar_> rm_work, I already rechecked it
01:11 <rm_work> ah lol
01:11 <rm_work> ah well
01:12 <rm_work> i don't think mine will trigger
01:12 <bzhao> good lively channel. :)
01:13 <rm_work> yeah, I like that our channel has ... actual discussions and people :P
01:13 <bzhao> Multiple UDP keepalived processes in a single namespace. T_T
01:14 <rm_work> I still don't understand how it is a problem
01:14 <bzhao> yeah, I like it
01:14 <rm_work> i didn't think it had anything to do with the application itself
01:14 <rm_work> like, i could throw netcat into a namespace too, i thought the app was pretty much unaware
01:15 <bzhao> Yeah, but I remember johnsom said, when I introduced a new backend, that I should take care of the keepalived sync port. For now, I have no idea about it. I will show you. Wait a moment.
01:16 <rm_work> yeah, but the sync port will be in the namespace if the keepalived process is in the namespace
01:16 <rm_work> but ...
01:16 <rm_work> i don't know if we care
01:17 <rm_work> hmmm, the one thing I would be concerned about in ACTIVE/STANDBY is that the keepalived procs need to stay in sync
01:17 <rm_work> because one can't try to move the VIP
01:17 <johnsom> Yeah, I didn't get to looking at that, sigh. The sync is only for the VRRP part
01:17 <rm_work> while the other thinks it is still the master
01:17 <rm_work> right
01:17 <rm_work> and only the *main* keepalived (not in the namespace?) would be doing vrrp
01:17 <bzhao> Oh, sorry. maybe I missed something.
01:17 <rm_work> the other ones actually don't even do listening?
01:18 <rm_work> they are for health checks?
01:18 <rm_work> so the entire process should be able to easily run inside the namespace
01:18 <rm_work> with no issues AFAIU
01:18 <johnsom> bzhao Well, I confused myself and then probably confused you in the process
01:18 <bzhao> johnsom, aha, no matter. :)
01:19 <johnsom> If I remember, there were two things we were looking at.
01:19 <johnsom> 1. Can we run each UDP listener as its own process, so that reloads/restarts don't interrupt traffic on the others?
01:20 <johnsom> 2. We should add health check scripts that monitor the UDP keepaliveds as part of the VRRP monitoring. We have a directory for those scripts already, so it should be easy.
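A sketch of what such a check script might look like, dropped into the existing check-script directory so the VRRP keepalived fails over if any per-listener UDP keepalived dies; the directory and pidfile naming here are assumptions for illustration:

    #!/bin/bash
    # Exit non-zero (check failure) if any per-listener UDP keepalived has died.
    for pidfile in /var/run/octavia/udp_keepalived_*.pid; do
        [ -e "$pidfile" ] || continue          # no UDP listeners configured
        kill -0 "$(cat "$pidfile")" 2>/dev/null || exit 1
    done
    exit 0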
01:21 <bzhao> http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2017-12-08.log.html  This is where I left off last Friday; I'm afraid the screen wasn't refreshed. :)
01:22 <rm_work> i think keepalived doesn't have anything to do with the actual listening? since keepalived is just used to configure LVS, which does the actual load balancing?
01:22 <rm_work> keepalived does the monitoring only
01:23 <xgerman_> yep, configures LVS and monitors
01:23 <johnsom> Right
01:23 <rm_work> so we can restart keepalived all day
01:23 <bzhao> johnsom, For 1, I think it works, but only with version 1.3.9 in my test. For 2, I'm working on that, but I'm stuck: with the current keepalived version I have no idea how to run multiple processes.
01:23 <xgerman_> BUT keepalived will also check the backend and take it out of rotation
01:23 <rm_work> and it won't stop listening
01:23 <rm_work> or interrupt the members
01:23 <xgerman_> aka keepalived does the health monitoring of the backend servers
01:23 <johnsom> Yeah, that is probably true...
01:23 <rm_work> yeah so ... honestly it COULD just be one keepalived
01:23 <johnsom> Just not sure 100%.
01:24 <rm_work> well
01:24 <rm_work> two
01:24 <rm_work> one for VRRP and one inside the namespace for monitoring
01:24 <rm_work> i don't know if you can have one app both inside and not
01:24 <xgerman_> probably not — keepalived is simple software + you only have one LVS
01:24 <rm_work> yeah
01:25 <rm_work> so LVS needs to be running *inside* the NS, and we need one keepalived outside for VRRP and one inside for health monitoring
01:25 <rm_work> it CAN be multiple inside for health monitoring (one each) and that is ... fine i think
01:25 <rm_work> i don't know why that would cause problems really
01:25 <rm_work> and it fits our model best
01:25 <xgerman_> LVS runs in the kernel — so not sure how to make that run inside the NS
01:26 <bar_> johnsom, would you mind taking a look at my review: https://review.openstack.org/#/c/458308/
01:26 <bzhao> rm_work, Currently we implement it as listener : keepalived 1:1.
01:26 <rm_work> yeah, which is easiest
01:26 <rm_work> because it's literally the exact same as if we were doing an haproxy config
01:26 <xgerman_> +1
01:27 <rm_work> just use a different template for the service config and the systemd config
01:27 <rm_work> and suddenly it should "just work"
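A sketch of what such a per-listener systemd template might look like; the unit name, paths, and pidfile layout are assumptions for illustration, not the template that eventually merged:

    # /etc/systemd/system/octavia-udp-keepalived@.service (hypothetical)
    [Unit]
    Description=Keepalived (LVS) for UDP listener %i
    After=network.target

    [Service]
    Type=forking
    PIDFile=/var/run/octavia/udp_keepalived_%i.pid
    ExecStart=/sbin/ip netns exec amphora-haproxy keepalived -D \
        -f /var/lib/octavia/%i/keepalived.conf \
        -p /var/run/octavia/udp_keepalived_%i.pid \
        -r /var/run/octavia/udp_keepalived_%i_vrrp.pid \
        -c /var/run/octavia/udp_keepalived_%i_check.pid

    [Install]
    WantedBy=multi-user.target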
01:27 <bzhao> rm_work, yeah. So it should be possible to reload/update the config so that all the listener configs get refreshed.
01:27 <johnsom> bar_ still looking at the client patch
01:30 <bar_> xgerman_, doesn't https://storyboard.openstack.org/#!/story/2001310 overlap with your story? (specifically, task 5863)
01:30 <bzhao> xgerman_, I tested in a very simple namespace env, and it works. :). But with a higher version of keepalived.
01:30 <rm_work> i still don't understand how it wouldn't work for any version
01:30 <rm_work> what are you doing to put it in the namespace?
01:30 <rm_work> are you using a keepalived arg or something?
01:30 <bzhao> rm_work, just some LVS config.
01:31 <rm_work> ... to put keepalived in the namespace?
01:31 <xgerman_> bzhao: mine is geared toward the gate
01:31 <bzhao> I just tested with a very simple keepalived config
01:31 <johnsom> bar_ how do I un-set the QoS policy with the client?
01:31 <xgerman_> for instance, we enable barbican in the gate but not in the devstack someone might install on their own box
01:31 <bar_> johnsom, You cannot
01:32 <bzhao> This is my new keepalived config: http://paste.openstack.org/show/628411/
01:32 <bzhao> xgerman_, sorry. :)
01:32 <bzhao> This is the existing keepalived config. It was still running:
01:32 <bzhao> http://paste.openstack.org/show/628412/
01:32 <bar_> johnsom, nor would the server side allow that at this point.
01:32 <xgerman_> no worries
01:32 <bzhao> I run keepalived like "ip netns exec amphora-haproxy keepalived -D -d -f config".
01:32 <bzhao> Then I saw logs in syslog like this: http://paste.openstack.org/show/628413/
01:33 <johnsom> bar_ Yes, I can un-set it by passing "null"
01:33 <rm_work> yes
01:33 <bzhao> Also, higher versions of keepalived, such as 1.3.9, support the "net_namespace" and "instance" configuration options, for running multiple keepaliveds in the same namespace.
01:33 <rm_work> bzhao: because you are trying to use a central daemon and you already have one
01:33 <rm_work> i don't know how it would work with any version of keepalived using this method
01:33 <bar_> johnsom, ha. I missed that.
01:33 <rm_work> you need to differentiate them
01:34 <johnsom> bar_ I think we are missing a few unsets in other spots too
01:34 <bzhao> rm_work, eh, you mean I need to kill the existing one first and recreate them?
01:34 <rm_work> no
01:34 <bar_> johnsom, what do you mean?
01:34 <johnsom> bar_ I think other commands should probably have unset. Not a worry on this patch though
01:35 <rm_work> bzhao: try, for example, running it with `-n` in a screen
01:35 <rm_work> you should then be able to run as many as you want
01:35 <rm_work> `ip netns exec amphora-haproxy keepalived -n -D -d -f config`
01:36 <rm_work> though why do you have '-d'?
01:36 <bar_> johnsom, If you think we're missing a valuable feature (like unset), please open a story and make sure to ping/mail me about it.
01:36 <rm_work> bzhao: it won't fork a daemon, but that's fine; when we're running it for real we'll take care of that in the service file
01:38 <bzhao> rm_work, aha. There may be some mistake I made; I saw the existing code runs a keepalived process with 'ip netns exec keepalived -D -d', so I thought it could run keepalived in a ns. T_T
01:38 <rm_work> it can
01:38 <rm_work> but running more than ONE (in a namespace or not) means you can't use it as a daemon quite the same way
01:38 <rm_work> because it's trying to control a single instance
01:39 <rm_work> possibly actually you could do this
01:39 <bzhao> rm_work, Thanks for clearing that up.
01:40 <rm_work> `ip netns exec amphora-haproxy keepalived -D -d -f config1 -p /var/run/keepalive1.pid`
01:40 <rm_work> `ip netns exec amphora-haproxy keepalived -D -d -f config2 -p /var/run/keepalive2.pid`
01:40 <rm_work> that might work? i'm not sure
01:40 <rm_work> because by default it uses `/var/run/keepalived.pid`, and that means it'll try to use the same instance
01:41 <rm_work> again, when we make a service file for it, it'll take care of this, so it's really a non-issue
01:41 <bzhao> I just tested it: if there is already a -D keepalived process, it won't work
01:41 <johnsom> bzhao I think it is the pid file that is giving you the "already running" errors
01:42 <rm_work> bzhao: if you specify `-p mypidfile` it should work
01:42 <bzhao> the pid file doesn't exist, as the process never truly started
01:42 <rm_work> possibly you also need `-r` and `-c`
01:42 <rm_work> it should create the pid file
01:42 <rm_work> sec, let me test this really quick
01:43 <johnsom> My question is, though: since this is configuring lvs, we might require a special version of the kernel with lvs namespace support and/or a keepalived that is namespace aware...
01:43 <rm_work> hmmm
01:43 <rm_work> i think we need to have LVS start in the namespace
01:43 <rm_work> but yeah, that's interesting
01:43 <rm_work> so FYI the xenial version of keepalived is namespace aware...
01:43 <johnsom> https://lwn.net/Articles/419692/
01:44 <rm_work> that was in 2010
01:44 <rm_work> if that isn't merged, I'll cry
01:44 <johnsom> Oh, I know the stuff is there, I just don't know what magic incantation of versions we need
01:45 <bzhao> Yeah, I just wanted to raise it. As I understand it, updating the keepalived version may be risky.
01:50 <rm_work> bzhao: yeah, i just tested it and it works
01:51 <bar_> johnsom, can you take a look at https://review.openstack.org/#/c/458308/ now? I'm not sure $2 in my review is actually a bug or not.
01:51 <rm_work> http://paste.openstack.org/show/628665/
01:51 <rm_work> bzhao: ^^
01:51 <bzhao> rm_work, cool, what options follow..
01:52 <rm_work> that will work, in a namespace or not
01:52 <rm_work> just add the namespace code before it
01:52 <rm_work> it's just that they can't share a pidfile, and there is a default file it will look for
01:52 <johnsom> bar_ I am looking at it now, but I'm struggling to see how the code behaves that way
01:53 <bzhao> rm_work, oh, i know, I should split out the check subprocess pid too.
01:54 <bzhao> rm_work, I just tested -p and -r.
01:54 <bzhao> rm_work, it won't work.
01:56 <rm_work> well, i know this works, because it is running on my machine this way :)
01:57 <bzhao> rm_work, then there is one thing to clear up: whether we want to implement UDP as 1 listener : 1 keepalived process. aha :)
01:58 <rm_work> i think it should be fine
01:58 <bar_> johnsom, I would expect it to apply the qos policy specifically on the vip_port inside the ApplyQos network task, which it doesn't currently
02:00 <johnsom> bar_ Oh! Are you looking at the VIP port or the base port? I bet that is the issue. Test the qos through the load balancer; I bet it is limited by what was set in the openstack loadbalancer call
02:00 <johnsom> QoS doesn't really apply to AAP ports; they are just fake ports to track an IP. The QoS has to be applied to the base port.
02:01 <bzhao> OK, thanks johnsom and rm_work for your great patience in solving my problem. I will finish the agent part coding this week, with some tests, I think.
02:01 <johnsom> bar_ you will notice the "VIP" AAP port is always "down"
02:02 <bzhao> Yeah, it just applies qos on the vrrp_port on the amp instances
02:02 <bzhao> The vip port is not bound in neutron. We just use it as an allowed address pair for the VIP function
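That is, to rate-limit traffic the policy goes on the amphora's base (VRRP) port, not on the AAP VIP port. A sketch with the OpenStack client; the policy name and port ID are placeholders:

    openstack network qos policy create bw-limited
    openstack network qos rule create --type bandwidth-limit \
        --max-kbps 3000 --max-burst-kbits 300 bw-limited
    openstack port set --qos-policy bw-limited <vrrp_port_id>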
02:03 <bar_> johnsom, ok, thanks for looking into it. Not-a-bug then.
02:05 <bzhao> You guys' messages are so quick, T_T.. As for the qos patch, as German also mentioned, if possible I can take over the work for docs and CI.
02:18 <bzhao> johnsom, xgerman_, Should I post a quick new revision of the QoS patch, or wait for your kind test?
02:20 <johnsom> bzhao go for it
02:25 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id  https://review.openstack.org/458308
02:30 <rm_work> augh, gonna need to recheck again >_<
02:30 <rm_work> gate got a bad host or something, i think
02:31 <rm_work> http://logs.openstack.org/90/525790/6/gate/octavia-v1-dsvm-scenario/96ecd28/logs/screen-n-cpu.txt.gz#_Dec_12_02_08_21_221163
02:31 <rm_work> wtf
02:58 <openstackgerrit> Hengqing Hu proposed openstack/octavia-dashboard master: Remove members with a pool id  https://review.openstack.org/527286
02:58 <openstackgerrit> hujin proposed openstack/neutron-lbaas master: Remove key "l7_policies" in pool dict  https://review.openstack.org/527287
03:00 <openstackgerrit> hujin proposed openstack/neutron-lbaas master: Remove key "l7_policies" in pool dict  https://review.openstack.org/527287
03:47 <sanfern> hi johnsom
05:59 <openstackgerrit> Hengqing Hu proposed openstack/octavia master: Extend api to accept qos_policy_id  https://review.openstack.org/458308
08:04 <rm_work> bah, is the gate having issues?
08:48 -openstackstatus- NOTICE: Our CI system Zuul is currently not accessible. Wait with approving changes and rechecks until it's back online. Currently waiting for an admin to investigate.
09:08 -openstackstatus- NOTICE: Zuul is back online, looks like a temporary network problem.
09:41 <openstackgerrit> ZhaoBo proposed openstack/octavia-tempest-plugin master: [WIP]new CI for test tempest with neutron and octavia  https://review.openstack.org/527356
10:37 <rm_work> looks like zuul is still f'd
10:38 <rm_work> when it comes back fully we can recheck stuff, i guess T_T
11:08 <bzhao> Yeah, it's been dead all day
11:28 <nmagnezi> rm_work, o/
11:29 <openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: [WIP] L3 ACTIVE-ACTIVE Data model impact  https://review.openstack.org/524722
12:17 <nmagnezi> rm_work, around?
14:39 -openstackstatus- NOTICE: We're currently seeing an elevated rate of timeouts in jobs and the zuulv3.openstack.org dashboard is intermittently unresponsive, please stand by while we troubleshoot the issues.
16:50 <sanfern> hi johnsom
16:51 <johnsom> Hi Sanfern
16:51 <sanfern> I saw your review comments, thank you. I need your help to resolve them.
16:53 <sanfern> https://review.openstack.org/#/c/490164/21/elements/exabgp-speaker/post-install.d/77-exabgp - I tried removing the pip install statement and rebuilding the image. The VM didn't install exabgp
16:55 <sanfern> I am making the fifo with mkfifo -m 0644; that should be fine, right?
16:57 <sanfern> Should we render the exabgp.service file and remove it from the element? It doesn't start by default
17:06 <johnsom> Hmmm, ok, so if the exabgp service isn't enabled, that is fine, you can leave it. We just need to manage enabling it in the agent
17:07 <sanfern> ok, I am disabling it when I stop the service
17:07 <johnsom> The pip install thing doesn't make sense, as it's in the octavia requirements.txt. The only thing I can think of is that it's installing in py35 and you are looking in py27, or vice versa
17:07 <johnsom> Yeah, but I never saw it enabled
17:08 <johnsom> You might want to go tighter than 644: it's a fifo, so if someone else reads from it you will not see that data. Why not 600?
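The tighter-permission fifo is a one-line change; a sketch with a placeholder path (the element's actual path is in the review above):

    # owner-only read/write, so no other user can drain data from the fifo
    mkfifo -m 0600 /var/run/exabgp/exabgp.cmd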
17:08 <sanfern> ok, I have not tested on py35. I need to verify it.
17:09 <johnsom> I can look at an image later and see what is up. I just can't right now.
17:09 <sanfern> I will retest it tomorrow in the office.
17:09 <sanfern> https://review.openstack.org/#/c/491016/13/octavia/amphorae/backends/agent/api_server/exabgp.py@36
17:10 <johnsom> Ok. Maybe I can get to it this afternoon.
17:10 <sanfern> I need to make an entry in the interface file
17:49 <openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: [WIP] L3 ACTIVE-ACTIVE Data model impact  https://review.openstack.org/524722
17:53 <sanfern> johnsom, http://paste.openstack.org/show/g82NMaPHGF1YA3iHxUbQ/ any clue how to fix it? src - https://review.openstack.org/#/c/524722/13/octavia/tests/functional/db/test_repositories.py
17:54 <sanfern> in the delete-Distributor flow, should we remove it from the DB or mark it for deletion?
18:01 <johnsom> That error is interesting; at first glance I don't see the issue.
18:01 <johnsom> Mark for deletion is best; it gives an operator a path back if something goes wrong
18:02 <sanfern> thanks johnsom
18:02 <sanfern> all my repo tests on that patch are failing :(
18:07 <openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: Adding exabgp-speaker element to amphora image  https://review.openstack.org/490164
18:40 <rm_work> nmagnezi: i am now...
18:50 <rm_work> bleh, zuul still f'd
20:15 -openstackstatus- NOTICE: The zuul scheduler has been restarted after lengthy troubleshooting for a memory consumption issue; earlier changes have been reenqueued but if you notice jobs not running for a new or approved change you may want to leave a recheck comment or a new approval vote
21:52 <rm_work> anyone actually here? am I netsplit? :P
21:53 <xgerman_> Na
21:54 <dims> rm_work o/
21:54 <rm_work> k :P thanks
21:54 <rm_work> unnaturally quiet
22:02 <johnsom> True, I got pulled into an internal issue this morning, so I'm just now getting back to the day job.
22:02 <johnsom> grin
22:18 <nmagnezi> johnsom, just replied to the providers spec
22:18 <johnsom> Thanks
22:18 <nmagnezi> johnsom, i think it's mostly done. great job there. just a few minor things IMO
22:18 <nmagnezi> np at all :)
22:21 <nmagnezi> rm_work, someday.. we'll manage to chat here :-)
22:23 <rm_work> lol
22:24 <rm_work> nmagnezi: how about now? going to bed? :P
22:24 <nmagnezi> rm_work, yeah.. sorry :-)
22:24 <rm_work> np heh
22:46 <openstackgerrit> Merged openstack/octavia master: Move loading the network driver into the flows  https://review.openstack.org/525790
22:47 <rm_work> yay, finally, k
22:47 <rm_work> aaaand qos is in conflict :(
22:47 <rm_work> sorry about that
22:48 <rm_work> that's part of why i was hoping that would merge quickly; these delays were killing me
22:48 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Producer/endpoint code to allow for amphora failovers  https://review.openstack.org/525302
22:48 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Amphora API Failover call  https://review.openstack.org/525778
22:48 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Add unit tests for neutron utils, add model/util for floating_ip  https://review.openstack.org/525353
22:49 <rm_work> rebasinggggg
23:15 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
