Thursday, 2017-07-27

*** sshank has quit IRC00:04
rm_workjohnsom: errr00:05
rm_workjohnsom: getting reports that member weight of "0" breaks stuff00:06
rm_workwe have 'MIN_WEIGHT = 0'00:06
johnsomYeah, 0 means drain the member, i.e. don't allow new connections to the backend server, but finish existing connections00:06
rm_workkk00:07
rm_workalso -- admin_state_down = true -> members report ERROR?00:08
rm_worki am not sure why that is the case00:09
rm_workneed to dig00:09
johnsomhttp://cbonte.github.io/haproxy-dconv/1.6/configuration.html#5.2-weight 00:09
rm_workk well00:10
rm_workwhat seems to be happening is00:10
rm_workit goes to "ERROR"00:10
rm_workwhen you set it to weight 000:10
*** ssmith has joined #openstack-lbaas00:10
johnsomHmmm, yeah, ERROR could be due to the config being removed from haproxy, so health message won't acknowledge it.00:10
rm_workhmm00:11
rm_workyeah00:11
rm_workit shows up that way a lot00:11
rm_workusually when it shouldn't00:11
johnsomI wonder if we aren't handling the stats message from haproxy right for zero:00:11
johnsomNote that a zero weight is reported on the stats page as "DRAIN" since it has the same effect on the server00:11
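For reference, the drain behavior quoted above corresponds to a zero weight on the server line of the rendered HAProxy config (backend and server names below are illustrative, not from an actual amphora):

```
backend pool1
    # weight 0: no new connections are balanced to this server, but existing
    # connections finish (shown as DRAIN on HAProxy builds where that works)
    server member1 192.0.2.10:80 weight 0 check
    server member2 192.0.2.11:80 weight 1 check
```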
rm_workhmm00:11
rm_workso it reports DRAIN back to Octavia?00:11
rm_workin our health message?00:12
rm_workor what00:12
johnsomIt may be telling us it's "DRAIN" and we may be taking that as not ONLINE00:12
rm_workyeah probably00:12
rm_work!= ONLINE -> ERROR?00:12
openstackrm_work: Error: "=" is not a valid command.00:12
johnsomNo, it's in the haproxy stats gather00:12
rm_workhmm00:12
rm_workerr yeah that's from the health messaging right?00:12
johnsomhttps://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/utils/haproxy_query.py#L101 00:13
johnsomIn here somewhere00:13
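The stats gathering under discussion boils down to reading HAProxy's `show stat` CSV output and pulling the `status` column per server. A minimal, hypothetical sketch (not the actual haproxy_query.py code; the sample rows are abbreviated to just the fields used here, while real output has many more columns):

```python
import csv
import io

# Abbreviated sample of HAProxy "show stat" CSV output.
SAMPLE_STATS = (
    "# pxname,svname,status,weight\n"
    "pool1,FRONTEND,OPEN,\n"
    "pool1,member1,UP,1\n"
    "pool1,member2,DOWN,1\n"
    "pool1,BACKEND,UP,2\n"
)

def member_statuses(stats_csv):
    """Map each real backend server to its HAProxy status string,
    skipping the aggregate FRONTEND/BACKEND rows."""
    reader = csv.DictReader(io.StringIO(stats_csv.lstrip('# ')))
    return {row['svname']: row['status']
            for row in reader
            if row['svname'] not in ('FRONTEND', 'BACKEND')}

print(member_statuses(SAMPLE_STATS))  # → {'member1': 'UP', 'member2': 'DOWN'}
```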
johnsomThing is, we need to check what haproxy is ACTUALLY outputting for the status when weight is zero00:14
johnsomThe docs just say ...00:15
johnsomI have one up, just a sec00:15
rm_workyeah i think we need to add "DRAIN" to possible node states00:17
rm_workdunno how much that breaks n-lbaas00:18
johnsomWell, we can just make it online00:18
johnsomThat seems right to me00:18
johnsomWell, first test, with no health monitor, weight 0 still reports NO_CHECK00:23
rm_workwith HM00:24
rm_workdo with HM00:24
johnsomYeah, working on it00:24
johnsomNope, with a healthy health check it reports UP00:25
rm_workerm00:25
rm_workhmmmmmm00:25
johnsomLet me look at my member00:25
rm_workwith weight 0?00:25
* johnsom that sounds bad...00:25
rm_workor with admin_state_down=True?00:25
rm_worklol00:25
johnsomweight 0, admin up00:26
rm_workdo admin state down too00:26
johnsomhttps://www.irccloud.com/pastebin/X5oji1rw/ 00:26
rm_workhmmm00:26
rm_workwait a min and look again?00:26
rm_workstill doing that?00:27
rm_workoh and admin-down00:27
johnsomOk, waiting00:27
rm_workif yours exhibits NEITHER of the issues i'm seeing00:27
rm_workthen wtf00:27
rm_workgonna need to debug00:27
johnsomOk, so I think weight 0 is working correctly.00:28
johnsomHowever, note, weight 0 and an unhealthy member will report down00:28
johnsomor error or whatever we called it00:28
rm_workhmmm00:28
johnsomOk, still good, so going to admin down00:29
rm_workk...00:29
rm_workah hmmmmm00:30
rm_worki wonder00:30
johnsomhttps://www.irccloud.com/pastebin/QJBeODlm/ 00:30
johnsomBut I want the heartbeats to have some more time...00:30
rm_workyeah00:30
rm_workhmmmmmmmmmm00:30
rm_workthis is super weird00:30
rm_workso we get ERROR for both00:31
rm_workbut also what we see is00:31
rm_workimmediately after any member change00:31
rm_workevery member flips to ERROR for a second00:31
rm_workuntil the next healthcheck00:31
rm_workO_o00:31
johnsomYeah, not sure I understand that one00:31
rm_workor, after one healthcheck and not after the next00:31
rm_workyou don't see that ever?00:32
rm_worki need to dig in to that...00:32
johnsomI haven't watched the logs and stuff to truly see, but just observationally, no, haven't seen it00:32
johnsomhttps://www.irccloud.com/pastebin/0W9zPoqt/ 00:32
rm_workfff00:35
johnsomYeah, still happy.  I have nothing for you on this one00:36
*** ipsecguy has quit IRC00:36
*** ipsecguy has joined #openstack-lbaas00:37
belhararjohnsom, hey00:41
johnsomcycling admin state up / down seems to be fine00:41
johnsombelharar Hi00:41
johnsomsame with the weight00:43
belhararjohnsom, I'm trying to invoke pdb during creation of amphora from nova_driver, but it keeps failing. e.g. it raises BdbQuit, but it's not consistent across the build() func00:44
belhararjohnsom, do you have any idea how I can resolve it?00:44
johnsombelharar Hmm, not familiar with that issue but I know pdb has had issues with some of the OpenStack code.00:46
johnsombelharar  I would make sure to run the o-cw process in standalone mode and not under uWSGI00:46
belhararjohnsom, can you point me to where I could check that? (I'm using devstack)00:49
johnsombelharar  "ps -ef | grep octavia-uwsgi.ini" if it's there you are running under uwsgi which will make pdb very hard.00:50
belhararjohnsom, You're correct...00:51
johnsomOh, wait, I am confusing this.00:51
johnsomIgnore that00:51
johnsomI was thinking API process, you are debugging the controller worker.00:52
johnsomIt's been a long day00:52
belharar:-)00:52
rm_workyeah i was gonna say...00:53
rm_workanyway, yes, long day00:53
rm_worki wonder why we get very different results for our HMs :/00:53
rm_workneed to dig in a bit00:53
johnsombelharar  With CW, a couple of thoughts.  Change the workers to 1 and max_workers to 1 in octavia.conf, then "systemctl restart devstack@o-cw" then make sure you attach to the process listed as "ConsumerService" in ps -ef00:55
johnsomThat will probably help you get what you want by minimizing the threads, etc.00:55
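The setup described above, as a sketch (whether these exact section and option names apply depends on your Octavia release; treat them as assumptions to verify against your octavia.conf):

```
# /etc/octavia/octavia.conf (devstack) — run a single controller worker so
# pdb has one predictable process to attach to
[controller_worker]
workers = 1

[task_flow]
max_workers = 1

# then restart:  sudo systemctl restart devstack@o-cw
# and attach to the process shown by:  ps -ef | grep ConsumerService
```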
johnsomYeah, I have been up 12 hours to make an early meeting, plus the pike-3 rush00:56
*** JudeC has quit IRC01:02
rm_work<_<01:08
*** armax has joined #openstack-lbaas01:16
belhararjohnsom, I use 'screen' in my devstack setup. How can I verify I have only one active octavia controller?01:18
*** harlowja has quit IRC01:19
rm_workumm johnsom01:21
johnsomSo instead of systemctl just join the screen session, ctrl-c, up arrow, return to restart the process.  Then in ps -ef there should only be one consumerservice01:22
rm_workhttps://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/update_db.py#L170-L173 01:24
rm_workjohnsom: ^^01:24
rm_worki don't know how yours is working01:24
rm_workoh if it gets neither of those it is "no change"01:25
rm_worki see01:25
rm_workso it comes back to "why do i get ERRORs for all members immediately after any change01:26
rm_work"01:26
*** yamamoto_ has quit IRC01:31
*** gongysh has joined #openstack-lbaas01:35
belhararjohnsom, It appears to be pretty stubborn. I continue to see two of those CS workers, plus another 'master process'.01:35
*** yamamoto has joined #openstack-lbaas01:35
belhararjohnsom, was i correct to modify it in /opt/stack/octavia/etc/octavia.conf?01:35
johnsomNo, /etc/octavia/octavia.conf01:36
johnsomThat one was the sample file from git01:36
*** ssmith has quit IRC01:37
belhararjohnsom, thx01:37
rm_workjohnsom: MAINT <-- is it that?01:38
rm_worktrying to figure out what it actually is showing01:38
johnsomWe don't use that status01:38
rm_workis that the draining status?01:38
rm_workwhat does it print for weight=0 ?01:39
rm_worki'm talking about HAProxy01:39
johnsomNo, from what I see on a running haproxy, there is no DRAIN status01:39
johnsomIt prints UP01:39
rm_workerm01:40
rm_workhmm so if the node is failing a healthcheck, and is "OFFLINE"01:40
rm_workweight 0/1 and state UP/DOWN all show as "OFFLINE"01:41
rm_worknm i lied01:41
rm_workeventually it goes to ERROR here01:41
rm_workwtf01:41
rm_workit should be "DOWN"01:41
rm_workerr, OFFLINE01:41
rm_workno?01:41
rm_workbleh spinning a new stack01:42
johnsomYeah, admin down should go to offline, but I don't think it is01:45
johnsomhttps://www.irccloud.com/pastebin/nKAHj8RJ/ 01:45
johnsomThere is a dump from HAPRoxy BTW.  18875c67-7818-4121-a58d-73c6f2e32f3e is admin up, weight 001:46
*** gongysh has quit IRC01:47
*** sanfern has joined #openstack-lbaas01:49
johnsomWe should probably refactor that code to use the "disabled" keyword and the MAINT state for member admin down.  Though pool admin down should also mark them OFFLINE, so maybe just fixing the controller logic is the right answer for both.01:50
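The refactor johnsom suggests would render admin-down members with HAProxy's `disabled` keyword, which the stats report as MAINT, e.g. (illustrative names):

```
backend pool_18875c67
    # "disabled" starts the server in maintenance mode; stats report MAINT,
    # which the controller could then map to OFFLINE for member admin-down
    server member1 192.0.2.10:80 weight 1 check disabled
```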
johnsomhttps://bugs.launchpad.net/octavia/+bug/1706828 01:56
openstackLaunchpad bug 1706828 in octavia "admin_state_up == false, member still ONLINE" [Critical,Triaged] - Assigned to Michael Johnson (johnsom)01:56
johnsomOFFLINE for pools and member is just broken01:56
johnsomThe LB actually does what you want, but operating_status isn't reflecting it like you would want it to01:57
rm_workhmm02:02
johnsomrm_work Take it if you want it, I won't get to it until Tuesday at the soonest.02:03
rm_workyeah i dunno if i will either02:04
rm_workmy env does something different anyway02:04
rm_worklol i just noticed vip_Address has a capital letter, in the client T_T02:07
johnsomHmm, I think there is a bug open that it doesn't work, maybe that is why...  I don't know if it was open for api or cli02:08
johnsomhttps://bugs.launchpad.net/octavia/+bug/1695331 02:11
openstackLaunchpad bug 1695331 in octavia "Octavia v2 API Requesting a VIP address is not working" [Critical,In progress] - Assigned to Pradeep Kumar Singh (pradeep-singh-u)02:11
rm_workno i think that's very different02:17
rm_workthis is just the display formatting02:17
johnsomA02:17
johnsomAh02:17
openstackgerritMichael Johnson proposed openstack/octavia master: Update release notes for work done in Pike  https://review.openstack.org/487666 02:23
rm_workjohnsom: ummmmmmmmmmmmm02:27
rm_workOK I know why it's different02:27
rm_workhttp://paste.openstack.org/show/srtiQPptyYj6r4SDYi9c/ 02:27
rm_worknotice: DRAIN02:27
johnsomWhat version of HAPROXY are you running?02:28
rm_work1.5.?02:28
rm_workHA-Proxy version 1.5.18 2016/05/10 02:28
johnsomii  haproxy                            1.6.3-1ubuntu0.102:29
rm_workthis is centos02:29
johnsomChangeLog: 2016/11/20 : 1.6.10 - BUG/MEDIUM: srv-state: properly restore the DRAIN state02:30
johnsomSo, it looks like they broke DRAIN for a while.....02:31
rm_workand we coded against that?02:31
johnsomIt's now fixed, but the ubuntu version doesn't have the fix yet.02:31
rm_workok well02:32
johnsomIt had to have been broken for a while, like sometime into 1.5 too02:32
rm_workwe can put in the fix in our code02:32
rm_workbut yeah i was expecting DRAIN02:32
rm_worknow i feel less confused02:32
rm_workyours is just broken :P02:32
rm_work(in the same way as our code)02:32
johnsomYeah, we coded to what we actually saw.  The docs had some grey areas here02:33
rm_worklol02:33
rm_workso >_>02:33
rm_workthis is sucky02:33
johnsomYep02:33
rm_workWTB ubuntu fixing it02:33
rm_workso you want it to be "UP"?02:34
rm_workor can I add a state for members "DRAINING"?02:34
rm_worki kinda want to do this:02:34
*** sanfern has quit IRC02:35
johnsomIf you add operating status DRAINING you need to mask it when we send the event to lbaas and update lbaas02:35
rm_workugh yeah i guess so, fff02:36
rm_workk one sec02:36
johnsomI am ok with adding draining02:36
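A sketch of the fix being discussed (illustrative, not the actual patch under review): map HAProxy's DRAIN to a new DRAINING operating status, and mask it back to ONLINE when reporting to neutron-lbaas, which has no such state:

```python
# Hypothetical status translation for the proposed DRAINING support.
HAPROXY_TO_OCTAVIA = {
    'UP': 'ONLINE',
    'DOWN': 'ERROR',
    'DRAIN': 'DRAINING',   # new: zero-weight members finishing connections
    'MAINT': 'OFFLINE',
}

def to_operating_status(haproxy_state):
    return HAPROXY_TO_OCTAVIA.get(haproxy_state)

def mask_for_neutron_lbaas(operating_status):
    # neutron-lbaas doesn't know DRAINING; a draining member still serves
    # existing connections, so present it as ONLINE there.
    return 'ONLINE' if operating_status == 'DRAINING' else operating_status

print(mask_for_neutron_lbaas(to_operating_status('DRAIN')))  # → ONLINE
```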
rm_worklol my computer is lagging so bad it still hasn't finished sending the review02:36
johnsomUgh, we have to figure this out too: http://logs.openstack.org/37/487137/1/gate/gate-neutron-lbaasv2-dsvm-scenario-ubuntu-xenial/b1ecb54/logs/screen-o-cw.txt.gz#_Jul_27_00_29_40_396143 02:38
johnsomNo suitable network interface found02:38
johnsomStrange that I only see it on nlbaas jobs02:39
rm_workDRAIN would be UP?02:40
johnsomFor nlbaas yes, but the more I think about it, we just have to make nlbaas know DRAIN as it polls the API....02:41
openstackgerritMerged openstack/octavia master: Fix url_path valid check  https://review.openstack.org/479220 02:41
rm_workso I do DRAIN->DRAINING02:42
rm_worksee this:02:42
openstackgerritAdam Harwell proposed openstack/octavia master: Properly handle DRAIN state from HAProxy health messages  https://review.openstack.org/487671 02:42
rm_worki have had this ready to commit for a bit02:42
rm_workwas just trying to validate 100% I was right02:42
*** JudeC has joined #openstack-lbaas02:43
rm_workbut still02:43
rm_workwhat status does it flip to RIGHT AFTER a member change02:44
rm_workthat causes it to go to error too02:44
johnsomA few comments...02:46
rm_worklolk02:48
rm_workthat was a quick first draft02:48
rm_workugh theres a db migration02:48
rm_workbleugh02:48
rm_worki forgot we put states there02:49
johnsomWTF now: http://logs.openstack.org/66/487666/1/check/gate-octavia-python27-ubuntu-xenial/d95227d/console.html#_2017-07-27_02_27_31_560833 02:49
rm_workthe second comment for a release note was a nice touch, even if probably accidental02:49
johnsomThat is just a release notes patch02:49
rm_workdid you ask in barbican?02:49
johnsomNo, this JUST happened02:50
rm_workwtf02:50
rm_workhttps://github.com/openstack/python-barbicanclient/commit/4a3c7cd23a22cb9f8fd376e3de3a6f4f6c69da0d 02:51
rm_workbarbicanclient released probably just nowish02:51
rm_workoh02:51
rm_workor g-r02:51
rm_workupper-constraints?02:51
rm_workgatefix time02:52
johnsomhttps://review.openstack.org/#/c/487549/ 02:52
johnsomYep, g-r 7:1302:52
rm_workanyway just need to add .v102:52
rm_workyou or me02:52
rm_workor ....02:52
* rm_work looks around the room02:53
johnsomYou, I don't know what you are talking about....02:53
rm_worki linked it02:53
rm_workhttps://github.com/openstack/python-barbicanclient/commit/4a3c7cd23a22cb9f8fd376e3de3a6f4f6c69da0d 02:53
rm_workthis02:53
johnsomThat is a pretty nasty breaking change.  Are they not stable?02:53
* rm_work shrugs02:54
rm_workasked02:56
rm_workanyway, fixing it, sec02:56
johnsomYou will have more patience than I right now....02:58
rm_workbly maybe03:00
rm_workbarely maybe03:00
rm_workmy compute is so f'd03:00
rm_workit's agin so bd it's missing ltters03:00
rm_work*lagging03:00
johnsomBroke heat too03:00
*** JudeC has quit IRC03:01
johnsomrm_work I need to sign off, but if you have fixes for this gate breakage, ping me and I will get back on to review....03:10
rm_workk... i will fix03:12
*** belharar has quit IRC03:21
*** aojea has joined #openstack-lbaas03:23
rm_workreverting the upper-constraints change03:27
rm_workso we can be back in business while they fix things03:27
*** aojea has quit IRC03:28
*** harlowja has joined #openstack-lbaas03:31
*** sanfern has joined #openstack-lbaas03:50
*** gongysh has joined #openstack-lbaas04:32
*** links has joined #openstack-lbaas04:33
reedipjohnsom : hi04:34
reedipjohnsom : where did you see the error in https://review.openstack.org/#/c/458308/ 04:34
johnsomI loaded up the patch and attempted to do an update to add a qos_policy_id04:37
johnsomI also confirmed that this does not happen on master04:37
johnsomreedip In the morning I need to feature freeze Pike04:41
reedipjohnsom : okay , so the deadline is another 12 hours, I guess04:41
johnsomSadly04:42
reediphmm , got it04:42
reediphow did you attempt to update the qos_policy_id though04:42
johnsomI did it via curl direct to the API04:42
johnsomreedip Something like: curl -X PUT -H "Content-Type: application/json" -H "X-Auth-Token: <token>" -d '{"loadbalancer": {"vip_qos_policy_id": "<uuid>"}}' http://198.51.100.10:9876/v2.0/lbaas/loadbalancers/8b6fc468-07d5-4d8b-a0b9-695060e72c31 04:45
johnsomAfter I fixed the other bug with the port json body04:46
reedipjohnsom : ok, means we are also lacking some sort of test case04:46
johnsomYep04:46
johnsomI'm not sure the cause of the ip_address issue, I haven't had time to dig in.  We've been reviewing like 28 patches04:47
reedipwow ... sorry I was busy with other stuff , didnt get time to jump back in04:48
*** harlowja has quit IRC04:52
*** gongysh has quit IRC05:05
*** gongysh has joined #openstack-lbaas05:10
*** gongysh has quit IRC05:13
openstackgerritMichael Johnson proposed openstack/octavia master: Spec detailing Octavia service flavors support  https://review.openstack.org/392485 05:16
openstackgerritOpenStack Proposal Bot proposed openstack/octavia master: Updated from global requirements  https://review.openstack.org/487474 05:27
*** gcheresh_ has joined #openstack-lbaas05:29
*** armax has quit IRC05:30
*** aojea has joined #openstack-lbaas05:31
*** armax has joined #openstack-lbaas05:31
*** armax has quit IRC05:31
*** armax has joined #openstack-lbaas05:32
*** armax has quit IRC05:32
*** armax has joined #openstack-lbaas05:33
*** armax has quit IRC05:33
*** kobis has joined #openstack-lbaas05:52
*** yuanying has joined #openstack-lbaas05:57
*** Alex_Staf has joined #openstack-lbaas06:04
*** kobis has quit IRC06:11
openstackgerritXing Zhang proposed openstack/octavia master: Fix haproxy_check_script for delete listener  https://review.openstack.org/485254 06:30
*** kobis has joined #openstack-lbaas06:47
*** aojea_ has joined #openstack-lbaas07:08
*** aojea has quit IRC07:12
*** rcernin has joined #openstack-lbaas07:13
*** Alex_Staf has quit IRC07:31
rm_workreedip: you MIGHT have a little extra time if we can sell that we CAN07:39
rm_work*CAN'T merge anything today because of gate breakage07:39
rm_workstuff got a little wonky because of a barbican client change07:40
reediphehe ... got it rm_work :)07:40
rm_workeugh08:25
*** dayou1 has joined #openstack-lbaas08:25
rm_worki'm seeing another recent fail? but confused as to why it's only now failing08:25
rm_worklooks like pyroute2 has some stuff that's not OSX compatible? but it's never failed before now08:25
rm_workgrrr08:26
rm_workcan't runpy27/py35 tests08:27
*** dayou has quit IRC08:27
*** gongysh has joined #openstack-lbaas08:36
*** nmagnezi_ has joined #openstack-lbaas08:39
*** nmagnezi has quit IRC08:46
*** nmagnezi_ is now known as nmagnezi08:46
openstackgerritAdam Harwell proposed openstack/octavia master: Properly handle DRAIN state from HAProxy health messages  https://review.openstack.org/487671 08:55
*** Alex_Staf has joined #openstack-lbaas08:59
openstackgerritAdam Harwell proposed openstack/octavia master: Properly handle DRAIN state from HAProxy health messages  https://review.openstack.org/487671 09:02
rm_workjohnsom: there, hope you're happier with that :P though i need to see if it passes unit tests, because i can't run them locally right now :/09:03
*** redondo-mk has quit IRC09:59
*** yamamoto has quit IRC10:00
openstackgerritOpenStack Proposal Bot proposed openstack/neutron-lbaas-dashboard master: Imported Translations from Zanata  https://review.openstack.org/487800 10:05
*** yamamoto has joined #openstack-lbaas10:31
*** sanfern has quit IRC10:52
*** sanfern has joined #openstack-lbaas11:59
*** catintheroof has joined #openstack-lbaas12:11
*** yamamoto has quit IRC12:14
*** atoth has joined #openstack-lbaas12:15
*** gongysh has quit IRC12:24
*** links has quit IRC12:52
*** yamamoto has joined #openstack-lbaas13:14
*** cristicalin has joined #openstack-lbaas13:15
*** yamamoto has quit IRC13:21
*** leitan has joined #openstack-lbaas13:28
*** kobis has quit IRC13:49
*** ssmith has joined #openstack-lbaas13:50
*** cpusmith has joined #openstack-lbaas13:51
*** ssmith has quit IRC13:55
*** cristicalin has quit IRC13:56
*** kobis has joined #openstack-lbaas14:23
*** armax has joined #openstack-lbaas14:39
*** mnaser has joined #openstack-lbaas14:42
*** gcheresh_ has quit IRC14:56
*** Alex_Staf has quit IRC14:57
*** sanfern has quit IRC14:58
*** sanfern has joined #openstack-lbaas14:58
openstackgerritMerged openstack/octavia master: Updated from global requirements  https://review.openstack.org/487474 14:59
*** kobis has quit IRC14:59
*** rcernin has quit IRC15:03
*** links has joined #openstack-lbaas15:04
*** links has quit IRC15:07
*** kobis has joined #openstack-lbaas16:04
*** fnaval has joined #openstack-lbaas16:18
*** kobis has quit IRC16:23
*** JudeC has joined #openstack-lbaas16:28
*** bzhao has joined #openstack-lbaas16:30
*** bbzhao has quit IRC16:33
*** kobis has joined #openstack-lbaas16:43
*** kobis has quit IRC16:44
*** sshank has joined #openstack-lbaas16:45
*** tongl has joined #openstack-lbaas16:58
openstackgerritMerged openstack/neutron-lbaas-dashboard master: Make it work with devstack  https://review.openstack.org/480823 17:01
openstackgerritMerged openstack/neutron-lbaas-dashboard master: Add loading and error status to detail pages  https://review.openstack.org/481968 17:05
openstackgerritMerged openstack/neutron-lbaas-dashboard master: Imported Translations from Zanata  https://review.openstack.org/487800 17:05
*** harlowja has joined #openstack-lbaas17:09
*** rcernin has joined #openstack-lbaas17:16
*** JudeC has quit IRC17:33
openstackgerritMerged openstack/neutron-lbaas master: consume load_class_by_alias_or_classname from neutron-lib  https://review.openstack.org/487137 17:34
*** SumitNaiksatam has joined #openstack-lbaas17:38
rm_workjohnsom: fix is up17:44
rm_worki guess we need to test it against a centos image17:44
johnsomWhich?17:44
rm_workhttps://review.openstack.org/#/c/487671/ 17:44
rm_worki think i got all the bits17:44
rm_worktests, release note, docs17:44
johnsomOk, I will have a look.  This is kind of a bug, so I'm going to work to get Pike-3 out and then take a look17:45
openstackgerritMerged openstack/octavia-dashboard master: Add loading and error status to detail pages  https://review.openstack.org/481954 17:45
openstackgerritMerged openstack/neutron-lbaas master: use synchronized decorator from neutron-lib  https://review.openstack.org/487161 17:49
johnsomrm_work https://review.openstack.org/#/c/487666/ 17:55
openstackgerritGerman Eichberger proposed openstack/octavia master: Ignore 404 amphora error when deleting resources  https://review.openstack.org/487232 17:55
*** SumitNaiksatam has quit IRC17:56
*** SumitNaiksatam has joined #openstack-lbaas17:56
openstackgerritMerged openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/486325 18:05
rm_workxgerman_: commented on your last one18:10
*** kobis has joined #openstack-lbaas18:15
*** kobis has quit IRC18:27
tomtomtom@johnsom, @rm_work - I was able to get the octavia load balancing working after some post configuration checks and configuration changes.  Here is a list of stuff I did: 1) Place the VM (web server) network into the list of plugged_interfaces for IP namespace in amphora instance (for health checks) 2) Remove the interface from /etc/network/interfaces.d 3) Add controller ip port list controllers, for our install 10.80.20.14 18:29
tomtomtometc (in amphora instance /etc/octavia/amphora*.conf file) 4) Update the octavia listener table with the barbican tls cert url 5) ensure the cert gets copied to the amphora instance in /var/lib/octavia/certs directory (if directory is empty, it's not been done) 6) if cert does not exist, run openstack loadbalancer listener disable/enable to "force" it to update 7) delete the default route in the network namespace and add the18:30
tomtomtom8) if member is still offline ensure that monitor port for member is 80 and monitor ip address is the same as the member ip address in the member18:30
*** kobis has joined #openstack-lbaas18:31
johnsomtomtomtom Hmmm, let me try to pull this apart.18:36
johnsom3 - This is a configuration setting in octavia.conf : https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list18:36
johnsomIf that is set, it is automatically set in the amphora.conf in the amp18:36
johnsom1 & 2 - Theses should automatically be done when the member is added to the pool18:37
johnsom4 - This can be done from the API or CLI18:37
*** kobis has quit IRC18:38
johnsom5 - happens when 4 is done via the API or CLI or dashboard18:38
johnsom6 - same, happens when 418:38
tomtomtom1 & 2 right, but for some reason never got done "automatically"18:39
johnsom7 - default route in the network namespace is by default the VIP network default route.  For members, if the subnet specified when the member was created has host routes, those are honored18:39
tomtomtom3 - just saw that, hmm... I remember setting it, weird.18:39
tomtomtom4 - Tried the updated openstack cli and direct "curl" http call and it still does not get updated in the DB.18:40
tomtomtom5 - yeah it happens automatically in most cases but it's also taken the restart as mentioned.18:40
johnsomWhat OS are you using for your amphora image?18:41
tomtomtom7 - I believe you, but when I switched the other over to the namespace it picked up that default route instead.18:41
tomtomtomUsing ubuntu for amphora image.18:42
johnsomYeah, if a host route is not specified on the network, it will pick up the other default route18:42
johnsomSo, on 4, can we get your octavia API log to look at?18:43
*** armax has quit IRC18:49
openstackgerritMerged openstack/octavia master: Update release notes for work done in Pike  https://review.openstack.org/487666 18:54
*** cpusmith_ has joined #openstack-lbaas19:02
*** cpusmith has quit IRC19:06
*** SumitNaiksatam has quit IRC19:07
mnaserhi everyone!  i was hoping if i can get a +W on this -- it's useful for us to be able to do zero configuration deployments with puppet (notably, not having to manually put in a uuid), we use it in newton but i backported to ocata as it's just as useful there too -- https://review.openstack.org/#/c/487879/ and https://review.openstack.org/#/c/487883/ 19:13
*** gcheresh_ has joined #openstack-lbaas19:16
*** cpusmith_ has quit IRC19:19
*** cpusmith_ has joined #openstack-lbaas19:19
*** cpusmith has joined #openstack-lbaas19:20
*** cpusmith_ has quit IRC19:24
*** sshank has quit IRC19:36
*** gcheresh_ has quit IRC19:43
openstackgerritMerged openstack/neutron-lbaas master: Remove some LBaaSv1 references  https://review.openstack.org/486716 19:43
openstackgerritOpenStack Proposal Bot proposed openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/488169 19:50
*** gcheresh_ has joined #openstack-lbaas19:51
xgerman_rm_work questions about the proxy?19:51
xgerman_we replace the normal plugin with one which proxies EVERY request to Octavia — so 3rd party would need to be in Octavia19:52
openstackgerritOpenStack Proposal Bot proposed openstack/octavia master: Updated from global requirements  https://review.openstack.org/488170 19:52
*** JudeC has joined #openstack-lbaas19:52
openstackgerritOpenStack Proposal Bot proposed openstack/python-octaviaclient master: Updated from global requirements  https://review.openstack.org/488171 19:57
rm_workxgerman_: that was not really my question :P20:02
rm_worki'll get to it20:04
*** KeithMnemonic1 has joined #openstack-lbaas20:05
rm_workxgerman_: this needs a review tho: https://review.openstack.org/#/c/486859/ 20:05
xgerman_ok20:06
*** KeithMnemonic2 has quit IRC20:08
*** sshank has joined #openstack-lbaas20:14
johnsomUgh, was just about to do the release then this g-r came in swapping the one that came in this morning.20:15
johnsomCan I get one of you to hit these two so if they pass they go straight in?20:15
johnsomhttps://review.openstack.org/#/c/488170 20:15
rm_worklol sure20:15
johnsomhttps://review.openstack.org/#/c/488169 20:16
rm_workgot them20:16
johnsomThanks!20:16
johnsomAfter those I think we are ready to go20:16
*** gcheresh_ has quit IRC20:34
*** KeithMnemonic2 has joined #openstack-lbaas20:38
openstackgerritGerman Eichberger proposed openstack/octavia master: Ignore 404 amphora error when deleting resources  https://review.openstack.org/487232 20:40
*** KeithMnemonic1 has quit IRC20:42
*** sshank has quit IRC20:50
*** sshank has joined #openstack-lbaas20:52
*** yamamoto_ has joined #openstack-lbaas21:13
*** yamamoto_ has quit IRC21:20
*** gongysh has joined #openstack-lbaas21:36
*** gongysh has quit IRC21:36
*** aojea_ has quit IRC21:40
*** aojea has joined #openstack-lbaas21:41
*** aojea has quit IRC21:45
*** armax has joined #openstack-lbaas21:53
*** aojea has joined #openstack-lbaas21:54
*** tongl has quit IRC21:56
*** aojea has quit IRC21:59
*** yamamoto has joined #openstack-lbaas22:04
openstackgerritMerged openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/488169 22:06
johnsomThere is one....  Zuul is being super slow on the octavia one, worried it's going to fail.  Which frankly if it's going to fail I hope it hurries up....22:12
*** aojea has joined #openstack-lbaas22:14
*** fnaval has quit IRC22:16
*** armax has quit IRC22:17
*** armax has joined #openstack-lbaas22:18
*** cpusmith has quit IRC22:18
*** aojea has quit IRC22:19
*** rcernin has quit IRC22:26
*** catintheroof has quit IRC22:27
johnsom2 hr 36 min gate time.  Geez, I thought we had timeout limits...22:29
*** KeithMnemonic1 has joined #openstack-lbaas22:38
*** openstack has joined #openstack-lbaas22:46
*** KeithMnemonic2 has joined #openstack-lbaas22:46
*** jidar has joined #openstack-lbaas22:47
*** oomichi has joined #openstack-lbaas22:47
*** m-greene has joined #openstack-lbaas22:48
*** zw has joined #openstack-lbaas22:48
*** KeithMnemonic1 has quit IRC22:49
*** leitan has quit IRC23:23
*** leitan has joined #openstack-lbaas23:35
openstackgerritMerged openstack/octavia master: Updated from global requirements  https://review.openstack.org/488170 23:42
*** kbyrne has quit IRC23:47
*** kbyrne has joined #openstack-lbaas23:48
*** catintheroof has joined #openstack-lbaas23:52

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!