Friday, 2017-12-01

00:02 *** rcernin has quit IRC
00:02 *** rcernin has joined #openstack-lbaas
00:06 <bar_> cgoncalves, nice work!
00:06 <bar_> johnsom, Quota client: https://review.openstack.org/#/c/518767/
00:07 <johnsom> On the list!
00:07 *** rcernin has quit IRC
00:07 <johnsom> I would like to get that merged before Tuesday next week if we can.  I need to cut a queens release of the client.
00:07 *** rcernin has joined #openstack-lbaas
00:08 <bar_> I've got another 1 or 2 of my own, you know. They're just sitting there, waiting for some core to come to their rescue.
00:09 <johnsom> Or any other reviewer!  grin
00:09 <bar_> one of them already has a +1
00:09 <johnsom> Not 20?  grin
00:10 <bar_> what can I say, tough crowd
00:10 <johnsom> lol
00:13 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Optimize update_health process  https://review.openstack.org/504875
00:13 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Clean up test_update_db.py a little bit  https://review.openstack.org/520863
00:13 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
00:13 <rm_work> I hope that's correct rebasing
00:54 *** armax has quit IRC
01:19 <openstackgerrit> Bar RH proposed openstack/python-octaviaclient master: Extend loadbalancer_create valid VIP parameters combinations  https://review.openstack.org/519439
01:19 <openstackgerrit> Bar RH proposed openstack/python-octaviaclient master: Complement Octavia client with a set of features  https://review.openstack.org/522666
01:20 <rm_work> putting the amphora-api in the client might be good, but it's an admin thing so it's less urgent
01:25 *** sticker has joined #openstack-lbaas
01:26 <bar_> rm_work what do you suggest? is there a reason to defer it?
01:27 <rm_work> no reason to defer
01:27 <rm_work> just, if there's more important work
01:27 <rm_work> but I'd really like to have it :P
01:27 <rm_work> I assume like...
01:27 <rm_work> openstack loadbalancer amphora list
01:28 <rm_work> and
01:28 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Provider driver spec  https://review.openstack.org/509957
01:28 <rm_work> openstack loadbalancer amphora list --loadbalancer <lb_id>
01:28 <bar_> it is working, even with filters (loadbalancer id, compute id, role, provisioning status)
01:28 <bar_> I've been using it for a few days now...
01:28 <rm_work> err, wait, did you do it already?
01:29 <bar_> :-D
01:29 <rm_work> I didn't actually see
01:29 <rm_work> lol
01:29 <rm_work> loool yes you did
01:29 <rm_work> cool, I'm asking for something you already wrote
01:29 <rm_work> lol
01:29 <rm_work> yep, you even did it the way I was thinking
01:29 <rm_work> whelp, I'll review that when I have a chance, prolly tomorrow
01:30 <bar_> awesome, note that there're dependencies (that await review too).
01:49 <bzhao> johnsom, ping. Hi sir, are you still online?
01:54 <bar_> rm_work, do you have any idea why the documentation of the octavia api contains only a subset of the possible amphora statuses?
01:55 <bzhao> rm_work, aha :). Sir, could you help me understand the details of what johnsom said in the UDP spec, https://review.openstack.org/#/c/503606/7/specs/version1.1/udp_support.rst, in comment L70? I'm not sure I understood it correctly. :( Sorry
01:56 *** bbbbzhao_ has joined #openstack-lbaas
01:58 <bzhao> Did johnsom suggest it should still be 1:1 UDP listener:vrrp_instance? And use a check script to move the VRRP state of an LB that is already MASTER from MASTER to BACKUP if a UDP keepalived dies.
02:12 *** AlexeyAbashkin has joined #openstack-lbaas
02:14 *** rcernin_ has joined #openstack-lbaas
02:16 *** mestery_ has joined #openstack-lbaas
02:16 *** AlexeyAbashkin has quit IRC
02:16 <rm_work> bar_: I'm not sure why it'd be partial -- unless it's assuming that EVERYTHING has some statuses, which should be added to that list implicitly
02:17 <rm_work> bar_: which statuses does it list? can you link me the docs you're looking at?
02:17 *** ptoohill- has joined #openstack-lbaas
02:19 *** ipsecguy_ has joined #openstack-lbaas
02:19 <bar_> sec
02:20 *** isp has joined #openstack-lbaas
02:20 *** annp has joined #openstack-lbaas
02:20 <bar_> rm_work, https://github.com/openstack/octavia/blob/master/octavia/common/constants.py#L101-L102
02:21 *** rcernin has quit IRC
02:22 *** ipsecguy has quit IRC
02:22 *** issp has quit IRC
02:22 *** zigo has quit IRC
02:22 *** mestery has quit IRC
02:22 *** ptoohill1 has quit IRC
02:22 *** mestery_ is now known as mestery
02:24 <bar_> note that I couldn't find any reference to this constant in the code, but the content is identical to the API doc
02:27 *** zigo has joined #openstack-lbaas
02:29 *** zigo is now known as Guest13268
02:34 *** fnaval has joined #openstack-lbaas
02:35 <openstackgerrit> Bar RH proposed openstack/python-octaviaclient master: Complement Octavia client with a set of features  https://review.openstack.org/522666
02:38 <rm_work> those statuses are all legit
02:38 <rm_work> and I have seen them all within the last hour :P
02:38 <rm_work> bar_: ^^
02:38 <rm_work> which docs are you looking at?
02:38 <rm_work> and what is the discrepancy exactly?
02:39 <bar_> there should be more
02:39 <rm_work> err
02:39 <rm_work> what would they be?
02:39 <bar_> e.g. pending_update is missing from this list
02:39 <rm_work> amps don't go to PENDING_UPDATE
02:40 <rm_work> they go from BOOTING->READY->ALLOCATED->(PENDING_DELETE->DELETED)/ERROR
02:40 <rm_work> ah hmmmm, actually I MIGHT have seen "PENDING_CREATE"
02:40 <rm_work> but there is no PENDING_UPDATE state for them
02:40 <bar_> aha
02:41 <bar_> nor ACTIVE, right?
02:41 <rm_work> right
02:41 <rm_work> ACTIVE == ALLOCATED
02:41 <rm_work> or READY, for the spares pool
02:42 <rm_work> hmm, it is interesting we don't have PENDING_CREATE there
02:42 <rm_work> it's also interesting that "SUPPORTED_PROVISIONING_STATUSES" includes some amphora statuses
02:42 <rm_work> that aren't possible for LBs
02:42 <rm_work> amps don't have "provisioning/operating" statuses
02:43 <rm_work> so the AMPHORA_* statuses shouldn't be listed in SUPPORTED_PROVISIONING_STATUSES
02:43 <rm_work> just SUPPORTED_AMPHORA_STATUSES
02:43 <rm_work> again, which docs page are you reading?
02:43 <rm_work> I wonder if it is MORE correct
02:44 <bar_> parameters.yaml
02:44 <rm_work> err
02:45 <bar_> need a link?
02:45 <rm_work> yeah, that'd be easier
02:46 <bar_> https://github.com/openstack/octavia/blob/master/api-ref/source/parameters.yaml#L146-L149
02:51 <rm_work> ok yeah, the docs are mostly correct
02:51 <rm_work> they are missing PENDING_CREATE
02:51 <rm_work> if you wanted to add that, go for it :)
02:52 <bar_> no, I need to specify which statuses are valid for the client
02:52 <openstackgerrit> Bar RH proposed openstack/python-octaviaclient master: Complement Octavia client with a set of features  https://review.openstack.org/522666
02:53 <rm_work> yeah, but I mean
02:53 <rm_work> if you want to add that fix to one of your octavia patches, I'd +2 :P
02:54 <bar_> haha, ok. intriguing proposal indeed.
03:01 <openstackgerrit> Bar RH proposed openstack/octavia master: Fix filtering in list API calls  https://review.openstack.org/522689
03:02 <bar_> rm_work, ^^
03:03 <openstackgerrit> Bar RH proposed openstack/python-octaviaclient master: Complement Octavia client with a set of features  https://review.openstack.org/522666
03:07 *** bar_ has quit IRC
03:22 <johnsom> bar_ what is all this????
03:23 <johnsom> Those two match..
03:23 <johnsom> Amps only have booting and ready for pre-allocated states
03:29 <johnsom> rm_work where did you see PENDING_CREATE being used for an amphora?
03:30 <johnsom> bzhao o/
03:31 *** codenamelxl has joined #openstack-lbaas
03:31 <bzhao> johnsom, Hi sir. Could you please check https://review.openstack.org/#/c/503606/7/specs/version1.1/udp_support.rst at L70, where I replied, if you are free? I'm not sure I understood it correctly. Sorry. :)
03:33 <codenamelxl> Hi everyone, I'm having trouble with a floating IP for an LB.  I can connect to all VMs through floating IPs. I can connect to the LB VIP. However, the floating IP attached to the VIP just does not work.
03:34 <johnsom> codenamelxl Are you using a DVR router?
03:34 <codenamelxl> yes
03:34 <johnsom> bzhao Looking
03:34 <codenamelxl> it works with legacy
03:35 <bzhao> johnsom, Thank you. Because I am coding from the backend layer up to the front layer, I want to make sure what the backend part looks like.
03:35 <johnsom> codenamelxl If your neutron is older than Pike, DVR has a bug where adding floating IPs to some types of ports does not work.
03:35 <codenamelxl> I'm using Pike DVR
03:35 <johnsom> Hmm, they were supposed to have fixed it in the Pike release.
03:36 <johnsom> So the floating IP attaches, but traffic does not flow?
03:36 <codenamelxl> Yes
03:36 <codenamelxl> it attached
03:37 <codenamelxl> traffic does not come through. Checked the sec group. I can access the VIP too.
03:37 <johnsom> Yeah, this was a long-standing bug (like 4+ releases) in DVR.
03:37 <johnsom> Floating IPs associated with an unbound port with DVR routers will not be distributed, but will be centralized and implemented in the SNAT namespace of the Network node or dvr_snat node. Floating IPs associated with allowed_address_pair port IP and are bound to multiple active VMs with DVR routers will be implemented in the SNAT namespace in the Network node or dvr_snat node. This will address VRRP use cases.
03:37 <johnsom> More information about this is captured in bug 1583694.
03:37 <openstack> bug 1583694 in neutron "[RFE] DVR support for Allowed_address_pair port that are bound to multiple ACTIVE VM ports" [Wishlist,Fix released] https://launchpad.net/bugs/1583694 - Assigned to Swaminathan Vasudevan (swaminathan-vasudevan)
03:37 <johnsom> https://docs.openstack.org/releasenotes/neutron/pike.html
03:38 <johnsom> That is the release note for Pike that addressed the issue.  At least that is what I was told.
03:39 <bzhao> https://bugs.launchpad.net/bugs/1733852
03:39 <openstack> Launchpad bug 1733852 in neutron "Incorrect ARP entries in new DVR routers for Octavia VRRP addresses" [Medium,In progress] - Assigned to Daniel Russell (danielr-2)
03:39 <johnsom> That is all vague and complex, but I was told it fixed the bug we saw
03:39 <johnsom> Oh no, there is another one????
03:39 <bzhao> Yeah. I found it just last week.
03:39 <codenamelxl> Oops
03:42 <johnsom> Well, sorry about that.  You can poke folks in the neutron channel to see if they can backport that fix.
03:42 <johnsom> The other alternative is to use a non-DVR router for your VIPs.
03:45 <johnsom> bzhao Yeah, sorry I was not clear
03:45 <johnsom> Can you chat now?
03:45 <johnsom> On IRC?
03:45 <bzhao> me?
03:45 <johnsom> yes
03:46 <johnsom> Do you have a few minutes?  I will try to explain better
03:46 <bzhao> Yeah.
03:46 <bzhao> Thank you
03:46 <johnsom> Ok.  So the short of it: you were correct, I was wrong.
03:47 <johnsom> I figured out that yes, the UDP keepaliveds should have no VRRP.
03:48 <bzhao> I just want to implement it like your suggestion. So sorry to bother you. :)
03:48 <johnsom> What I was proposing is to add a check script to the main VRRP keepalived.  We already have a directory that it will run scripts in.
03:49 <bzhao> OK. It's better to list the UDP keepalived process ids and check whether all of them are alive from the single keepalived process
03:49 <johnsom> This script will check the health of the UDP keepalived, and if it has failed, it will fail the main VRRP keepalived checks to move the VIP
03:49 <bzhao> Yeah. I think we are close. :)
03:50 *** rcernin has joined #openstack-lbaas
03:50 <bzhao> And the check script must be the default for UDP cases.
03:51 *** rcernin_ has quit IRC
03:52 <johnsom> The script will not check the health of the UDP member servers; that is a different script
03:52 <johnsom> It will just check the UDP keepalived health.
03:53 <bzhao> Yeah, just check the keepalived process. Should we check that the keepalived process id is alive?
03:55 *** links has joined #openstack-lbaas
03:55 <bzhao> for example, if a UDP listener starts up, we put its process id into a particular directory, then the check script just lists the dir and checks the process ids.
03:56 <bzhao> Once there is a failure, the main VRRP instance moves the VIP to the backup one
03:56 <johnsom> Yes, at a minimum.  There is a signal you can send keepalived that dumps a status file to /tmp; it may have better info, but at a minimum check the process is running via kill -0 or systemctl status
03:56 <johnsom> Yep
03:57 <johnsom> This is how we check the additional haproxy processes for each listener
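
A minimal sketch of the check script johnsom describes above, assuming each UDP listener drops a <listener_id>.pid file into a directory the script scans; the directory path and file naming here are hypothetical, not Octavia's actual layout. Keepalived treats a non-zero exit from a tracked check script as a failure, which is what releases the VIP to the BACKUP amphora:

    #!/usr/bin/env python
    # Hedged sketch only: paths and naming are assumptions, not Octavia code.
    import os
    import sys

    PID_DIR = '/var/lib/octavia/udp_listener_pids'  # hypothetical location

    def pid_alive(pid):
        """kill -0 semantics: is there a live process with this pid?"""
        try:
            os.kill(pid, 0)
        except ProcessLookupError:
            return False
        except PermissionError:
            return True  # the process exists but belongs to another user
        return True

    def main():
        for name in os.listdir(PID_DIR):
            with open(os.path.join(PID_DIR, name)) as f:
                pid = int(f.read().strip())
            if not pid_alive(pid):
                sys.exit(1)  # fail the VRRP check so the VIP moves
        sys.exit(0)

    if __name__ == '__main__':
        main()
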
03:59 <bzhao> OK, I see. Sorry. I think I understand now.
03:59 <bzhao> Thank you
03:59 <johnsom> Cool.  No worries.  This is why we do specifications.  We can work out the details and make the best solution
04:01 <bzhao> aha, thanks for your guidance
04:01 <bzhao> All of it is very useful. :)
04:02 <johnsom> Ha, thanks!
04:02 <johnsom> Let me know if there are more questions?
04:03 <bzhao> Also, I compared the haproxy and LVS concepts for the configuration options. I'm afraid there are some conflicts with the healthmonitor API.
04:04 <johnsom> I commented on some of that in the patch.
04:05 <bzhao> Yeah. I just found the concepts are different. If we think that's OK, I can implement it like you mentioned.
04:05 <rm_work> johnsom: I literally watched it happen
04:05 <johnsom> bzhao Yeah, they may be slightly different
04:05 <rm_work> johnsom: https://i.imgur.com/cmWeTYU.png
04:05 <rm_work> it's pre-booting
04:06 <rm_work> *absolute initial state*
04:06 <rm_work> I happened to catch that while I was watching
04:07 <johnsom> Hmmmm, what the heck is creating that....  I never intended that state to be there
04:07 <bzhao> johnsom, OK, that's clear from my side. No other questions for now; if any issue comes up as this moves forward, I'd appreciate your kind help and advice. :) Thank you.
04:08 <johnsom> bzhao Ok, no problem.  Let's make sure we document those differences
04:09 <bzhao> johnsom, aha, I will list the descriptions from both sides and add them into the doc work.
04:10 <johnsom> rm_work Crumb, I found it.   sigh
04:12 <rm_work> johnsom: so you intended them to be CREATED in the "BOOTING" state?
04:12 <rm_work> the initial record?
04:13 <johnsom> Yeah, PENDING_CREATE was intended to be BOOTING
04:14 <johnsom> But the DB record is created with PENDING_CREATE, then moves to BOOTING
04:14 <bzhao> codenamelxl, I'm not sure whether this feature could solve your issue, if your physical env lets you do this. DVR supports centralized DNAT. Please check:  https://review.openstack.org/#/c/485333
04:14 <rm_work> johnsom: I'm not sure that's actually too bad
04:15 <johnsom> Yeah
04:15 <rm_work> because... initially they... AREN'T booting
04:15 <rm_work> technically
04:15 <rm_work> since, you know... it's just a DB record
04:15 <johnsom> Yeah
04:15 <rm_work> and then when the nova create happens, it switches to BOOTING, right :P
04:16 <johnsom> Yeah
04:16 <rm_work> so yeah, it'd just be a doc update thing
04:16 * rm_work shrugs
04:16 <johnsom> Yeah, it's docs and constants
04:16 <rm_work> ah, and maybe constants
04:16 <rm_work> yeah
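
Pulling the status discussion together, a sketch of the amphora lifecycle as described in this conversation; illustrative only, since the authoritative lists live in octavia/common/constants.py and api-ref/source/parameters.yaml, both linked earlier:

    # Illustrative summary of the amphora lifecycle discussed above;
    # not the actual contents of octavia/common/constants.py.
    AMPHORA_TRANSITIONS = {
        'PENDING_CREATE': ('BOOTING',),   # bare DB record, before the nova boot
        'BOOTING': ('READY', 'ERROR'),    # nova create in progress
        'READY': ('ALLOCATED',),          # idle in the spares pool
        'ALLOCATED': ('PENDING_DELETE', 'ERROR'),  # attached to a load balancer
        'PENDING_DELETE': ('DELETED',),
        'DELETED': (),
        'ERROR': (),
    }
    # Amphorae never enter PENDING_UPDATE or ACTIVE; "ACTIVE" roughly maps to
    # ALLOCATED (or READY, for spares), per rm_work's explanation above.
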
04:17 <rm_work> k, I'll be back in a bit, need to run to Target
04:17 <johnsom> Bar got it
04:17 <johnsom> o/
04:17 <rm_work> Targé
04:17 <rm_work> heh k
04:17 <johnsom> Exactly
04:22 <codenamelxl> @bzhao: https://bugs.launchpad.net/neutron/+bug/1733852 suggests the bug only happens if I add the router after I created the LB. Just checked the router's ARP; it looks fine. Maybe there is something else.
04:22 <openstack> Launchpad bug 1733852 in neutron "Incorrect ARP entries in new DVR routers for Octavia VRRP addresses" [Medium,In progress] - Assigned to Daniel Russell (danielr-2)
04:24 *** threestrands_ has joined #openstack-lbaas
04:24 *** threestrands_ has quit IRC
04:24 *** threestrands_ has joined #openstack-lbaas
04:26 *** threestrands has quit IRC
04:33 <codenamelxl> @bzhao: Checked the ARP on the amphora node. There is no ARP resolution for the router... Weird?
04:46 *** kbyrne has quit IRC
04:48 <codenamelxl> @bzhao Found it: the router can't reach the amphora. Not sure why. Any direction? Thanks
05:03 *** reedip has quit IRC
05:09 *** rcernin_ has joined #openstack-lbaas
05:09 *** rcernin has quit IRC
05:14 <bzhao> codenamelxl, the ARP entries in the compute nodes' routers should be overridden by the vrrp_port's MAC, because the vrrp_port uses the vip_port as its allowed_address_pair, but l2pop generates the ARP entries just from the subnet ports (including unbound ports). So I don't think we can solve this through the API/configuration. If possible, modify the ARP entry for the VIP address, changing its MAC to the real MAC of the LB that holds the MASTER role, and try whether it works.
05:15 <codenamelxl> Both of my IPs point to the VRRP's MAC
05:21 <bzhao> er, maybe there is something we don't cover.
05:24 <rm_work> johnsom: errm, how would a *member* get a 409 on DELETE
05:24 <codenamelxl> Yep, it might be just me screwing something up.  Right now, I can ping the router from the amphora, but the router can't ping the amphora. Any suggestions?
05:24 <rm_work> if the LB itself is ACTIVE
05:24 <rm_work> this is ... perplexing
05:25 <johnsom> rm_work the LB is in a PENDING
05:25 <rm_work> ok, but like
05:25 <rm_work> this test
05:26 <rm_work> literally does "self.await_loadbalancer_active(self.lb_id)"
05:26 <rm_work> and then tries to start deleting stuff (starting with members)
05:26 <rm_work> and gets a 409
05:26 <rm_work> 2017-11-30 20:27:59.406 26528 INFO tempest.lib.common.rest_client [req-afb80d65-58ed-464e-ab9d-0898d365f6d0 ] Request (BasicOpsTest:test_octavia_failover): 200 GET https://octavia.phx-private.openstack.int.godaddy.com:443/v2.0/lbaas/loadbalancers/d31d4ff4-dd23-4d6c-bf76-cb38b802770a 0.411s
05:26 <rm_work> ^^ that's it seeing the LB back in ACTIVE
05:26 <rm_work> the next line is:
05:26 <rm_work> 2017-11-30 20:28:07.359 26528 INFO tempest.lib.common.rest_client [req-5d448256-cbbc-4f07-a931-678aef2fdbd0 ] Request (BasicOpsTest:_run_cleanups): 409 DELETE https://octavia.phx-private.openstack.int.godaddy.com:443/v2.0/lbaas/pools/ed3a81f5-7905-4ae2-aeea-da0b08475da2/members/a8599923-8de3-47e6-831c-e561e568d7b1 0.149s
05:27 <rm_work> so unless it somehow went immediately back to PENDING ....
05:27 <johnsom> Right
05:28 <rm_work> how would it go back to pending
05:28 <johnsom> I would look at the timings and the o-cw logs
05:28 <rm_work> yeah, I guess so, ugh
05:31 <rm_work> yeah, the API confirms it's somehow immutable
05:31 <rm_work> but that makes no sense :/
05:35 <rm_work> ahh wut
05:36 <rm_work> ok well, that's ... not what I expected. ok, different issue
05:36 <rm_work> it actually failed while trying to do the self.await_loadbalancer_active
05:36 <rm_work> T_T
05:36 *** sticker has quit IRC
05:38 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: WIP: Failover test  https://review.openstack.org/501559
05:39 *** fnaval has quit IRC
05:40 <johnsom> I beat the crud out of that API, so I'm pretty confident we don't have locking issues
05:41 <rm_work> well
05:41 <rm_work> I'm not sure what's happening here either
05:42 <rm_work> or why it seems to fail so darned consistently right here
05:46 <johnsom> It's not just this? http://logs.openstack.org/59/501559/13/check/octavia-v2-dsvm-scenario/97fe8b2/job-output.txt.gz#_2017-11-28_12_04_43_165394
05:47 <rm_work> no
05:47 <rm_work> that's because devstack doesn't update the thing
05:47 <rm_work> and I think that devstack is only in SINGLE mode
05:47 <rm_work> this test only works in ACTIVE_STANDBY topo
05:47 <rm_work> I need to figure out a way to do it for SINGLE and for upstream
05:48 <rm_work> it wouldn't work on upstream ACTIVE_STANDBY either
05:48 <rm_work> :(
05:48 <johnsom> Yeah, before, I think we didn't have enough RAM on the gate hosts to boot act/stdby
05:48 <rm_work> once we get flavors working
05:48 <rm_work> we can have gate tests use that
05:48 <rm_work> register a flavor for the two topos
05:48 <rm_work> and run tests accordingly
05:49 *** gcheresh has joined #openstack-lbaas
05:50 <rm_work> oh damnit, I did the wrong thing in my debug
05:50 *** eN_Guruprasad_Rn has joined #openstack-lbaas
05:50 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: WIP: Failover test  https://review.openstack.org/501559
06:01 *** armax has joined #openstack-lbaas
06:10 *** pcaruana has joined #openstack-lbaas
06:24 *** threestrands_ has quit IRC
06:27 *** armax has quit IRC
06:35 *** gcheresh has quit IRC
06:52 <openstackgerrit> Hengqing Hu proposed openstack/octavia-dashboard master: Update openstacksdk construction to be forward compatible  https://review.openstack.org/524011
07:02 *** reedip has joined #openstack-lbaas
07:06 <rm_work> ah, somehow it tries to do tempest cleanup before the test is totally done or something O_o
07:06 <rm_work> it's timing out hitting the API, it looks like? wtf
07:06 * rm_work dies
07:08 *** fnaval has joined #openstack-lbaas
07:13 *** fnaval has quit IRC
07:26 *** bbbbzhao_ has quit IRC
07:29 *** rcernin_ has quit IRC
07:43 *** fnaval has joined #openstack-lbaas
07:47 *** fnaval has quit IRC
08:03 *** slaweq has joined #openstack-lbaas
08:15 *** AlexeyAbashkin has joined #openstack-lbaas
08:27 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Support UDP load balance  https://review.openstack.org/503606
09:08 <openstackgerrit> Bernard Cafarelli proposed openstack/octavia master: Rework amphora agent installation element  https://review.openstack.org/522626
09:14 <bcafarel> rm_work: ^ merged amphora elements with a few enhancements :)
09:15 *** gcheresh has joined #openstack-lbaas
09:21 *** isp is now known as issp
09:45 *** ipsecguy_ is now known as ipsecguy
09:47 *** codenamelxl has left #openstack-lbaas
09:58 *** sri_ has joined #openstack-lbaas
10:28 *** gcheresh has quit IRC
10:33 *** salmankhan has joined #openstack-lbaas
10:33 *** openstackgerrit has quit IRC
10:43 *** fnaval has joined #openstack-lbaas
10:44 *** salmankhan has quit IRC
10:47 *** fnaval has quit IRC
11:14 *** salmankhan has joined #openstack-lbaas
11:43 *** fnaval has joined #openstack-lbaas
11:48 *** fnaval has quit IRC
11:55 *** salmankhan has quit IRC
12:31 *** gcheresh has joined #openstack-lbaas
12:53 *** yamamoto has quit IRC
12:54 *** yamamoto has joined #openstack-lbaas
13:09 *** yamamoto has quit IRC
13:15 *** pckizer has quit IRC
13:17 *** links has quit IRC
13:22 *** openstackgerrit has joined #openstack-lbaas
13:22 <openstackgerrit> Bernard Cafarelli proposed openstack/octavia master: Rework amphora agent installation element  https://review.openstack.org/522626
13:23 *** pck has joined #openstack-lbaas
13:25 *** yamamoto has joined #openstack-lbaas
13:27 *** eN_Guruprasad_Rn has quit IRC
13:30 *** pck has quit IRC
13:30 *** pck has joined #openstack-lbaas
13:32 *** yamamoto has quit IRC
13:33 *** yamamoto has joined #openstack-lbaas
13:51 -openstackstatus- NOTICE: gerrit has been restarted to get it back to its normal speed.
13:58 *** links has joined #openstack-lbaas
14:01 *** yamamoto has quit IRC
14:02 *** AlexeyAbashkin has quit IRC
14:09 *** salmankhan has joined #openstack-lbaas
14:17 *** yamamoto has joined #openstack-lbaas
14:26 *** links has quit IRC
14:33 *** yamamoto has quit IRC
14:41 *** slaweq has quit IRC
14:41 *** slaweq has joined #openstack-lbaas
14:52 *** Alex_Staf has joined #openstack-lbaas
14:53 *** sdaniel has joined #openstack-lbaas
14:55 <sdaniel> Hi, I would like to ask: how can I identify why the load balancer is not deployable?
14:55 <sdaniel> I'm running OS Ocata and tried to configure it based on the official guidelines
14:59 *** yamamoto has joined #openstack-lbaas
15:00 *** yamamoto has quit IRC
15:32 *** armax has joined #openstack-lbaas
15:35 *** yamamoto has joined #openstack-lbaas
15:39 *** Alex_Staf has quit IRC
15:50 *** fnaval has joined #openstack-lbaas
16:04 *** Alex_Staf has joined #openstack-lbaas
16:04 *** slaweq has quit IRC
16:04 *** slaweq has joined #openstack-lbaas
16:09 *** slaweq has quit IRC
16:09 <xgerman_> sdaniel what does "not deployable" mean?
16:10 *** Alex_Staf has quit IRC
16:22 <sdaniel> just a sec, I'll read it from the logs
16:27 *** armax has quit IRC
16:28 <sdaniel> Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
16:29 <sdaniel> I got this or the following:
16:29 <xgerman_> This is not Octavia but neutron-lbaas
16:29 <sdaniel> Loadbalancer 413c7406-a916-4278-bacd-dd4ffd2c8626 is not deployable.
16:30 <xgerman_> ok, so you want to check if the lbaas agent is running
16:30 <johnsom> This is a known neutron warning that has no meaning: Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
16:30 <johnsom> Typically it does actually load the driver
16:32 <sdaniel> Which of the two is the expected warning log?
16:33 <sdaniel> Ok, so the first one
16:35 <sdaniel> And what about the following warning: "Stats socket not found for loadbalancer f533b6dd-495f-4393-817f-b21b3640c96e"?
16:38 <johnsom> Yeah, that happens with that old driver.  A load balancer has been created but a listener has not yet been created.
16:42 *** slaweq has joined #openstack-lbaas
16:42 *** slaweq_ has joined #openstack-lbaas
16:42 <sdaniel> And what about the other?
16:43 <xgerman_> did you check if the agents are running?
16:44 <sdaniel> The service is active. I did all the steps from here: https://docs.openstack.org/ocata/networking-guide/config-lbaas.html
16:44 <sdaniel> But I can't ping
16:44 <sdaniel> (Dashboard is not installed at all, so I skipped that part)
16:46 *** slaweq has quit IRC
16:48 <sdaniel> neutron-lbaasv2-agent.service is running
16:48 <xgerman_> neutron agent list or so?
16:48 <xgerman_> ok
16:48 <xgerman_> you are not supposed to ping anyway
16:49 <xgerman_> this will run an haproxy inside a namespace
16:49 <sdaniel> (I have a couple of agents: metadata, loadbalancerv2, linux bridge, dhcp)
16:49 <johnsom> Yeah, since you are running the old driver, "neutron agent-list" should show a :-) next to it
16:49 <sdaniel> Yes, I heard about that
16:50 <sdaniel> (I found out the agent status result came from there)
16:51 <sdaniel> If I exec
16:51 <sdaniel> the ip netns command
16:51 <sdaniel> I get only 1 result
16:51 <sdaniel> a qdhcp-.....
16:54 <xgerman_> try to create a listener
16:54 *** yamamoto has quit IRC
16:56 *** yamamoto has joined #openstack-lbaas
17:01 *** yamamoto has quit IRC
17:03 <johnsom> Yes, after you have created a listener I would expect to see a q-lbaas namespace appear. At that point you know the driver is configured correctly and working
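
A concrete illustration of that verification flow, reusing the load balancer ID from sdaniel's log; the listener-create flags follow the Ocata guide linked above, and the qlbaas- namespace prefix shown is an assumption that may vary by release:

    $ neutron lbaas-listener-create --loadbalancer 413c7406-a916-4278-bacd-dd4ffd2c8626 \
        --protocol HTTP --protocol-port 80 --name test-listener
    $ ip netns
    qlbaas-413c7406-a916-4278-bacd-dd4ffd2c8626
    qdhcp-...
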
17:11 *** Alex_Staf has joined #openstack-lbaas
17:11 *** armax has joined #openstack-lbaas
17:13 *** AlexeyAbashkin has joined #openstack-lbaas
17:15 *** Alex_Staf has quit IRC
17:15 *** slaweq_ has quit IRC
17:16 *** slaweq has joined #openstack-lbaas
17:17 *** AlexeyAbashkin has quit IRC
17:20 *** slaweq has quit IRC
17:37 *** AlexeyAbashkin has joined #openstack-lbaas
17:41 *** AlexeyAbashkin has quit IRC
17:42 *** slaweq has joined #openstack-lbaas
17:52 *** slaweq has quit IRC
17:53 *** slaweq has joined #openstack-lbaas
17:57 *** slaweq has quit IRC
17:58 *** yamamoto has joined #openstack-lbaas
18:08 *** yamamoto has quit IRC
18:12 *** salmankhan has quit IRC
18:15 *** salmankhan has joined #openstack-lbaas
18:21 *** slaweq has joined #openstack-lbaas
18:25 <openstackgerrit> Jude Cross proposed openstack/neutron-lbaas master: [WIP] Remove unnecessary lazy-loaded queries  https://review.openstack.org/477698
18:26 *** slaweq has quit IRC
18:30 *** sshank has joined #openstack-lbaas
18:32 *** salmankhan has quit IRC
18:33 *** sshank has quit IRC
18:41 *** sshank has joined #openstack-lbaas
18:45 *** salmankhan has joined #openstack-lbaas
18:49 <rm_work> johnsom / xgerman_ is the failover API synchronous?
18:49 <rm_work> looks like it?
18:49 <rm_work> not using a handler
18:49 <xgerman_> No, it throws it on the queue
18:49 <rm_work> hmmm
18:49 <rm_work> really?
18:50 <rm_work> so https://review.openstack.org/#/c/523242/4/octavia/controller/worker/controller_worker.py
18:50 <xgerman_> + failover has the lowest priority if you run rate limiting…
18:50 <rm_work> johnsom is marking the LB active again immediately after running self._perform_amphora_failover
18:50 <rm_work> oh derp
18:51 <rm_work> I thought I was in the API file
18:51 <rm_work> nm
18:51 *** tongl has joined #openstack-lbaas
19:04 <xgerman_> rm_work: https://review.openstack.org/#/c/524254/
19:04 <xgerman_> backport of the VIP port fix
19:05 <rm_work> +A
19:05 <xgerman_> thanks
19:05 <xgerman_> once this merges I need to play with my deployment scripts ;-)
19:08 <rm_work> xgerman_: so how do you trigger an *amphora* failover with the API
19:08 <rm_work> or does it only do LB failover right now
19:08 <xgerman_> LB failover
19:08 <rm_work> k
19:09 <xgerman_> amphora would be on the octavia API
19:09 <xgerman_> whereas failing over an LB has meaning for all kinds of vendors
19:24 <rm_work> yeah, and this merged before the amp API, didn't it
19:24 <rm_work> k, guess it needs to be added
19:27 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
19:28 *** fnaval has quit IRC
19:29 *** sshank has quit IRC
19:30 *** fnaval has joined #openstack-lbaas
19:32 <rm_work> would still love to see https://review.openstack.org/#/c/504875/ merged :)
19:32 <rm_work> pretty plox
19:33 <xgerman_> +A
19:33 <rm_work> ah, looks like https://review.openstack.org/#/c/520863/3 needs +A again due to rebase too
19:33 <rm_work> ^_^
19:33 <rm_work> thanks
19:44 *** tongl has quit IRC
19:50 *** sdaniel has quit IRC
19:52 <johnsom> I am going to run to lunch, but I plan to keep working on the driver spec the rest of the day.
19:53 <xgerman_> +1
20:04 <openstackgerrit> Merged openstack/octavia master: Fix the failover API to not fail with immutable LB  https://review.openstack.org/523242
20:11 *** AlexeyAbashkin has joined #openstack-lbaas
20:15 *** AlexeyAbashkin has quit IRC
20:23 *** slaweq has joined #openstack-lbaas
20:25 *** pcaruana has quit IRC
20:27 *** slaweq has quit IRC
20:39 <openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: [WIP] L3 ACTIVE-ACTIVE Data model impact  https://review.openstack.org/524722
20:39 <openstackgerrit> Merged openstack/octavia master: Optimize update_health process  https://review.openstack.org/504875
20:39 <openstackgerrit> Merged openstack/octavia master: Clean up test_update_db.py a little bit  https://review.openstack.org/520863
20:40 <openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: [WIP] L3 ACTIVE-ACTIVE Data model impact  https://review.openstack.org/524722
20:41-20:51 *** sanfern joined and quit #openstack-lbaas repeatedly (connection flapping; 27 join/quit lines condensed)
21:10 *** slaweq has joined #openstack-lbaas
21:11 *** AlexeyAbashkin has joined #openstack-lbaas
21:13 <johnsom> rm_work Way to ping me at lunch... grin.  I tried to jump in with some info in infra.  Not sure if there was more needed or other discussions.
21:15 *** AlexeyAbashkin has quit IRC
21:18 <rm_work> yeah...
21:22 *** rcernin has joined #openstack-lbaas
21:28 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
21:30 *** salmankhan has quit IRC
21:38 *** sshank has joined #openstack-lbaas
21:42 *** tongl has joined #openstack-lbaas
21:44 *** ianychoi has quit IRC
21:49 *** gcheresh has quit IRC
22:18 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
22:19 *** salmankhan has joined #openstack-lbaas
22:33 *** bar_ has joined #openstack-lbaas
22:33 <bar_> rm_work, hey
22:35 <bar_> johnsom, were you looking for me yesterday?
22:35 <johnsom> bar_ Seems like a year ago... ha
22:36 <bar_> ha, I have just read the log from yesterday
22:37 <johnsom> bar_ Oh, never mind.  Adam clarified for me.
22:37 <bar_> johnsom, PENDING_CREATE is here to stay, correct?
22:37 <johnsom> Yeah, my bad on that one.
22:37 <bar_> np
22:38 *** sshank has quit IRC
22:41 <bar_> johnsom, perhaps you would know why this patch (https://review.openstack.org/#/c/522666/) won't verify?
22:42 <bar_> I got a dependency error for the former patch set; the current one just isn't handed over to Zuul, so it seems.
22:44 <johnsom> Umm, that seems like a zuul bug/issue.  There was an announcement about zuul issues this morning.  I would try another recheck.
22:44 <bar_> done
22:45 <johnsom> Hmm, no luck there either.  I will ping in the infra channel
22:46 <bar_> thx!
22:48 <openstackgerrit> Hengqing Hu proposed openstack/octavia master: Support UDP load balance  https://review.openstack.org/503606
22:50 <johnsom> bar_ Yeah, ok, infra says you have a dependency loop, which you kind of do.
22:51 <johnsom> bar_ Depends-On is only used for cross-repo dependencies; if it's in the same repo, you just use the parent/child relationship in git
22:51 <bar_> johnsom, ok, I did both to make sure
22:51 <openstackgerrit> Michael Johnson proposed openstack/python-octaviaclient master: Complement Octavia client with a set of features  https://review.openstack.org/522666
22:52 <johnsom> That should fix you up
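
Spelled out, the two mechanisms johnsom contrasts look like this; the Change-Id shown is a placeholder:

    # Cross-repo only: a footer in the dependent change's commit message
    Depends-On: I0123456789abcdef0123456789abcdef01234567

    # Same repo: no footer; stack the child commit on its parent instead
    git rebase -i master   # order commits so the dependent one sits on top

Using both at once, as above, is how Zuul ends up seeing a dependency loop.
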
22:53 <bar_> ok, great
22:54 *** slaweq has quit IRC
23:16 *** dayou has quit IRC
23:18 <johnsom> rm_work xgerman Question for you folks.  In the provider driver spec, should we be passing Python objects over to the driver or JSON documents?  I know if we do JSON documents the drivers can use jsonschema to do the validation of features they support or not, etc.
23:24 <xgerman_> usually I do python->python; especially with a map, it really doesn't matter one way or the other
23:44 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
23:47 <rm_work> if it doesn't have to go over the queue, I'm fine with Python objects
23:48 <rm_work> it just has to be documented
23:48 *** fnaval has quit IRC
23:48 <rm_work> can't count the number of times I've passed slightly the wrong thing <_<
23:48 <xgerman_> :-)
23:48 <johnsom> Yeah, no queue planned.  Just trying to think of good validation methods.  I found that voluptuous is in g-r, so that is an option
23:49 <rm_work> yeah, we use that internally
23:49 <rm_work> I'm used to it, kinda
23:49 <johnsom> Ok, I just didn't want to get too far without a few more brains on the topic.
23:51 <johnsom> I think I can get through the LB section today.  I may ping for a quick look-over to make sure I'm not too far out in left field before I tackle the rest of the doc next week.
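
Since voluptuous came up as the validation option, a minimal sketch of how a provider driver might validate what it receives; the schema fields are illustrative, not the actual provider driver interface from the spec:

    # Hedged sketch: field names are assumptions, not the spec's real schema.
    from voluptuous import All, MultipleInvalid, Range, Required, Schema

    LB_SCHEMA = Schema({
        Required('loadbalancer_id'): str,
        Required('vip_address'): str,
        'admin_state_up': bool,
        'connection_limit': All(int, Range(min=-1)),
    })  # by default, unknown keys are rejected, flagging unsupported features

    def validate_lb(lb_dict):
        try:
            return LB_SCHEMA(lb_dict)
        except MultipleInvalid as e:
            raise ValueError('unsupported load balancer options: %s' % e)
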

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!