Friday, 2018-01-19

*** fnaval has quit IRC00:00
*** yamamoto has joined #openstack-lbaas00:06
*** sshank has joined #openstack-lbaas00:09
*** Swami has quit IRC01:00
*** sshank has quit IRC01:13
<openstackgerrit> Hengqing Hu proposed openstack/python-octaviaclient master: Improve unit tests results with subTest  https://review.openstack.org/531257  01:26
*** jniesz has quit IRC01:47
*** harlowja has quit IRC02:31
*** threestrands_ has joined #openstack-lbaas02:49
*** threestrands_ has quit IRC02:49
*** threestrands_ has joined #openstack-lbaas02:49
*** threestrands has quit IRC02:51
*** sshank has joined #openstack-lbaas03:08
*** sanfern has joined #openstack-lbaas03:10
*** sshank has quit IRC03:18
*** AlexeyAbashkin has joined #openstack-lbaas03:44
*** jappleii__ has joined #openstack-lbaas03:44
*** jappleii__ has quit IRC03:45
*** jappleii__ has joined #openstack-lbaas03:45
*** jappleii__ has quit IRC03:46
*** threestrands_ has quit IRC03:46
*** jappleii__ has joined #openstack-lbaas03:47
*** jappleii__ has quit IRC03:48
*** AlexeyAbashkin has quit IRC03:48
*** jappleii__ has joined #openstack-lbaas03:48
*** rcernin has quit IRC03:49
*** rcernin has joined #openstack-lbaas03:49
*** gans has joined #openstack-lbaas04:02
*** gans has quit IRC04:12
*** gans has joined #openstack-lbaas04:15
*** gans has quit IRC04:20
*** armax has quit IRC04:26
*** armax has joined #openstack-lbaas04:27
*** links has joined #openstack-lbaas04:37
<openstackgerrit> chenxiangui proposed openstack/neutron-lbaas-dashboard master: Modify links in README.rst  https://review.openstack.org/535598  04:39
<openstackgerrit> Hengqing Hu proposed openstack/neutron-lbaas-dashboard master: Updating for new sphinx docs jobs  https://review.openstack.org/535600  04:44
<openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: L3 ACTIVE-ACTIVE Data model impact  https://review.openstack.org/524722  05:03
*** harlowja has joined #openstack-lbaas05:21
*** gcheresh has joined #openstack-lbaas05:27
*** gcheresh has quit IRC05:34
*** jappleii__ has quit IRC05:52
*** gcheresh has joined #openstack-lbaas06:02
*** armax has quit IRC06:03
*** gcheresh has quit IRC06:10
<openstackgerrit> OpenStack Proposal Bot proposed openstack/neutron-lbaas-dashboard master: Imported Translations from Zanata  https://review.openstack.org/535624  06:15
*** logan- has quit IRC06:16
*** gcheresh has joined #openstack-lbaas06:25
<openstackgerrit> Ganpat Agarwal proposed openstack/octavia master: ACTIVE-ACTIVE ExaBGP rest api driver  https://review.openstack.org/527009  06:26
*** harlowja has quit IRC06:29
*** annp has joined #openstack-lbaas06:33
*** sanfern has quit IRC06:38
*** sanfern has joined #openstack-lbaas06:39
*** logan- has joined #openstack-lbaas06:40
*** gcheresh has quit IRC06:43
*** pcaruana has joined #openstack-lbaas06:44
*** gans has joined #openstack-lbaas06:45
<openstackgerrit> Hengqing Hu proposed openstack/octavia master: Make frontend interface attrs less vrrp specific  https://review.openstack.org/521138  07:01
*** b_bezak has joined #openstack-lbaas07:35
*** rcernin has quit IRC07:43
*** rm_work has quit IRC08:08
*** AlexeyAbashkin has joined #openstack-lbaas08:13
*** tesseract has joined #openstack-lbaas08:16
*** sanfern has quit IRC08:16
*** harlowja has joined #openstack-lbaas08:17
*** sanfern has joined #openstack-lbaas08:17
*** rm_work has joined #openstack-lbaas08:20
*** jmccrory has quit IRC08:26
*** sbalukoff_ has quit IRC08:27
*** sbalukoff_ has joined #openstack-lbaas08:27
*** annp has quit IRC08:28
*** annp has joined #openstack-lbaas08:29
*** jmccrory has joined #openstack-lbaas08:33
*** harlowja has quit IRC08:37
*** PagliaccisCloud has quit IRC08:38
*** PagliaccisCloud has joined #openstack-lbaas08:45
*** jmccrory has quit IRC08:50
*** gcheresh has joined #openstack-lbaas08:52
*** jmccrory has joined #openstack-lbaas08:52
*** rm_work has quit IRC08:56
*** gcheresh has quit IRC09:08
*** rm_work has joined #openstack-lbaas09:36
*** pcaruana has quit IRC09:50
*** pcaruana has joined #openstack-lbaas09:53
*** pcaruana has quit IRC09:53
*** pcaruana has joined #openstack-lbaas09:54
*** salmankhan has joined #openstack-lbaas10:21
*** gans has quit IRC10:39
*** gans has joined #openstack-lbaas10:39
*** salmankhan has quit IRC10:43
*** salmankhan has joined #openstack-lbaas10:47
*** salmankhan has quit IRC11:43
*** salmankhan has joined #openstack-lbaas11:50
*** gcheresh has joined #openstack-lbaas11:50
*** Alex_Staf has quit IRC11:52
*** Alex_Staf has joined #openstack-lbaas11:53
*** annp has quit IRC11:55
*** gcheresh has quit IRC12:04
*** salmankhan has quit IRC12:07
*** salmankhan has joined #openstack-lbaas12:07
*** hogepodge has quit IRC12:26
*** PagliaccisCloud has quit IRC12:27
*** strigazi has quit IRC12:27
*** irenab has quit IRC12:27
*** reedip has quit IRC12:28
*** hogepodge has joined #openstack-lbaas12:28
*** strigazi has joined #openstack-lbaas12:29
*** PagliaccisCloud has joined #openstack-lbaas12:30
*** irenab has joined #openstack-lbaas12:30
*** reedip has joined #openstack-lbaas12:42
*** gans has quit IRC12:53
*** gcheresh has joined #openstack-lbaas13:05
*** links has quit IRC13:28
*** gcheresh has quit IRC13:30
*** aojea_ has joined #openstack-lbaas13:50
*** fnaval has joined #openstack-lbaas13:51
*** fnaval has quit IRC13:51
*** fnaval has joined #openstack-lbaas13:52
*** aojea__ has joined #openstack-lbaas13:55
*** aojea_ has quit IRC13:58
*** aojea_ has joined #openstack-lbaas14:01
*** atoth has joined #openstack-lbaas14:02
*** aojea__ has quit IRC14:04
*** beagles_afk is now known as beagles14:04
*** aojea__ has joined #openstack-lbaas14:05
*** aojea_ has quit IRC14:08
*** salmankhan has quit IRC14:09
*** aojea_ has joined #openstack-lbaas14:11
*** aojea__ has quit IRC14:13
*** fnaval has quit IRC14:18
*** aojea__ has joined #openstack-lbaas14:20
*** aojea_ has quit IRC14:24
*** salmankhan has joined #openstack-lbaas14:28
*** aojea_ has joined #openstack-lbaas14:28
*** aojea__ has quit IRC14:29
*** aojea__ has joined #openstack-lbaas14:30
*** aojea_ has quit IRC14:34
*** dayou has quit IRC14:34
*** aojea_ has joined #openstack-lbaas14:36
*** aojea__ has quit IRC14:39
*** aojea_ has quit IRC14:44
*** dayou has joined #openstack-lbaas14:46
*** armax has joined #openstack-lbaas15:01
*** yamamoto has quit IRC15:05
*** yamamoto has joined #openstack-lbaas15:06
*** Alex_Staf has quit IRC15:11
*** yamamoto has quit IRC15:17
*** ndanl has quit IRC15:29
*** yamamoto has joined #openstack-lbaas15:48
*** fnaval has joined #openstack-lbaas15:54
*** ivve has quit IRC16:10
*** gcheresh has joined #openstack-lbaas16:15
*** b_bezak has quit IRC16:20
*** b_bezak has joined #openstack-lbaas16:21
*** gcheresh has quit IRC16:23
*** ivve has joined #openstack-lbaas16:25
*** b_bezak has quit IRC16:26
*** AlexeyAbashkin has quit IRC16:35
*** sanfern has quit IRC16:35
*** jniesz has joined #openstack-lbaas16:41
*** ivve has quit IRC16:56
*** ivve has joined #openstack-lbaas17:10
*** yamamoto has quit IRC17:31
*** ivve has quit IRC17:35
*** pcaruana has quit IRC17:36
*** yamamoto has joined #openstack-lbaas17:38
*** yamamoto has quit IRC17:38
*** ivve has joined #openstack-lbaas17:52
*** sanfern has joined #openstack-lbaas18:12
*** AlexeyAbashkin has joined #openstack-lbaas18:16
<sanfern> hi johnsom,  18:16
<johnsom> sanfern Hi  18:17
<sanfern> we have multiple peer_ips in the config file; right now each distributor creation will create one record in the peer table  18:18
<sanfern> jason was mentioning we need to make the peer entry unique based on ip_address and local_as  18:19
*** AlexeyAbashkin has quit IRC18:20
<sanfern> we should not create the peer table entries from get_create_distributor_flow; instead we need to make it a one-time entry into the table using the CLI  18:20
<sanfern> peer table entry  18:21
<johnsom> How do you know which peer to map a lb_amp to?  18:21
<sanfern> all peers should map to all amps. The peer list is static, populated one time in the setup  18:23
<sanfern> otherwise each distributor creation will have duplicate peer entries  18:23
<johnsom> Ok, so here is my thought.  Really, this is going to come in from flavors when those are ready.  So, for now why not keep them in the config as a list of peers, populate the table on create (like it would with flavors) and make sure it handles the "already exists" case.  18:27
<johnsom> As it would with flavors  18:28
<johnsom> I thought the plan for the peers was to have one address on all of the ToRs, which simplifies this...  18:28
*** rstarmer has quit IRC18:29
<sanfern> I agree, then I will make ip+remote_as the primary constraint to avoid duplicate entries.  18:32
<johnsom> +1  18:35
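
The uniqueness discussed above maps naturally onto a table-level constraint. A minimal SQLAlchemy sketch, with an illustrative DistributorPeer model (the actual data-model patch under review may name things differently), showing an (ip_address, remote_as) constraint plus an idempotent populate that tolerates the "already exists" case johnsom mentions:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class DistributorPeer(Base):  # hypothetical model name, not Octavia's schema
        __tablename__ = 'distributor_peer'
        id = sa.Column(sa.String(36), primary_key=True)
        ip_address = sa.Column(sa.String(64), nullable=False)
        remote_as = sa.Column(sa.Integer, nullable=False)
        # One row per (ip_address, remote_as) pair, so re-running the create
        # flow cannot insert duplicate peers.
        __table_args__ = (
            sa.UniqueConstraint('ip_address', 'remote_as',
                                name='uq_distributor_peer_ip_as'),
        )

    def ensure_peer(session, peer_id, ip_address, remote_as):
        """Populate a peer on create, tolerating the 'already exists' case."""
        peer = (session.query(DistributorPeer)
                .filter_by(ip_address=ip_address, remote_as=remote_as)
                .one_or_none())
        if peer is None:
            peer = DistributorPeer(id=peer_id, ip_address=ip_address,
                                   remote_as=remote_as)
            session.add(peer)
        return peer
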
<sanfern> for the ExaBGPAPI driver, should I derive from DistributorAmpDriverMixin and only implement the exabgp config update?  18:35
<johnsom> sanfern Yes, I think that makes sense  18:36
<sanfern> ok  18:37
*** yamamoto has joined #openstack-lbaas18:39
<sanfern> the ExaBGP agent will have start/stop/register_amp/unregister_amp/status APIs  18:39
*** rstarmer has joined #openstack-lbaas18:39
<johnsom> Right, but it will use the existing methods for those  18:39
<sanfern> reg_amp will announce and unreg will withdraw routes.  18:40
<johnsom> Start/stop may need to be added to the generic distributor interface.  I think I looked and found the existing ones are targeted to listeners only  18:40
<johnsom> Yep, in the agent code it will use the common method (all distributors do this), but the code will branch on the distributor type (L3) to execute your commands vs. mine. I assume you will have an L3 module.  18:41
*** atoth has quit IRC18:41
<sanfern> Plugging VIPs into dummy0.cfg will be in common code; I need to remove it from reg_amp and similarly from unreg_amp  18:42
<sanfern> yes, my L3 module is the exabgp rest api driver  18:42
<johnsom> driver? I thought we were talking about the agent, not the driver side  18:44
<sanfern> the L3 module requires start/stop for exabgp  18:44
<sanfern> yes, the Agent  18:44
<johnsom> Yeah, the start/stop interface should be generic for distributors, not exabgp. But in the code it will branch based on the type.  18:45
<sanfern> ok  18:45
<johnsom> I want to minimize the exabgp custom stuff if multiple drivers are going to need the same thing.  18:46
<sanfern> +1  18:46
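
As a rough illustration of the split being agreed on here (generic start/stop/register/unregister on the distributor agent, with the L3/ExaBGP behaviour living behind it), something along these lines; the class and method names are made up for the sketch and are not Octavia's actual interface:

    import abc

    class DistributorAgentBase(metaclass=abc.ABCMeta):
        """Generic distributor agent operations, independent of distributor type."""

        @abc.abstractmethod
        def start(self, distributor_id):
            """Start the distributor service (exabgp for the L3 type)."""

        @abc.abstractmethod
        def stop(self, distributor_id):
            """Stop the distributor service."""

        @abc.abstractmethod
        def register_amphora(self, distributor_id, amphora_ip):
            """Add an amphora; the L3 implementation would announce its route."""

        @abc.abstractmethod
        def unregister_amphora(self, distributor_id, amphora_ip):
            """Remove an amphora; the L3 implementation would withdraw the route."""
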
<sanfern> how do I pass/return a list of objects to the flow?  18:47
<johnsom> Same as we pass any other object  18:48
<johnsom> So it's something you need to store in the flow before it runs?  18:48
<johnsom> I.e. not generated inside the flow?  18:48
<sanfern> yes  18:48
<johnsom> https://github.com/openstack/octavia/blob/master/octavia/controller/worker/controller_worker.py#L128  18:49
*** yamamoto has quit IRC18:50
<sanfern> ok  18:52
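
The controller_worker line johnsom links shows the general pattern: anything a flow needs but does not compute itself is injected through the engine's store, and tasks pick it up by declaring it as an execute() argument. A minimal taskflow sketch with made-up task and data names:

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class AnnouncePeers(task.Task):
        # 'peers' is resolved from the store because it is an execute() argument.
        def execute(self, peers):
            for peer in peers:
                print('peer %(ip_address)s AS%(remote_as)s' % peer)

    flow = linear_flow.Flow('create-distributor-sketch').add(AnnouncePeers())
    # The list of objects is handed in via the store when the engine is loaded.
    engines.run(flow, store={'peers': [{'ip_address': '192.0.2.1',
                                        'remote_as': 65000}]})
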
*** rstarmer has quit IRC19:21
*** harlowja has joined #openstack-lbaas19:31
*** rstarmer has joined #openstack-lbaas19:31
*** rstarmer has quit IRC19:32
*** sanfern has quit IRC19:39
*** AlexeyAbashkin has joined #openstack-lbaas19:45
*** rstarmer has joined #openstack-lbaas19:47
*** ivve has quit IRC19:49
*** AlexeyAbashkin has quit IRC19:49
*** ivve has joined #openstack-lbaas20:01
*** slaweq has joined #openstack-lbaas20:05
<ivve> johnsom: got that lbaas v2 working finally :)  20:23
<johnsom> ivve Glad to hear it  20:23
<ivve> the patch submitted will fix it, i had some other stuff that was my fault  20:24
<ivve> im not sure, but it might be heat related  20:24
<ivve> it was the way of assigning a flip to the vip  20:25
<ivve> i was creating a port first but now i tried without and assigned a manual flip, then it was alright  20:25
<ivve> i tried using the loadbalancer attribute vip_port_id to assign a flip to it via heat, but it fails  20:26
*** AlexeyAbashkin has joined #openstack-lbaas20:27
<johnsom> Hmm, interesting  20:27
<ivve> indeed  20:27
<ivve> im trying like this now https://hastebin.com/yacazaniro.css  20:27
<ivve> but it fails  20:28
<johnsom> The VIP port id is the ID that would be used for the flip  20:28
<ivve> yeah thats what i thought  20:28
<ivve> lemme get the message  20:28
*** AlexeyAbashkin has quit IRC20:31
<ivve> Create Failed  20:31
<ivve> Resource Create Failed: Notfound: Resources.Vip To Float: External Network 40d90e43-F85b-4d4f-A1b6-194be8f12f1a Is Not Reachable From Subnet Df807edf-9f68-4623-9e73-8d9630da4049. Therefore, Cannot Associate Port 7c648f87-69b2-4ec1-B349-85916e033935 With A Floating Ip. Neutron Server Returns Request Ids:  20:31
<ivve> this is interesting because manually it just assigns without issue  20:32
<ivve> to that very network  20:32
<johnsom> That is a very confusing error message....  20:33
<ivve> that portid is the vip_port_id  20:34
<ivve> the network is a created network in the heat template  20:34
<ivve> and the external network is reachable by the created router in the heat template  20:34
<ivve> so the stack continues  20:35
<ivve> but when its done, i manually press the assign button to get a flip to that network.. it works  20:35
<johnsom> Yeah, sounds like heat may have a bug  20:36
<ivve> ill ask in the heat channel but first make a few searches  20:37
<ivve> btw, regarding octavia. the instances that are created in order for the entire service to be functional, where are they created? in a tenant?  20:38
<openstackgerrit> Ihar Hrachyshka proposed openstack/octavia master: DNM testing whether lib/neutron switch breaks this repo  https://review.openstack.org/535927  20:43
*** salmankhan has quit IRC20:47
<ivve> ooookay... so i updated it with the net hardcoded and that worked  20:48
<ivve> lets try a fresh one  20:49
<johnsom> ivve Yes, they are created in the tenant defined in the octavia.conf file  20:49
<ivve> cool, so its possible to put say, in the admin tenant? or is there some kind of best practice? i will be using OSA  20:50
<johnsom> ivve There is an OSA role for octavia.  20:50
<johnsom> Devstack uses admin, but it is usually best for production to use a tenant. However that is more work as you need to set quotas and RBAC  20:50
<openstackgerrit> Ihar Hrachyshka proposed openstack/neutron-lbaas master: DNM testing whether lib/neutron switch breaks this repo  https://review.openstack.org/535941  20:51
<ivve> ah  20:51
<ivve> now im guessing here, but could it be that the vip_port_id is created before the actual router connecting to the proper net for a flip?  20:52
<ivve> depends_on: { get_resource: the router }  20:53
<ivve> or what  20:53
<johnsom> I don't know anything about how the heat stuff is setup  20:53
<johnsom> I don't think you can create a port without a valid net, that just seems wrong. But that doesn't mean heat doesn't have some variable/storage issues  20:54
<ivve> ill try depend on so that the vip waits until the router is completed  20:54
<ivve> but one might think that it should be "smart" enough to figure it out  20:55
<ivve> since order is not important  20:55
<ivve> well at least not for me coding the yaml  20:55
*** aojea has joined #openstack-lbaas20:56
<ivve> nah didn't help same error  20:59
<ivve> oh well enough for today  21:00
<ivve> johnsom: thanks for listening :)  21:01
<johnsom> NP  21:01
<ivve> https://bugs.launchpad.net/heat/+bug/1626619  21:02
<openstack> Launchpad bug 1626619 in OpenStack Heat newton "OS::Neutron::FloatingIP is missing hidden dependencies" [Undecided,Fix committed] - Assigned to Rabi Mishra (rabi)  21:02
<ivve> but this is pike  21:02
*** aojea_ has joined #openstack-lbaas21:02
*** aojea has quit IRC21:05
*** aojea has joined #openstack-lbaas21:07
*** aojea_ has quit IRC21:10
*** rstarmer has quit IRC21:11
*** aojea_ has joined #openstack-lbaas21:12
*** aojea has quit IRC21:14
*** aojea__ has joined #openstack-lbaas21:17
*** aojea_ has quit IRC21:20
*** aojea_ has joined #openstack-lbaas21:22
*** aojea__ has quit IRC21:25
*** aojea__ has joined #openstack-lbaas21:27
*** rstarmer has joined #openstack-lbaas21:29
*** aojea_ has quit IRC21:30
*** aojea has joined #openstack-lbaas21:32
*** aojea__ has quit IRC21:35
*** aojea_ has joined #openstack-lbaas21:36
*** aojea has quit IRC21:39
*** aojea__ has joined #openstack-lbaas21:42
*** aojea_ has quit IRC21:45
*** aojea__ has quit IRC21:51
*** aojea has joined #openstack-lbaas21:52
*** aojea_ has joined #openstack-lbaas21:57
*** aojea has quit IRC22:01
*** tesseract has quit IRC22:01
*** aojea has joined #openstack-lbaas22:03
*** aojea_ has quit IRC22:06
*** aojea_ has joined #openstack-lbaas22:07
*** aojea has quit IRC22:10
*** aojea has joined #openstack-lbaas22:13
*** aojea_ has quit IRC22:15
*** rstarmer has quit IRC22:18
*** aojea_ has joined #openstack-lbaas22:18
*** aojea has quit IRC22:21
*** aojea has joined #openstack-lbaas22:23
*** aojea has quit IRC22:23
*** aojea_ has quit IRC22:26
*** salmankhan has joined #openstack-lbaas22:33
*** salmankhan has quit IRC22:38
<rm_work> johnsom: this is ... interesting  22:44
<rm_work> so out of about 50 pools  22:44
<johnsom> Uh-Oh  22:44
<rm_work> 20 or so are flapping between either OFFLINE/ONLINE or OFFLINE/ERROR  22:44
<rm_work> basically every health check  22:44
<rm_work> i'm trying to understand if the members are really broken or not  22:44
<johnsom> op status? or prov?  22:44
<rm_work> OP  22:44
<rm_work> via HMs  22:44
<rm_work> but regardless, i feel like something must be wonky  22:45
<rm_work> i picked one to look at closer, which is flapping between ERROR and OFFLINE for the pool  22:45
<johnsom> Yeah, would be interesting to see how the member status looks  22:46
<rm_work> member status seems to be consistently ERROR  22:47
<rm_work> 3 members in this pool, all ERROR  22:47
<rm_work> but every other health run, it flaps between ERROR and OFFLINE for the pool  22:47
<rm_work> member status hasn't changed  22:47
<rm_work> just pool op_status  22:47
<johnsom> Hmmmm, with all members in ERROR, I would expect pool to be in ERROR  22:47
<rm_work> scanning through the code now to see if anything jumps out  22:48
<rm_work> yes, me too  22:48
<rm_work> so what is shoving it back to OFFLINE  22:48
<johnsom> Yeah, to my memory, OFFLINE is only when there are no members defined or it's admin down  22:48
<rm_work> yeah so confirmed, LB and MEMBER statuses never leave ERROR  22:50
<rm_work> but pool flaps between ERROR and OFFLINE  22:50
<rm_work> I bet we are doing something dumb  22:50
<rm_work> scanning update_health()  22:51
<rm_work> AH YEAH  22:52
<rm_work> I bet this is bad  22:52
<rm_work> one sec, code links incoming  22:52
<rm_work> https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/update_db.py#L225-L228  22:53
<rm_work> so this seems the obvious candidate  22:53
<rm_work> so... if a pool is in ERROR... is there a change it's not in that lsit  22:53
<rm_work> *list  22:53
<rm_work> *chance  22:53
<rm_work> https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/update_db.py#L194  22:54
<rm_work> pools = listener['pools']  22:54
<rm_work> is that list filtered? could it be?  22:54
<johnsom> It should be the actual list of backends from the haproxy stats.  22:54
<rm_work> yeah so uhh  22:54
<rm_work> if it's in error  22:54
<rm_work> could HAProxy maybe not list it  22:55
<johnsom> I'm wondering if we have a case where a pool on a listener is not bound to an lb???  22:55
<rm_work> it is  22:55
<rm_work> one sec, i can pm you the DB entries for this stuff if you want  22:55
<rm_work> but  22:55
<rm_work> i don't think it's that  22:55
<rm_work> looking at how listener there is filled...  22:55
<johnsom> A status tree output would be handy  22:55
<rm_work> http://paste.openstack.org/show/647546/  22:57
<rm_work> i mean, that's what i'm looking at  22:57
<rm_work> and it flaps  22:57
<rm_work> so the next run that'll be 9 ERRORs  22:57
<rm_work> so yeah  22:58
<rm_work> this seems like a pretty obvious possibility:  22:58
<rm_work> our health message doesn't include the pool on the listener if the pool is in error  22:59
<rm_work> (from haproxy)  22:59
<rm_work> we then pull the listeners out of the health message  22:59
<rm_work> and then pass them to process_pools  22:59
<rm_work> https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/update_db.py#L219-L226  22:59
<rm_work> we hit that `if` on 225  23:00
<rm_work> and it's sunk  23:00
<rm_work> i'm 99% sure that's it  23:00
<rm_work> need to verify what our health message is looking like  23:00
<rm_work> though ... then wouldn't it always hit that  23:01
<johnsom> I doubt it.  Doesn't make sense why it would flip flop.  The only reason there wouldn't be a pool record is if the "backend" isn't defined in haproxy, which is how we do "admin down" or pending_create prov  23:01
<rm_work> hmm  23:01
<rm_work> i need to see the health message  23:01
<johnsom> Bummer, no debug log for that (though it would be a lot of log traffic)  23:04
<rm_work> yeah  23:04
<rm_work> i can get what HAProxy is reporting, on the amp, right?  23:04
<rm_work> have to remember how to do that  23:04
<johnsom> yes  23:04
<rm_work> no `nc`, no `socat` ...  23:05
<johnsom> Yeah, you have to scp it over  23:06
<rm_work> lol  23:06
<rm_work> k got it  23:09
<johnsom> echo "show stat" | socat unix-connect:4f30c4d2-1d05-4a75-885e-5ab790034235.sock stdio  23:09
<rm_work> yeah i got it  23:09
<johnsom> We filter that down in the code, but might as well get the whole blob  23:10
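
For reference, the "show stat" output is plain CSV, so the filtering johnsom mentions amounts to reading it with a CSV parser and picking out the BACKEND rows. A small sketch under that assumption (the socket path in the comment is an example, matching the per-listener stats sockets on the amphora):

    import csv
    import socket

    def haproxy_show_stat(socket_path):
        """Return HAProxy 'show stat' rows as dicts, one per frontend/backend/server."""
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(socket_path)
        sock.sendall(b'show stat\n')
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
        sock.close()
        # The first line is the CSV header, prefixed with '# '.
        text = b''.join(chunks).decode('utf-8').lstrip('# ')
        return list(csv.DictReader(text.splitlines()))

    # rows = haproxy_show_stat('/var/lib/octavia/<listener-id>.sock')  # example path
    # backends = [r for r in rows if r['svname'] == 'BACKEND']  # pool status rows
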
<rm_work> PM'd  23:10
<rm_work> ehh i guess it doesn't have IPs ...  23:10
<rm_work> https://gist.github.com/rm-you/85790ff5b163cd977fb24fa14be2d4d9  23:10
<johnsom> Yeah, so the "BACKEND" record is there  23:10
<johnsom> It does show "DOWN"  23:11
<johnsom> So, it should be sending that pool record.  Can you confirm with the status tree that there is only one pool on this LB?  23:11
<rm_work> there's two  23:12
<rm_work> the other pool isn't flapping  23:12
<rm_work> ah hold up  23:12
<rm_work> maybe you are only seeing the other pool lol  23:12
<rm_work> need to compare IDs  23:12
<rm_work> err  23:12
<rm_work> well only one pool on this LISTENER  23:12
<rm_work> two listeners  23:12
<rm_work> so i guess this is right  23:12
<johnsom> Ok, so two processes, two sockets  23:13
<rm_work> yeah, i queried the socket for the listener the ERROR pool was on  23:13
<johnsom> Maybe that is where the bug is  23:13
<rm_work> OH  23:15
<rm_work> ROFL  23:15
<rm_work> yeah i think i see it  23:15
<rm_work> multiple listeners  23:15
<rm_work> the for loops aren't nested, right?  23:15
<rm_work> so ... "listener"  23:15
<rm_work> are processed once  23:15
<rm_work> and then again?  23:15
<rm_work> kinda  23:15
<rm_work> ehh but you have tracking for that...  23:16
<rm_work> ok i'm gonna shut up and finish tracing this instead of typing stream of consciousness :P  23:16
<johnsom> Ok.  23:18
<rm_work> YEP got it  23:18
<rm_work> you define processed_pools above  23:18
<johnsom> Worst case you tcpdump the health packets and decrypt by hand to see what's in it  23:18
<rm_work> but instead of ADDING the pools that each listener processes to it  23:18
<rm_work> it's overwritten  23:18
<rm_work> so the second listener processes its pools and overwrites the list  23:18
<rm_work> then it'll process the next pool as a "LB pool"  23:18
<rm_work> https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/update_db.py#L148-L198  23:19
<rm_work> first and last line  23:19
<johnsom> Yeah, but that should be correct I think. It's per listener, so each listener should overwrite it  23:20
<rm_work> err no  23:20
<rm_work> processed_pools is USED below  23:20
<rm_work> outside the listener for-loop  23:20
<johnsom> Right  23:20
<johnsom> As it should  23:20
<rm_work> right ...  23:20
<rm_work> OUTSIDE the for-loop where you set it  23:21
<rm_work> no?  23:21
<rm_work> you say "# Don't re-process pools shared with listeners"  23:21
<rm_work> which implies that list should be a constant append  23:21
<johnsom> Yeah, I think I see.  23:22
<johnsom> https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/update_db.py#L198  23:22
*** AlexeyAbashkin has joined #openstack-lbaas23:22
<johnsom> This should append  23:22
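
A condensed sketch of the bug as described here, not the actual update_db.py code: the per-listener loop rebinds the processed-pools list on every iteration, so the pass over the LB's pools afterwards only knows about the last listener's pools and marks the rest OFFLINE even though haproxy reported them. Accumulating the IDs instead keeps every processed pool:

    def process_pools(pools, statuses):
        """Record the per-pool operating status reported under one listener."""
        processed_ids = []
        for pool_id, pool in pools.items():
            statuses[pool_id] = pool['status']  # e.g. ERROR when haproxy says DOWN
            processed_ids.append(pool_id)
        return processed_ids

    def update_statuses(lb_pool_ids, health, statuses):
        processed_pools = []
        for listener in health['listeners'].values():
            # Buggy version:   processed_pools = process_pools(...)
            processed_pools.extend(process_pools(listener['pools'], statuses))
        for pool_id in lb_pool_ids:
            if pool_id not in processed_pools:
                statuses[pool_id] = 'OFFLINE'  # never reported by any listener

    # Two listeners, one ERROR pool each: with extend() both stay ERROR; with the
    # rebinding version, the first listener's pool falls out and goes OFFLINE.
    statuses = {}
    health = {'listeners': {
        'listener-1': {'pools': {'pool-a': {'status': 'ERROR'}}},
        'listener-2': {'pools': {'pool-b': {'status': 'ERROR'}}}}}
    update_statuses(['pool-a', 'pool-b'], health, statuses)
    print(statuses)  # {'pool-a': 'ERROR', 'pool-b': 'ERROR'}
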
<johnsom> but how do we have multiple pools per listener?  23:23
<johnsom> Can you PM me the openstack loadbalancer status?  23:23
<rm_work> see this  23:23
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Fix processing pool statuses for LBs with multiple listeners  https://review.openstack.org/535992  23:24
<rm_work> that is what I believe should happen  23:24
<rm_work> the processed_pools list isn't additive right now  23:24
<johnsom> I don't think 209 needs to be there. pools on LB should be a unique constraint  23:25
<rm_work> well yeah  23:25
<rm_work> technically it doesn't matter right now  23:25
<rm_work> BUT  23:25
<rm_work> that list is supposed to be "every pool we process"  23:25
<rm_work> i'd like it to be up-to-date in case someone comes along later and tries to use it  23:25
<rm_work> it's supposed to be an ever-growing list of pools that we have updated already  23:26
<rm_work> so every time we process one, we need to make sure it gets into that list  23:26
<rm_work> right?  23:26
*** AlexeyAbashkin has quit IRC23:26
<rm_work> technically we don't use it after 209, but that's no reason to be lame about keeping it accurate, lol  23:27
*** slaweq has quit IRC23:27
<rm_work> we DO need to have the change on 207 though because otherwise it'll overwrite every loop lol  23:27
<johnsom> Just extra mem and CPU time  23:27
<rm_work> barely  23:28
<rm_work> i mean, the other option is to make it a passthrough  23:28
<rm_work> i can do that  23:28
<johnsom> Yeah, that is bad looking at it.  It should just throw those away on 207  23:28
<rm_work> one sec  23:28
<rm_work> yes  23:28
<rm_work> probably  23:28
<rm_work> i'll make a pass-2  23:28
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Fix processing pool statuses for LBs with multiple listeners  https://review.openstack.org/535992  23:30
<rm_work> better?  23:30
<rm_work> need to fix tests probably (hopefully?)  23:30
<rm_work> oh one more thing, crap  23:32
<rm_work> how do we deal with "pools" that's passed in  23:33
<rm_work> that's coming from the listener stuff  23:33
<rm_work> so it literally can't be used here...  23:34
<rm_work> i think this is the fix for that:  23:34
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Fix processing pool statuses for LBs with multiple listeners  https://review.openstack.org/535992  23:34
<rm_work> johnsom: ^^  23:35
<johnsom> rm_work Ummm, if you pass in []  where is it getting the pools list from the amp?  23:35
<rm_work> that's the pool list from the LISTENER  23:36
<rm_work> and the only way that's used is for status setting  23:36
<rm_work> which... if the pool isn't on a Listener, SHOULD always be OFFLINE  23:36
<johnsom> No, it could be used in a L7 policy  23:36
<rm_work> ok, but then it'd be on the listener still, no?  23:37
<rm_work> processed as part of whatever listener it's used on?  23:37
<rm_work> because the status for the pools here come from HAProxy  23:37
<rm_work> and we only get statuses for backends when they're on listeners...  23:37
<rm_work> so it's impossible for the code that "gets the status" to run, if the pool wasn't on a listener  23:37
<johnsom> Yeah, I'm trying to remember if that is true.  Will haproxy not let you have an un-bound "backends" section  23:38
<johnsom> Or do we  23:39
<rm_work> hmmm  23:39
<rm_work> if we do, we still were pulling it from the wrong place before  23:39
<johnsom> yeah, I follow that  23:39
<rm_work> because the thing that WAS being passed in there ... was from the listener loop  23:39
*** yamamoto has joined #openstack-lbaas23:40
<rm_work> also, shouldn't we be marking it as "processed" so long as this function was run on it? even if we just set the status to OFFLINE  23:40
<rm_work> eh maybe not  23:40
<rm_work> but yeah, even if we need to pass pools, what was there before was *not it*  23:40
<rm_work> maybe health['pools'] ?  23:40
<rm_work> how do we construct this message again...  23:41
-openstackstatus- NOTICE: Zuul will be offline over the next 20 minutes to perform maintenance; active changes will be reenqueued once work completes, but new patch sets or approvals during that timeframe may need to be rechecked or reapplied as appropriate  23:42
<rm_work> health_daemon.py  23:42
<rm_work> and no, there's only "listeners" in the status message  23:43
<rm_work> i think passing [] is correct  23:43
<johnsom> Ok  23:44
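
For context, the heartbeat that health_daemon builds is keyed roughly like the structure below (a simplified illustration rather than a verbatim copy of the message): pools only ever appear nested under a listener, which is why there is no top-level health['pools'] to hand to the LB-level pass and passing [] there is reasonable:

    # Approximate shape of the amphora health/stats heartbeat (illustrative).
    heartbeat = {
        'id': 'amphora-uuid',
        'seq': 42,
        'listeners': {
            'listener-uuid': {
                'status': 'OPEN',
                'pools': {
                    'pool-uuid': {
                        'status': 'DOWN',
                        'members': {'member-uuid': 'DOWN'},
                    },
                },
            },
        },
    }
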
<rm_work> i'm going to apply this patch locally to my test environment  23:44
<rm_work> and see if i can repro / fix  23:45
<rm_work> yep repro is super easy  23:52
*** fnaval has quit IRC23:52
<rm_work> made LB, made two listeners, one pool each, same two members on each pool, one HM looking for a bad status  23:53
<rm_work> one pool is fine, one flaps between ERROR and OFFLINE  23:53
<rm_work> other stays in ONLINE  23:53
<rm_work> now to apply the patch...  23:53

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!