Wednesday, 2018-06-06

*** yamamoto has joined #openstack-lbaas00:03
*** yamamoto has quit IRC00:09
*** longkb1 has joined #openstack-lbaas00:33
*** yamamoto has joined #openstack-lbaas01:06
openstackgerritMichael Johnson proposed openstack/octavia master: Implement provider drivers - Driver Library  https://review.openstack.org/571358  01:06
*** kobis has joined #openstack-lbaas01:08
*** kobis has quit IRC01:09
*** kobis has joined #openstack-lbaas01:10
*** harlowja has quit IRC01:10
*** yamamoto has quit IRC01:11
*** kobis has quit IRC01:15
*** JudeC_ has quit IRC01:20
*** threestrands_ has joined #openstack-lbaas01:33
*** threestrands has quit IRC01:36
*** hongbin has joined #openstack-lbaas01:48
*** blake has joined #openstack-lbaas01:55
*** yamamoto has joined #openstack-lbaas02:07
*** yamamoto has quit IRC02:12
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Correctly guess amp count based on detected topo  https://review.openstack.org/572661  02:22
*** blake has quit IRC02:26
*** SumitNaiksatam has joined #openstack-lbaas02:29
*** SumitNaiksatam has quit IRC02:44
*** SumitNaiksatam has joined #openstack-lbaas02:50
*** yamamoto has joined #openstack-lbaas03:08
*** kobis has joined #openstack-lbaas03:12
*** kobis has quit IRC03:12
*** kobis has joined #openstack-lbaas03:12
*** yamamoto has quit IRC03:14
*** kobis has quit IRC03:17
*** bzhao__ has joined #openstack-lbaas03:38
*** links has joined #openstack-lbaas04:02
*** yamamoto has joined #openstack-lbaas04:10
*** hongbin has quit IRC04:13
*** yamamoto has quit IRC04:16
*** harlowja has joined #openstack-lbaas04:17
*** kobis has joined #openstack-lbaas04:28
*** kobis has quit IRC04:39
*** harlowja has quit IRC04:45
*** yamamoto has joined #openstack-lbaas04:56
openstackgerritAdit Sarfaty proposed openstack/octavia master: Use object instead of object id in the drivers delete callbacks  https://review.openstack.org/571974  05:01
openstackgerritAdit Sarfaty proposed openstack/octavia master: Remove a duplicated key in unit test dict  https://review.openstack.org/572669  05:03
*** JudeC_ has joined #openstack-lbaas05:24
*** ptoohill1 has quit IRC05:28
*** ptoohill1 has joined #openstack-lbaas05:28
*** links has quit IRC05:33
*** yboaron has joined #openstack-lbaas05:43
*** links has joined #openstack-lbaas05:49
openstackgerritCarlos Goncalves proposed openstack/octavia master: Add grenade support  https://review.openstack.org/549654  05:51
*** kobis has joined #openstack-lbaas06:00
*** kbyrne has quit IRC06:02
*** JudeC_ has quit IRC06:03
*** kbyrne has joined #openstack-lbaas06:04
bzhao__Hi, guys, do we have a plan to support a CA certificate for listeners?  It is needed for the client-authentication case in real LB scenarios06:09
openstackgerritMerged openstack/octavia master: Fix amp failover where failover already failed  https://review.openstack.org/548989  06:29
*** links has quit IRC06:44
*** pcaruana has joined #openstack-lbaas06:44
*** ispp has joined #openstack-lbaas06:48
rm_workbzhao__: i think i would like that too...06:49
bzhao__rm_work:  cool. In our current implementation we support uploading the server-side certificate via the existing "default_tls_container_ref" and "sni_container_refs" fields in the Listener API.  We support single-direction authentication, but not mutual (double-direction) authentication. Right?06:54
rm_workright06:57
bzhao__rm_work:  So a new CA certificate specified for the client side on the listener is necessary for double-direction (mutual) authentication.  Ha, I will post an RFE for this in StoryBoard, but I'm not clear whether we should expose both a "double direction authentication enable" flag and "ca files" fields, or just "ca files" fields, in the Listener API?07:03
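(For context on the "double direction" case: at the haproxy level, client authentication is just extra options on the frontend bind line. A minimal sketch below, rendered with jinja2; the template, field names, and paths are illustrative assumptions, not Octavia's actual amphora template.)

    # Illustrative only: the kind of haproxy frontend a client-auth listener
    # would need. The client_ca value is hypothetical -- Octavia's Listener
    # API did not expose a CA reference at this point.
    import jinja2

    FRONTEND_TPL = jinja2.Template(
        "frontend {{ listener_id }}\n"
        "    bind {{ vip }}:{{ port }} ssl crt {{ server_pem }}"
        "{% if client_ca %} ca-file {{ client_ca }} verify required{% endif %}\n"
    )

    print(FRONTEND_TPL.render(
        listener_id="listener-1",
        vip="10.0.0.5",
        port=443,
        server_pem="/var/lib/octavia/certs/server.pem",    # from default_tls_container_ref
        client_ca="/var/lib/octavia/certs/client_ca.pem",  # from the proposed CA field
    ))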
*** links has joined #openstack-lbaas07:03
*** rcernin has quit IRC07:07
*** links has quit IRC07:12
*** openstackgerrit has quit IRC07:19
*** nmanos has joined #openstack-lbaas07:22
*** ispp has quit IRC07:25
cgoncalvesgrenade job passed!07:27
rm_workmake it pass again :P07:29
*** links has joined #openstack-lbaas07:33
cgoncalvesrechecked :)07:46
rm_workbzhao__: yeah hmm, i would look to see what other services call it07:55
cgoncalvesrm_work, uh-oh! http://logs.openstack.org/54/549654/35/check/octavia-grenade/fd5303f/logs/screen-o-hm.txt.gz#_Jun_06_06_57_48_337890  07:56
rm_workerm07:57
rm_workit couldn't find a listener by that ID lol07:57
rm_workwut07:57
rm_worki don't ... hold on07:57
rm_workah right07:58
rm_workit was deleted07:58
rm_work"Amphora e112a159-13b8-4cf7-a053-6cffb7113230 health message reports 1 listeners when 0 expected"07:58
rm_workone sec07:58
rm_workmight be a legit bug07:58
*** yboaron has quit IRC07:59
rm_workmust be a race07:59
rm_workbut anyway yeah it's a bug, one sec08:00
*** threestrands_ has quit IRC08:03
*** JudeC_ has joined #openstack-lbaas08:10
rm_worksorry this is more complex than i thought lol08:18
rm_workobviously a bug08:18
rm_workfixing it08:18
*** lxkong has quit IRC08:29
*** ispp has joined #openstack-lbaas08:34
*** openstackgerrit has joined #openstack-lbaas08:46
rm_workcgoncalves: this:08:46
openstackgerritAdam Harwell proposed openstack/octavia master: Fix stats update when missing listener in DB  https://review.openstack.org/572702  08:46
rm_workit's kinda dumb08:46
rm_workI KNOW we have this info... and I could be a purist and say "we shouldn't look it up like this", but that's dumb IMO08:46
rm_workwill see what johnsom thinks in the morning08:46
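(A minimal sketch of the guard that patch describes, with a simplified repo interface rather than the real update_db classes: tolerate the listener having been deleted between the amphora sending the heartbeat and the controller processing it.)

    # Sketch only (hypothetical repo API): skip the stats write when the
    # listener row is already gone instead of failing the whole health message.
    import logging

    LOG = logging.getLogger(__name__)

    def update_listener_stats(listener_repo, session, listener_id, stats):
        listener = listener_repo.get(session, id=listener_id)
        if listener is None:
            # Deleted while its last heartbeat was in flight -- a benign race,
            # so log it and move on to the rest of the message.
            LOG.debug('Listener %s not found in DB, skipping stats update.',
                      listener_id)
            return
        listener_repo.update(session, listener_id, stats=stats)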
cgoncalvesrm_work, that was quick :)08:47
*** ispp has quit IRC08:47
rm_workeh I thought it'd just be a minute but it took like 45 <_<08:47
cgoncalvesI'll rebase my grenade on top of that08:47
rm_workeugh and i missed a bad thing08:48
rm_workhttps://review.openstack.org/#/c/572661/  08:48
rm_workdidn't work for active-standby topo T_T08:49
rm_workwhich is annoying because that is what i run, and i should know better08:49
cgoncalvesgo easy on yourself. you've done magnificent work on the tempest tests!08:51
rm_workyeah but it cost me an hour today, which is annoying, lol08:57
numanscgoncalves, hey09:16
numansI am working on octavia ovn driver. I have a question.09:16
numansWhen i delete a listener, the ovn driver returns the proper status code and the listener gets deleted.09:17
numanswhen i run "openstack loadbalancer listener list", it is empty09:17
numansbut the entry in the octavia db for the listener is still there. Why isn't it getting deleted from the listener table ?09:18
sapdrm_work: How can I get loadbalancer id inside amphora instance ?09:21
rm_worknumans: is `provisioning_status` == "DELETED"?09:24
rm_worksapd: hmm ... i am not sure if we save that in there?09:24
numansrm_work, yes it is09:24
rm_workthe listener ID will be there09:24
numansyes it is09:24
rm_worknumans: then, that sounds normal? though ... i thought we deleted them immediately... so, I'm not sure if we're changing things a bit or if i was mistaken09:25
rm_worknumans: normally we have our housekeeping process clean up deleted entries after a configured expiry period09:25
numansrm_work, i have noticed with other resources as well i.e pool09:25
rm_workyeah09:26
numansrm_work, http://paste.openstack.org/show/722782/09:26
numansok. let me just wait for a while and see if it gets deleted. the issue though is if i create a listener again with the same port, i get an error09:26
rm_workwe might be doing things like that now, with the new provider system09:26
rm_workerk, right, we probably didn't check on that T_T09:27
rm_workyou might have a bug09:27
rm_worklet me take a look at how we are doing this09:27
rm_worknumans: oh, also I think there is a patch up to change how deletes work09:27
cgoncalvesnumans, hi!09:27
rm_workwe are still solidifying that code09:27
numanscgoncalves, hi. rm_work answered my questions :)09:28
cgoncalvesnumans, ah, ok. reading the backlog09:28
rm_workah but that doesn't actually deal with this09:28
numansrm_work, thanks. i will look out the patches. right now i have applied the patch manually https://review.openstack.org/#/c/571358/09:29
numanswhich now is merged09:29
numansrm_work, thanks.09:29
rm_workso i'm not even sure how you are able to set the status to deleted yet, since the code for the callbacks isn't complete still?09:29
rm_workahh ok yeah09:29
rm_workthat is the one09:29
*** yboaron has joined #openstack-lbaas09:30
rm_workyeah ok09:32
rm_worknumans: congratulations! you found a major bug09:32
rm_workbecause deletes are also just treated as a "status update", it updates the status now for listeners to DELETED instead of actually doing a delete, which is what we did before09:32
rm_workand because it doesn't ACTUALLY delete the listener (or pool or anything), some of our code that expected that (like the listener API that ensured no duplicate ports) is failing09:33
numansglad that you found the reason09:33
rm_workneed to talk to johnsom in the morning and decide how to deal with that09:33
cgoncalvesrm_work, why is that? (listeners marked as DELETED)09:33
numansok. i was wondering if something is wrong in the driver code09:33
rm_workwe could change the driver callback lib to do an actual `delete` action if that's the new status, OR we could update the API code to filter on only non-deleted listeners when checking port availability09:34
numansi mean the ovn provider driver09:34
rm_workyeah, ignore that for now09:34
rm_workwe'll fix it ASAP09:34
numansrm_work, thanks.09:34
openstackgerritCarlos Goncalves proposed openstack/octavia master: Add grenade support  https://review.openstack.org/549654  09:35
*** JudeC_ has quit IRC09:35
sapdrm_work: I would like to use custom collector to collect more metrics in amphora. :D09:35
rm_worki am leaning towards the latter fix, in the API and actually leave the objects there as DELETED status (which honestly I feel like we should have done before) but that will require some thought, because it could have other unintended consequences09:35
rm_worksapd: you can do that! make an element for it, add it to the element list when you build the image, and it will work :)09:35
rm_workyou can look at how we add our amphora-agent element09:36
sapdrm_work: Yes. I know it.  but I would like to know these metrics belong to which loadbalancer. So I need to know loadbalancer id inside amphora09:36
rm_workyou can know the listener ID easily... so you can query the Octavia API to get the loadbalancer ID from that?09:37
rm_workbut, it does seem like it would be useful to log that somewhere... :/09:38
*** JudeC_ has joined #openstack-lbaas09:38
rm_workeugh, i think we are doing the port validation as a DB constraint... which will make fixing it that way painful09:38
*** links has quit IRC09:39
rm_workprobably we need to change the bulk_update method to do a real delete if the status is DELETED, rather than just updating the status09:39
rm_worki guess that's that09:40
rm_worki'll poke michael about that in the morning if he hasn't figured it all out already09:40
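(A rough sketch of that idea -- do a real delete when a provider driver reports DELETED -- with illustrative names, not the actual driver-library handler.)

    # Sketch only: when the driver's status callback marks a listener DELETED,
    # remove the row rather than just flipping provisioning_status, so checks
    # such as "is this protocol_port already taken on the LB" keep working.
    DELETED = 'DELETED'

    def process_listener_status_updates(listener_repo, session, updates):
        # updates: e.g. [{'id': <uuid>, 'provisioning_status': 'DELETED'}, ...]
        for update in updates:
            if update['provisioning_status'] == DELETED:
                listener_repo.delete(session, id=update['id'])
            else:
                listener_repo.update(
                    session, update['id'],
                    provisioning_status=update['provisioning_status'])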
*** JudeC_ has quit IRC09:43
*** ispp has joined #openstack-lbaas09:53
*** ispp has quit IRC09:55
*** links has joined #openstack-lbaas09:57
*** nmanos has quit IRC10:00
*** kobis has quit IRC10:01
*** nmanos has joined #openstack-lbaas10:01
*** yboaron_ has joined #openstack-lbaas10:01
*** kobis has joined #openstack-lbaas10:02
openstackgerritchenge proposed openstack/octavia master: Amend the spelling error of a word  https://review.openstack.org/572718  10:04
*** yboaron has quit IRC10:04
*** links has quit IRC10:16
*** annp has quit IRC10:21
*** kobis has quit IRC10:22
*** links has joined #openstack-lbaas10:29
*** lxkong has joined #openstack-lbaas10:45
*** kobis has joined #openstack-lbaas10:46
sapdrm_work: So how to get listener ID :D10:46
*** longkb1 has quit IRC10:47
rm_worksapd: in /var/local/octavia/ the folders that look like UUIDs and have haproxy.cfg inside, are the listener ID names10:47
rm_workif you create two listeners, there will be two folders, etc10:48
sapdrm_work:  how can I push loadbalancer id to amphora instance?10:49
rm_workinteresting... you could probably change the templates10:51
rm_worklet me see10:51
rm_workso, like this maybe10:52
rm_worksorry, taking me a minute longer because tests10:57
*** yboaron_ has quit IRC11:02
rm_worksapd: ok, check this out11:02
openstackgerritAdam Harwell proposed openstack/octavia master: Add lb_id comment to amp haproxy listener config  https://review.openstack.org/572730  11:03
rm_workI THINK that should work11:03
rm_workdidn't test it for real or anything11:03
rm_workbut it should put the lb id in the listener config file, after the name11:03
rm_worki thought it might have just been a template change, but we pass a kind of weird translated version of the LB-dict into the renderer, unfortunately :/11:07
rm_workso have to actually change a little bit of code11:07
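(A rough sketch of what that gives sapd inside the amphora once the patch lands: list the UUID-named listener directories under /var/local/octavia and pull the load balancer id out of the comment added to each haproxy.cfg. The exact comment format is an assumption here.)

    # Sketch for use inside an amphora: map each listener config directory to
    # the LB id embedded in the rendered haproxy.cfg header comment.
    import os
    import re

    BASE = '/var/local/octavia'
    UUID_RE = re.compile(r'[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}')

    for entry in sorted(os.listdir(BASE)):
        cfg = os.path.join(BASE, entry, 'haproxy.cfg')
        if not (UUID_RE.fullmatch(entry) and os.path.isfile(cfg)):
            continue  # not a listener directory
        with open(cfg) as f:
            header = ' '.join(next(f, '') for _ in range(5))  # comments sit at the top
        # Any UUID in the header other than the listener's own id is assumed
        # to be the load balancer id from the new comment.
        others = [u for u in UUID_RE.findall(header) if u != entry]
        print('listener %s -> lb %s' % (entry, others[0] if others else 'unknown'))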
*** kobis has quit IRC11:11
cgoncalvesrm_work, your patch fixed the exception seen in the grenade job11:25
*** yboaron_ has joined #openstack-lbaas11:28
*** kobis has joined #openstack-lbaas11:30
*** amuller has joined #openstack-lbaas11:58
*** kobis has quit IRC12:00
*** kobis has joined #openstack-lbaas12:10
*** fnaval has joined #openstack-lbaas12:14
*** AlexeyAbashkin has joined #openstack-lbaas12:29
*** links has quit IRC12:33
*** atoth has joined #openstack-lbaas12:46
*** hongbin has joined #openstack-lbaas12:57
*** kobis has quit IRC13:30
*** kobis has joined #openstack-lbaas13:30
*** yboaron_ has quit IRC13:44
*** Alexey_Abashkin has joined #openstack-lbaas13:51
*** amuller_ has joined #openstack-lbaas13:53
*** AlexeyAbashkin has quit IRC13:53
*** amuller has quit IRC13:53
*** Alexey_Abashkin is now known as AlexeyAbashkin13:53
*** nmanos has quit IRC14:05
*** hvhaugwitz has quit IRC14:08
*** hvhaugwitz has joined #openstack-lbaas14:14
*** AlexeyAbashkin has quit IRC14:28
*** AlexeyAbashkin has joined #openstack-lbaas14:36
xgerman_ls14:48
*** kobis has quit IRC15:02
dayou_.15:03
*** lxkong has quit IRC15:12
*** AlexeyAbashkin has quit IRC15:14
*** AlexeyAbashkin has joined #openstack-lbaas15:16
*** pcaruana has quit IRC15:23
*** amuller_ is now known as amuller15:36
*** irenab has quit IRC15:44
*** irenab has joined #openstack-lbaas15:47
*** phuoc_ has quit IRC15:53
*** phuoc_ has joined #openstack-lbaas15:54
johnsomBusy night15:56
*** kobis has joined #openstack-lbaas15:58
*** hongbin has quit IRC16:10
*** yamamoto has quit IRC16:14
*** yamamoto has joined #openstack-lbaas16:14
*** yamamoto has quit IRC16:17
*** yamamoto has joined #openstack-lbaas16:19
*** SumitNaiksatam has quit IRC16:43
*** irenab has quit IRC16:44
*** irenab has joined #openstack-lbaas16:46
*** AlexeyAbashkin has quit IRC16:57
*** mstrohl has joined #openstack-lbaas16:58
*** JudeC_ has joined #openstack-lbaas17:02
*** kobis has quit IRC17:03
*** yamamoto has quit IRC17:10
*** mastrohl has joined #openstack-lbaas17:11
*** yamamoto has joined #openstack-lbaas17:13
*** SumitNaiksatam has joined #openstack-lbaas17:13
*** mstrohl has quit IRC17:16
*** mstrohl has joined #openstack-lbaas17:16
*** kobis has joined #openstack-lbaas17:16
*** yamamoto has quit IRC17:18
*** mastrohl has quit IRC17:19
*** mastrohl has joined #openstack-lbaas17:19
*** mstrohl has quit IRC17:21
*** sapd1 has joined #openstack-lbaas17:24
*** sapd2 has joined #openstack-lbaas17:25
*** yamamoto has joined #openstack-lbaas17:28
*** yamamoto has quit IRC17:30
*** yamamoto has joined #openstack-lbaas17:30
*** yamamoto_ has joined #openstack-lbaas17:43
*** yamamoto has quit IRC17:47
*** sapd1 has quit IRC17:48
openstackgerritMichael Johnson proposed openstack/octavia master: Improve the error logging for zombie amphora  https://review.openstack.org/561369  18:01
*** yamamoto_ has quit IRC18:18
*** yamamoto has joined #openstack-lbaas18:28
*** yamamoto has quit IRC18:32
rm_workjohnsom: did you catch upon backlog? there are a few things18:35
johnsomyes, I have commented on a few.  Need to look at the status issue after the meeting.18:35
rm_workk18:36
*** yamamoto has joined #openstack-lbaas18:36
johnsomrm_work I already had this open: https://review.openstack.org/#/c/561369/ I think we should chain yours on top of mine or merge them into one patch.18:36
rm_workah k18:36
rm_workyeah i see it does similar18:37
rm_worki can just abandon mine18:37
rm_workunless you like how i did it18:37
rm_workeh now that i realize it's just doing stuff for the neutron queue.... meh18:40
rm_worki don't really care how hard we try to make it right18:41
rm_workwe can just do your thing18:41
*** amuller has quit IRC19:12
*** SumitNaiksatam has quit IRC19:15
*** SumitNaiksatam has joined #openstack-lbaas19:15
*** sapd2 has quit IRC19:31
*** atoth has quit IRC19:32
nmagnezirm_work, p/19:38
nmagnezirm_work, o/19:38
nmagnezirm_work, plz just take a look at https://review.openstack.org/#/c/549263/10/octavia/network/drivers/neutron/allowed_address_pairs.py@274 so we can merge :)19:38
*** issp has joined #openstack-lbaas19:39
*** mastrohl has quit IRC19:39
johnsomYeah, I need to re-review that one19:43
*** kobis has quit IRC19:43
*** SumitNaiksatam has quit IRC19:46
rm_worknmagnezi: i mean...19:49
nmagnezirm_work,  if i got that comment wrong please let me know19:50
nmagnezirm_work, everything else looked okay to me19:50
rm_workyou aren't explicitly wrong?19:50
rm_workbut like19:50
*** blake has joined #openstack-lbaas19:50
rm_work¯\_(ツ)_/¯19:50
nmagneziLOL19:50
nmagneziI think it's fair you nuke all ports for that sec-group, but we cannot declare that those are owned by Octavia :P19:51
* rm_work sighs19:51
*** yamamoto has quit IRC19:51
nmagnezirm_work, don't hate me. O_O19:51
openstackgerritAdam Harwell proposed openstack/octavia master: When SG delete fails on vip deallocate, try harder  https://review.openstack.org/549263  19:53
rm_workalready had german's +2, could have merged it T_T19:53
rm_workbut now gates and waiting again19:53
rm_worklol19:53
nmagnezilol19:54
nmagnezirm_work, I can just +1 to give johnsom time to review before you merge ;)19:54
rm_workaugh lol19:55
rm_workyou should +2 and let johnsom review and merge :P19:55
*** SumitNaiksatam has joined #openstack-lbaas19:57
*** yamamoto has joined #openstack-lbaas19:58
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed Jun  6 20:00:04 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstackThe meeting name has been set to 'octavia'20:00
xgerman_o/20:00
johnsomHello everyone20:00
johnsomI have a list of announcements today...20:00
johnsom#topic Announcements20:00
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:00
johnsomThere is interest in starting a Zun compute driver for Octavia20:01
nmagnezio/20:01
johnsom#link http://lists.openstack.org/pipermail/openstack-dev/2018-June/131056.html20:01
johnsomGood stuff there.  I hope we can support that effort20:01
johnsomThe next OpenStack summit is in Berlin and the one after will be in Denver (downtown and not at the train, grin)20:02
johnsomIt sounds likely that in 2019 the PTG and summit will merge back into one event.20:02
*** yamamoto has quit IRC20:02
johnsom(These are all notes I have captured from the mailing list BTW)20:03
johnsomThere is a proposal for Barbican to become a base service for OpenStack20:03
johnsom#link https://review.openstack.org/#/c/572656/20:03
johnsomYou can vote and comment on that patch20:03
johnsomAlso of note, today is Rocky milestone 2  20:04
johnsomThat means I will be cutting milestone releases20:04
johnsom#link https://releases.openstack.org/rocky/schedule.html20:04
johnsomAny other announcements today?20:04
*** blake has quit IRC20:05
*** blake__ has joined #openstack-lbaas20:05
johnsom#topic Brief progress reports / bugs needing review20:05
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:05
johnsomI have been busy working on the provider driver activities.  I think all of the provider driver spec is now implemented.  There are still things we need to work out and fix, notably status updates and the update calls.20:06
johnsomI have also been working on the neutron-lbaas to octavia load balancer migration tool.  As it is today, it *should* work for octavia provider load balancers.  I still need to finish the support for migrating other driver LBs and some testing.20:07
nmagnezithis is great!20:08
johnsomAny other updates?20:08
rm_workre: Milestone releases -- i think you should wait a day if possible, we have some fixes to merge for some important bugs related to stuff that just merged IMO20:08
nmagneziI'll try to spawn a node for that. hopefully devstack can handle installing both octavia and n-lbaas20:08
johnsomrm_work I need to get that in today, but we have the rest of the day.  Can you put together a list?20:09
nmagneziI have a story that needs review, maybe worth discussing in the open discussion part20:09
johnsomOk20:09
nmagnezi#link https://storyboard.openstack.org/#!/story/2002167  20:09
nmagneziIf it's acceptable, I can try to make it work20:10
johnsomOther updates?  I know we have made some progress on grenade.20:10
nmagnezibut there are open questions there20:10
johnsomHmm, yeah, this leads into another conversation I wanted to start in open discussion.20:12
nmagnezinp20:12
johnsomIf there are no other progress updates I guess we can jump in there20:13
johnsomOk20:13
johnsom#topic Brief progress reports / bugs needing review20:13
johnsomoops20:13
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:13
johnsom#topic Open Discussion20:13
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:13
xgerman_that was quick20:13
johnsomOk, so my topic was about API versioning20:13
*** blake__ is now known as blake20:13
johnsomYeah, I didn't get the right topic cut and pasted, so duplicate topic for progress reports.20:14
johnsomNir did you want to go first or should we just talk about it in the context of the API versions?20:14
nmagnezii'm fine with both20:14
nmagnezijohnsom, lead the way :)20:15
johnsomOk, So we have been bad20:16
johnsomCurrently we have only one version for the v2 API "v2.0" though we have added new capability20:16
xgerman_yeah, we should be at 2.1  20:17
johnsomI think it is time we really get serious about versioning the API so that clients, etc. can work with our API across deployments20:17
johnsomI proposed20:17
johnsom#link https://review.openstack.org/#/c/559460/20:17
johnsomBut there was pushback about the 2.120:18
johnsomIt's probably not the right answer anyway.20:18
johnsomI think it might be time we consider microversioning20:19
johnsom#link https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html20:19
johnsomI'm not the biggest fan of this20:19
johnsomMost notably if people make a request without a micro version they get the oldest version of the API by default.20:19
*** yamamoto has joined #openstack-lbaas20:20
*** yamamoto has quit IRC20:20
johnsomI think that is a bit lame20:20
johnsomIt also means we have to think about how we want to handle version in the API code.20:20
nmagnezijohnsom, just for context, when you say that we added new capability, which API capability you refer? cascade delete? asking just to get a sense of how major/minor that feature was..20:20
johnsomAs well as in tempest as Nir mentioned20:20
xgerman_amphora API20:21
johnsomWell, Queens added amphora failover20:21
nmagnezigotcha20:21
rm_workyeah the amp API, which ... i don't know if anyone uses? it definitely isn't end-user20:21
johnsomRocky adds timeouts and listing provider drivers20:21
rm_workand might not be in the right place given changes made with provider anyway <_<20:21
nmagnezirm_work, yeah but it's still an API change. a totally new API call20:22
nmagnezijohnsom, yup. those listener timeouts will cause tests that we currently have in our tempest plugin to fail against stable/queens20:22
rm_workyeah i guess we can't "undo" adding it ... but i wonder if it should be in a different place when we finally actually put up v2.1 or whatever20:23
nmagneziwhich makes sense. we just need to find a way to skip it somehow20:23
rm_workeh, this is not relevant for right now, so ignore me20:23
rm_worknmagnezi: wait, why do they cause test failures? because extra data comes back?20:23
*** SumitNaiksatam_ has joined #openstack-lbaas20:23
*** SumitNaiksatam has quit IRC20:23
*** SumitNaiksatam_ is now known as SumitNaiksatam20:23
rm_worki feel like that's badly designed testing :/20:23
rm_workbut ... ehh20:24
nmagnezirm_work, because some listener attrs just didn't exist in Queens20:24
rm_workoh20:24
johnsomThere is some docs for tempest testing with microversions20:24
johnsom#link https://docs.openstack.org/tempest/latest/microversion_testing.html20:24
rm_workah right you mean the NEW tests (which test those)20:24
rm_workand yeah obviously they don't work :P20:24
nmagnezirm_work, yes. it's in the story I submitted https://storyboard.openstack.org/#!/story/2002167  20:25
nmagnezijohnsom, will read it.20:25
johnsomTo be honest, I have not read up on the microversion stuff recently, so can't talk in too much detail.20:25
rm_workyeah i hate the "oldest" thing too20:26
rm_worki don't suppose we could just ... do it differently? or is that code that's in the client / etc20:26
rm_workand not really our choice20:26
nmagnezijohnsom, if we don't want to go with micro versioning (because of what you wrote above), can't we just go with 2.1 for that reason?20:26
rm_workyeah i don't particularly mind a v2.1  20:27
xgerman_So the way I understand microversions is that you rev up to 2.X and someday call it final and then release 2.X as 3.0  20:27
johnsomThere was discussion at the summit about how to fix it to not be the "oldest"20:27
xgerman_and if somebody needs some of the “new” functionality they need to do that call with the appropriate microversion20:27
xgerman_which all sounds pretty crazy to me20:28
johnsomI am fine with v2.1, it's what I proposed, but we need to figure out how to support the older and newer in our API code20:28
rm_workyeah20:28
rm_worki mean ermm20:28
johnsomxgerman Yeah, so we are all in agreement on microversions kind of suck20:28
johnsomOn option is a full copy20:28
rm_worki suppose it's naive / bad to just make a v2.1 directory in the api module, and make new classes that just inherit and pass from the old ones? >_>20:28
johnsomRight20:29
rm_worki guess that could lead to a mess20:29
johnsomA bit, but at least it would be kind of clear20:29
rm_workeugh i don't want to be in the business of fixing bugs in multiple classes at once20:29
xgerman_well, the people behind microversions have little concern for the plight of us programmers having to implement those schemes20:30
rm_workthe microversions stuff has like... decorators with version numbers and allows multiple functions with the same name?20:30
rm_workand they live side-by-side?20:30
johnsomI'm not up to date enough on it to say20:32
rm_worki need to do some research but, this is something we do need <_<20:32
rm_workdo we need it BY ROCKY?20:32
johnsomI would likely need to dig into nova code to see how they do it20:32
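(Not Nova's actual code -- just a toy illustration of the pattern being described: several implementations of the same endpoint registered under different microversions, with a dispatcher picking the newest one the requested version allows.)

    # Toy microversion dispatch; every name here is invented for illustration.
    _VERSIONED = {}

    def api_version(min_version):
        def wrap(func):
            _VERSIONED.setdefault(func.__name__, []).append((min_version, func))
            return func
        return wrap

    @api_version((2, 0))
    def get_listener(listener_id):
        return {'id': listener_id}

    @api_version((2, 1))                      # e.g. Rocky adding the timeout fields
    def get_listener(listener_id):            # noqa: F811 -- intentional redefinition
        return {'id': listener_id, 'timeout_client_data': 50000}

    def dispatch(name, requested, *args):
        candidates = [(v, f) for v, f in _VERSIONED[name] if v <= requested]
        _, func = max(candidates, key=lambda c: c[0])
        return func(*args)

    print(dispatch('get_listener', (2, 0), 'abc'))   # old-style response
    print(dispatch('get_listener', (2, 1), 'abc'))   # includes the newer field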
xgerman_I think like it or not but we probably have to do microversions20:32
johnsomI'm not convinced we have to do microversions.20:33
rm_workerr i am thinking of broader than openstack20:33
johnsomneutron does not20:33
nmagnezimaybe we can learn from neutron https://specs.openstack.org/openstack/neutron-specs/specs/liberty/microversioning.html20:33
xgerman_you want to do extensions?20:33
rm_worklike, i would like to research how this is done in general20:33
johnsomYeah, ok.  I am fine with putting it on the next agenda20:33
johnsomI do think we need something in place by the end of Rocky20:34
nmagneziyeah.20:34
rm_workk20:34
rm_workthough i will maybe miss next week20:34
rm_worki'm in an all week internal summit thing next week20:34
xgerman_sure, but it’s a major change and R-2…20:34
nmagnezijohnsom, as for https://storyboard.openstack.org/#!/story/2002167 , not sure if this needs to depend on API versioning. what do you think?20:35
johnsomnmagnezi We do need to have that running in queens. I am fine with moving forward to add the gates non-voting20:36
rm_workyeah, we should do that20:37
nmagnezigreat20:37
rm_workand i can noodle fixing the tests to be a little more compatible20:37
rm_workwe do have SOME api versioning20:37
rm_workthere's dates20:37
rm_workI generally tried to update those on api changes but may not always have remembered :(20:37
nmagneziI'll try to make it happen and keep you all posted20:37
rm_workbut, probably i can tag on whatever queens uses20:37
rm_worki hope it's different20:37
johnsomOk. Sounds good20:38
johnsomAny other discussion today?20:38
rm_workummm20:39
rm_workeh20:40
rm_worki was gonna plug some stuff20:40
rm_workbut i think we're on it20:40
johnsomOk20:40
rm_worki'll try to get a list of stuff i'd like by R2  20:40
johnsomExcellent20:41
johnsomIt looks like we finally have movement on the zuul job update to collect the nodejs coverage reports, so the coverage patches can merge.20:41
johnsomI also cut a python-octaviaclient release in case you missed it20:41
johnsomOtherwise, I don't have any more topics20:42
rm_workk20:42
johnsomGoing once...20:42
rm_worki feel like i'm forgetting something20:42
rm_workbut i'll remember after you close the meeting and we can discuss later :P20:43
johnsomgrin20:43
johnsom#endmeeting20:43
*** openstack changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews"20:43
openstackMeeting ended Wed Jun  6 20:43:11 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:43
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-06-06-20.00.html20:43
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-06-06-20.00.txt20:43
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-06-06-20.00.log.html20:43
nmagnezio/20:43
nmagnezijohnsom, btw, what do you think about option #3 I listed https://storyboard.openstack.org/#!/story/2002167 ? is it something that we can even do?20:44
johnsomnmagnezi You mean a git tag for tempest?20:44
johnsomNo, the tempest plugin is supposed to be branchless.20:45
johnsomWhich is just, odd anyway20:45
nmagnezijohnsom, I know we can tag. the question is whether or not it is acceptable in OpenStack to release a new tag for an already-released version20:45
openstackgerritMerged openstack/octavia-tempest-plugin master: Correctly guess amp count based on detected topo  https://review.openstack.org/572661  20:45
nmagnezijohnsom, not talking about branches. just git tags20:46
nmagnezijohnsom, say we decide which HEAD makes sense to work ok against stable/queens..20:46
johnsomYeah, technically it is possible, but I don't think it is in the spirit of tempest.  I think we would get pushback20:47
rm_worknmagnezi: so the problematic tests are.... well, i guess the whole API-TEST thing20:47
rm_workright?20:47
rm_worksince it sends those fields20:47
johnsomRight, it's testing new stuff20:47
rm_worki could ... ummm20:47
rm_workwe could configure tempest with a release20:48
rm_worklike20:48
rm_workoctavia_release = queens20:48
rm_workunder loadbalancer20:48
nmagnezirm_work, I don't think that will be accepted20:48
rm_workand we could explicitly handle some stuff in code20:48
nmagnezirm_work, https://docs.openstack.org/tempest/latest/HACKING.html#skipping-tests20:48
rm_worknot skipping20:48
rm_worklike...20:48
rm_worklet me just show you20:49
nmagneziok20:49
*** JudeC__ has joined #openstack-lbaas21:01
*** JudeC_ has quit IRC21:01
rm_worknmagnezi: this21:06
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Compatibility with past releases (option 1)  https://review.openstack.org/573003  21:06
*** issp has quit IRC21:06
rm_workwhat do you think?21:06
*** yamamoto has joined #openstack-lbaas21:20
*** yamamoto has quit IRC21:33
cgoncalvesrm_work, https://docs.openstack.org/tempest/latest/HACKING.html#skipping-tests21:45
rm_workright, i don't think this applies21:46
rm_workA) Based on configuration (see the config flag?)21:46
rm_workB) Not skipping the test21:46
cgoncalvesbtw, sorry I missed the meeting. I forgot to give a heads up21:46
cgoncalvesrm_work, yeah... although I'm not that keen on that option 121:47
cgoncalvesa decorator to skip could be interesting to explore21:48
rm_workwe don't want to SKIP the test21:48
rm_workthat'd be lame21:48
rm_workthen we would have no tests :P21:48
rm_workjust modify the test to match the target cloud intelligently21:48
cgoncalvesbut that would imply writing multiple similar tests, like that one with the timeouts21:48
rm_workwhy not just do this <_<21:49
cgoncalvesfor some cases you can modify the test, for others you have to skip (e.g. amphora failover)21:49
rm_workthat seems like a lot more work21:49
rm_workwe already do essentially this (change tests based on cloud features) in the *exact same way*21:49
rm_workhttps://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/tests/api/v2/test_member.py#L139-L149  21:50
rm_worktwo things right there21:50
rm_workthat are based on config21:50
rm_workthe first one does EXACTLY this21:50
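(A hedged sketch of that config-driven style applied to the timeout case; the 'octavia_release' option name is hypothetical, not an existing octavia-tempest-plugin option.)

    # Sketch only: gate the Rocky-only listener assertions on a (hypothetical)
    # config option describing the deployed release, in the same spirit as the
    # CONF-based branches linked above in test_member.py.
    from tempest import config   # assumes a tempest environment

    CONF = config.CONF

    def expected_listener_fields(base_expected):
        expected = dict(base_expected)
        release = getattr(CONF.load_balancer, 'octavia_release', 'rocky')
        if release != 'queens':
            # These attributes only exist from Rocky onwards.
            expected.update({'timeout_client_data': 50000,
                             'timeout_member_connect': 5000})
        return expected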
cgoncalveshmmm in that case, if I had to choose between that and your option 1 patch, I'd say option 1 patch because it's more explicit to users21:53
* rm_work shrugs21:54
rm_workwill see what johnsom thinks21:54
rm_workoh right i was in the middle of making a list21:54
*** rcernin has joined #openstack-lbaas21:54
openstackgerritAdam Harwell proposed openstack/octavia master: Add lb_id comment to amp haproxy listener config  https://review.openstack.org/572730  21:55
openstackgerritAdam Harwell proposed openstack/octavia master: Improve the error logging for zombie amphora  https://review.openstack.org/561369  21:59
rm_workjohnsom: ^^ fixed your unit test failures21:59
johnsomOh, thanks21:59
*** lxkong has joined #openstack-lbaas22:01
rm_workand reviewing that22:03
rm_workas it's one of the ones i want to get in22:03
rm_workas well as, it'd be nice to merge my SG thing22:03
rm_workthough not as urgent22:03
rm_work(just, it's ready, and fairly straightforward)22:03
xgerman_rm_work: in my practice I am seeing the Warn output on busy amps (https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/health_drivers/update_db.py#L137-L142)22:04
xgerman_wonder if we should ignore the heartbeat when an amp is busy22:04
rm_workxgerman_: yeah that's not good :(22:04
rm_workno, because then when it comes out of busy, it might immediately try to failover22:04
rm_workunless we also update the code that does the un-busy to also update the time22:05
xgerman_k, but in my case it just counts up the secs since last update and keeps skipping22:05
rm_workbut that is like22:05
rm_workso minimal22:05
rm_workthe real issue is that your stuff is crazy overloaded somehow :/22:05
rm_workerr22:05
rm_workthat is not what that does22:05
rm_workit's the time from packet receipt, until processing22:05
rm_worknothing to do with the health table data22:05
xgerman_ah22:05
rm_workit means you are receiving the packet, and it is taking a LONG TIME to process22:06
rm_workwhich is bad-news-bears22:06
xgerman_yep22:06
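(A tiny illustration of what that warning measures -- the lag between the UDP heartbeat being received and its processing finishing, compared to the heartbeat interval -- not the age of anything in the health table. Names and values are simplified.)

    # Simplified sketch of the check behind that warning (not the real code).
    import time

    HEARTBEAT_INTERVAL = 10   # seconds, matching [health_manager] heartbeat_interval

    def process_heartbeat(health_msg, received_at):
        delay = time.time() - received_at
        if delay > HEARTBEAT_INTERVAL:
            print('WARNING: heartbeat took %.1fs to process (> %ds interval); '
                  'the health manager or the DB is falling behind'
                  % (delay, HEARTBEAT_INTERVAL))
        # ... the normal health/stats DB updates would follow here ...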
rm_workyour processes are still broken22:06
xgerman_this time we think it's the DB22:06
rm_workdid we get all the patches backported and are you up-to-date?22:06
rm_workthe DB traffic really isn't that heavy, i thought that a few times but it turns out it's really not a big deal for MySQL unless you have like 1000000 amps22:06
rm_workwhat are the times you're seeing22:07
xgerman_no, mysql being broken22:07
rm_worklike, 10s?22:07
rm_workwhat is your heartbeat interval?22:07
xgerman_10  22:09
xgerman_but we have seen galera go crazy — so not us22:09
*** fnaval has quit IRC22:15
rm_workummmmmm22:15
rm_workhow many amps22:15
rm_workxgerman_: something is *wrong*22:16
rm_workobviously22:16
xgerman_yeah, mysql went out to lunch22:16
rm_workbut like22:16
rm_work<_<22:16
johnsomSigh, xgerman_ your issue in your lab has nothing to do with Octavia.  You can stop the process and everything is still hosed up22:16
rm_workyeah i think he's getting that :P22:16
rm_workbut i'm surprised it's mysql too22:16
johnsomoctavia is handling it properly. The DB steps are taking 40 seconds, so it's throwing stuff out.22:17
rm_workyep, that's what he said actually22:17
rm_workstill surprising22:17
xgerman_yeah, I was just curious why we don't ignore heartbeats from amps which are busy…22:17
xgerman_which rm_work answered22:17
johnsomYeah, in the mysql container it's at 1200% CPU according to top, so....22:19
rm_workloool22:19
rm_workeh, johnsom maybe it doesn't matter, no one should be installing the R2 anyway honestly...22:36
rm_workI hope...22:36
johnsomSo, no list?22:37
xgerman_better safe than sorry22:37
rm_worki was compiling one but basically all that ended up on it were things that are meh, and things that are just not going to be ready today22:38
rm_workthe zombie thing for one (as it fixes a bug that causes explosions in the stats update)22:38
rm_workbut i had to -1 it22:38
rm_workand the SG one really doesn't matter much22:38
rm_workand the driver callback stuff and the delete api updates are not going to happen for a bit22:39
xgerman_The SG one would be useful for me but I go off SHAs so no real pressure22:41
rm_workright22:44
rm_worki assume people running something so new that they'd use RCs would ... actually just use master or SHAs22:44
xgerman_unless you pip install22:46
rm_workerr22:46
rm_workwould the RC actually go out to people22:46
xgerman_if you install from pypi22:47
rm_workerg22:48
*** threestrands has joined #openstack-lbaas22:51
*** gigo has quit IRC22:51
*** gigo has joined #openstack-lbaas22:56
*** eandersson_ is now known as eandersson22:59
johnsomI have posted the MS2 release patch23:16
johnsomrm_work Struggling with your comment on the zombie patch23:28
rm_workerm23:30
johnsomI see a bug, but not the amp deleted thing23:31
rm_workso you say you're looking for spare amps23:31
johnsomI am saying, if it doesn't come back with any LBs owning the amp, and the amp isn't a spare, ignore the health message23:33
openstackgerritMichael Johnson proposed openstack/octavia master: Improve the error logging for zombie amphora  https://review.openstack.org/561369  23:37
rm_workso23:37
rm_workamp = self.amphora_repo.get(session, id=health['id'])23:37
rm_workthat gets the amp23:37
rm_workif amp.load_balancer_id23:37
rm_workthat's the spare check23:38
johnsomnot a spare check23:38
rm_workerr wait23:38
rm_workhold on23:38
rm_workthis whole thing...23:39
rm_workare you expecting this whole thing to execute if the amp is deleted?23:39
johnsomIf it is deleted, it should log and return, not process the heartbeat23:40
rm_workerm23:40
rm_workif it's deleted, it still won't execute this23:40
johnsomBecause if it is deleted, it shouldn't be sending heartbeats23:40
rm_workif it's deleted, it will still get a return value from  db_lbs_on_amp = self.amphora_repo.get_all_lbs_on_amphora23:40
rm_workthat doesn't filter by non-deleted23:41
rm_worki think i just put the comment in the wrong place23:41
johnsomThat is true, it would match if the load balancer is deleted, so it would still process the heartbeat23:42
rm_workyeah, LB+amp could both be deleted23:43
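(A sketch of the check being debated here, with simplified repo interfaces -- not the real update_db code: ignore the heartbeat when the amphora row is gone, or when it owns no non-DELETED load balancer and is not a spare.)

    # Sketch only; repo interfaces are simplified stand-ins.
    DELETED = 'DELETED'

    def should_process_heartbeat(amp_repo, lb_repo, session, amp_id):
        amp = amp_repo.get(session, id=amp_id)
        if amp is None or amp.status == DELETED:
            return False                 # zombie: the amphora itself was deleted
        if amp.load_balancer_id is None:
            return True                  # spare amp, heartbeats are expected
        lb = lb_repo.get(session, id=amp.load_balancer_id)
        # get_all_lbs_on_amphora() does not filter DELETED rows -- the gap
        # noted above -- so check provisioning_status explicitly here.
        return lb is not None and lb.provisioning_status != DELETED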

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!