Wednesday, 2018-08-01

*** yamamoto has quit IRC00:03
*** yamamoto has joined #openstack-lbaas00:08
*** longkb has joined #openstack-lbaas00:52
*** phuoc_ has quit IRC00:52
*** phuoc_ has joined #openstack-lbaas00:53
openstackgerritZhaoBo proposed openstack/octavia master: UDP jinja template  https://review.openstack.org/52542001:02
openstackgerritZhaoBo proposed openstack/octavia master: UDP for [2]  https://review.openstack.org/52965101:02
openstackgerritZhaoBo proposed openstack/octavia master: UDP for [3][5][6]  https://review.openstack.org/53939101:02
johnsombzhao__ bbbzhao_ Sorry, I had an issue come up today I had to deal with where barbican client has a bug.  Looking at this now.  I'm not sure we want to re-order flow items as they are very specific.  I will look at the changes, but it makes me worry.01:09
johnsomPostVIPPlug is a step after the VIP has been plugged into the amphora that is used to configure the interfaces.01:10
bzhao__johnsom:  Never mind. I understand. ;-) .  Let me check again the code.01:11
johnsomI can look too.  I plan to focus on UDP for the next four hours or so01:11
bzhao__johnsom:  But the task "PostVIPPlug" will call plug_vip on the agent side. The agent side will plug the VIP interface into the namespace and configure it, and the code that reads the system default kernel configuration is there also. So I add the necessary kernel configuration for UDP there as well. These are all based on the generic LB create flow -> listener create flow. And the failover flow seems not to follow that. Maybe I'm not right01:18
bzhao__here. ;-)01:18
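A rough sketch of the pattern bzhao__ describes: apply kernel settings inside the amphora namespace when the VIP is plugged. Editor's illustration only; the sysctl keys are plausible examples for an LVS/keepalived UDP listener, not necessarily what the patch actually sets.

    import subprocess

    NAMESPACE = "amphora-haproxy"  # the namespace the amphora agent manages

    # Illustrative sysctls a UDP (LVS/keepalived) listener might need;
    # assumptions, not the patch's real list.
    UDP_SYSCTLS = {
        "net.ipv4.ip_nonlocal_bind": "1",       # allow binding the floating VIP
        "net.ipv4.vs.expire_nodest_conn": "1",  # drop entries for removed members
    }

    def apply_udp_sysctls():
        """Run sysctl -w for each key inside the namespace."""
        for key, value in UDP_SYSCTLS.items():
            subprocess.check_call(
                ["ip", "netns", "exec", NAMESPACE,
                 "sysctl", "-w", "%s=%s" % (key, value)])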
*** abaindur has quit IRC01:19
johnsomAh, yes, ok, I misunderstood what you changed the order of.01:19
bzhao__<- Bad description. ;-)01:20
*** atoth has quit IRC01:27
*** ravioli16 has joined #openstack-lbaas01:38
*** ravioli16 has quit IRC01:38
*** c13 has joined #openstack-lbaas01:39
*** OwenBarfield has joined #openstack-lbaas01:39
*** ATDT91121 has joined #openstack-lbaas01:40
*** OwenBarfield has quit IRC01:40
*** ATDT91121 has quit IRC01:40
*** c13 has quit IRC01:40
johnsomJoy, spam01:41
*** puff has joined #openstack-lbaas01:43
*** ChanServ sets mode: +r 01:44
johnsomWell, we can try that...01:45
johnsomsee if it stops the spam bots for a bit01:46
*** hongbin has joined #openstack-lbaas01:47
bzhao__Seems another trust auth needed . ;-)01:48
*** puff has quit IRC01:48
*** hongbin_ has joined #openstack-lbaas01:51
*** hongbin has quit IRC01:52
johnsombzhao__ I am going to start creating bugs for issues I see.  These don't need to be fixed before merge, just before final Rocky release candidate.  So please don't panic02:13
johnsomgrin02:13
bzhao__johnsom:  Thanks, Michael. I will keep an eye on the bugs and try to fix them.02:46
johnsombzhao__ I am tagging them with UDP so they will show in this list: https://storyboard.openstack.org/#!/story/list?status=active&tags=UDP02:46
bzhao__Yeah, thanks, that's pretty nice.02:47
bzhao__Good for search. ;-)02:47
johnsomThe member down showing as "DRAINING" is probably the most important so far02:48
bzhao__Yeah, as I just get the info from the config file/kernel file. hmm02:50
johnsomWe may want to use notify_up/notify_down scripts to write a status file per member. and not use inhibit_on_failure02:51
johnsomOr just check known members against the current member list. if it is missing, it is down02:52
johnsomagain not using inhibit_on_failure02:52
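A minimal sketch of that notify-script idea, assuming keepalived is configured to call the script with a member id and an UP/DOWN state (the argument convention and the status directory are hypothetical):

    #!/usr/bin/env python3
    # Hypothetical keepalived hook, e.g. configured as:
    #   notify_down "/usr/local/bin/member_status.py <member_id> DOWN"
    import os
    import sys

    STATUS_DIR = "/var/lib/octavia/member_status"  # hypothetical path

    def main():
        member_id, state = sys.argv[1], sys.argv[2]
        os.makedirs(STATUS_DIR, exist_ok=True)
        # One small file per member; the health reporter can read these
        # instead of relying on inhibit_on_failure weight tricks.
        with open(os.path.join(STATUS_DIR, member_id), "w") as f:
            f.write(state)

    if __name__ == "__main__":
        main()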
bzhao__I see. That's mine.  Makes sense now.02:52
bzhao__Lacks the practise. ;-(02:53
bzhao__experience..02:53
johnsomNo worries02:54
*** hongbin_ has quit IRC02:54
bzhao__Owes to your kind direction. ;-). ha02:56
*** yamamoto has quit IRC03:00
*** yamamoto has joined #openstack-lbaas03:06
*** yamamoto has quit IRC03:14
*** yamamoto has joined #openstack-lbaas03:26
*** yamamoto has quit IRC03:34
*** yamamoto has joined #openstack-lbaas03:36
*** yamamoto has quit IRC04:03
*** yamamoto has joined #openstack-lbaas04:04
johnsombzhao__ Is this the start of your day?  I have some comments on one of the patches. Not sure if you can address tonight or if I should be doing them.04:04
bzhao__johnsom:  Here is the time for lunch. I don't find the comments. Let me have a try. ;-).04:07
johnsombzhao__ I have not posted them yet04:07
bzhao__johnsom:  Ok.04:07
johnsombzhao__ have lunch and I will post them when you return.  Just seeing if you have time to do a couple of fixes today.04:07
johnsomThis way I can keep reviewing/testing instead of stopping to fix04:08
johnsomI have about one hour left tonight I can work.04:08
bzhao__No rest is OK for me. It's time for me to fight for octavia.04:09
johnsomgrin04:09
bzhao__;-)04:09
johnsomSorry reviews have been slow on this. We had a lot to get done in Rocky that distracted.04:09
bzhao__Never mind, I real understand. I have done a very gread job in Network Zone of OpenStack.. The rank 1st is you.04:10
bzhao__nonono04:11
bzhao__YOU  have done a very gread job in Network Zone of OpenStack.. The rank 1st is you.04:11
johnsomHa04:11
bzhao__what a trerrible mistake I make...04:11
bzhao__s/gread/great/04:12
bzhao__=.=04:12
johnsomNo worries04:13
bzhao__;-). I will leave for lunch.  Thanks very much, Michael.04:14
openstackgerritMichael Johnson proposed openstack/octavia master: Followup patch for UDP support  https://review.openstack.org/58769005:24
johnsombzhao__ Ok, I have commented on patch 1 and 3, I will review 2 again tomorrow. I have also created a new patch that adds the release notes, API-ref update, and removes the misc_dynamic setting.05:25
johnsomI think the biggest issue might be the One packet schedule setting. I think we don't need that as its own SP type. It would be the "default" setting, and with it removed the remaining explicit type would be "SOURCE_IP"05:26
johnsomOverall they are minor comments. Your validation and tests are very good.05:26
johnsomOne packet scheduling will behave like a load balancer with no session persistence defined.05:28
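In LVS terms this maps to keepalived's one-packet-scheduling flag. A sketch of the template-side decision being agreed here ('ops' and 'persistence_timeout' are real keepalived virtual_server options; the pool attributes are assumptions):

    def lvs_persistence_lines(pool):
        """Render virtual_server options for a UDP pool: no explicit
        session persistence means one-packet scheduling, otherwise a
        SOURCE_IP persistence timeout."""
        if pool.session_persistence is None:
            return ["ops"]  # one-packet scheduling: no connection affinity
        timeout = pool.session_persistence.persistence_timeout or 360
        return ["persistence_timeout %d" % timeout]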
openstackgerritMichael Johnson proposed openstack/octavia master: Followup patch for UDP support  https://review.openstack.org/58769005:45
johnsomOk, time for sleep. Catch you all in the morning05:50
*** hvhaugwitz has quit IRC05:55
*** hvhaugwitz has joined #openstack-lbaas05:55
*** dims has quit IRC06:13
*** dims has joined #openstack-lbaas06:15
bzhao__johnsom:  Thanks, Michael, have a good rest.06:21
*** dims has quit IRC06:22
*** dims has joined #openstack-lbaas06:25
*** pcaruana has joined #openstack-lbaas06:42
*** ispp has joined #openstack-lbaas06:57
dmelladojohnsom: how did you manage to stop the spam xD06:58
dmelladodamn xD06:59
*** rcernin has quit IRC07:05
*** rpittau has quit IRC07:24
*** zigo_ is now known as zigo07:26
*** threestrands has quit IRC07:26
*** velizarx has joined #openstack-lbaas07:33
*** nmagnezi has joined #openstack-lbaas07:34
*** yamamoto has quit IRC07:54
*** velizarx has quit IRC07:59
*** yamamoto has joined #openstack-lbaas08:01
*** velizarx has joined #openstack-lbaas08:04
*** ivve has quit IRC08:40
*** salmankhan has joined #openstack-lbaas08:56
*** thomasem has quit IRC09:28
*** yamamoto has quit IRC09:37
*** nmagnezi has quit IRC09:38
*** yamamoto has joined #openstack-lbaas09:38
*** yamamoto has quit IRC09:41
*** yamamoto has joined #openstack-lbaas09:45
*** yamamoto has quit IRC09:46
*** yamamoto has joined #openstack-lbaas09:53
*** ispp has quit IRC10:02
*** longkb has quit IRC10:14
*** jiteka has quit IRC10:22
*** jiteka- has quit IRC10:22
*** yamamoto has quit IRC10:34
*** yamamoto has joined #openstack-lbaas11:08
bzhao__johnsom:  errrrrr, maybe I made a mistake in removing all OPS when I saw the comment in https://review.openstack.org/#/c/539391/39/octavia/db/migration/alembic_migrations/versions/76aacf2e176c_extend_support_udp_protocol.py@59,  you suggest OPS as the "default" setting when pool.session_persistence does not exist, is that right? If yes, I think we don't need SESSION_PERSISTENCE_ONE_PACKET_SCHEDULING anymore,  as in the11:12
bzhao__previous way the OPS type could only be set from the API, but right now we set it as the default if there is no pool.session_persistence. Ahhhh, wish my thought is the same as yours.11:12
*** threestrands has joined #openstack-lbaas11:15
*** rpittau has joined #openstack-lbaas11:33
*** nmagnezi has joined #openstack-lbaas11:33
*** velizarx has quit IRC11:40
*** yamamoto has quit IRC11:48
*** yamamoto has joined #openstack-lbaas11:51
*** velizarx has joined #openstack-lbaas11:59
*** KeithMnemonic has quit IRC12:04
*** KeithMnemonic has joined #openstack-lbaas12:04
*** nmagnezi has quit IRC12:21
*** nmagnezi has joined #openstack-lbaas12:24
*** openstack has joined #openstack-lbaas12:59
*** barjavel.freenode.net sets mode: +ns 12:59
*** barjavel.freenode.net sets mode: -o openstack13:00
-barjavel.freenode.net- *** Notice -- TS for #openstack-lbaas changed from 1533128361 to 140302124413:00
*** barjavel.freenode.net sets mode: +crt-s 13:00
*** nmagnezi has joined #openstack-lbaas13:00
*** KeithMnemonic has joined #openstack-lbaas13:00
*** velizarx has joined #openstack-lbaas13:00
*** yamamoto has joined #openstack-lbaas13:00
*** rpittau has joined #openstack-lbaas13:00
*** salmankhan has joined #openstack-lbaas13:00
*** pcaruana has joined #openstack-lbaas13:00
*** dims has joined #openstack-lbaas13:00
*** hvhaugwitz has joined #openstack-lbaas13:00
*** phuoc_ has joined #openstack-lbaas13:00
*** bbbbzhao_ has joined #openstack-lbaas13:00
*** cgoncalves has joined #openstack-lbaas13:00
*** annp has joined #openstack-lbaas13:00
*** andreykurilin has joined #openstack-lbaas13:00
*** jmccrory has joined #openstack-lbaas13:00
*** sapd has joined #openstack-lbaas13:00
*** colin- has joined #openstack-lbaas13:00
*** vegarl has joined #openstack-lbaas13:00
*** mugsie has joined #openstack-lbaas13:00
*** ltomasbo has joined #openstack-lbaas13:00
*** ianychoi has joined #openstack-lbaas13:00
*** openstackgerrit has joined #openstack-lbaas13:00
*** devfaz has joined #openstack-lbaas13:00
*** zigo has joined #openstack-lbaas13:00
*** bcafarel has joined #openstack-lbaas13:00
*** eandersson has joined #openstack-lbaas13:00
*** JudeC has joined #openstack-lbaas13:00
*** irenab has joined #openstack-lbaas13:00
*** ipsecguy has joined #openstack-lbaas13:00
*** crazik has joined #openstack-lbaas13:00
*** Krast has joined #openstack-lbaas13:00
*** strigazi has joined #openstack-lbaas13:00
*** dmellado has joined #openstack-lbaas13:00
*** oanson has joined #openstack-lbaas13:00
*** xgerman_ has joined #openstack-lbaas13:00
*** mrhillsman has joined #openstack-lbaas13:00
*** sbalukoff_ has joined #openstack-lbaas13:00
*** numans has joined #openstack-lbaas13:00
*** keithmnemonic[m] has joined #openstack-lbaas13:00
*** dulek has joined #openstack-lbaas13:00
*** korean101 has joined #openstack-lbaas13:00
*** PagliaccisCloud has joined #openstack-lbaas13:00
*** zioproto has joined #openstack-lbaas13:00
*** dosaboy has joined #openstack-lbaas13:00
*** frickler has joined #openstack-lbaas13:00
*** amotoki has joined #openstack-lbaas13:00
*** colby_ has joined #openstack-lbaas13:00
*** johnsom has joined #openstack-lbaas13:00
*** lxkong has joined #openstack-lbaas13:00
*** coreycb has joined #openstack-lbaas13:00
*** beisner has joined #openstack-lbaas13:00
*** ctracey has joined #openstack-lbaas13:00
*** dayou has joined #openstack-lbaas13:00
*** logan- has joined #openstack-lbaas13:00
*** bzhao__ has joined #openstack-lbaas13:00
*** Eugene__ has joined #openstack-lbaas13:00
*** jlaffaye_ has joined #openstack-lbaas13:00
*** wolsen has joined #openstack-lbaas13:00
*** LutzB has joined #openstack-lbaas13:00
*** wayt has joined #openstack-lbaas13:00
*** dlundquist has joined #openstack-lbaas13:00
*** mordred has joined #openstack-lbaas13:00
*** obre has joined #openstack-lbaas13:00
*** rm_work has joined #openstack-lbaas13:00
*** dougwig has joined #openstack-lbaas13:00
*** hogepodge has joined #openstack-lbaas13:00
*** amitry_ has joined #openstack-lbaas13:00
*** fyx has joined #openstack-lbaas13:00
*** mnaser has joined #openstack-lbaas13:00
*** ptoohill- has joined #openstack-lbaas13:00
*** gigo has joined #openstack-lbaas13:00
*** pck has joined #openstack-lbaas13:00
*** raginbajin has joined #openstack-lbaas13:00
*** ptoohill has joined #openstack-lbaas13:00
*** mjblack has joined #openstack-lbaas13:00
*** barjavel.freenode.net sets mode: +b *!~pxydzqwy@107.163.64.5413:00
*** barjavel.freenode.net changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews"13:00
*** yamamoto has quit IRC13:25
*** amuller has joined #openstack-lbaas13:29
*** yamamoto has joined #openstack-lbaas13:48
*** nmagnezi has quit IRC14:02
openstackgerritGerman Eichberger proposed openstack/octavia master: Delete zombie amphora when detected  https://review.openstack.org/58750514:04
*** nmagnezi has joined #openstack-lbaas14:09
*** yamamoto has quit IRC14:23
cgoncalveshttp://logs.openstack.org/14/587414/2/check/octavia-v2-dsvm-scenario-centos.7/ae0cf08/job-output.txt.gz#_2018-07-31_22_09_05_65932214:27
cgoncalves^ this is bugging me quite a bit. the job only overrides the nodeset14:28
cgoncalvesthe log until that point is basically the same as the ubuntu-based nodeset so I don't get it14:28
openstackgerritMerged openstack/octavia master: Fix DIB_REPOREF_amphora_agent not set on Git !=1.8.5  https://review.openstack.org/58485614:33
*** hongbin has joined #openstack-lbaas14:42
*** yamamoto has joined #openstack-lbaas14:48
*** yamamoto has quit IRC14:51
*** yamamoto has joined #openstack-lbaas14:51
*** nmagnezi has quit IRC14:52
openstackgerritMerged openstack/octavia master: Fix the bionic gate to actually run Ubuntu bionic  https://review.openstack.org/58690614:53
*** yamamoto has quit IRC14:56
*** yamamoto has joined #openstack-lbaas14:57
*** yamamoto has quit IRC14:57
*** ChanServ sets mode: +f #openstack-unregistered15:10
*** pcaruana has quit IRC15:15
*** velizarx has quit IRC15:19
*** yamamoto has joined #openstack-lbaas15:30
johnsomcgoncalves I happened to have infra's attention so asking about your centos job15:33
*** openstackstatus has joined #openstack-lbaas15:33
*** ChanServ sets mode: +v openstackstatus15:33
openstackgerritZhaoBo proposed openstack/octavia master: UDP jinja template  https://review.openstack.org/52542015:36
openstackgerritZhaoBo proposed openstack/octavia master: UDP for [2]  https://review.openstack.org/52965115:36
openstackgerritZhaoBo proposed openstack/octavia master: UDP for [3][5][6]  https://review.openstack.org/53939115:36
johnsombzhao__ Morning! lol.  I am starting my day and plan to work on UDP as much as I can. Is there something you know I should help with or go back to working on review?15:37
*** yamamoto has quit IRC15:37
bbbbzhao_johnsom:  lol, just back home from office. ;-). post the patch on my phone through RDP app.15:46
johnsombbbbzhao_ Wow, over RDP. lol15:47
-openstackstatus- NOTICE: Due to ongoing spam, all OpenStack-related channels now require authentication with nickserv. If an unauthenticated user joins a channel, they will be forwarded to #openstack-unregistered with a message about the problem and folks to help with any questions (volunteers welcome!).15:48
bbbbzhao_I removed the OPS type and treat it as the default setting if the default pool doesn't contain a session_persistence.15:48
bbbbzhao_ha.. it's an APP developed by Microsoft.15:48
johnsombbbbzhao_ yes, I am familiar with it.  Just running from a phone must be hard with small screen, etc.15:49
bbbbzhao_Yeah, my eye is nearly blind. ;-)15:50
johnsombbbbzhao_ Ok, I will pickup work on the patches.  Hopefully we can start merging today.15:50
bbbbzhao_johnsom:  Thanks very much. I just tested the whole patch series with pep8, UT and functional tests. Have not run the fullstack tests yet.15:53
bbbbzhao_Just not sure that removing the OPS healthmonitor type is right, but I had removed all...15:55
bbbbzhao_I think a quick look at patch 1 and 3 to make sure that is what we want. And patch 2 needs review; that patch is so big, it may be hard work..15:57
cgoncalvesjohnsom, they must like you. I asked on qa and infra channels this morning but no answer15:59
johnsomcgoncalves lol, no comment.  Nah, they are busy and like I said I had their attention already on a nested virt issue.  So, the consensus is that it's a permissions issue on centos.  They asked if we could inject a "ls -alR" in the job to show the permissions on /home/zuul down16:01
johnsomthe guess is /home/zuul is not readable by the stack account tox is using16:02
johnsomtempest/tox16:02
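A rough Python equivalent of that "ls -alR" debug step, in case injecting it as a shell task is awkward (output format is illustrative):

    import os
    import stat

    def dump_permissions(root="/home/zuul"):
        """Print mode, uid, gid and path for everything under root."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in [""] + filenames:
                path = os.path.join(dirpath, name) if name else dirpath
                st = os.lstat(path)
                print("%s %5d %5d %s" % (
                    stat.filemode(st.st_mode), st.st_uid, st.st_gid, path))

    if __name__ == "__main__":
        dump_permissions()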
cgoncalvesjohnsom, that's plausible, yes. I need to check how to inject such thing in a zuul v3 job. AFK for next 2 hours16:05
johnsomok16:05
johnsombbbbzhao_ Just to check, I may start posting patch updates for UDP. Is that ok?16:34
bbbbzhao_OK.16:35
xgerman_I don’t think my patch broke dvsm-api16:35
xgerman_https://review.openstack.org/#/c/587505/16:41
johnsomxgerman_ You got a yahtzee, all of the v2 gates failed.16:45
johnsomYes, you did cause those failures: http://logs.openstack.org/05/587505/4/check/octavia-v2-dsvm-noop-api/98b16a5/controller/logs/screen-o-cw.txt.gz#_Aug_01_14_48_27_04610416:45
johnsomYou have database lock issues in that new code16:46
johnsomCommented. Do you really need "secheduled for delete"? Can't we just go do it like we do failovers?16:54
openstackgerritGerman Eichberger proposed openstack/octavia master: Delete zombie amphora when detected  https://review.openstack.org/58750517:05
xgerman_yes, we need scheduled for delete. How are you gonna detect that they need to be deleted otherwise?17:05
*** jiteka has joined #openstack-lbaas17:12
openstackgerritMichael Johnson proposed openstack/octavia master: UDP jinja template  https://review.openstack.org/52542017:14
openstackgerritMichael Johnson proposed openstack/octavia master: UDP for [2]  https://review.openstack.org/52965117:14
openstackgerritMichael Johnson proposed openstack/octavia master: UDP for [3][5][6]  https://review.openstack.org/53939117:14
openstackgerritMichael Johnson proposed openstack/octavia master: Followup patch for UDP support  https://review.openstack.org/58769017:14
johnsomDoh17:14
johnsomScrewed that up...17:14
johnsomApply more coffee, then I will  fix17:14
openstackgerritMichael Johnson proposed openstack/octavia master: UDP jinja template  https://review.openstack.org/52542017:18
openstackgerritMichael Johnson proposed openstack/octavia master: UDP for [2]  https://review.openstack.org/52965117:21
*** salmankhan has quit IRC17:23
openstackgerritMichael Johnson proposed openstack/octavia master: UDP for [3][5][6]  https://review.openstack.org/53939117:24
openstackgerritMichael Johnson proposed openstack/octavia master: Followup patch for UDP support  https://review.openstack.org/58769017:27
johnsomOk, fixed. Sorry for that.....17:27
johnsomPlease ignore my bad rebase.17:27
*** nmagnezi has joined #openstack-lbaas17:38
*** nmagnezi has quit IRC18:15
*** salmankhan has joined #openstack-lbaas18:55
*** salmankhan has quit IRC18:59
eanderssonWhen hitting neutron-lbaas, what is the difference from hitting /v2.0/lbaas/pools/<uuid>/members vs. /v2.0/lbaas/pools/<uuid>/members.json ?19:10
*** amuller has quit IRC19:14
johnsomeandersson None19:17
eanderssonI figured as much19:17
eanderssonThanks19:18
johnsomThe old extension path was for when you had an xml option in API responses.19:18
johnsomLong since dropped in OpenStack to my knowledge19:18
*** nmagnezi has joined #openstack-lbaas19:19
johnsomeandersson Docs calls it out here: https://developer.openstack.org/api-ref/network/v2/#response-format19:19
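A quick way to confirm the two forms are equivalent, with placeholder endpoint, token and pool id (both requests should return the same status and body):

    import requests

    NEUTRON = "http://controller:9696"     # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<token>"}  # placeholder token
    POOL_ID = "<pool-uuid>"

    # The bare path and the legacy ".json" suffix name the same resource;
    # the suffix is left over from when XML responses were selectable.
    for suffix in ("", ".json"):
        url = "%s/v2.0/lbaas/pools/%s/members%s" % (NEUTRON, POOL_ID, suffix)
        r = requests.get(url, headers=HEADERS)
        print(suffix or "(none)", r.status_code)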
rm_worki thought we already had code for deleting zombies19:22
johnsomNo, just ignoring them19:23
johnsomThis takes it to the next level... lol19:23
johnsomSo I have visions of the health manager spinning a shotgun.  (movie reference)19:23
jitekaHello johnsom rm_work, could one of you tell me about this error:19:25
jitekaoctavia.network.drivers.neutron.allowed_address_pairs BadRequest: Unrecognized attribute(s) 'project_id'19:25
jitekais it due to a bad configuration ? or which parameter is the culprit here ?19:25
jitekait seems to happen when octavia calls the neutron API19:25
jitekaon a loadbalancer create19:25
johnsomjiteka Hmm, what version of Octavia and neutron do you have?19:26
jitekaoctavia queens and neutron mitaka19:27
johnsomThis is likely due to neutron not supporting the new name for tenant_id, which is project_id.  By new I mean Keystone changed it in Mitaka or Newton if not earlier19:27
johnsomjiteka Ah, ok. Yeah, that was added 8 months ago for queens. Kind of surprised neutron doesn't just ignore it.  That would be a compatibility bug. We need to ask neutron if it wants "project_id" or "tenant_id" I would guess.19:30
johnsomIt is here in the code: https://github.com/openstack/octavia/blame/master/octavia/network/drivers/neutron/allowed_address_pairs.py#L39219:31
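A sketch of the kind of compatibility shim the bug implies, picking the attribute name by what the deployed neutron accepts; the flag and field values are illustrative, not the actual Octavia fix:

    def build_port_body(project_id, neutron_supports_project_id=True):
        """Older neutron (e.g. mitaka) rejects unknown attributes, so send
        'tenant_id' there and 'project_id' on newer releases. How the flag
        is discovered (config or API version probing) is omitted here."""
        key = "project_id" if neutron_supports_project_id else "tenant_id"
        return {"port": {key: project_id,
                         "name": "octavia-lb-vip",  # illustrative values
                         "admin_state_up": True}}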
johnsomOpened a bug for it: https://storyboard.openstack.org/#!/story/200327819:33
jitekaawesome thanks johnsom19:34
rm_workxgerman_: i made some comments on that patch19:43
xgerman_k19:43
rm_workmostly just naming/doc/log nits, except for the session handling bit...19:44
rm_work@johnsom: is the review list up to date?19:48
johnsomrm_work reasonably yes19:49
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed Aug  1 20:00:11 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
openstackThe meeting name has been set to 'octavia'20:00
johnsomHi folks!20:00
cgoncalveso/20:00
johnsom#topic Announcements20:00
rm_worko/20:00
nmagneziO/20:01
johnsomWe are still tracking priority bugs for Rocky.  We are in feature freeze, but we can still be fixing bugs.....20:01
johnsom#link https://etherpad.openstack.org/p/octavia-priority-reviews20:01
xgerman_o/20:01
johnsomAs an FYI, Rocky RC1 is next week. This is where we will cut a stable branch for Rocky. We should strive to have as many bug fixes in as we can.20:02
johnsomIt would be super nice to  only do one RC and start work on Stein20:02
johnsomI do have some sad news for you however....20:02
johnsomSince no one ran against me, you are stuck with me as PTL for another release.....20:03
johnsom#link https://governance.openstack.org/election/20:03
* xgerman_ raises the “4 more years” sign20:03
cgoncalves4 more years \o/ !!20:03
* nmagnezi joins xgerman_ 20:03
johnsomYou all are trying to make me crazy aren't you....20:03
cgoncalvescrazier20:03
xgerman_just showing our appreciation…20:03
nmagnezijohnsom, you scared me for a sec20:04
nmagnezijohnsom, not cool :)20:04
johnsomTowards the end of the year it will be three years for me. I would really like to see a change in management around here, so....  Start planning your campaign.20:04
xgerman_this is not where we elect PTLs for a year?20:04
johnsomNot yet. Maybe the cycle after Stein will be longer than six months....20:05
johnsomActually Stein is going to be slightly longer than a normal release to sync back with the summits20:05
xgerman_:-)20:05
johnsom#link https://releases.openstack.org/stein/schedule.html20:05
johnsomIf you are interested in the Stein schedule20:06
johnsomAlso an FYI, all of the OpenStack IRC channels now require you to be signed in with a freenode account to join.20:06
johnsom#link http://lists.openstack.org/pipermail/openstack-dev/2018-August/132719.html20:07
johnsomThere have been bad IRC spam storms recently. I blocked our channel yesterday, but infra has done the rest today.20:07
cgoncalvesI see the longer Stein release as a good thing this time around20:07
johnsomIt doesn't mean we can procrastinate though.... grin20:08
johnsomI think my early Stein goal is going to be implement flavors20:08
johnsomThat is all I have for announcements, anything I missed?20:09
johnsom#topic Brief progress reports / bugs needing review20:09
johnsomI have been pretty distracted with internal stuffs over the week, but most of that is clear now (some docs to create which I hope to upstream).20:10
johnsomI have also been focused on the UDP patch and helping there.20:10
xgerman_did two bug fixes: one when nova doesn’t release the port for failover + one for the zombie amps20:10
johnsomYeah, the nova thing was interesting. Someone turned off a compute host for eight hours.  Nova just sits on the instance delete evidently and doesn't do it, nor release the attached ports.20:12
johnsomIf someone has a multi-node lab where they can power off compute hosts, that patch could use some testing assistance.20:12
xgerman_+1020:12
xgerman_my multinode lab has customers — so I can’t chaos monkey20:13
cgoncalvesnothing special from my side: some octavia and neutron-lbaas backporting, devstack plugin fixing and CI jobs changes + housekeeping, then some tripleo-octavia bits20:13
rm_workinteresting, we're ... having that happen here, as we're patching servers on a rolling thing, and some hosts end up down for a while sometimes <_<20:13
johnsomAny other updates?  nmagnezi cgoncalves ?20:13
cgoncalvesxgerman_, could your patch (which I haven't looked yet) improve scenarios like https://bugzilla.redhat.com/show_bug.cgi?id=1609064 ? it sounds like it20:14
openstackbugzilla.redhat.com bug 1609064 in openstack-octavia "Rebooting the cluster causes the loadbalancers are not working anymore" [High,New] - Assigned to amuller20:14
nmagneziOn my end: have been looking deeply into active standby. Will report bunch of stories (and submit patches) soon20:14
nmagneziSome if the issues where already known ; some look new (at least to me)20:14
nmagneziBut nothing drastic20:14
johnsomrm_work the neat thing we saw once, but couldn't confirm was nova status said "DELETED" but there is a second status in the EXT that said "deleting"20:14
rm_workO_o20:15
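A sketch of checking both fields before trusting a delete, using the real "OS-EXT-STS:task_state" extension attribute (the helper itself is hypothetical):

    from novaclient import exceptions as nova_exc

    def really_gone(nova, server_id):
        """True only when the server is missing entirely, or reports
        DELETED with no 'deleting' task still pending."""
        try:
            server = nova.servers.get(server_id)
        except nova_exc.NotFound:
            return True
        task_state = getattr(server, "OS-EXT-STS:task_state", None)
        return server.status == "DELETED" and task_state != "deleting"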
nmagnezijohnsom, I actually have a question related to active standby, but that can wait for open discussion20:15
rm_workwell ... we wouldn't get bugs like cgoncalves linked, as our ACTIVE_STANDBY amps are split across AZs20:15
rm_workwith AZ Anti-affinity ;P20:15
rm_workwhich I still wish we could merge as an experimental feature, as i have seen at least two other operators HERE that use similar code20:16
xgerman_Stein…20:16
johnsomLooks like it went to failover those and there were no compute hosts left: Failed to build compute instance due to: {u'message': u'No valid host was found. There are not enough hosts available.'20:16
johnsomYeah, so too short of a timeout before we start failing over or too small of a cloud?20:17
cgoncalvesjohnsom, right. and I think after that the LBs/amps couldn't failover manually because they were in ERROR. I need to look deeper and need-info the reporter. anyway20:17
johnsomhmmm.  Ok, thanks for the updates20:18
johnsomOur main event today:20:18
johnsom#topic Make the FFE call on UDP support20:18
johnsomHmm, wonder if meeting bot is broken20:19
xgerman_mmh20:19
johnsomWell, we will see at the end20:19
cgoncalvesI *swear* I've been wanting to test this :( I even restacked this afternoon with latest patch sets20:19
johnsomCurrent status from my perspective:20:19
johnsom1. the client patch is merged and was in Rocky.  This is good and it seems to work great for me.20:19
johnsom2. Two out of the three patches I have +2'd as I think they are fine.20:20
johnsom3. I have started some stories for issues I see, but I don't consider show stoppers: https://storyboard.openstack.org/#!/story/list?status=active&tags=UDP20:20
johnsom4. I have successfully build working UDP LBs with this code.20:20
johnsom5. The gates show it doesn't break existing stuff. (the one gate failure today was a "connection reset" while devstack was downloading a package)20:21
rm_workYeah I think this also falls into "merge, fix bugs as we find them" territory20:21
rm_workas with any big feature20:21
rm_workso long as it doesn't interfere with existing code paths (which i believe it does not)20:22
johnsomYeah, I'm leaning that way too.  I need to take another pass across the middle patch to see if anything recent jumps out at me, but I expect we can turn and burn on that if needed.20:22
xgerman_I am not entirely in love with the additional code path for UDP LB health20:22
cgoncalveswould it make sense to somehow flag it as experimental? there's not a single tempest test for it IIRC20:23
xgerman_but we can streamline that later20:23
johnsomThat is in the middle patch I haven't looked at for a bit20:23
johnsomcgoncalves we have shipped stuff in worse shape tempest wise... sigh20:23
xgerman_yeah, my other beef is with having a special UDP listener on the amphora REST API…20:24
johnsomxgerman_ What do you mean? It's just another protocol on the listener.....20:24
johnsomOh, amphora-agent API?20:25
xgerman_yep: https://review.openstack.org/#/c/529651/57/octavia/amphorae/backends/agent/api_server/server.py20:26
johnsomcgoncalves I can probably whip up some tempest tests for this before Rocky ships if you are that concerned.  We will need them20:26
johnsomWill be a heck of a lot easier than the dump migration tool tests20:26
xgerman_wouldn’t bet on it ;-)20:26
johnsomWell, I have been doing manual testing on this a lot so have a pretty good idea how I would do it20:27
cgoncalvesjohnsom, I'd prefer having at least a basic udp test but I won't ask you for that. too much already on your plate20:27
johnsomAny more discussion or should we vote?20:29
xgerman_well, how do others feel about the architecture ?20:30
xgerman_or, let’s vote ;-)20:30
johnsom#startvote Should we merge-and-fix the UDP patches? Yes, No20:30
openstackBegin voting on: Should we merge-and-fix the UDP patches? Valid vote options are Yes, No.20:30
openstackVote using '#vote OPTION'. Only your last vote counts.20:30
xgerman_#vote abstain20:31
openstackxgerman_: abstain is not a valid option. Valid options are Yes, No.20:31
johnsomNo maybe options for you wimps....  Grin20:31
cgoncalves#vote yes20:31
cgoncalves(do I get to vote?!)20:31
johnsom#vote yes20:31
johnsomYes, everyone gets a vote20:31
nmagneziI was not involved in this, but johnsom's reasoning makes sense to me20:32
nmagnezi#vote yes20:32
johnsomxgerman_ rm_work Have a vote?  Anyone else lurking?20:32
rm_workah20:32
xgerman_I thought sitting it out would be like abstain ;-)20:33
rm_work#vote yes20:33
* johnsom needs a buzzer for "abstain" votes20:34
rm_workthough I should at least try to get through the patches20:34
rm_workto make sure there's nothing that'd be hard to fix later20:34
johnsomYeah, I think 1 and 3 are good. I would like some time on 2 today, so maybe push to merge later today or early tomorrow20:34
johnsomGoing once....20:35
johnsomGoing twice.....20:35
johnsom#endvote20:35
openstackVoted on "Should we merge-and-fix the UDP patches?" Results are20:35
openstackYes (4): rm_work, nmagnezi, cgoncalves, johnsom20:35
johnsomSold, you are now the proud owners of a UDP protocol load balancer20:35
xgerman_dougwig: will be proud ;-)20:36
johnsomSo, cores, if you could give your approve votes on 1 and 3.  Give us some time on 2. I will ping in the channel if it's ready for the final review pass20:37
cgoncalveswe now *really* need to fix it for centos amps. I just tried creating a LB and it failed20:37
xgerman_what’s the patch I have to pull for a complete install? 1 or 3?20:37
johnsomAh bummer. cgoncalves can you help with that or too busy?20:37
cgoncalvesjohnsom, I will prioritize that for tomorrow20:38
johnsomxgerman_ 3 or https://review.openstack.org/53939120:38
xgerman_k20:38
johnsomI also added a follow up patch with API-ref and release notes and some minor cleanup20:38
johnsomhttps://review.openstack.org/58769020:38
johnsomWhich is also at the end of the chain.20:38
xgerman_k20:39
johnsomcgoncalves If you have changes, can you create a patch at the end of the chain? That way we can still make progress on review/merge but get it fixed20:39
cgoncalvesjohnsom, sure20:39
dougwigUDP, damn straight.20:39
johnsomIf I get done early with my review on 2 I might poke at centos, but no guarantees I will get there.20:39
johnsomdougwig o/ Sorry you missed the vote.  Now you can load balance your DNS servers...   grin20:40
xgerman_:-)20:40
dougwignext up, rewrite in ruby20:40
johnsomYou had better sign up for PTL if you want to do that....20:41
johnsomgrin20:41
johnsom#topic Open Discussion20:41
johnsomnmagnezi I think you had an act/stdby question20:41
nmagnezijohnsom, yup :)20:41
nmagnezijohnsom, so did a basic test of spawning a highly available load balancer , and captured the traffic on both amps20:42
nmagneziSpecifically, on the namespace that we run there20:42
nmagneziMASTER: https://www.cloudshark.org/captures/1d0a1028c40220:42
nmagneziBACKUP: https://www.cloudshark.org/captures/8a4ee5b38e1820:42
nmagneziFirst question, mm.. I was not expecting to see tenant traffic in the backup one20:43
nmagneziEven if I manually "ifdown" the VIP interface (which does not send GARPs - I verified that) -> I still see that traffic20:43
nmagneziAnd that happens specifically when I send traffic towards the VIP20:44
nmagnezibtw in this example -> 10.0.0.1 is the qrouter NIC and 10.0.0.3 is the VIP20:44
johnsomIt's likely the promiscuous capture on the port.20:45
nmagnezijohnsom, you mean that it is because I use 'tcpdump -i any' in the namespace?20:46
johnsomOh, I know what it is. It's the health monitor set on the LB. It's outgoing tests for the member I bet20:46
xgerman_+120:46
nmagneziIIRC I didn't set any health monitor20:46
nmagneziLemme double check that real quick20:46
xgerman_mmh20:46
johnsomYour member is located outside the VIP subnet (you didn't specify a subnet at member create)20:47
johnsomBecause on that backup, those HTTP packets are all outbound from the VIP20:47
nmagneziChecked.. no health monitor set20:48
nmagneziThe members reside on the same subnet as the VIP20:48
nmagneziAll in the private-subnet that is created by default in devstack20:49
johnsomHmm, they do look a bit odd. Yeah, my bet is the promiscuous setting on the port is picking up the response traffic from the master, let's look at the MAC addresses.20:49
johnsomThat is probably why only half the conversation is seen on the backup.20:50
nmagneziYeah that looked very strange.. no SYN packets20:50
johnsomIf you check, the backup's haproxy counters will not be going up20:51
nmagneziWill check that20:51
nmagneziBut honestly I was not expecting to see that traffic on the backup amp20:52
johnsomIt looks right to me in general. Yeah, generally I wouldn't either, but I'm just guessing it's how the network is setup underneath and the point of capture.20:53
nmagneziI still don't get why it's there actually. I know the two amps communicate for other stuff (e.g. keepalived)20:53
nmagneziokay20:53
johnsomThe key to helping understand that is to look at the MAC addresses of your ports and the packets.  The 0.3 packets will likely have the MAC of the base port on the master20:53
johnsomIf you switch it over it should be the base port of the backup in those packets.20:54
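One way to check that: "tcpdump -e" prints the link-level header, so the source MAC of the 10.0.0.3 packets can be compared against each amphora's base port MAC (namespace and interface names are assumptions):

    import subprocess

    # Capture inside the amphora namespace with Ethernet headers shown.
    subprocess.run(
        ["ip", "netns", "exec", "amphora-haproxy",
         "tcpdump", "-e", "-n", "-i", "eth1", "host", "10.0.0.3"])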
johnsomDoes that help?20:55
nmagneziI was inspecting the qrouter arp table while doing those tests. It remained consistent with the MASTER MAC address20:55
nmagneziIt does, thank you20:55
nmagneziWill keep looking into this20:55
johnsomOk, cool.20:55
johnsomAny other items today?20:55
nmagneziIf there's time I have another question20:56
nmagneziBut let other folks talk first20:56
nmagnezi:)20:56
johnsomSure, 5 minutes20:56
nmagneziGoing once..20:56
johnsomJust take it20:56
nmagneziha20:56
nmagneziok20:56
nmagneziSo if we look at the capture from master20:56
nmagneziMASTER: https://www.cloudshark.org/captures/1d0a1028c40220:57
nmagneziSome connections end with RST, ACK and RST20:57
nmagneziSome not20:57
nmagneziIs that an HAPROXY thing to close connections with pool members?20:57
nmagneziIt does not happen with all the sessions20:58
johnsomIf it is a flow with the pool member, yes, that is the connection between HAProxy and the member server.20:58
nmagneziOkay, no more questions here20:59
johnsomIf the client on the front end closes the connection to the LB, haproxy will RST the backend.20:59
nmagneziThank you!20:59
johnsomLet me see if I can find that part of the docs.20:59
johnsomI will send a link after the meeting21:00
nmagneziNp21:00
johnsomThanks folks!21:00
johnsom#endmeeting21:00
openstackMeeting ended Wed Aug  1 21:00:39 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-08-01-20.00.html21:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-08-01-20.00.txt21:00
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-08-01-20.00.log.html21:00
nmagnezio/21:00
johnsomnmagnezi Looking at the HAProxy http log entries might help you see why or who did the RST.21:02
johnsomThe magic decoder ring is here: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#8.521:02
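For reference, a partial decoder for the first character of the termination-state field that section covers; only the common codes are listed, and the helper is illustrative:

    # First character of haproxy's termination-state field (docs section 8.5).
    TERM_CAUSE = {
        "-": "normal completion",
        "C": "client aborted the connection",
        "S": "server aborted or refused the connection",
        "P": "proxy aborted the session",
        "c": "client-side timeout expired",
        "s": "server-side timeout expired",
    }

    def decode_term_state(flags):
        """e.g. decode_term_state('CD') -> 'client aborted the connection'"""
        return TERM_CAUSE.get(flags[0], "unknown, see section 8.5")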
johnsomThere is also a case in the older versions of haproxy where a "reload" might trigger some RST handshakes.21:03
johnsomThat was fixed in 1.821:03
nmagnezijohnsom, hint taken ;)21:04
johnsomnmagnezi Ha, no hint implied21:04
johnsomThat is Adam's mission21:05
nmagneziI hope we can share good news about this in the near future21:06
nmagneziWe are trying21:06
nmagneziFor some time now..21:06
openstackgerritGerman Eichberger proposed openstack/octavia master: Delete zombie amphora when detected  https://review.openstack.org/58750521:25
*** nmagnezi is now known as nmagnezi_21:26
*** nmagnezi_ has quit IRC21:46
*** yamamoto has joined #openstack-lbaas21:57
rm_workah nmagnezi left but ... centos amps already use 1.8 default :P so the mission is complete ^_^22:17
rm_workoh also22:17
rm_workapproved for PTG travel through the foundation22:17
rm_workso I'm good :)22:18
rm_workflights booked, hotel should be covered too22:18
johnsomWahoo! Excellent.  Glad you can join us!22:19
*** hongbin has quit IRC22:20
johnsomrm_work You might find this interesting: https://storyboard.openstack.org/#!/story/200319722:21
johnsombbq client is not honoring the endpoint_type22:21
rm_workhmm22:21
rm_worki wonder if that's my fault22:22
rm_worki wrote a lot of that client22:22
rm_workthough i think it was changed a bunch since then too22:22
johnsomlol, well, there might be a reason I thought you might be interested22:22
rm_workOH yeah so, the GET calls thing is by design22:22
rm_workbecause technically barbican can do federation stuff22:22
rm_workso we can't ever assume anything22:22
rm_workthe user passes a ref, and *that ref is what we try to get*22:22
rm_workperiod22:23
rm_workit could be in another cloud even22:23
rm_workso long as federated identity works22:23
rm_workwhy/how is this causing us a problem?22:23
johnsomSo, if a deployment has internal endpoints and certs for service->service calls that differ from the public certs how do you pull anything out of it?22:23
rm_work:/22:24
rm_workwe could rewrite those intelligently in our own code22:24
rm_workbut it's really "not a bug"22:24
*** rcernin has joined #openstack-lbaas22:24
johnsomRight. It's the roach motel, you can put stuff in, but never get it out22:24
rm_workit HAS to work this way22:24
rm_workfor federation to work22:24
rm_workbarbican itself can't be in the business of rewriting secret refs because there's zero guarantee if it'll do the right thing22:25
johnsomRight, agree, but I am attempting to connect to the Barbican API to hand it an HREF to fetch. Right now the client fails to connect to the bbq API because it doesn't honor which endpoint to use out of keystone.22:26
johnsomIt ALWAYS goes to public22:26
johnsombasically22:26
johnsomFor gets.  For POST it does actually use the endpoint catalog22:27
rm_workit goes to the endpoint that the secret ref indicates22:27
rm_workit doesn't "pick public"22:27
rm_workif the ref was "http://some-other-site.com/secret/1234" it would go there22:28
rm_workregardless of what public/internal/admin whatever is set in keystone22:28
rm_workbecause that is where the ref says the secret lives22:28
johnsomRight, that is an OpenStack fail IMO22:28
rm_workno22:28
rm_workthis is how federation works22:28
johnsomAgree to strongly disagree. It may be how bbq does "federation" but ....22:29
johnsomYou understand the problem right? Octavia can't connect to public which is what the href is stamped with. It even fails host validation22:31
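A sketch of the "rewrite it in our own code" option rm_work mentioned above: keep the secret path from the user's ref but swap in the internal endpoint from the catalog. A hypothetical opt-in workaround, not barbicanclient behavior, and it deliberately gives up federated refs:

    from urllib.parse import urlsplit, urlunsplit

    def rewrite_ref(secret_ref, internal_endpoint):
        """Replace the ref's scheme/host with the internal endpoint's."""
        ref = urlsplit(secret_ref)
        ep = urlsplit(internal_endpoint)
        return urlunsplit((ep.scheme, ep.netloc, ref.path, "", ""))

    # rewrite_ref("https://public.example/v1/secrets/1234",
    #             "https://barbican.internal:9311")
    # -> "https://barbican.internal:9311/v1/secrets/1234"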
johnsomIt looks like glance just doesn't use the bbq client. I wonder if this is why22:34
bbbbzhao_Thanks Team, thanks johnsom, it's 6:33am here. I will prepare to go to the office and pick up the rest..22:34
rm_workit might be worth having this discussion in the barbican channel so others can give feedback too22:35
rm_workjohnsom: how would you propose you allow users to use a federated secret then?22:36
rm_workif the client force-directs them to the cloud's native instance22:36
johnsomYeah, I tried, no one has answered since yesterday22:36
rm_workprolly gotta ping some folks22:36
johnsomI am fine moving there if you want. Not sure if they stopped the spam or not22:37
rm_workeverywhere did22:40
rm_workinfra expanded it to global for all openstack channels22:41
*** abaindur has joined #openstack-lbaas22:47

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!