Wednesday, 2018-05-02

*** yamamoto has quit IRC00:00
johnsomBlah, so close. It looks like our Listener data model doesn't handle the sni_containers well00:02
johnsomIt puts them in as dicts where that method expects objects00:03
rm_workhmm00:06
rm_workjohnsom: wut: http://logs.openstack.org/11/492311/25/check/octavia-v2-dsvm-scenario/3bfafab/testr_results.html.gz00:08
rm_workdid I just get unlucky and catch some neutron change right today00:08
rm_workT_T00:08
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for listeners  https://review.openstack.org/49231100:08
rm_workooooh no, i actually "fixed" that function00:10
rm_workwhich breaks it for router-interface I guess00:10
johnsom"fixed"00:10
*** Swami has quit IRC00:11
rm_workyeahhhhhh so, yes, fixed00:12
rm_workhttps://review.openstack.org/#/c/492311/26/octavia_tempest_plugin/tests/waiters.py00:12
rm_worksee that ^^00:12
rm_workthat was totally not right >_>00:12
johnsomUnreadableCert: Could not read X509 from PEM00:12
johnsomUgh, how did this functional test ever run.....00:12
rm_workbut now it actually gets to where it calls show_func on the router00:12
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for listeners  https://review.openstack.org/49231100:16
rm_workah, got it00:16
rm_workbad call to cleanup00:16
johnsomThe answer is before it just DB jammed and ran for the hills00:16
rm_workjohnsom: lolwut00:16
rm_worki mean yes00:17
rm_workthat whole thing is just a dumpster fire00:17
rm_workit needs to be ripped out and rewritten00:17
johnsomhttps://github.com/openstack/octavia/blob/master/octavia/tests/functional/api/v2/test_load_balancer.py#L222500:17
rm_workbrb 15m00:28
rm_workhoping the scenarios will finish this time00:50
rm_workand then i'll post one more version that has the pep8 fix00:50
*** yamamoto has joined #openstack-lbaas00:57
*** yamamoto has quit IRC01:03
*** bzhao__ has joined #openstack-lbaas01:07
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for listeners  https://review.openstack.org/49231101:07
rm_workjohnsom: yep, listener tests are ready! pep8 fixed just pushed, but everything else passes. review when able, i'll start on pools tomorrow01:07
rm_workgonna go get some food while it's nice out :)01:07
johnsomNice, yeah, I wanted to stop an hour ago01:08
rm_workmainly looking for broad stroke stuff asap like "please combine these tests" or "please split this test" or whatever01:08
rm_workanywho, ttyl01:08
openstackgerritMichael Johnson proposed openstack/octavia master: Implement provider drivers  https://review.openstack.org/56379501:21
johnsom^^^ LB API with more driver and no handler...01:22
johnsomTomorrow listeners01:23
*** yamamoto has joined #openstack-lbaas01:59
*** yamamoto has quit IRC02:03
*** yamamoto has joined #openstack-lbaas02:05
*** yamamoto has quit IRC02:09
*** yamamoto has joined #openstack-lbaas02:20
*** sapd has quit IRC03:08
*** fnaval has joined #openstack-lbaas03:38
*** gans has joined #openstack-lbaas04:15
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools  https://review.openstack.org/56564004:19
*** gans has quit IRC04:22
*** ianychoi has joined #openstack-lbaas04:34
*** links has joined #openstack-lbaas04:50
rm_workjohnsom: ummm i may be crazy, but... i think i can ... make like 90% of the test code disappear with some fancy DRY work <_<05:04
rm_worki might do that right after pools05:04
johnsomHa, just don’t make it overly obscure such that no one can learn and expand the tests05:05
johnsomThe whole point of LB was to make an example / learning tool05:05
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools  https://review.openstack.org/56564005:08
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for listeners  https://review.openstack.org/49231105:14
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools  https://review.openstack.org/56564005:14
rm_workjohnsom: lolk05:14
rm_workjohnsom: but it would make ADDING tests for new objects like05:14
rm_worka couple of lines <_<05:14
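The DRY idea rm_work floats above — collapsing the repeated per-object test code so that covering a new API object is "a couple of lines" — could look roughly like this. This is a hypothetical sketch, not the actual octavia-tempest-plugin code; `API_OBJECTS`, `make_crud_test`, and `self.client` are all illustrative names:

```python
# Hypothetical sketch: a table of API object types drives generated test
# methods, so adding coverage for a new object is one more table entry.

API_OBJECTS = {
    "listener": {"protocol": "HTTP", "protocol_port": 80},
    "pool": {"protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN"},
}


def make_crud_test(obj_type, create_kwargs):
    def test(self):
        # self.client is an assumed generic API client wrapper.
        obj = self.client.create(obj_type, **create_kwargs)
        self.assertEqual("ACTIVE", obj["provisioning_status"])
    test.__name__ = "test_%s_crud" % obj_type
    return test


class GeneratedAPITests(object):
    pass


# Attach one generated test method per object type in the table.
for _type, _kwargs in API_OBJECTS.items():
    setattr(GeneratedAPITests, "test_%s_crud" % _type,
            make_crud_test(_type, _kwargs))
```

The trade-off johnsom raises below is real: generated tests are compact but harder to read than explicit ones.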
rm_workwow actually pool tests looking good already05:27
rm_workso maybe tomorrow I can get one or two more hammered out05:28
rm_workreally, I just want to get to a *working full-traffic scenario test*05:29
rm_workbefore EOW05:29
rm_workbecause one of my sprint tasks is "get scenario tests running internally" :P05:30
*** yboaron_ has joined #openstack-lbaas05:30
*** fnaval has quit IRC05:53
*** astafeye__ has joined #openstack-lbaas05:58
*** sapd has joined #openstack-lbaas06:16
*** yboaron has joined #openstack-lbaas06:22
*** yboaron_ has quit IRC06:25
*** annp has joined #openstack-lbaas06:48
*** rcernin_ has joined #openstack-lbaas06:50
*** astafeye__ has quit IRC06:51
*** rcernin has quit IRC06:51
*** rcernin_ has quit IRC07:05
*** tesseract has joined #openstack-lbaas07:25
openstackgerritAdit Sarfaty proposed openstack/neutron-lbaas master: Fix EntityInUse exception message  https://review.openstack.org/56566107:37
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for listeners  https://review.openstack.org/49231107:56
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools  https://review.openstack.org/56564007:56
rm_workfound a possible test failure condition, pretty lulzy, but fixed07:57
rm_workhttp://logs.openstack.org/40/565640/3/check/octavia-v2-dsvm-noop-py35-api/1db1d2e/job-output.txt.gz#_2018-05-02_05_44_27_24893707:57
rm_workin noop mode, the stuff creates so quickly that it can have a duplicate create_time for two objects, making the "default sort order" (create_time) ambiguous and making the list test sporadically fail07:57
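The failure mode rm_work describes can be made concrete: two records sharing a create_time make a sort on create_time alone ambiguous, and the usual fix is a unique tie-breaker key. A minimal illustration (data values are made up):

```python
# Two objects created within the same second under no-op drivers: sorting
# on created_at alone leaves their relative order unspecified, so a list
# test comparing against a fixed expected order fails sporadically.
objects = [
    {"id": "b", "created_at": "2018-05-02T05:44:27"},
    {"id": "a", "created_at": "2018-05-02T05:44:27"},  # duplicate timestamp
]

# Adding a unique secondary key (the id) makes the ordering deterministic
# no matter what order the rows came back in.
deterministic = sorted(objects, key=lambda o: (o["created_at"], o["id"]))
order = [o["id"] for o in deterministic]
```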
*** pcaruana has joined #openstack-lbaas08:08
*** astafeye__ has joined #openstack-lbaas08:08
*** salmankhan has joined #openstack-lbaas08:24
cgoncalvesno-op, no-go :)08:28
*** vegarl has quit IRC08:43
*** salmankhan has quit IRC08:43
*** vegarl has joined #openstack-lbaas08:44
*** salmankhan has joined #openstack-lbaas08:55
*** devfaz has quit IRC09:54
*** devfaz has joined #openstack-lbaas09:55
*** annp has quit IRC10:37
*** atoth has quit IRC10:40
*** fnaval has joined #openstack-lbaas11:07
*** fnaval has quit IRC11:12
*** atoth has joined #openstack-lbaas11:33
*** fnaval has joined #openstack-lbaas12:07
*** fnaval has quit IRC12:12
*** yamamoto has quit IRC12:25
*** yamamoto has joined #openstack-lbaas12:25
*** pcaruana has quit IRC12:30
*** nebu_ has joined #openstack-lbaas12:32
*** samccann has joined #openstack-lbaas12:56
*** pchavva has joined #openstack-lbaas13:07
*** fnaval has joined #openstack-lbaas13:07
*** pcaruana has joined #openstack-lbaas13:08
*** nebu_ has quit IRC13:12
*** fnaval has quit IRC13:12
*** fnaval has joined #openstack-lbaas13:49
*** yamamoto has quit IRC13:57
*** yamamoto has joined #openstack-lbaas14:11
*** openstackgerrit has quit IRC14:34
*** links has quit IRC14:34
*** astafeye__ has quit IRC14:49
cgoncalveshas anyone ever run octavia dashboard py{27,35} tests locally?14:58
xgerman_no14:58
cgoncalvespy27 and py35 jobs seem to be skipping tests14:58
cgoncalveshttp://logs.openstack.org/11/543211/5/check/openstack-tox-py35/71736e8/job-output.txt.gz#_2018-04-23_11_56_40_64143314:58
xgerman_<troll>skipped tests always pass</troll>14:59
xgerman_ok, I would run the relevant tox14:59
xgerman_johnsom:  runs tests locally I think15:00
cgoncalveswell, actually we don't have tests written lol15:00
xgerman_ha, I guess we have our work cut out then. Probably ask around how other projects do UI tests — in the past there was some selenium stuff but that has been discontinued15:01
cgoncalvesit caught my attention because one of our downstream CI jobs was failing15:05
cgoncalvesdayou, are you around? :)15:05
johnsomcgoncalves I run the dashboard tests local.15:09
johnsomWhat tests are you looking for?15:09
johnsomI know dayou just reworked a bunch of the unit tests15:09
johnsomBut we do have pretty good unit test coverage.15:09
johnsomThe integration tests however, never ran and horizon team gave up on them, so we removed the gates for those.15:10
cgoncalvesjohnsom, where are the tests? I only find placeholders15:10
johnsomThey are integrated into the code15:10
johnsomOne sec15:10
johnsomcgoncalves So this is the (unreviewed I might add) patch for the python stuff: https://review.openstack.org/#/c/550721/  But most of our code is javascript which has in tree tests.15:11
johnsomcgoncalves If you look in the repo: https://github.com/openstack/octavia-dashboard/tree/master/octavia_dashboard/static/dashboard/project/lbaasv2/loadbalancers15:12
johnsomYou will see two files for each javascript module: loadbalancers.module.js and loadbalancers.module.spec.js15:12
johnsomThe spec file is the unit test code15:12
johnsomThe nodejs-npm-run-test gate runs those tests15:13
cgoncalvesah! so dashboard tests follow a different path structure than conventional projects15:14
johnsomcgoncalves: http://logs.openstack.org/63/564963/1/check/nodejs-npm-run-test/e811cfb/job-output.txt.gz#_2018-04-28_02_44_34_25381415:14
johnsomYes, our dashboard is Angular and not django (python)15:14
johnsomSo currently there is 100% unit coverage on the javascript modules15:15
cgoncalvesjohnsom, great! follow-up question is: why do we have py27, py35 jobs enabled?15:15
johnsomThere is still some minor code in python to bridge us into horizon15:15
johnsomPlus it's politically the "right thing to do" for OpenStack projects....  grin15:17
cgoncalvesthose tests are eslint and karma in tox.ini?15:17
cgoncalvestrying to understand how to run them15:18
johnsomYes, I think so. You need the npm (nodejs) stuff installed for the javascript packages, etc.15:18
johnsomBut I have been successful running them local.15:19
*** yamamoto has quit IRC15:21
cgoncalves"karma: commands succeeded"15:21
cgoncalvesthanks!15:21
johnsomNp15:22
dayoucgoncalves, https://review.openstack.org/#/c/550721/15:23
cgoncalvesdayou, great work you've been doing on the dashboard! keep up15:28
johnsom+10015:31
dayou:-P15:38
*** tesseract has quit IRC16:18
*** yboaron has quit IRC16:19
*** atoth has quit IRC16:21
*** yamamoto has joined #openstack-lbaas16:21
*** atoth has joined #openstack-lbaas16:24
*** yamamoto has quit IRC16:27
*** yboaron has joined #openstack-lbaas16:34
*** salmankhan has quit IRC16:37
*** b_bezak has joined #openstack-lbaas17:07
b_bezakHi17:08
johnsomHello17:08
b_bezakdo you happen to know how to enable "reqadd X-Forwarded-Proto:\ https" in Octavia?17:08
*** Swami has joined #openstack-lbaas17:08
b_bezakI mean on listener level, it looks like only X-Forwarded-Port and X-Forwarded-For are supported17:09
johnsomb_bezak Yes, when you create or update a listener you specify the "insertion_headers" option: https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-listener and https://developer.openstack.org/api-ref/load-balancer/v2/index.html#header-insertions document these17:10
johnsomb_bezak Oh, I don't think X-Forwarded-Proto is supported currently17:10
johnsomIt would need to be added.  You can create a story for that here: https://storyboard.openstack.org/#!/dashboard/stories and put up a patch. It would likely not be hard to add.17:11
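For reference, a listener-create request body with header insertion looks roughly like the sketch below. The field names follow the API reference johnsom linked and should be verified there; the load balancer ID is a placeholder, and X-Forwarded-Proto is deliberately absent since it was not yet a supported key at the time of this discussion:

```python
import json

# Illustrative Octavia v2 listener-create body with header insertion.
# "LB_UUID" is a placeholder; check field names against the API reference.
body = {
    "listener": {
        "name": "tls-listener",
        "protocol": "TERMINATED_HTTPS",
        "protocol_port": 443,
        "loadbalancer_id": "LB_UUID",
        "insert_headers": {
            "X-Forwarded-For": "true",
            "X-Forwarded-Port": "true",
        },
    }
}

payload = json.dumps(body)
```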
b_bezakjohnsom: thank you. I'll look into it17:16
johnsomb_bezak Let us know if you run into questions, etc. we are here to help17:16
b_bezaksure17:19
*** b_bezak has quit IRC17:19
*** b_bezak has joined #openstack-lbaas17:20
*** yamamoto has joined #openstack-lbaas17:23
*** b_bezak has quit IRC17:24
*** yamamoto has quit IRC17:29
rm_workwhat does that do? I've never heard of X-Forwarded-Proto17:33
rm_worksince ... it would always have to be HTTP?17:34
rm_worksince you can't add headers to non-HTTP?17:34
johnsomrm_work It passes back to the backends whether the frontend connection was HTTP or HTTPS17:35
johnsomFor TLS termination17:35
rm_workah17:35
johnsomI have seen it used before. The use cases are... but hey17:36
*** salmankhan has joined #openstack-lbaas18:12
*** salmankhan has quit IRC18:16
*** yboaron has quit IRC18:19
*** yamamoto has joined #openstack-lbaas18:25
*** yamamoto has quit IRC18:31
nmagnezijohnsom, looks like there is no storyboard meeting this week19:04
johnsomHmm, ok19:04
*** yamamoto has joined #openstack-lbaas19:27
*** yamamoto has quit IRC19:33
-openstackstatus- NOTICE: The Gerrit service at review.openstack.org will be offline starting at 20:00 (in roughly 25 minutes) for a server move and operating system upgrade: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130118.html19:36
*** atoth has quit IRC19:48
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed May  2 20:00:03 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstackThe meeting name has been set to 'octavia'20:00
johnsomHi folks!20:00
cgoncalveshi20:00
xgerman_o/20:00
johnsom#topic Announcements20:01
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:01
johnsomThe only announcement I have this week is that we have a new TC elected:20:01
xgerman_+120:01
johnsom#link https://governance.openstack.org/election/results/rocky/tc.html20:01
johnsomOh, and there is now an Octavia ingress controller for Kubernetes20:02
johnsom#link https://github.com/kubernetes/cloud-provider-openstack/tree/master/pkg/ingress20:02
johnsomAny other announcements this week?20:02
johnsom#topic Brief progress reports / bugs needing review20:03
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:03
johnsomI have been busy working on the provider driver.  The Load Balancer part is now complete and up for review comments.20:03
johnsom#link https://review.openstack.org/#/c/563795/20:03
johnsomIt got a bit big due to single-call-create being part of load balancer.20:04
rm_worko/20:04
johnsomSo, I'm going to split it across a few patches (and update the commit to reflect that)20:04
-openstackstatus- NOTICE: The Gerrit service at review.openstack.org will be offline over the next 1-2 hours for a server move and operating system upgrade: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130118.html20:04
*** ChanServ changes topic to "The Gerrit service at review.openstack.org will be offline over the next 1-2 hours for a server move and operating system upgrade: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130118.html"20:04
nmagnezijohnsom, thank you for taking a lead on this. I will review it.20:05
johnsomHa, I guess there is that announcement as well20:05
rm_workI have been working on the octavia tempest plugin. Two patches ready for review (although I need to address johnsom's comments)20:05
johnsomI think the listener one will be a good example for what needs to happen with the rest of the API.  It's up next for me20:05
johnsom+1 on tempest plugin work20:05
johnsomAny updates on Rally or grenade tests?20:07
cgoncalvessorry, I still need to resume the grenade patch20:07
johnsomOk, NP.  Just curious for an update.20:08
nmagnezijohnsom, the rally scenario now works, i have some other internal fires to put out and then I'll iterate back to run it and report the numbers. it had a bug with the loadbalancers cleanup which is fixed now. so we are in good shape there overall.20:08
johnsomCool!20:08
johnsomAny other updates this week or should we move on to our next agenda topic?20:09
nmagneziyeah :) it took quite a few tries but it was worth the effort i think.20:09
johnsom#topic Discuss health monitors of type PING20:09
*** openstack changes topic to "Discuss health monitors of type PING (Meeting topic: Octavia)"20:09
johnsom#link https://review.openstack.org/#/c/528439/20:09
johnsomnmagnezi This is your topic.20:09
nmagneziopen it ^^ while gerrit still works :)20:10
rm_workPING is dumb and should be burned with fire20:10
nmagneziso, rm_work submitted a patch to allow operators to block it20:10
johnsomI can give a little background on why I added this feature.20:10
cgoncalvesrm_work: wait for it. I think you will like it ;)20:10
johnsom1. Most load balancers offer it.20:10
rm_workjohnsom: because you want users to suffer?20:10
nmagnezii commented that I understand rm_work's point, but I don't know if adding a config option is a good idea here20:10
nmagnezirm_work, lol20:11
rm_workwe're handing them a gun and pointing it at their foot for them20:11
nmagnezianyhow, the discussion I think we should have is whether or not we want to deprecate and later remove this option from our API20:11
rm_workcgoncalves: you're right :)20:11
johnsom2. I was doing some API load testing with members and wanted them online, but not getting HTTP hits to skew metrics.20:11
rm_workyou could also just ... not use HMs in a load test... they'll also be "online"20:12
rm_workor use an alternate port20:13
johnsomWell, they would be "no monitor"20:13
rm_workdoes TCP Connect actually count for stats?20:13
johnsomIt was basically, ping localhost so they all go online no matter what.20:13
johnsomSo, I'm just saying there was a reason I went to the trouble to fix that (beyond the old broken docs that listed it)20:14
rm_workwe could rename it to "DO_NOT_USE_PING"20:15
nmagnezijohnsom, your opinion is that we should keep ping hm as is?20:15
johnsomNow, I fully understand that joe-I-don't-know-jack-but-am-a-load-balancer-expert will use PING for all of the wrong reasons....  I have seen it with my own eyes.20:15
rm_workin *most openstack clouds* the default SG setup is to block ICMP20:16
rm_workthough I guess I can't back that up with actual survey data20:16
johnsomNice, so they instantly fail and they don't get too burned by being dumb20:16
johnsomgrin20:16
rm_workso people are like "all my stuff is down, your thing is broken"20:16
xgerman_I dislike most openstack clouds — there are some wacky clouds out there20:17
rm_worklol20:17
johnsomMy stance is, most, if not all of our load balancers support it. There was at least one use case for adding it. It's there and works (except on centos amps). Do we really need to remove it?20:18
nmagnezijohnsom, in your eyes, what are the right reasons for using ping hm?20:18
* xgerman_ read about people using k8s to loadbalance since they don’t want to upgrade from Mitaka20:18
johnsomTesting purposes only...  Ha20:18
nmagnezilol20:18
nmagnezii'm not asking if we should or shouldn't remove this because of the centos amps. I'm asking this because it seems that everyone agrees with rm_work's gentle statements about ping :)20:19
* rm_work is so gentle and PC20:19
rm_worktremendously gentle, everyone says so. anyone who doesn't is fake news20:20
johnsom#link http://andrewkandels.com/easy-icmp-health-checking-for-front-end-load-balanced-web-servers20:20
johnsomlol20:20
cgoncalves+1. unless there's a compelling use case for keeping ping, I'm for removing it20:20
rm_workwe SHOULD probably check with some vendors20:20
rm_workI wish we had more participation from them20:20
nmagnezithe point i'm trying to make here is that if ping is something we would want to keep, i don't think we need a config option to block it.20:20
xgerman_+120:21
rm_workI don't even see most of our vendor contacts in-channel anymore20:21
nmagneziif we agree that it should be removed, we don't need that config option as well :)20:21
xgerman_that’s why we are doing providers20:21
rm_worknmagnezi: yeah, this was supposed to be a compromise20:21
rm_workyou could argue that all compromise is bad and we should just pick a direction20:21
xgerman_anyhow, I think ping has value — not everybody runs HTTP or TCP20:21
xgerman_we have UDP coming up20:22
johnsomYeah, from what I see, all of our vendors support ICMP20:22
rm_workalright20:22
rm_workwell20:22
xgerman_just trying to think through a UDP healthmonitor20:22
johnsomThis is true, UDP is harder to check20:22
rm_workyes20:22
johnsomMaybe someone will want us to load balance ICMP....20:22
johnsomgrin20:22
rm_workbut that's why there's TCP_CONNECT and alternate ports20:23
nmagneziHAHA20:23
rm_workany reason a UDP member wouldn't allow a TCP_CONNECT HM with the monitor_port?20:23
johnsomYes, if they don't have any TCP code....20:23
nmagnezirm_work, that might depend on the app you run on the members20:24
rm_worki mean20:24
johnsomYeah, so F5, A10, radware, and netscaler all have ICMP health check options20:24
rm_workyou would run another app20:24
rm_workthat is a health check for the UDP app20:25
rm_workto make sure it is up, etc20:25
rm_workso combo of connectable + 200OK response == good20:25
rm_workI was pretty sure that was the standard for healthchecking stuff and why we added the monitor_port thing to begin with20:25
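The pattern rm_work is describing — a small TCP companion answering on the member's monitor_port for an app that is otherwise UDP-only — can be sketched as below. This is a hypothetical illustration, not Octavia or amphora code; `app_is_healthy` stands in for whatever real probe of the UDP application an operator would write:

```python
import socket
import threading

# Hypothetical sidecar: a UDP app can't answer a TCP_CONNECT health
# monitor itself, so a tiny TCP listener on the monitor_port accepts a
# connection and reports the app's health on its behalf.

def app_is_healthy():
    # Stand-in for a real probe of the UDP application.
    return True

def run_sidecar(sock):
    conn, _ = sock.accept()
    conn.sendall(b"OK\n" if app_is_healthy() else b"FAIL\n")
    conn.close()
    sock.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # ephemeral port stands in for monitor_port
srv.listen(1)
monitor_port = srv.getsockname()[1]
t = threading.Thread(target=run_sidecar, args=(srv,))
t.start()

# What a TCP_CONNECT monitor effectively does: connect and read.
probe = socket.create_connection(("127.0.0.1", monitor_port), timeout=2)
response = probe.recv(16)
probe.close()
t.join()
```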
johnsomWell, some of this UDP stuff is for very dumb/simple devices. That was what the use case discussion was at the PTG around the need for UDP20:25
nmagnezirm_work, sounds a little bit redundant. if you want to check the health of your ACTUAL app, why have another one just to answer the lb?20:26
xgerman_probably not too dumb for ICMP20:26
nmagnezi(but you could argue the same for ICMP, but at least it checks networking.. ha)20:26
johnsomSo, if the concern is for users mis-using ICMP, should we maybe just add a warning print to the client and dashboard?20:26
nmagnezijohnsom, +!20:26
nmagnezijohnsom, +120:26
xgerman_+120:26
rm_workk T_T20:26
rm_workI am ok with this20:27
nmagnezijohnsom, i would add another warning to the logs as well20:27
cgoncalves+1, plus warning msg in server side?20:27
rm_workeh, logs just go to ops, and they can see it in the DB20:27
rm_workwhich is easier to check20:27
rm_workand they already know it's dumb20:27
rm_worki wouldn't bother with the server side20:27
johnsomEh, not sure operators would care that much what health monitors the users are setting.  Does that cross the "INFO" log level?????20:27
rm_workits users we need to reach20:27
nmagnezijohnsom, a user being dump sounds like a warning to me :)20:28
nmagnezidumb*20:28
johnsomYeah, I just want us to draw a balance between filling up log files with noise and having actionable info in there.20:29
*** yamamoto has joined #openstack-lbaas20:29
nmagneziwell, you only print it once, when it's created20:29
nmagneziso it's not spamming the logs that bad20:29
johnsomHa, I have seen projects with 250 LBs in it.  Click-deploy....20:30
johnsomI am ok with logging it, no higher than INFO, if you folks think it is useful20:30
nmagnezifair enough.20:30
rm_workwait, isn't info the one that always prints?20:30
rm_workor, i guess that was your point20:31
rm_workk20:31
johnsomIt would be some "fanatical support" to have agents call the user that just did that....  Grin20:31
rm_workI would set up an automated email job20:31
nmagnezilol20:31
johnsomThat was flux...20:32
johnsomHa, ok, so where are we at with the config patch?20:32
rm_work"We noticed you just created a PING Health Monitor for LB #UUID#. We recommend you reconsider, and use a different method for the following reasons: ...."20:32
rm_workI mean... I would still like to be able to disable it, personally, but I grant that it should probably remain an option at large (however reluctantly)20:33
johnsomI can open a story to add warnings to the client and dashboard20:33
rm_workI can put WIP on this one or DNM or whatever, and just continue to pull it in downstream I guess <_<20:33
rm_workI just figured a config couldn't hurt20:33
*** yboaron has joined #openstack-lbaas20:33
*** pcaruana has quit IRC20:34
rm_workthe way I designed it, it would explain to the user when it blocks the creation20:34
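The operator knob being debated could be sketched as a configurable allow-list, with a blocked create failing loudly and explaining itself. All names here are hypothetical and not the actual patch under review:

```python
# Hypothetical sketch of an operator-configurable allow-list of health
# monitor types; a blocked type fails with an explanatory error rather
# than silently creating a monitor that will misbehave.

ALLOWED_HM_TYPES = ["HTTP", "HTTPS", "TCP", "PING"]   # operator config


class HealthMonitorTypeDisabled(Exception):
    pass


def validate_hm_type(hm_type, allowed=ALLOWED_HM_TYPES):
    if hm_type not in allowed:
        raise HealthMonitorTypeDisabled(
            "Health monitor type %s is disabled by the operator. "
            "Allowed types: %s" % (hm_type, ", ".join(allowed)))
    return hm_type
```

An operator wanting rm_work's behavior would simply omit PING from the allow-list.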
nmagnezirm_work, if everyone else agree on that, I will not be the one to block it. Just wanted to raise discussion around this topic20:34
johnsomI am ok with empowering operators myself20:34
*** yamamoto has quit IRC20:34
rm_workcan we get CentOS to 1.8? :P20:34
rm_workI'd have a much weaker case then20:35
* xgerman_ wrong person to ask20:35
cgoncalves+1, still knowing nmagnezi is not a fan of adding config options like this20:35
nmagnezicgoncalves and myself are working on it. it's not easy but we are doing our best :)20:35
cgoncalvesrm_work: soon! ;)20:35
rm_workk20:35
nmagnezirm_work, we'll keep you posted20:35
rm_workI mean20:35
rm_workif we got a more official repo20:35
rm_workwe don't even need it in the main repo20:35
rm_workwe could merge my patch to the amp agent element20:35
rm_workerr, amp element20:35
rm_work(which I already pull in downstream)20:36
cgoncalvesrm_work: short answer is: likely to have 1.8 in OSP14 (Rocky)20:36
rm_workin what way?20:36
rm_workCentOS amps based on CentOS8?20:36
rm_workOfficial repo for OpenStack HAProxy?20:36
rm_workHAProxy 1.8 backported into CentOS7?20:37
cgoncalvescross tag. haproxy rpm in osp repo, same rpm as from openshift/paas repo20:37
rm_workok20:37
rm_workso we would update and merge my patch20:37
cgoncalveswe will keep haproxy 1.5 but add 'haproxy18' package20:38
rm_workyeah20:38
johnsom#link https://storyboard.openstack.org/#!/story/200195720:38
cgoncalvesrm_work: you could then delete the repo add part from your patch20:38
rm_workok20:39
rm_worki wish i could look up that CR now >_>20:39
rm_workgreat timing on gerrit outage for us, lol20:39
johnsomSo, I guess to close out the PING topic, vote on the open patch. (once gerrit is back)20:40
johnsom#topic Open Discussion20:41
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:41
johnsomAny topics today?20:41
rm_workMulti-AZ?20:41
rm_workI have a patch, it is actually reasonable to review20:41
rm_workthe question is... since it will only work if every AZ is routable on the same L2... is this reasonable to merge?20:42
rm_workAt least one other operator was doing the same thing and even had some similar patches started20:42
johnsomWe have a bionic gate, it is passing, but I'm not sure how giving the networking changes they made. It must have a backward compatibility feature.  It's on my list to go update the amphora-agent for bionic's new networking.20:42
johnsomI have not looked at the AZ patch, so can't really comment at the moment20:43
rm_work(or if they're using an L3 networking driver)20:43
rm_workk, it's more about whether the concept is a -2 or not20:43
*** yboaron has quit IRC20:45
johnsomIn general mutli-AZ seems great to me.  However the details really get deep20:45
rm_workyeah20:47
rm_workthough if you have a routable L2 for all AZs, or you use an L3 net driver... then my patch will *just work*20:47
xgerman_+120:47
rm_workand the best part is that the only required config change is ... adding the additional AZs to the az config20:47
rm_work:)20:47
xgerman_Would love nova to do something reasonable but in the interim…20:48
johnsomYeah, so I think it's down to review20:49
johnsomWhich brings me to a gentle nag....20:49
xgerman_+120:49
johnsom#link ttps://review.openstack.org/#/q/(project:openstack/octavia+OR+project:openstack/octavia-dashboard+OR+project:openstack/python-octaviaclient+OR+project:openstack/octavia-tempest-plugin)+AND+status:open+AND+NOT+label:Code-Review%253C0+AND+NOT+label:Verified%253C%253D0+AND+NOT+label:Workflow%253C020:49
johnsomWell, when gerrit is back up.20:50
nmagnezijohnsom, forgot an  'h'20:50
rm_workono20:50
johnsomThere are a ton of open un-reviewed patches....20:50
johnsom#undo20:50
openstackRemoving item from minutes: #link ttps://review.openstack.org/#/q/(project:openstack/octavia+OR+project:openstack/octavia-dashboard+OR+project:openstack/python-octaviaclient+OR+project:openstack/octavia-tempest-plugin)+AND+status:open+AND+NOT+label:Code-Review%253C0+AND+NOT+label:Verified%253C%253D0+AND+NOT+label:Workflow%253C020:50
rm_workso many20:50
rm_workI need to go review too, but20:50
johnsom#link https://review.openstack.org/#/q/(project:openstack/octavia+OR+project:openstack/octavia-dashboard+OR+project:openstack/python-octaviaclient+OR+project:openstack/octavia-tempest-plugin)+AND+status:open+AND+NOT+label:Code-Review%253C0+AND+NOT+label:Verified%253C%253D0+AND+NOT+label:Workflow%253C020:50
rm_worknot just me :P20:50
johnsomYeah, please take a few minutes and help us with reviews.20:51
johnsomAny other topics today?20:51
johnsomOk then. Thanks everyone!20:52
johnsom#endmeeting20:52
*** openstack changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews"20:52
nmagnezio/20:52
openstackMeeting ended Wed May  2 20:52:35 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:52
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-05-02-20.00.html20:52
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-05-02-20.00.txt20:52
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-05-02-20.00.log.html20:52
*** samccann has quit IRC21:03
*** pchavva has quit IRC21:14
*** yamamoto has joined #openstack-lbaas21:31
*** yamamoto has quit IRC21:37
rm_workjohnsom: the fact that you used [load_balancer] for the config section name makes me die inside21:51
rm_workand also test_load_balancer.py21:51
rm_workI think I'm going to rename all of it in this patch21:51
rm_workwe're trying to *de-underscore* the things21:52
rm_workright?21:52
johnsomNo, that config section is dictated by the tempest docs and our service type21:52
*** bcafarel has quit IRC21:52
rm_workuuuuuuugh21:54
rm_workwhy does our service type have an underscore21:54
rm_worki hate everyone and everything21:54
johnsomIt is a hyphen in the service type. I put it in without one, but got outvoted21:59
*** pchavva has joined #openstack-lbaas22:01
*** openstackgerrit has joined #openstack-lbaas22:02
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for listeners  https://review.openstack.org/49231122:02
*** openstackgerrit has quit IRC22:04
-openstackstatus- NOTICE: Gerrit maintenance has concluded successfully22:10
*** fnaval has quit IRC22:18
*** fnaval has joined #openstack-lbaas22:22
rm_workjohnsom: by whom? T_T22:28
*** fnaval_ has joined #openstack-lbaas22:28
*** fnaval has quit IRC22:30
johnsomA bunch of folks, Monty, etc. I even had that as an octavia meeting agenda item....22:30
rm_workT_T22:30
*** fnaval_ has quit IRC22:30
rm_workhow did I vote22:30
johnsomI don't remember22:31
johnsomIs what it is now....22:31
*** yamamoto has joined #openstack-lbaas22:32
*** rcernin has joined #openstack-lbaas22:36
*** yamamoto has quit IRC22:38
rm_workI need to post a gif of my face to /r/WatchPeopleDieInside/22:43
johnsomHuh, who knew that was actually a thing....22:48
johnsomrm_work Are you drinking martini's now?22:53
johnsomYou seem to have DRY on the brain22:53
johnsomI'm going to push back on a few of those comments22:53
*** openstackgerrit has joined #openstack-lbaas22:59
openstackgerritMichael Johnson proposed openstack/octavia master: Implement provider drivers - Load Balancer  https://review.openstack.org/56379522:59
*** fnaval has joined #openstack-lbaas23:01
*** redondo-mk_ has joined #openstack-lbaas23:09
*** beisner_ has joined #openstack-lbaas23:09
*** mnaser_ has joined #openstack-lbaas23:09
*** fnaval has quit IRC23:15
*** redondo-mk has quit IRC23:16
*** beisner has quit IRC23:16
*** mnaser has quit IRC23:16
*** sbalukoff has quit IRC23:16
*** redondo-mk_ is now known as redondo-mk23:16
*** beisner_ is now known as beisner23:16
*** mnaser_ is now known as mnaser23:16
rm_workjohnsom: i am actually finishing up on the kwargs change in the clients now23:19
rm_worki was working on it immediately after pushing the last patch but got distracted23:19
rm_workalmost done23:19
*** sbalukoff has joined #openstack-lbaas23:22
rm_workjohnsom: or do you mean on the provider patch? because i thought i mostly agreed with you23:33
*** yamamoto has joined #openstack-lbaas23:34
*** yamamoto has quit IRC23:40
*** Swami has quit IRC23:46
johnsomrm_work Yeah, the provider patch.23:50
*** pchavva has quit IRC23:57
rm_workI will jump through SO MANY HOOPS to keep DRY23:58
rm_work(also no martinis yet, just chugging cough syrup)23:58

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!