Friday, 2018-05-18

00:12 *** yamamoto has joined #openstack-lbaas
00:17 *** yamamoto has quit IRC
00:17 *** AlexStaf has joined #openstack-lbaas
00:19 *** KeithMnemonic has quit IRC
00:19 *** longkb has joined #openstack-lbaas
01:22 *** annp has joined #openstack-lbaas
01:45 *** vnovakov2 has joined #openstack-lbaas
01:47 *** vnovakov has quit IRC
01:58 *** yamamoto has joined #openstack-lbaas
02:04 *** yamamoto has quit IRC
02:20 *** yamamoto has joined #openstack-lbaas
02:25 *** yamamoto has quit IRC
02:40 *** phuoc_ has joined #openstack-lbaas
02:42 *** phuoc has quit IRC
02:43 *** yamamoto has joined #openstack-lbaas
02:46 <openstackgerrit> enikanorov proposed openstack/neutron-lbaas master: Make subnet_id an optional parameter for pool members  https://review.openstack.org/569308
02:48 *** yamamoto has quit IRC
02:49 <Eugene_> rm_work: hi. i still decided to fix it ^
03:04 *** yamamoto has joined #openstack-lbaas
03:09 *** yamamoto has quit IRC
03:25 *** yamamoto has joined #openstack-lbaas
03:29 *** yamamoto has quit IRC
03:43 *** ivve has quit IRC
03:45 *** yamamoto has joined #openstack-lbaas
03:52 *** yamamoto has quit IRC
04:03 *** rcernin has quit IRC
04:03 *** rcernin has joined #openstack-lbaas
04:08 *** yamamoto has joined #openstack-lbaas
04:09 *** yamamoto has quit IRC
04:09 *** yamamoto has joined #openstack-lbaas
04:59 *** phuoc has joined #openstack-lbaas
05:01 *** phuoc_ has quit IRC
05:23 *** vnovakov2 has quit IRC
05:37 *** annp has quit IRC
05:38 *** annp has joined #openstack-lbaas
06:31 *** yamamoto has quit IRC
06:33 *** kobis has joined #openstack-lbaas
06:50 <openstackgerrit> OpenStack Proposal Bot proposed openstack/octavia-dashboard master: Imported Translations from Zanata  https://review.openstack.org/569346
06:51 *** yamamoto has joined #openstack-lbaas
06:56 *** yamamoto has quit IRC
06:57 *** kobis has quit IRC
07:01 *** rcernin has quit IRC
07:03 *** kobis has joined #openstack-lbaas
07:05 *** ispp has joined #openstack-lbaas
07:12 *** yamamoto has joined #openstack-lbaas
07:14 *** tesseract has joined #openstack-lbaas
07:16 *** ispp has quit IRC
07:17 *** threestrands has quit IRC
07:18 *** kobis has quit IRC
07:20 *** yamamoto has quit IRC
07:21 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members  https://review.openstack.org/566199
07:21 <rm_work> forgot to add the right depends-on
07:21 <rm_work> which is why you were getting a fail
07:22 <rm_work> johnsom: ^^
07:34 *** yamamoto has joined #openstack-lbaas
07:46 *** AlexeyAbashkin has joined #openstack-lbaas
08:08 *** AlexStaf has quit IRC
08:17 *** kobis has joined #openstack-lbaas
08:36 *** ispp has joined #openstack-lbaas
08:50 *** kobis has quit IRC
08:51 *** salmankhan has joined #openstack-lbaas
08:52 *** ispp has quit IRC
09:00 *** ispp has joined #openstack-lbaas
09:08 *** ispp has quit IRC
09:16 *** ispp has joined #openstack-lbaas
10:10 *** salmankhan has quit IRC
10:12 *** ispp has quit IRC
10:13 *** salmankhan has joined #openstack-lbaas
10:17 *** kobis has joined #openstack-lbaas
10:27 *** kobis has quit IRC
10:30 *** yamamoto has quit IRC
10:31 *** salmankhan has quit IRC
10:36 *** salmankhan has joined #openstack-lbaas
10:45 *** kobis has joined #openstack-lbaas
10:48 *** kobis has quit IRC
10:49 *** longkb has quit IRC
10:49 *** kobis has joined #openstack-lbaas
10:51 *** annp has quit IRC
10:51 *** yamamoto has joined #openstack-lbaas
10:52 *** kobis has quit IRC
10:56 *** yamamoto has quit IRC
10:58 *** kobis has joined #openstack-lbaas
11:02 *** yamamoto has joined #openstack-lbaas
11:06 *** yamamoto has quit IRC
11:08 *** ispp has joined #openstack-lbaas
11:12 *** atoth has quit IRC
11:24 *** yamamoto has joined #openstack-lbaas
11:29 *** kobis has quit IRC
11:29 *** yamamoto has quit IRC
11:41 *** yamamoto has joined #openstack-lbaas
11:50 *** salmankhan1 has joined #openstack-lbaas
11:50 *** salmankhan has quit IRC
11:50 *** salmankhan1 is now known as salmankhan
12:00 *** atoth has joined #openstack-lbaas
13:05 *** samccann has joined #openstack-lbaas
13:12 *** phuoc_ has joined #openstack-lbaas
13:13 *** yamamoto has quit IRC
13:15 *** phuoc has quit IRC
13:27 *** kobis has joined #openstack-lbaas
13:34 *** yamamoto has joined #openstack-lbaas
13:35 *** yamamoto has quit IRC
13:36 *** yamamoto has joined #openstack-lbaas
13:58 *** ispp has quit IRC
14:04 *** ispp has joined #openstack-lbaas
14:07 *** ispp has quit IRC
14:08 *** phuoc has joined #openstack-lbaas
14:09 *** ispp has joined #openstack-lbaas
14:09 *** ispp has quit IRC
14:11 *** phuoc_ has quit IRC
14:25 *** kobis has quit IRC
14:35 *** phuoc has quit IRC
14:39 *** kobis has joined #openstack-lbaas
14:57 *** ispp has joined #openstack-lbaas
14:58 *** ispp has quit IRC
15:15 *** ispp has joined #openstack-lbaas
15:17 *** ispp has quit IRC
15:18 *** salmankhan has quit IRC
15:23 *** ispp has joined #openstack-lbaas
15:24 *** ispp has quit IRC
15:26 *** kobis has quit IRC
15:26 *** kobis has joined #openstack-lbaas
15:33 *** kobis has quit IRC
15:36 *** atoth has quit IRC
15:36 *** ispp has joined #openstack-lbaas
15:40 *** ispp has quit IRC
15:46 *** tesseract has quit IRC
16:06 *** AlexeyAbashkin has quit IRC
16:38 *** kobis has joined #openstack-lbaas
16:43 *** kobis has quit IRC
16:47 *** kobis has joined #openstack-lbaas
16:47 *** kobis has quit IRC
16:48 *** kobis has joined #openstack-lbaas
17:00 <johnsom> Cores, can we get a review on this stable/ocata patch: https://review.openstack.org/#/c/562850/
17:00 <johnsom> There was a mailing list email where someone ran into this issue
17:31 *** imacdonn has quit IRC
17:31 *** imacdonn has joined #openstack-lbaas
17:44 *** salmankhan has joined #openstack-lbaas
17:50 <johnsom> cgoncalves One quick fix and you have +2 from me: https://review.openstack.org/#/c/549654
17:50 <johnsom> Awesome job BTW
17:51 *** salmankhan has quit IRC
17:52 <rm_work> noice
17:52 <johnsom> Not sure if he is around. Half tempted to just fix it and get on with it
17:52 <rm_work> yeah i might, let me see
17:52 <johnsom> rm_work BTW, after I applied the HM patch I'm now getting 11 failures where before I just had 2.
17:53 <johnsom> Dumping logs now to dig in
17:53 <rm_work> ummmm wut
17:53 <rm_work> it's passing upstream O_o
17:53 <rm_work> I guess it's prolly in the APIs
17:53 <johnsom> Yeah, but I'm running live
17:53 <rm_work> which yeah
17:53 <rm_work> i'll look
17:54 <rm_work> i was running these against my staging cloud and they were all passing
17:54 <rm_work> i'll get my devstack back to Live
17:54 <rm_work> oh, it is
17:54 *** leitan has joined #openstack-lbaas
17:55 <johnsom> Yeah, maybe I did something wrong, not sure yet. For whatever reason the tempest.log file didn't have the tracebacks
17:55 <johnsom> So re-ran, dumping console to a log file
17:57 <rm_work> just the member ones?
17:57 <rm_work> like, what -r did you use
17:57 <johnsom> I ran everything
17:59 <rm_work> k
17:59 <rm_work> let me know when i can see a runlog / debuglog
18:00 <johnsom> Ah, hmm, might be a broken devstack, I'm seeing quota issues
18:00 <johnsom> I'm going to rebuild it.  ~10 minutes
18:02 <johnsom> Are you fixing the grenade patch or should I?
18:02 *** yamamoto has quit IRC
18:03 <rm_work> it's literally just an edit to that regex?
18:03 <rm_work> you do it, i'm not 100% confident i know the right thing
18:03 *** ivve has joined #openstack-lbaas
18:03 <johnsom> yeah, let me get it then
18:03 <rm_work> but I am ready to review
18:03 <rm_work> yeah maybe my devstack is "reverse-broken"? :P
18:03 <rm_work> like, not working the same as upstream, in a way that causes everything to be good
18:03 <rm_work> lol
18:03 <rm_work> all passes here
18:04 <rm_work> but i usually run -t, i'll run wide-open
18:05 <johnsom> Yeah, this might just be past runs left something or ...  let me rebuild
18:05 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Add grenade support  https://review.openstack.org/549654
18:07 <johnsom> There we go.
18:07 <rm_work> o, lol
18:07 <johnsom> After that we just need an upgrade doc and we can assert cold upgrades
18:07 <rm_work> nice
18:07 <rm_work> then we can go for the next level!
18:08 <johnsom> The next one just needs some minor tweaks to the grenade job to test that we don't hose LBs during the upgrade
18:09 <johnsom> I think I have a decent stab at slides.  I might record a dashboard demo just to have as backup in case I somehow have extra time
18:11 <johnsom> Sent you guys a preview link to both presentations.  Feel free to comment.
18:12 <johnsom> I really haven't done a polish run over them yet
18:18 *** kobis has quit IRC
18:20 *** kobis has joined #openstack-lbaas
18:23 *** yamamoto has joined #openstack-lbaas
18:24 *** kobis has quit IRC
18:28 <johnsom> Hmm, so this is definitely a problem: OverQuotaClient: Quota exceeded for resources: ['security_group'].
18:28 *** yamamoto has quit IRC
18:31 <johnsom> That is with a fresh devstack, running tempest with no settings, just defaults.
18:33 *** yamamoto has joined #openstack-lbaas
18:38 *** yamamoto has quit IRC
18:46 <rm_work> <_<
18:47 <rm_work> hmmm mine actually had some issues too running *all tests* in parallel
18:47 <rm_work> need to figure out what that is
18:48 <rm_work> there's ... too many ways to run this stuff
18:48 <rm_work> it's frustrating to try to swap between noop and live tests, for scenario and API
18:48 <rm_work> because i fix one thing but break it for the other mode, or something, or get confused about which mode is supposed to have what states
18:48 <rm_work> gonna have to figure out a way to streamline this at least
18:49 <rm_work> so i can run in live-mode and noop-mode at the same time, not have to switch VMs constantly (which involves setup work every time)
18:50 <rm_work> erg so mine runs member tests fine but is failing on pool tests somehow now
18:54 *** yamamoto has joined #openstack-lbaas
18:58 *** leitan has quit IRC
18:58 <johnsom> With concurrency 2 I don't get the quota issue, but still have failures:
18:58 <johnsom> https://www.irccloud.com/pastebin/9AZ7mwT5/
18:58 *** leitan has joined #openstack-lbaas
18:58 <johnsom> I am going to run grab a sandwich and will be back in a bit to dig into this.
19:01 *** yamamoto has quit IRC
19:03 *** leitan has quit IRC
19:05 <rm_work> hmmm one pool API test seems wrong
19:06 <rm_work> "# Operating status for pools will always be offline without members"
19:06 <rm_work> I thought that was true
19:06 <rm_work> maybe it's not apparently
19:17 *** yamamoto has joined #openstack-lbaas
19:22 *** yamamoto has quit IRC
19:31 *** salmankhan has joined #openstack-lbaas
19:36 *** salmankhan has quit IRC
19:38 *** yamamoto has joined #openstack-lbaas
19:43 *** yamamoto has quit IRC
19:50 <johnsom> Grrrr, DIB http://logs.openstack.org/54/549654/32/check/octavia-grenade/25d9c16/logs/grenade.sh.txt.gz#_2018-05-18_18_37_32_512
19:59 *** yamamoto has joined #openstack-lbaas
20:05 *** yamamoto has quit IRC
20:26 <johnsom> So, for whatever reason my LB is not going ONLINE. Like it didn't get a heartbeat.
20:27 <johnsom> ummm, hm, no HM log...
20:32 <rm_work> erm
20:32 <rm_work> which patches do you have
20:32 <rm_work> you need a few
20:32 <rm_work> https://review.openstack.org/568711 and https://review.openstack.org/567955
20:33 <rm_work> speaking of https://review.openstack.org/#/c/568711/ that could use a merge I think
20:33 <rm_work> let me figure out this pool issue
20:34 <johnsom> Yeah, I think the HM issue was biting me
20:37 *** yamamoto has joined #openstack-lbaas
20:41 *** yamamoto has quit IRC
20:45 *** samccann has quit IRC
20:53 *** yamamoto has joined #openstack-lbaas
20:59 *** yamamoto has quit IRC
21:00 <rm_work> today has been meetings -> errands -> meetings
21:00 <rm_work> ugh
21:04 <johnsom> Yeah, so after fixing my HM, it's just this one: octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_pool_create_with_listener
21:06 <rm_work> yes same
21:06 <rm_work> that's the one i'm working
21:06 <johnsom> Ok
21:06 <rm_work> i think it's a legit bad expectation
21:07 <rm_work> [12:06:36] <rm_work> "# Operating status for pools will always be offline without members"
21:07 <rm_work> [12:06:39] <rm_work> I thought that was true
21:07 <rm_work> [12:06:42] <rm_work> maybe it's not apparently
21:07 <rm_work> but i literally have not had time to more than read that line since this morning <_<
21:07 <rm_work> still in a meeting
21:07 <johnsom> I'll let you look at that and I will try to figure out why image building for the grenade jobs is failing with out of disk space
21:07 <rm_work> k
21:07 *** yamamoto has joined #openstack-lbaas
21:07 <rm_work> do you know offhand the answer to that?
21:08 <rm_work> are pools supposed to go ONLINE with no members? O_o
21:08 <rm_work> gonna try to dig through now
21:09 <rm_work> I see pool_dict[constants.OPERATING_STATUS] = constants.OFFLINE
21:09 <rm_work> on the API side
21:09 <rm_work> so i assume it'd have to be in update_db
21:10 <johnsom> I don't know off my head
21:10 <rm_work> I guess if it has a listener... it gets processed
21:10 <rm_work> interesting
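
For context, a rough Python sketch of the behavior being puzzled over here; this is illustrative only, not the actual octavia code, and all names in it are made up:

    # Why a pool attached to a listener can end up ONLINE even with no members
    # (illustrative sketch only, not octavia's real update_db driver).
    OFFLINE, ONLINE = 'OFFLINE', 'ONLINE'

    def create_pool_record(pool):
        # API side: a freshly created pool is stored as OFFLINE.
        pool['operating_status'] = OFFLINE
        return pool

    def process_heartbeat(health_msg, pools):
        # Health-manager side: the amphora heartbeat reports status for every
        # pool reachable through a listener, so such a pool can be flipped to
        # ONLINE by this path before any members exist.
        for pool_id, reported in health_msg.get('pools', {}).items():
            if pool_id in pools and pools[pool_id]['operating_status'] != reported:
                pools[pool_id]['operating_status'] = reported  # e.g. OFFLINE -> ONLINE
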
21:12 *** yamamoto has quit IRC
21:18 <johnsom> Ok, so figured it out.
21:18 <johnsom> stable/queens image building is broken
21:18 <rm_work> <_<
21:18 <rm_work> right
21:19 <johnsom> The ever bloating image has grown
21:19 <rm_work> don't we have that patch outstanding
21:19 <johnsom> yet again
21:19 <rm_work> ah different thing?
21:19 <johnsom> Yeah, new/different
21:19 <johnsom> I made master better by switching to ubuntu-minimal
21:19 *** yamamoto has joined #openstack-lbaas
21:24 *** yamamoto has quit IRC
21:32 *** yamamoto has joined #openstack-lbaas
21:38 *** yamamoto has quit IRC
21:43 <rm_work> johnsom: ok, testing all the PoolAPITest stuff in serial and then parallel
21:44 <rm_work> and then adding in the member tests on top
21:44 <rm_work> and then trying it all in parallel together
21:44 <johnsom> Ok, I'm still noodling on how to fix stable
21:44 <rm_work> maybe fixed the bad assumption
21:44 <rm_work> this is like, a matrix
21:44 <rm_work> has_listener * test_with_noop
21:47 *** yamamoto has joined #openstack-lbaas
21:52 *** yamamoto has quit IRC
21:58 *** yamamoto has joined #openstack-lbaas
22:02 *** yamamoto has quit IRC
22:15 <rm_work> errr, right. i think it's just not going to work for me to run the tests in parallel with *live amps*
22:15 <johnsom> I can
22:16 <johnsom> The gotcha for me was the neutron SG quota needs to be bumped if I run concurrency 4
22:18 *** yamamoto has joined #openstack-lbaas
22:22 <rm_work> ah i'm running pre-traffic which means it doesn't have my optimizations for the members
22:22 <rm_work> so it spins members for each class
22:23 *** yamamoto has quit IRC
22:28 *** yamamoto has joined #openstack-lbaas
22:33 <rm_work> yeah my machine just can't do parallel at all
22:33 *** yamamoto has quit IRC
22:33 <johnsom> Post it and I can give it a spin
22:34 <rm_work> what are you running these on
22:34 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members  https://review.openstack.org/566199
22:34 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create basic traffic balancing scenario test  https://review.openstack.org/566700
22:34 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for healthmonitors  https://review.openstack.org/567688
22:34 <rm_work> here's a rebased chain
22:34 <rm_work> i didn't actually fix anything besides the one pool bug
22:35 <rm_work> i know you had some comments on the traffic
22:35 <johnsom> My dev box, it's a vmware VM. 4 vcpu, 20GB RAM, 250GB disk
22:35 <rm_work> ah
22:35 <rm_work> i've got 8G of RAM in my stack VM, lol
22:36 <rm_work> i think
22:36 <rm_work> maybe 6
22:36 <johnsom> Yeah, that is tight
22:36 <rm_work> nope, 4g lol
22:37 <johnsom> Ok, I have a workaround for the stables. If I uninstall the snapd stuff it frees up enough for DIB to continue.  The apt cache purge is like the last step, so there is a lot of build bloat still there in the non-minimal images
22:37 <rm_work> lol
22:37 <johnsom> I didn't know you could run a devstack in 4GB
22:38 <rm_work> yes! :P
22:38 <rm_work> and you can even have ONE LB and two member VMs! :P
22:38 <rm_work> (this is why i complained about starting so many tiny member VMs)
22:38 *** yamamoto has joined #openstack-lbaas
22:40 <johnsom> https://review.openstack.org/569531
22:40 <johnsom> ^^^ that is my stable branch image build fix
22:42 <rm_work> lol k
22:42 <johnsom> Did you add something to bump the neutron SG quota limit?
22:43 *** yamamoto has quit IRC
22:43 <johnsom> Removing snapd drops about 77MB
22:46 <rm_work> no
22:46 <rm_work> but it's not that
22:46 <rm_work> it can't boot enough amps lol
22:46 <rm_work> it dies just trying to get a third amp to come up
22:46 <rm_work> (combo of two classes + one of the tests that spins a second one)
22:46 <johnsom> Yeah, no, I know. I was talking about the stable branch issue
22:46 <rm_work> oh
22:47 <rm_work> hmmm
22:47 <rm_work> well, in gates they run in -t
22:47 <rm_work> don't they?
22:47 <johnsom> no
22:47 <rm_work> considering they all pass, they must be?
22:47 <rm_work> because... they all pass in gates
22:47 <rm_work> <_<
22:48 <rm_work> API are run in noop in the gate
22:48 <rm_work> and the most SG we could ever need in scenarios is like...
22:48 <rm_work> not many
22:48 <johnsom> It looks like the gates run concurrency 2
22:48 *** yamamoto has joined #openstack-lbaas
22:49 <johnsom> But it is based off the number of cores on the host
22:49 <johnsom> instance I mean
22:49 <johnsom> Yeah, but if we don't set that people pulling this down are going to hit it and have a bad experience...
22:50 <rm_work> hmm
22:50 <rm_work> can look into it
22:50 <johnsom> I mean, this is for testing that a cloud is working right?  They are going to run it live, probably on some beefy hardware
22:50 <johnsom> https://www.irccloud.com/pastebin/KkpQzsvo/
22:51 <johnsom> https://github.com/openstack/tempest/blob/master/tempest/lib/services/network/quotas_client.py#L20
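
A minimal sketch of the quota bump being discussed, using the tempest network quotas client linked above; the region value and the project_id argument are assumptions, and (per the discussion later at 23:30) the project that actually owns the SGs is octavia's own service project, not the tempest test project:

    # Sketch: raise the neutron security-group quota for a project before a
    # live run, via tempest's network QuotasClient (linked above).
    from tempest.lib.services.network import quotas_client

    def bump_sg_quota(auth_provider, project_id, limit=100):
        # 'RegionOne' is an assumption; use whatever region the deployment has.
        client = quotas_client.QuotasClient(auth_provider, 'network', 'RegionOne')
        # The neutron quota field for security groups is 'security_group'.
        return client.update_quotas(project_id, security_group=limit)
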
22:52 <rm_work> yeah, i run it fully concurrent on my cloud
22:52 <rm_work> but ...
22:52 <rm_work> it doesn't run into that issue because in my cloud the users have sane quotas
22:52 * rm_work shrugs
22:53 <johnsom> Yeah, devstack default is 10 for SG
22:55 <rm_work> right so if they're running it on their cloud, they aren't going to be using devstack
22:55 *** yamamoto has quit IRC
22:56 <rm_work> so i'm not sure why this would be our problem :P
22:56 <johnsom> Yeah, but joe developer will do what I did....
22:56 <rm_work> well, whatever, we can change these quotas easy enough right?
22:56 <rm_work> i assume tempest just has some setting for it we need to use
22:58 <johnsom> Yeah, we can just use the neutron service client like we do for nets and stuff. I can whip up a patch if you don't want to mess with it
22:58 <johnsom> However, the pools test still needs work: https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/health_drivers/update_db.py#L247
22:58 <johnsom> It's failing on line 147
22:58 <johnsom> testtools.matchers._impl.MismatchError: 'ONLINE' != u'OFFLINE'
22:58 <rm_work> err
22:59 <rm_work> last time it failed because of OFFLINE != ONLINE
22:59 <rm_work> <_<
22:59 <johnsom> I think I left that comment in there about no members being up
22:59 <rm_work> right
22:59 <rm_work> originally my comment said exactly that
22:59 <rm_work> and just tested for OFFLINE
23:00 <rm_work> but running the test, I get a fail because it DOES come ONLINE
23:00 <rm_work> and i looked through the logic, and i think maybe I see how it could
23:00 <rm_work> so
23:00 <johnsom> The patch posted only checks for OFFLINE
23:00 <rm_work> err
23:00 <rm_work> ...
23:00 <johnsom> You didn't post a new pools patch
23:01 *** yamamoto has joined #openstack-lbaas
23:01 <rm_work> AUGH did i fix it in Members
23:01 <rm_work> I might have
23:01 <rm_work> yep
23:01 <rm_work> lol k well.
23:01 <rm_work> give me a few minutes
23:02 <johnsom> Did I pull the wrong thing down?
23:02 <rm_work> https://review.openstack.org/#/c/566199/20/octavia_tempest_plugin/tests/api/v2/test_pool.py
23:02 <rm_work> i fixed it in the NEXT patch
23:02 <rm_work> by accident
23:02 <rm_work> anyway, you should be running closer to the end
23:02 <rm_work> there's fixes for random stuff
23:02 <rm_work> like, the internals
23:03 <rm_work> for example, you won't spin a bunch of member VMs for the API tests if you have at least up to https://review.openstack.org/#/c/566700/
23:03 <johnsom> Yeah, I pulled the HM down
23:03 <rm_work> ok well, you have the fix then
23:03 <rm_work> notice it's looking for ONLINE
23:03 <rm_work> but getting OFFLINE
23:04 <rm_work> [15:58:44] <johnsom> testtools.matchers._impl.MismatchError: 'ONLINE' != u'OFFLINE'
23:04 <rm_work> Expected != Observed
23:04 <rm_work> lol yeah i can't even run the LB-List API test *alone* in serial mode
23:05 <johnsom> So I'm giving you differing results?
23:05 <rm_work> since it spins up 4 LBs total
23:05 <rm_work> yeah your result shows that it DIDN'T come online
23:05 *** yamamoto has quit IRC
23:05 <rm_work> but from what I understand and what you linked me in update_db, it should be coming ONLINE
23:05 <rm_work> probably I just need to add a Waiter
23:06 <rm_work> anyway, let me fix this patch in a minute
23:06 <johnsom> Yeah, no matter how I run it I'm getting OFFLINE
23:06 <johnsom> both single and parallel
23:06 <rm_work> yeah prolly just it takes too long
23:07 <rm_work> one moment
23:07 <rm_work> waiting for test run to finish
23:08 <rm_work> prolly this:
23:09 <johnsom> May 18 16:08:41 devstackpy27-2 octavia-health-manager[117665]: DEBUG octavia.controller.healthmanager.health_drivers.update_db [-] pool d498ad08-3f1f-4cb0-8ab7-6854b0f2d70b status has changed from OFFLINE to ONLINE. Updating db and sending event. {{(pid=118557) _update_status_and_emit_event /opt/stack/octavia/octavia/controller/healthmanager/health_drivers/update_db.py:64}}
23:09 <johnsom> Yeah, it's got to be a timing thing, I see in HM with debug it does go online
23:09 <rm_work> yep
23:09 <rm_work> one sec
23:09 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools  https://review.openstack.org/565640
23:09 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members  https://review.openstack.org/566199
23:09 <rm_work> had to type yes
23:09 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create basic traffic balancing scenario test  https://review.openstack.org/566700
23:09 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for healthmonitors  https://review.openstack.org/567688
23:09 <rm_work> that fixes it
23:10 <rm_work> just needs to do a Wait if it's going to actually compare a HM based status
23:10 <rm_work> i didn't design the API tests to run in live mode, really
23:10 <rm_work> which IS admittedly a problem
23:10 <rm_work> so i'll make sure there's no other weird stuff there
23:10 <rm_work> i think this was the only one
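
A minimal sketch of the kind of waiter rm_work means here; show_func, the field name, and the timings are placeholders, and this is not the plugin's actual helper:

    # Poll a show call until operating_status reaches the expected value, so an
    # HM-driven status isn't asserted before the first heartbeat has landed.
    import time

    def wait_for_operating_status(show_func, object_id, expected='ONLINE',
                                  timeout=60, interval=2):
        deadline = time.time() + timeout
        while time.time() < deadline:
            obj = show_func(object_id)
            if obj['operating_status'] == expected:
                return obj
            time.sleep(interval)
        raise AssertionError('operating_status did not reach %s within %s seconds'
                             % (expected, timeout))
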
23:11 *** yamamoto has joined #openstack-lbaas
23:12 <johnsom> Passes in single, will let it rip now
23:12 <johnsom> Do you want me to create a neutron quota patch?
23:12 <rm_work> sure why not
23:13 <rm_work> i'm currently calling tire places trying to figure out who has tires in stock and can put them on my car *tomorrow* so I can get to Vancouver, lol
23:15 *** yamamoto has quit IRC
23:16 <johnsom> Joy
23:19 *** hyang has joined #openstack-lbaas
23:21 *** yamamoto has joined #openstack-lbaas
23:22 <hyang> Hi there, I ran into some devstack issues with the Queens version of Octavia, and after some investigation I found they are already fixed on the master branch. Are there any plans to backport these fixes to the previous stable branches?
23:22 <hyang> https://github.com/openstack/octavia/commit/53dc41d06f41c78f13fcdaef4d9e9a08609d8b34
23:22 <hyang> https://github.com/openstack/octavia/commit/1020a3bceb296610c442ed0d445bba594f27b3de
23:23 <rm_work> anyone can propose backports
23:24 <hyang> any instruction for how? thanks
23:24 <rm_work> you can find the change in gerrit by searching for the change-id
23:24 <rm_work> which is in the commit message there
23:24 <rm_work> for example, if we search: I724f5064309d07fe05f86fcf2c7a488d9319e54c
23:24 <rm_work> it brings us to https://review.openstack.org/#/c/550487/
23:25 <rm_work> you can then click the "cherry-pick" button, and type "stable/queens"
23:25 <rm_work> and click the button at the bottom left to do the cherry-pick!
23:25 <rm_work> that's all it takes
23:25 <hyang> thank you so much! will do that
23:26 *** yamamoto has quit IRC
23:30 <johnsom> rm_work I think I need to do this in the devstack plugin and not tempest. I would need to know the octavia service project_id to do it in tempest.
23:31 <rm_work> err
23:31 <rm_work> but the users are created by tempest
23:31 <rm_work> assuming dynamic creds
23:31 <johnsom> Yeah, the SGs are created by octavia
23:31 <rm_work> OH
23:31 <rm_work> you mean THOSE SGs
23:31 <johnsom> yep
23:31 <johnsom> It pukes in o-cw
23:32 <rm_work> lol I was totally not on the same page
23:32 <rm_work> then yes
23:32 <rm_work> but uhhh
rm_workif "tempest testing* is causing their octavia to break due to SG quota23:32
23:32 <rm_work> I think they have a problem
23:32 <rm_work> lol
23:32 <johnsom> yea
23:33 <johnsom> https://www.irccloud.com/pastebin/1gpwBPaZ/
23:33 <rm_work> woo, 42 passes :P
23:33 <johnsom> LGTM, ship it
23:34 <rm_work> yeah we really need to merge this stuff so I can start doing cleanup as I go, instead of juggling a bunch of patches
23:34 <rm_work> and then we can start merging your stuff
23:34 <rm_work> let's see... i think there was one more comment you had
23:35 <rm_work> Details: (TrafficOperationsScenarioTest:test_healthmonitor_traffic) show_member operating_status failed to update to OFFLINE within the required time 60. Current status of show_member: NO_MONITOR
23:35 <rm_work> is that still a problem?
23:35 <rm_work> or no?
23:35 <rm_work> did you run the scenarios just now too?
23:35 <johnsom> I ran everything, it's fixed
23:36 <rm_work> ok. I mean, i didn't change anything, so i guess it must have been an issue with your HM stuff
23:36 <rm_work> ?
23:36 <johnsom> Yeah, I was missing some other patches
23:36 <rm_work> k
23:40 *** yamamoto has joined #openstack-lbaas
23:40 *** hyang has quit IRC
23:47 *** yamamoto has quit IRC
23:53 *** yamamoto has joined #openstack-lbaas
23:57 *** yamamoto has quit IRC
