Monday, 2019-10-07

00:02 *** goldyfruit_ has joined #openstack-lbaas
01:33 *** yamamoto has joined #openstack-lbaas
02:55 *** AustinR has quit IRC
02:56 *** AustinR has joined #openstack-lbaas
03:18 *** ricolin_ has joined #openstack-lbaas
03:26 *** psachin has joined #openstack-lbaas
04:22 *** gcheresh has joined #openstack-lbaas
04:35 *** gcheresh has quit IRC
04:41 *** ramishra has joined #openstack-lbaas
05:02 *** ramishra has quit IRC
05:05 *** gcheresh has joined #openstack-lbaas
05:14 *** ramishra has joined #openstack-lbaas
05:41 *** AlexStaf has quit IRC
06:15 *** gcheresh has quit IRC
06:22 *** gcheresh has joined #openstack-lbaas
06:22 *** pcaruana has joined #openstack-lbaas
06:26 *** yamamoto has quit IRC
06:34 *** ramishra has quit IRC
06:43 *** ramishra has joined #openstack-lbaas
06:47 *** ccamposr has joined #openstack-lbaas
06:53 *** maciejjozefczyk has joined #openstack-lbaas
07:08 *** gcheresh has quit IRC
07:12 *** gcheresh has joined #openstack-lbaas
07:23 *** gcheresh has quit IRC
07:30 *** rpittau|afk is now known as rpittau
07:37 *** gcheresh has joined #openstack-lbaas
07:38 *** gcheresh has quit IRC
07:39 *** gcheresh has joined #openstack-lbaas
07:50 *** AlexStaf has joined #openstack-lbaas
07:51 *** numans has joined #openstack-lbaas
08:13 *** pcaruana has quit IRC
08:18 *** pcaruana has joined #openstack-lbaas
08:21 *** tkajinam has quit IRC
09:29 *** ricolin_ is now known as ricolin
09:30 *** yamamoto has joined #openstack-lbaas
09:36 *** yamamoto has quit IRC
09:39 *** yamamoto has joined #openstack-lbaas
09:41 *** yamamoto has quit IRC
09:41 *** yamamoto has joined #openstack-lbaas
09:43 *** yamamoto has quit IRC
09:54 *** salmankhan has joined #openstack-lbaas
10:04 <rm_work> Yes that error looks really familiar, I think Carlos is right and that patch will solve it for you
10:19 *** yamamoto has joined #openstack-lbaas
10:39 *** yamamoto has quit IRC
10:41 *** yamamoto has joined #openstack-lbaas
10:46 *** yamamoto has quit IRC
10:49 *** yamamoto has joined #openstack-lbaas
10:53 *** yamamoto has quit IRC
11:11 *** pcaruana has quit IRC
11:27 *** maciejjozefczyk is now known as mjozefcz|lunch
11:37 *** yamamoto has joined #openstack-lbaas
11:42 *** pcaruana has joined #openstack-lbaas
11:49 <cgoncalves> FYI, new releases: Queens 2.1.2, Rocky 3.2.0, Stein 4.1.0 and Train 5.0.0.0rc2.
12:02 *** yamamoto has quit IRC
12:03 *** yamamoto has joined #openstack-lbaas
12:03 *** yamamoto has quit IRC
12:04 *** yamamoto has joined #openstack-lbaas
12:10 *** mjozefcz|lunch is now known as mjozefcz
12:12 *** yamamoto has quit IRC
12:21 *** goldyfruit_ has quit IRC
12:42 *** AustinR has quit IRC
12:43 *** AustinR has joined #openstack-lbaas
12:50 *** kklimonda has joined #openstack-lbaas
12:51 *** ramishra has quit IRC
13:13 <kklimonda> Can I find the reason for the LB status being changed to PENDING_UPDATE in the debug logs? I'm trying to pass the `test_active_standby_vrrp_failover` test but it gets stuck while waiting for the LB to change from PENDING_UPDATE->ACTIVE here: https://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/tests/act_stdby_scenario/v2/test_active_standby_iptables.py#L286
13:14 <kklimonda> at the same time I get this in my logs: `Load balancer 64c96b26-6f92-458c-8711-2f697226614b is in immutable state PENDING_UPDATE. Skipping failover.` and it doesn't seem to recover
13:23 *** Vorrtex has joined #openstack-lbaas
13:35 *** goldyfruit_ has joined #openstack-lbaas
13:47 *** ramishra has joined #openstack-lbaas
14:06 <openstackgerrit> Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Fix placement of VRRP failover check  https://review.opendev.org/687059
14:06 <cgoncalves> kklimonda, ^ this might solve the issue
14:14 *** jrosser has quit IRC
14:15 *** jrosser has joined #openstack-lbaas
14:20 *** baffle has quit IRC
14:31 <openstackgerrit> Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Enable tempest jobs for stable/train  https://review.opendev.org/686565
14:45 <kklimonda> @cgoncalves thanks, that seems to fix the failure, but I don't understand why the LB is transitioning into PENDING_UPDATE and getting stuck there when we call check_members_balanced().
14:45 <kklimonda> @cgoncalves is some change being triggered by one of the amphorae when the other amphora gets deleted? If so, why would it be stuck there?
14:48 <johnsom> PENDING_UPDATE means one of the controllers has ownership of the load balancer. It will stay that way until the controller completes the action or gives up retrying.
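(A minimal sketch of polling that transition from a client's point of view, assuming a valid Keystone token and the Octavia v2 endpoint; the endpoint URL and token below are placeholders. The LB stays PENDING_UPDATE until the owning controller finishes or gives up retrying.)

    import time
    import requests

    OCTAVIA = "http://203.0.113.10:9876"  # placeholder endpoint
    LB_ID = "64c96b26-6f92-458c-8711-2f697226614b"  # the LB from the log above
    HEADERS = {"X-Auth-Token": "..."}  # token acquisition omitted

    def wait_for_active(timeout=900, interval=5):
        """Poll provisioning_status until the controller releases the LB."""
        deadline = time.time() + timeout
        status = None
        while time.time() < deadline:
            resp = requests.get(
                "%s/v2.0/lbaas/loadbalancers/%s" % (OCTAVIA, LB_ID),
                headers=HEADERS)
            resp.raise_for_status()
            status = resp.json()["loadbalancer"]["provisioning_status"]
            if status == "ACTIVE":
                return
            if status == "ERROR":
                raise RuntimeError("load balancer went to ERROR")
            time.sleep(interval)  # still PENDING_*: a controller owns the LB
        raise TimeoutError("still %s after %s seconds" % (status, timeout))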
14:56 *** yamamoto has joined #openstack-lbaas
15:01 *** yamamoto has quit IRC
15:17 *** AlexStaf has quit IRC
15:51 <openstackgerrit> Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Remove act-stdby-iptables scenario jobs in Stein+  https://review.opendev.org/687086
15:51 <openstackgerrit> Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Extend active-standby scenario test coverage to CentOS 7  https://review.opendev.org/687087
15:52 *** ramishra has quit IRC
15:53 <openstackgerrit> Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Extend active-standby scenario test coverage to CentOS 7  https://review.opendev.org/687087
16:02 *** rpittau is now known as rpittau|adk
16:02 *** rpittau|adk is now known as rpittau|afk
16:05 *** gcheresh has quit IRC
16:09 *** goldyfruit___ has joined #openstack-lbaas
16:11 *** goldyfruit_ has quit IRC
16:33 *** trident has quit IRC
16:34 *** baffle has joined #openstack-lbaas
16:37 *** trident has joined #openstack-lbaas
16:43 *** salmankhan has quit IRC
17:21 *** gcheresh has joined #openstack-lbaas
17:21 *** pcaruana has quit IRC
17:22 *** psachin has quit IRC
17:36 *** gcheresh has quit IRC
18:00 *** pcaruana has joined #openstack-lbaas
18:03 *** goldyfruit___ has quit IRC
18:06 *** goldyfruit has joined #openstack-lbaas
18:21 *** ricolin has quit IRC
18:36 *** mjozefcz has quit IRC
19:16 *** spartakos has joined #openstack-lbaas
19:28 *** gcheresh has joined #openstack-lbaas
19:32 *** pcaruana has quit IRC
19:44 *** gcheresh has quit IRC
19:50 *** spartakos has quit IRC
19:59 *** spartakos has joined #openstack-lbaas
20:02 *** spartakos has quit IRC
20:23 <jrosser> would it be possible to produce stable branch test images?
20:23 <jrosser> or more specifically something with the v0.5 api in it
20:59 *** yamamoto has joined #openstack-lbaas
21:04 *** yamamoto has quit IRC
21:05 *** goldyfruit_ has joined #openstack-lbaas
21:08 *** goldyfruit has quit IRC
21:22 *** goldyfruit_ has quit IRC
21:25 <johnsom> Well, the amp api 0.5 is going away. It's not used in stein or train.  Which branch are you looking for?  Can't you just build the image you need?
21:36 <openstackgerrit> Michael Johnson proposed openstack/octavia-tempest-plugin master: Enable fail-fast on the gate queue  https://review.opendev.org/682185
21:40 <rm_work> jrosser: yeah you can build your own -- see also my patch: https://review.opendev.org/#/c/686227/
21:40 <johnsom> Or the README file that lists the environment variables
21:42 <rm_work> yeah
21:42 <rm_work> mnaser: https://review.opendev.org/#/c/683028/
21:42 <rm_work> responded
21:42 <johnsom> rm_work FYI, I have a problem using that method for DIB. There is a cache that keeps old versions of the amphora-agent checked out that causes this method with git references to fail
21:42 <rm_work> really? i've never had that issue
21:43 <rm_work> this isn't really designed for the gate tho, it's more for people who just want it to "work"
21:43 <johnsom> Yeah, if there is a cached version that is older than the one checked out, it blows up.
21:43 <jrosser> rm_work: I was really meaning that could be picked up by other projects stable branch CI jobs
21:43 <johnsom> Probably because you blow away your devstack all the time... lol
21:43 <rm_work> well the first version of the patch used a tag/sha
21:43 <rm_work> would you prefer that?
21:43 <johnsom> The gates work fine with our devstack script as there is no cache
21:44 <rm_work> i mean, i use the same git repo checkout for like... a year
21:44 <rm_work> and i don't know what kind of cache you're talking about
21:44 <rm_work> if i have something checked out, then when i use that code, it gets the current SHA
21:44 <johnsom> I haven't booted up my VMs yet. I will get you the path when I do.
21:45 <rm_work> path? of what, the user's cloned octavia dir?
21:45 <johnsom> Yeah, it *should* use the local path, when the cache is out of date, but it doesn't, it just bombs
21:45 <rm_work> mine is /home/aharwell/workspace/octavia
21:45 <johnsom> cache path
21:45 <rm_work> what cache
21:46 <rm_work> like something in the .git dir?
21:46 <johnsom> dib or pip or something, I don't remember. I would have to get the path
21:46 <rm_work> this sets the repo-ref outside of DIB
21:46 <johnsom> Mine is in /opt/stack/.cache/<something>/<something>/<something>
21:46 <rm_work> it'd be the same as doing "git log" and then copying the sha and putting it in manually
21:46 <rm_work> i don't know what cache it'd be able to get?
21:47 <rm_work> it wouldn't work if you did that inside the *element* because that runs inside DIB
21:47 <johnsom> I know what it's doing, our devstack plugin has done it for over a year. I'm just saying it's broken.
21:48 <rm_work> if the directory has the correct version of octavia checked out for installation, then it *will* work
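(A sketch of the method being debated, assuming the DIB_REPOLOCATION_amphora_agent / DIB_REPOREF_amphora_agent variables documented in Octavia's diskimage-create README; the checkout path and branch here are placeholders:)

    import os
    import subprocess

    # Hypothetical local checkout that already has the wanted code.
    OCTAVIA_DIR = "/home/user/workspace/octavia"

    env = os.environ.copy()
    # Point the amphora-agent DIB element at the local repo instead of git.
    env["DIB_REPOLOCATION_amphora_agent"] = OCTAVIA_DIR
    # Branch, tag, or SHA to bake into the image.
    env["DIB_REPOREF_amphora_agent"] = "stable/rocky"

    # diskimage_create.sh ships in the octavia repo's diskimage-create dir.
    subprocess.run(["./diskimage_create.sh"],
                   cwd=os.path.join(OCTAVIA_DIR, "diskimage-create"),
                   env=env, check=True)

(The failure johnsom describes would be DIB preferring an older cached amphora-agent checkout over these variables; the exact cache path is not confirmed in this log.)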
21:48 <cgoncalves> folks, there's a worrisome problem with our gates and devstack plugin. scenario-dsvm *stable* jobs are checking out python-octaviaclient and diskimage-builder from master. also, TBC our devstack plugin checks out DIB from master unless DISKIMAGE_BUILDER_REPO_REF is correctly set
21:48 <johnsom> What you put in "DIB_REPOLOCATION_amphora_agent" doesn't get used if it's in that "cache"
21:48 <rm_work> err
21:48 <cgoncalves> ah, octavia-lib from master on stable jobs too
21:48 <rm_work> then that's a problem whether it's set manually or not
21:48 <rm_work> because it's literally setting the same var
21:49 <cgoncalves> dsvm-api jobs are also checking out DIB master
21:49 <rm_work> cgoncalves: well DIB from master is expected but I don't know about the rest
21:49 <johnsom> Right. Both your patch and our devstack plugin are broken when you use those variables. That is my point.
21:49 <cgoncalves> is it really? it is in upper-constraints.txt
21:49 <johnsom> DIB is branchless isn't it
21:49 <rm_work> cgoncalves: we ran into too many issues with older DIB as other things break
21:49 <rm_work> yeah
21:49 <rm_work> we used to check it out from a SHA I think
21:50 <rm_work> but master is 1000% more reliable
21:50 <cgoncalves> I think the problem jrosser reported wednesday is very much related
21:50 <cgoncalves> rm_work, the problem is in *stable* branches
21:50 <jrosser> oh well I’m doing two things, fixing up my deployments where I had disk full
21:51 <jrosser> and now with my OSA hat on I am stuck on rocky and earlier, as I have no image to use
21:51 <jrosser> for CI
21:52 <jrosser> and it’s a fairly massive deal to modify the code to build its own image and port that all the way back to R
21:52 <cgoncalves> https://zuul.opendev.org/t/openstack/build/e66019aaea49491abfeabfb3fc4035d1/log/job-output.txt#184-185
21:52 <cgoncalves> wrong link. hold on
21:53 <johnsom> jrosser it's one command to build the image. We do it for every gate run
21:53 <jrosser> I know, but we just don’t have that in our stable branches right now
21:54 <jrosser> and the dependencies are fairly extensive
21:55 <cgoncalves> hmm. erm.
21:55 <johnsom> Not really, it's like four packages and a requirements.txt file with a few things. It's pretty small.
21:55 <jrosser> we’d need a dedicated venv for it, and on a production deployment make sure it was delegated to the deploy host, it’s quite structural stuff tbh
21:56 <johnsom> You can also use the included tox venv "build"
22:00 <jrosser> johnsom: I honestly don’t know what to do, our rocky branches now point to the test image which doesn’t work any more
22:01 <johnsom> cgoncalves So, your topic here. DIB is branchless, so master is always used. Octavia client isn't used in the tests and works against our stable API, so should be fine to pull master. octavia-lib, that one could be a problem.
22:01 <johnsom> jrosser, which jobs are these?
22:02 <jrosser> any patch to openstack-ansible-os_octavia on branches that the prebuilt test amphora is not good for
22:02 <johnsom> Can't you just set octavia_download_artefact false?
22:03 *** trident has quit IRC
22:03 <jrosser> it does actually do some kind of proper testing :)
22:03 <jrosser> it checks the LB comes up
22:03 <johnsom> Glad to hear, not too long ago the OSA jobs were.... "basic"
22:03 <jrosser> anyway I need to !computer for today
22:04 <jrosser> I’ve been trying to make them better :)
22:04 *** trident has joined #openstack-lbaas
22:09 <johnsom> Looks like the ability for OSA to build an image was removed here: https://github.com/openstack/openstack-ansible-os_octavia/commit/95eee6bc11e97105cb0356d7475bee699d404bda#diff-a773177bc05ddc4d71a73e6ed770eebf
22:10 *** goldyfruit_ has joined #openstack-lbaas
22:23 *** spatel has joined #openstack-lbaas
22:27 *** spatel has quit IRC
22:47 <johnsom> Yay, only 200 more e-mails to catch up on....
22:50 <openstackgerrit> Michael Johnson proposed openstack/octavia-tempest-plugin master: [train][goal] Define new 'octavia-v2-dsvm-noop-api-ipv6-only' job  https://review.opendev.org/682726
23:02 <schaney> Hey Octavia team! I have a fairly specific question about Octavia's ORM (related to https://storyboard.openstack.org/#!/story/2002907): looking here, https://github.com/openstack/octavia/blob/master/octavia/db/models.py#L315 why is there a relationship to "LoadBalancer" when only the load_balancer_id or pool.load_balancer.id is referenced? I'm trying to see if I can save on the subquery when the loadbalancers field is specified on the pools query
23:03 <johnsom> schaney Hi. Give me a minute to refresh my memory here.
23:04 <schaney> cool, thanks!
23:06 <schaney> from what I can tell, pools only support a single LB, but it looks like some of the API code was built to support multiple?
23:06 <johnsom> Ok, so I'm not 100% sure I understand your question, so guide me if I don't answer it. In the Octavia data model we have a concept of hierarchy, where the LB is the top parent. Pools and listeners are children of the LB parent. This is because pools can not only be children of listeners, but also shared with L7 policies.
23:07 <johnsom> LBs can have one or more pools
23:08 <schaney> not the other way though, correct?  Pools only have 1 LB?
23:08 *** rcernin has joined #openstack-lbaas
23:09 <johnsom> Correct, there can only be one LB. (it was proposed to have more, but we decided that is out of scope)
23:09 <johnsom> schaney See my slide here: https://youtu.be/BBgP3_qhJ00?t=1465
23:10 <johnsom> The part that isn't shown there is that the pool is always parented by the LB.
23:10 <johnsom> Because both pools on the diagram could be the same pool.
23:11 <johnsom> This ERD may or may not be helpful for you as well: https://docs.openstack.org/octavia/latest/_images/erd.svg
23:12 <schaney> ok gotcha, that part makes sense, thanks! I am also wondering why the model has the full relationship (pool has the relationship to loadbalancer) when it also has the load_balancer_id?
23:12 <johnsom> It's a bit hard to read if you aren't used to those however....
23:13 <johnsom> Yeah, that is probably due to the organic history there.....
23:13 <schaney> ah, ok =)
23:13 *** Vorrtex has quit IRC
23:13 <johnsom> It does make it nice for quick queries though
23:14 <johnsom> So, if you don't need to pull in the LB for your query, use the join override option.
23:16 <johnsom> schaney Example: https://github.com/openstack/octavia/blob/master/octavia/db/repositories.py#L768
23:16 <johnsom> That "noload" causes the ORM to not bother with the related tables that aren't listed in the subqueryload.
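(A minimal sketch of that loader-option pattern, not the exact repositories.py code; `session` and `lb_id` are assumed to exist, and noload('*') / subqueryload are standard SQLAlchemy:)

    from sqlalchemy.orm import noload, subqueryload

    from octavia.db import models

    pools = (
        session.query(models.Pool)
        .options(
            noload('*'),                        # skip all relationships by default...
            subqueryload(models.Pool.members),  # ...and opt back in explicitly
        )
        .filter(models.Pool.load_balancer_id == lb_id)
        .all()
    )
    # No LoadBalancer subquery runs; pool.load_balancer_id is a plain column.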
23:20 *** goldyfruit_ has quit IRC
23:20 <schaney> cool cool, I have been looking at adding a load_only('column') and then a subqueryload('relationship') for whichever actual relationships the requested fields call for. For the specific case of Pool, I think it could make sense not to subqueryload('LoadBalancer') since only loadbalancer.id is ever needed, and the pool table already has it
23:21 <johnsom> Yep, makes sense
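(A sketch of the variant schaney describes, with illustrative column names: load_only trims the SELECT to the requested pool columns, and the LoadBalancer relationship is simply never loaded because load_balancer_id already carries the only value the API returns.)

    from sqlalchemy.orm import load_only, subqueryload

    from octavia.db import models

    query = session.query(models.Pool).options(
        # Only the columns the requested fields call for (names illustrative).
        load_only('id', 'name', 'load_balancer_id'),
        # Real child relationships still get their subquery.
        subqueryload(models.Pool.members),
        # Intentionally no subqueryload of Pool.load_balancer.
    )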
23:21 <schaney> ok, thank you for the info!
23:53 *** AustinR has quit IRC
23:54 *** AustinR has joined #openstack-lbaas
23:56 *** goldyfruit_ has joined #openstack-lbaas
