*** goldyfruit_ has joined #openstack-lbaas | 00:02 | |
*** yamamoto has joined #openstack-lbaas | 01:33 | |
*** AustinR has quit IRC | 02:55 | |
*** AustinR has joined #openstack-lbaas | 02:56 | |
*** ricolin_ has joined #openstack-lbaas | 03:18 | |
*** psachin has joined #openstack-lbaas | 03:26 | |
*** gcheresh has joined #openstack-lbaas | 04:22 | |
*** gcheresh has quit IRC | 04:35 | |
*** ramishra has joined #openstack-lbaas | 04:41 | |
*** ramishra has quit IRC | 05:02 | |
*** gcheresh has joined #openstack-lbaas | 05:05 | |
*** ramishra has joined #openstack-lbaas | 05:14 | |
*** AlexStaf has quit IRC | 05:41 | |
*** gcheresh has quit IRC | 06:15 | |
*** gcheresh has joined #openstack-lbaas | 06:22 | |
*** pcaruana has joined #openstack-lbaas | 06:22 | |
*** yamamoto has quit IRC | 06:26 | |
*** ramishra has quit IRC | 06:34 | |
*** ramishra has joined #openstack-lbaas | 06:43 | |
*** ccamposr has joined #openstack-lbaas | 06:47 | |
*** maciejjozefczyk has joined #openstack-lbaas | 06:53 | |
*** gcheresh has quit IRC | 07:08 | |
*** gcheresh has joined #openstack-lbaas | 07:12 | |
*** gcheresh has quit IRC | 07:23 | |
*** rpittau|afk is now known as rpittau | 07:30 | |
*** gcheresh has joined #openstack-lbaas | 07:37 | |
*** gcheresh has quit IRC | 07:38 | |
*** gcheresh has joined #openstack-lbaas | 07:39 | |
*** AlexStaf has joined #openstack-lbaas | 07:50 | |
*** numans has joined #openstack-lbaas | 07:51 | |
*** pcaruana has quit IRC | 08:13 | |
*** pcaruana has joined #openstack-lbaas | 08:18 | |
*** tkajinam has quit IRC | 08:21 | |
*** ricolin_ is now known as ricolin | 09:29 | |
*** yamamoto has joined #openstack-lbaas | 09:30 | |
*** yamamoto has quit IRC | 09:36 | |
*** yamamoto has joined #openstack-lbaas | 09:39 | |
*** yamamoto has quit IRC | 09:41 | |
*** yamamoto has joined #openstack-lbaas | 09:41 | |
*** yamamoto has quit IRC | 09:43 | |
*** salmankhan has joined #openstack-lbaas | 09:54 | |
rm_work | Yes that error looks really familiar, I think Carlos is right and that patch will solve it for you | 10:04 |
*** yamamoto has joined #openstack-lbaas | 10:19 | |
*** yamamoto has quit IRC | 10:39 | |
*** yamamoto has joined #openstack-lbaas | 10:41 | |
*** yamamoto has quit IRC | 10:46 | |
*** yamamoto has joined #openstack-lbaas | 10:49 | |
*** yamamoto has quit IRC | 10:53 | |
*** pcaruana has quit IRC | 11:11 | |
*** maciejjozefczyk is now known as mjozefcz|lunch | 11:27 | |
*** yamamoto has joined #openstack-lbaas | 11:37 | |
*** pcaruana has joined #openstack-lbaas | 11:42 | |
cgoncalves | FYI, new releases: Queens 2.1.2, Rocky 3.2.0, Stein 4.1.0 and Train 5.0.0.0rc2. | 11:49 |
*** yamamoto has quit IRC | 12:02 | |
*** yamamoto has joined #openstack-lbaas | 12:03 | |
*** yamamoto has quit IRC | 12:03 | |
*** yamamoto has joined #openstack-lbaas | 12:04 | |
*** mjozefcz|lunch is now known as mjozefcz | 12:10 | |
*** yamamoto has quit IRC | 12:12 | |
*** goldyfruit_ has quit IRC | 12:21 | |
*** AustinR has quit IRC | 12:42 | |
*** AustinR has joined #openstack-lbaas | 12:43 | |
*** kklimonda has joined #openstack-lbaas | 12:50 | |
*** ramishra has quit IRC | 12:51 | |
kklimonda | can I find in debug logs the reason for LB status being changed to PENDING_UPDATE? I'm trying to pass `test_active_standby_vrrp_failover` test but it gets stuck while waiting for LB to change from PENDING_UPDATE->ACTIVE here: https://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/tests/act_stdby_scenario/v2/test_active_standby_iptables.py#L286 | 13:13 |
kklimonda | at the same time I get this in my logs: `Load balancer 64c96b26-6f92-458c-8711-2f697226614b is in immutable state PENDING_UPDATE. Skipping failover.` and it doesn't seem to recover | 13:14 |
*** Vorrtex has joined #openstack-lbaas | 13:23 | |
*** goldyfruit_ has joined #openstack-lbaas | 13:35 | |
*** ramishra has joined #openstack-lbaas | 13:47 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Fix placement of VRRP failover check https://review.opendev.org/687059 | 14:06 |
cgoncalves | kklimonda, ^ this might solve the issue | 14:06 |
*** jrosser has quit IRC | 14:14 | |
*** jrosser has joined #openstack-lbaas | 14:15 | |
*** baffle has quit IRC | 14:20 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Enable tempest jobs for stable/train https://review.opendev.org/686565 | 14:31 |
kklimonda | @cgoncalves thanks, that seems to fix the failure, but I don't understand why the LB is transitioning into PENDING_UPDATE and getting stuck there when we call check_members_balanced(). | 14:45 |
kklimonda | @cgoncalves is some change being triggered by one of the amphorae when the other amphora gets deleted? If so, why would it be stuck there? | 14:45 |
johnsom | PENDING_UPDATE means one of the controllers has ownership of the load balancer. It will stay that way until the controller completes the action or gives up retrying. | 14:48 |
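For anyone reproducing this outside the tempest test, one way to watch the state the waiter is blocked on is via the CLI (a sketch, assuming python-octaviaclient is installed and credentials are sourced; the UUID is the one from kklimonda's log above):

```shell
# Poll the provisioning status the tempest waiter is stuck on.
# It stays PENDING_UPDATE until the owning controller finishes or gives up.
openstack loadbalancer show 64c96b26-6f92-458c-8711-2f697226614b \
    -c provisioning_status -c operating_status
```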
*** yamamoto has joined #openstack-lbaas | 14:56 | |
*** yamamoto has quit IRC | 15:01 | |
*** AlexStaf has quit IRC | 15:17 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Remove act-stdby-iptables scenario jobs in Stein+ https://review.opendev.org/687086 | 15:51 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Extend active-standby scenario test coverage to CentOS 7 https://review.opendev.org/687087 | 15:51 |
*** ramishra has quit IRC | 15:52 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Extend active-standby scenario test coverage to CentOS 7 https://review.opendev.org/687087 | 15:53 |
*** rpittau is now known as rpittau|adk | 16:02 | |
*** rpittau|adk is now known as rpittau|afk | 16:02 | |
*** gcheresh has quit IRC | 16:05 | |
*** goldyfruit___ has joined #openstack-lbaas | 16:09 | |
*** goldyfruit_ has quit IRC | 16:11 | |
*** trident has quit IRC | 16:33 | |
*** baffle has joined #openstack-lbaas | 16:34 | |
*** trident has joined #openstack-lbaas | 16:37 | |
*** salmankhan has quit IRC | 16:43 | |
*** gcheresh has joined #openstack-lbaas | 17:21 | |
*** pcaruana has quit IRC | 17:21 | |
*** psachin has quit IRC | 17:22 | |
*** gcheresh has quit IRC | 17:36 | |
*** pcaruana has joined #openstack-lbaas | 18:00 | |
*** goldyfruit___ has quit IRC | 18:03 | |
*** goldyfruit has joined #openstack-lbaas | 18:06 | |
*** ricolin has quit IRC | 18:21 | |
*** mjozefcz has quit IRC | 18:36 | |
*** spartakos has joined #openstack-lbaas | 19:16 | |
*** gcheresh has joined #openstack-lbaas | 19:28 | |
*** pcaruana has quit IRC | 19:32 | |
*** gcheresh has quit IRC | 19:44 | |
*** spartakos has quit IRC | 19:50 | |
*** spartakos has joined #openstack-lbaas | 19:59 | |
*** spartakos has quit IRC | 20:02 | |
jrosser | would it be possible to produce stable branch test images? | 20:23 |
jrosser | or more specifically something with the v0.5 api in it | 20:23 |
*** yamamoto has joined #openstack-lbaas | 20:59 | |
*** yamamoto has quit IRC | 21:04 | |
*** goldyfruit_ has joined #openstack-lbaas | 21:05 | |
*** goldyfruit has quit IRC | 21:08 | |
*** goldyfruit_ has quit IRC | 21:22 | |
johnsom | Well, the amp api 0.5 is going away. It's not used in stein or train. Which branch are you looking for? Can't you just build the image you need? | 21:25 |
openstackgerrit | Michael Johnson proposed openstack/octavia-tempest-plugin master: Enable fail-fast on the gate queue https://review.opendev.org/682185 | 21:36 |
rm_work | jrosser: yeah you can build your own -- see also my patch: https://review.opendev.org/#/c/686227/ | 21:40 |
johnsom | Or the README file that lists the environment variables | 21:40 |
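As a rough sketch of the "build it yourself" option being suggested (assumes diskimage-builder and its qemu/debootstrap prerequisites are installed; see the diskimage-create README in the octavia repo for the environment variables johnsom mentions):

```shell
# Build an amphora image from an octavia checkout.
git clone https://opendev.org/openstack/octavia
cd octavia/diskimage-create
# -o names the output image file; branch/element behavior is controlled
# by the env vars documented in this directory's README.
./diskimage-create.sh -o amphora-x64-haproxy.qcow2
```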
rm_work | yeah | 21:42 |
rm_work | mnaser: https://review.opendev.org/#/c/683028/ | 21:42 |
rm_work | responded | 21:42 |
johnsom | rm_work FYI, I have a problem using that method for DIB. There is a cache that keeps old versions of the amphora-agent checked out that causes this method with git references to fail | 21:42 |
rm_work | really? i've never had that issue | 21:42 |
rm_work | I mean, this isn't really designed for the gate tho, it's more for people who just want it to "work" | 21:43 |
johnsom | Yeah, if there is a cached version that is older than the one checked out, it blows up. | 21:43 |
jrosser | rm_work: I was really meaning that could be picked up by other projects stable branch CI jobs | 21:43 |
johnsom | Probably because you blow away your devstack all the time... lol | 21:43 |
rm_work | well the first version of the patch used a tag/sha | 21:43 |
rm_work | would you prefer that? | 21:43 |
johnsom | The gates work fine with our devstack script as there is no cache | 21:43 |
rm_work | i mean, i use the same git repo checkout for like... a year | 21:44 |
rm_work | and i don't know what kind of cache you're talking about | 21:44 |
rm_work | if i have something checked out, then when i use that code, it gets the current SHA | 21:44 |
johnsom | I haven't booted up my VMs yet. I will get you the path when I do. | 21:44 |
rm_work | path? of what, the user's cloned octavia dir? | 21:45 |
johnsom | Yeah, it *should* use the local path, when the cache is out of date, but it doesn't, it just bombs | 21:45 |
rm_work | mine is /home/aharwell/workspace/octavia | 21:45 |
johnsom | cache path | 21:45 |
rm_work | what cache | 21:45 |
rm_work | like something in the .git dir? | 21:46 |
johnsom | dib or pip or something, I don't remember. I would have to get the path | 21:46 |
rm_work | this sets the repo-ref outside of DIB | 21:46 |
johnsom | Mine is in /opt/stack/.cache/<something>/<something>/<something> | 21:46 |
rm_work | it'd be the same as doing "git log" and then copying the sha and putting it in manually | 21:46 |
rm_work | i don't know what cache it'd be able to get? | 21:46 |
rm_work | it wouldn't work if you did that inside the *element* because that runs inside DIB | 21:47 |
johnsom | I know what it's doing, our devstack plugin has done it for over a year. I'm just saying it's broken. | 21:47 |
rm_work | if the directory has the correct version of octavia checked out for installation, then it *will* work | 21:48 |
cgoncalves | folks, there's a worrisome problem with our gates and devstack plugin. scenario-dsvm *stable* jobs are checking out python-octaviaclient and diskimage-builder from master. also, TBC our devstack plugin checks out DIB from master unless DISKIMAGE_BUILDER_REPO_REF is correctly set | 21:48 |
johnsom | What you put in "DIB_REPOLOCATION_amphora_agent" doesn't get used if it's in that "cache" | 21:48 |
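A sketch of the mechanism under discussion, assuming DIB's default cache location (`~/.cache/image-create`, matching the `/opt/stack/.cache/...` path johnsom saw); the checkout path is illustrative:

```shell
# Point DIB at a local amphora-agent checkout and pin the exact SHA,
# the same variables rm_work's patch and the devstack plugin set.
export DIB_REPOLOCATION_amphora_agent=/opt/stack/octavia
export DIB_REPOREF_amphora_agent=$(git -C "$DIB_REPOLOCATION_amphora_agent" rev-parse HEAD)

# If a stale cached checkout shadows the local repo (the failure mode
# described above), clear DIB's source-repositories cache and rebuild.
rm -rf ~/.cache/image-create/source-repositories
```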
rm_work | err | 21:48 |
cgoncalves | ah, octavia-lib from master on stable jobs too | 21:48 |
rm_work | then that's a problem whether it's set manually or not | 21:48 |
rm_work | because it's literally setting the same var | 21:48 |
cgoncalves | dsvm-api jobs are also checking out DIB master | 21:49 |
rm_work | cgoncalves: well DIB from master is expected but I don't know about the rest | 21:49 |
johnsom | Right. Both your patch and our devstack plugin are broken when you use those variables. That is my point. | 21:49 |
cgoncalves | is it really? it is in upper-constraints.txt | 21:49 |
johnsom | DIB is branchless isn't it | 21:49 |
rm_work | cgoncalves: we ran into too many issues with older DIB as other things break | 21:49 |
rm_work | yeah | 21:49 |
rm_work | we used to check it out from a SHA I think | 21:49 |
rm_work | but master is 1000% more reliable | 21:50 |
cgoncalves | I think the problem jrosser reported wednesday is very much related | 21:50 |
cgoncalves | rm_work, the problem is in *stable* branches | 21:50 |
jrosser | oh well I’m doing two things, fixing up my deployments where I had disk full | 21:50 |
jrosser | and now with my OSA hat on I am stuck on rocky and earlier, as I have no image to use | 21:51 |
jrosser | for CI | 21:51 |
jrosser | and it’s a fairly massive deal to modify the code to build its own image and port that all the way back to R | 21:52 |
cgoncalves | https://zuul.opendev.org/t/openstack/build/e66019aaea49491abfeabfb3fc4035d1/log/job-output.txt#184-185 | 21:52 |
cgoncalves | wrong link. hold on | 21:52 |
johnsom | jrosser it's one command to build the image. We do it for every gate run | 21:53 |
jrosser | I know, but we just don’t have that in our stable branches right now | 21:53 |
jrosser | and the dependencies are fairly extensive | 21:54 |
cgoncalves | hmm. erm. | 21:55 |
johnsom | Not really, it's like four packages and a requirements.txt file with a few things. It's pretty small. | 21:55 |
jrosser | we’d need a dedicated venv for it, and on a production deployment make sure it was delegated to the deploy host, it’s quite structural stuff tbh | 21:55 |
johnsom | You can also use the tox venv included "build" | 21:56 |
jrosser | johnsom: I honestly don’t know what to do, our rocky branches now point to the test image which doesn’t work any more | 22:00 |
johnsom | cgoncalves So, your topic here. DIB is branchless, so master is always used. Octavia client isn't used in the tests and works against our stable API, so should be fine to pull master. octavia-lib, that one could be a problem. | 22:01 |
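For the stable-branch pinning cgoncalves raises, a minimal devstack local.conf fragment (a sketch only: `DISKIMAGE_BUILDER_REPO_REF` is the variable named earlier in the log, `LIBS_FROM_GIT` is standard devstack, and the tag value is purely illustrative):

```shell
[[local|localrc]]
# Pin branchless diskimage-builder instead of tracking master.
DISKIMAGE_BUILDER_REPO_REF=2.30.0  # illustrative tag, not a recommendation
# Install octavia-lib from git so the job's checked-out branch is used
# instead of the release on PyPI.
LIBS_FROM_GIT=octavia-lib
```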
johnsom | jrosser, which jobs are these? | 22:01 |
jrosser | any patch to openstack-ansible-os_octavia on branches that the prebuilt test amphora is not good for | 22:02 |
johnsom | Can't you just set octavia_download_artefact false? | 22:02 |
*** trident has quit IRC | 22:03 | |
jrosser | it does actually do some kind of proper testing :) | 22:03 |
jrosser | it checks the LB comes up | 22:03 |
johnsom | Glad to hear, not too long ago the OSA jobs were.... "basic" | 22:03 |
jrosser | anyway I need to !computer for today | 22:03 |
jrosser | I've been trying to make them better :) | 22:04 |
*** trident has joined #openstack-lbaas | 22:04 | |
johnsom | Looks like the ability for OSA to build an image was removed here: https://github.com/openstack/openstack-ansible-os_octavia/commit/95eee6bc11e97105cb0356d7475bee699d404bda#diff-a773177bc05ddc4d71a73e6ed770eebf | 22:09 |
*** goldyfruit_ has joined #openstack-lbaas | 22:10 | |
*** spatel has joined #openstack-lbaas | 22:23 | |
*** spatel has quit IRC | 22:27 | |
johnsom | Yay, only 200 more e-mails to catch up on.... | 22:47 |
openstackgerrit | Michael Johnson proposed openstack/octavia-tempest-plugin master: [train][goal] Define new 'octavia-v2-dsvm-noop-api-ipv6-only' job https://review.opendev.org/682726 | 22:50 |
schaney | Hey Octavia team! I have a fairly specific question about Octavia's ORM (related to https://storyboard.openstack.org/#!/story/2002907): looking here, https://github.com/openstack/octavia/blob/master/octavia/db/models.py#L315 why is there a relationship to "LoadBalancer" when only the load_balancer_id or pool.load_balancer.id is referenced? I'm trying to see if I can save on the subquery when the loadbalancers field is specified on the pools query | 23:02 |
johnsom | schaney Hi. Give me a minute to refresh my memory here. | 23:03 |
schaney | cool, thanks! | 23:04 |
schaney | from what I can tell, pools only support a single LB, but it looks like some of the API code was built to support multiple? | 23:06 |
johnsom | Ok, so I'm not 100% sure I understand your question, so guide me if I don't answer it. In the Octavia data model we have a concept of hierarchy, where the LB is the top parent. Pools and listeners are children of the LB parent. This is because pools can not only be children of listeners, but also shared with L7 policies. | 23:06 |
johnsom | LBs can have one or more pools | 23:07 |
schaney | not the other way though, correct? Pools only have 1 LB? | 23:08 |
*** rcernin has joined #openstack-lbaas | 23:08 | |
johnsom | Correct, there can only be one LB. (it was proposed to have more, but we decided that is out of scope) | 23:09 |
johnsom | schaney See my slide here: https://youtu.be/BBgP3_qhJ00?t=1465 | 23:09 |
johnsom | The part that isn't shown there is that the pool is always parented by the LB. | 23:10 |
johnsom | Because both pools on the diagram could be the same pool. | 23:10 |
johnsom | This ERD may or may not be helpful for you as well: https://docs.openstack.org/octavia/latest/_images/erd.svg | 23:11 |
schaney | ok gotcha, that part makes sense, thanks! I am also wondering why the model has the full relationship (pool has the relationship to loadbalancer) when it also has the load_balancer_id? | 23:12 |
johnsom | It's a bit hard to read if you aren't used to those however.... | 23:12 |
johnsom | Yeah, that is probably due to the organic history there..... | 23:13 |
schaney | ah, ok =) | 23:13 |
*** Vorrtex has quit IRC | 23:13 | |
johnsom | It does make it nice for quick queries though | 23:13 |
johnsom | So, if you don't need to pull in the LB for your query, use the join override option. | 23:14 |
johnsom | schaney Example: https://github.com/openstack/octavia/blob/master/octavia/db/repositories.py#L768 | 23:16 |
johnsom | That "noload" causes the ORM to not bother with the related tables that aren't listed in the subqueryload. | 23:16 |
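A minimal, self-contained sketch of the loading strategy being described, using plain SQLAlchemy against an in-memory SQLite DB. The model names mirror Octavia's but are simplified stand-ins, not the real octavia.db.models classes:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (declarative_base, noload, relationship,
                            sessionmaker)

Base = declarative_base()

class LoadBalancer(Base):
    __tablename__ = "load_balancer"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Pool(Base):
    __tablename__ = "pool"
    id = Column(Integer, primary_key=True)
    load_balancer_id = Column(Integer, ForeignKey("load_balancer.id"))
    # The relationship schaney asked about: convenient for quick queries,
    # but it costs an extra (sub)query if it is eagerly loaded.
    load_balancer = relationship("LoadBalancer")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(LoadBalancer(id=1, name="lb1"))
session.add(Pool(id=1, load_balancer_id=1))
session.commit()
session.expunge_all()  # force a fresh load on the next query

# noload("*") tells the ORM to skip every relationship; the plain FK
# column is still populated, so load_balancer_id is available without
# ever touching the load_balancer table.
pool = session.query(Pool).options(noload("*")).first()
print(pool.load_balancer_id)  # 1
print(pool.load_balancer)     # None (relationship not loaded)
```

This is exactly the trade-off in the exchange above: when only `loadbalancer.id` is needed, the FK column on the pool row makes the eager load of the full relationship unnecessary.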
*** goldyfruit_ has quit IRC | 23:20 | |
schaney | cool cool, I have been looking at adding a load_only('column') and then a subqueryload('relationship') for whichever relationships the requested fields call for. For the specific case of Pool, I think it could make sense to not subqueryload('LoadBalancer'), since only loadbalancer.id is ever needed and the pool table already has it | 23:20 |
johnsom | Yep, makes sense | 23:21 |
schaney | ok, thank you for the info! | 23:21 |
*** AustinR has quit IRC | 23:53 | |
*** AustinR has joined #openstack-lbaas | 23:54 | |
*** goldyfruit_ has joined #openstack-lbaas | 23:56 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!