Wednesday, 2025-01-15

[01:04] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add noble to molecule testing  https://review.opendev.org/c/openstack/openstack-ansible/+/939306
[01:07] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Move zuul preparation for role/collection bootstrap  https://review.opendev.org/c/openstack/openstack-ansible/+/939151
[01:07] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Optimize generation of required roles/collections  https://review.opendev.org/c/openstack/openstack-ansible/+/939221
[01:09] <noonedeadpunk> so now https://zuul.opendev.org/t/openstack/build/d1f8413286d9499e813b8c5e99392e8d fails very differently at least
[01:09] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Pin ansible-compat up to 25.0.0  https://review.opendev.org/c/openstack/openstack-ansible/+/939274
[01:10] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add httpd role to a-r-r  https://review.opendev.org/c/openstack/openstack-ansible/+/938271
[01:10] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Molecule to respect depends-on for test-requirements update  https://review.opendev.org/c/openstack/openstack-ansible/+/939290
[01:10] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add noble to molecule testing  https://review.opendev.org/c/openstack/openstack-ansible/+/939306
[01:12] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Ensure repo_server proper username  https://review.opendev.org/c/openstack/openstack-ansible/+/938275
[01:15] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Ensure repo_server proper username  https://review.opendev.org/c/openstack/openstack-ansible/+/938275
[01:16] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Return upgrade jobs back to voting  https://review.opendev.org/c/openstack/openstack-ansible/+/939307
[08:04] <noonedeadpunk> sweet - https://zuul.opendev.org/t/openstack/build/438766d363fe4fd09d2fb16b4fd8552f
[08:04] <noonedeadpunk> noble testing, finally
[08:10] <noonedeadpunk> also it seems that the issue was indeed a circular dependency for the repo server httpd role: https://review.opendev.org/c/openstack/openstack-ansible/+/939307
[08:10] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Use standalone httpd role  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938297
[08:31] <jrosser> oh wow you did the molecule requirements file already \o/
[08:32] <noonedeadpunk> I had a bad sleep tonight
[08:43] <noonedeadpunk> I'm also a bit confused about why we're getting 503 for our cached constraints on the repo server even with https://review.opendev.org/c/openstack/openstack-ansible/+/939054?
[08:43] <jrosser> oh i see `zuul@172.18.0.2` in molecule, that needs forcing to be root
[08:43] <noonedeadpunk> yeah
[08:44] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-lxc_container_create master: Re-introduce functional tests with molecule  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/939257
[08:46] <noonedeadpunk> I wonder if we should set OOMScoreAdjust for apache to prevent this. But then it makes sense to do that only with the httpd role, I guess, as otherwise it's gonna be very annoying to achieve
[08:47] <noonedeadpunk> (at least for CI only)
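The OOMScoreAdjust idea above could be done as a systemd drop-in templated by the httpd role. A minimal sketch, assuming a CI-only flag and that a handler elsewhere does `systemctl daemon-reload` plus an apache restart (the task names, file path, and score value are illustrative, not from any merged patch):

```yaml
# Hypothetical sketch: make the kernel OOM killer prefer other processes
# over apache2 by dropping a systemd override in place. A negative
# OOMScoreAdjust lowers apache's badness score relative to other processes.
- name: Create apache2 systemd drop-in directory
  ansible.builtin.file:
    path: /etc/systemd/system/apache2.service.d
    state: directory
    mode: "0755"

- name: Lower apache2 OOM kill score via systemd override
  ansible.builtin.copy:
    dest: /etc/systemd/system/apache2.service.d/oomscore.conf
    mode: "0644"
    content: |
      [Service]
      OOMScoreAdjust=-500
```

A `systemctl daemon-reload` and service restart would still be needed for the override to take effect.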
[08:47] <jrosser> do you see OOM for that failure?
[08:47] <noonedeadpunk> I don't, but it's sometimes not obvious in ubuntu where to look for it
[08:47] <noonedeadpunk> but also what else could it be....
[08:50] <noonedeadpunk> also like - `Jan 13 10:23:26 aio1 apache2: 172.29.236.100 - - [13/Jan/2025:10:23:26 +0000] "HEAD /healthcheck HTTP/1.0" 204 312 "-" "osa-haproxy-healthcheck"` and then the 503 is at 10:23:23
[08:51] <noonedeadpunk> https://zuul.opendev.org/t/openstack/build/94c00dc5f3c243609f798b18ffb531ed/log/logs/host/syslog.txt#68734-68748
[08:51] <noonedeadpunk> oh
[08:51] <noonedeadpunk> so it's actually a race condition
[08:52] <jrosser> we don't have some restart vs reload error somewhere?
[08:53] <jrosser> because we shouldn't need to actually restart it perhaps
[08:53] <noonedeadpunk> so this error is at the exact same time when u-c is fetched: https://zuul.opendev.org/t/openstack/build/94c00dc5f3c243609f798b18ffb531ed/log/job-output.txt#11110-11111
[08:53] <noonedeadpunk> there's only a haproxy reload around that time
[08:54] <noonedeadpunk> so I'd stick with OOM :)
[08:55] <jrosser> well
[08:55] <jrosser> the line immediately before https://zuul.opendev.org/t/openstack/build/94c00dc5f3c243609f798b18ffb531ed/log/logs/host/syslog.txt#68734-68748
[08:56] <jrosser> `Jan 13 10:23:21 aio1 libapache2-mod-wsgi-py3: apache2_invoke: Enable module wsgi`
[08:56] <jrosser> that's pretty suspicious, no?
[08:57] <noonedeadpunk> So you mean that during package installation, apt does restart the service
[08:57] <noonedeadpunk> as its timestamp is around `Install distro packages`
[08:57] <jrosser> well i'm not sure - is that the point the package is installed?
[08:57] <noonedeadpunk> and - https://opendev.org/openstack/openstack-ansible-os_horizon/src/branch/master/vars/debian.yml#L31
[08:57] <noonedeadpunk> so wsgi is something installed
[08:58] <noonedeadpunk> I would not expect it to be enabled on installation
[09:00] <jrosser> ok so this is all in the current world with many things competing for managing apache
[09:02] <jrosser> with the new role, could we make it such that `libapache2-mod-wsgi-py3` is installed up front the first time the role is run? then there should be no need to restart apache for that when we get to the horizon role
[09:03] <jrosser> this seems to be a problem specific to metal jobs where all use of apache is collapsed onto the same instance
[09:03] <jrosser> it would not have this race for an lxc job
[10:07] <noonedeadpunk> I think it depends...
[10:07] <noonedeadpunk> We actually can prevent it from restarting during package installation
[10:07] <noonedeadpunk> like we do for... rabbitmq?
[10:09] <jrosser> well - i was just seeing that we don't have a patch yet to make horizon use the new apache role
[10:09] <jrosser> and perhaps once that is done it makes this issue just go away?
[10:09] <noonedeadpunk> yeah, could be for sure
[10:10] <noonedeadpunk> I'm actually not 100% sure on how to handle extra module installation. I was thinking that roles that need extra modules will supply a list, and they will be installed
[10:10] <jrosser> if the first time apache is deployed, it knows that module is needed somehow, then it's there and ready when we get to the horizon role so no restart is necessary
[10:10] <noonedeadpunk> but that's not yet fully implemented in the httpd role
[10:11] <noonedeadpunk> like - do we want to install all oidc modules regardless, is the question
[10:11] <jrosser> that sounds like a thing perhaps for group_vars rather than putting it in the role
[10:12] <jrosser> then we can gather some variables by prefix httpd_apache_modules_* or whatever
[10:12] <jrosser> and then it would work out correctly with only the necessary modules
[10:12] <noonedeadpunk> I made a `httpd_extra_modules` and was thinking to pass the list to extend during httpd role includes
[10:12] <jrosser> i.e. just the ones needed for horizon in an lxc horizon container
[10:13] <noonedeadpunk> i.e. L20 https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938297/2/tasks/skyline_apache.yml
[10:14] <noonedeadpunk> I think we can just disable service restarts during module installation, like we do in some other places
[10:14] <noonedeadpunk> but of course we can probably do that through a vars lookup as well
[10:14] <jrosser> and we have a handler to do it in a controlled way?
[10:26] * noonedeadpunk trying to find
[10:29] <noonedeadpunk> ah, this thing: https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/b09ac39cbca11f7a5a14731e583246fe6a6c420e/tasks/rabbitmq_upgrade_prep.yml#L16-L24
[10:32] <noonedeadpunk> so in the role we can add policy-rc.d, install the extra modules, drop the file, then execute the handler
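The four steps just described follow the Debian policy-rc.d mechanism linked above from the rabbitmq_server role: while `/usr/sbin/policy-rc.d` exists and exits 101, `invoke-rc.d` refuses to start or restart services during package installation. A sketch of how that could look in the httpd role (the variable `httpd_extra_module_packages` and handler name are assumptions for illustration):

```yaml
# Sketch: block service restarts while installing apache module packages,
# then restart once, in a controlled way, via a handler.
- name: Prevent service restarts during apache module installation
  ansible.builtin.copy:
    dest: /usr/sbin/policy-rc.d
    mode: "0755"
    content: |
      #!/bin/sh
      # exit 101 tells invoke-rc.d: "action not allowed"
      exit 101

- name: Install extra apache module packages
  ansible.builtin.package:
    name: "{{ httpd_extra_module_packages }}"  # hypothetical variable
    state: present

- name: Remove the restart-blocking policy file
  ansible.builtin.file:
    path: /usr/sbin/policy-rc.d
    state: absent
  notify: Restart apache2  # assumed handler, draining backends first if needed
```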
[10:40] <noonedeadpunk> but of course we can do as you say, collecting all required modules based on groups
[10:40] <jrosser> we'd just have to be sure that idea works
[10:41] <jrosser> as in the play you'd be targeting repo_all or something first
[10:41] <noonedeadpunk> yup
[10:41] <jrosser> and would want to also have all the vars for the other services in scope, in a metal AIO
[10:41] <noonedeadpunk> so it has to be in extra-vars
[10:41] <jrosser> and i'm not sure if they would be
[10:41] <noonedeadpunk> oh well.
[10:42] <jrosser> yeah, so that would not be so helpful for LXC i suppose
[10:42] <jrosser> as we'd get all the modules everywhere, which is not ideal
[10:42] <jrosser> so it does sound simpler to do like we did with rabbitmq then
[10:43] <noonedeadpunk> so we can place all httpd vars in group_vars/all, and then something like `httpd_extra_modules_skyline: "{{ groups['skyline_all'] | ternary(skyline_modules, []) }}"`
[10:43] <noonedeadpunk> but it adds complexity, sure
[10:45] <noonedeadpunk> as then, outside of say the keystone role, we have to know its decision choices
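The group_vars/all idea above could be sketched roughly like this, with each consumer service contributing its module list only when its group exists; all variable names besides `httpd_extra_modules` are hypothetical, and this is the idea as discussed, not a merged implementation:

```yaml
# Hypothetical group_vars/all sketch: per-service module lists gated on
# group membership, merged into the single list the httpd role consumes.
httpd_extra_modules_skyline: "{{ (groups['skyline_all'] | default([])) | ternary(skyline_modules | default([]), []) }}"
httpd_extra_modules_horizon: "{{ (groups['horizon_all'] | default([])) | ternary(['wsgi'], []) }}"
httpd_extra_modules: "{{ (httpd_extra_modules_skyline + httpd_extra_modules_horizon) | unique }}"
```

The complexity noted above is visible here: group_vars/all has to encode each role's module choices (e.g. which oidc module keystone picks) outside that role.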
[10:46] <jrosser> there is likely still some leftover race condition of restarting apache -> haproxy health check -> haproxy ready to serve
[10:46] <jrosser> vs. whatever tasks come next
[10:46] <noonedeadpunk> so I tried to do reloads instead of restarts on vhost changes
[10:46] <noonedeadpunk> and reloads should be less disruptive
[10:47] <noonedeadpunk> though still, in the case of Django, they will be disruptive
[10:50] <jrosser> perhaps another way in group vars is to use is_metal for the services with troublesome apache modules to decide if they get enabled globally or not
[10:51] <noonedeadpunk> well.... dunno... what if each service is deployed on a separate VM? So it's kinda metal, but in that case the logic won't be great
[10:51] <jrosser> true
[10:55] <noonedeadpunk> I just frankly didn't know that apache would do that. I was under the impression that on ubuntu all extra installed modules have to be explicitly enabled first
[10:56] <noonedeadpunk> through a2enmod
[10:56] <noonedeadpunk> so that module installation triggers a restart in the meantime was a bit of a surprise
[10:57] <noonedeadpunk> btw, skyline is passing now with a proper depends-on: https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938297
[10:57] <jrosser> we could intersect the current host with the skyline/repo/keystone/horizon host groups
[10:58] <jrosser> and then retrieve all the modules via something derived from `{{ hostvars[groups['horizon_all'][0]]['httpd_extra_modules'] }}`
[10:58] <jrosser> and build a list of modules for all the things using apache on the current host
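That intersect-and-gather idea could be sketched as a task in the httpd role; the group list, fact name, and the assumption that each consumer group's first host carries `httpd_extra_modules` are all illustrative:

```yaml
# Sketch: for each apache-consuming group this host belongs to, pull that
# group's module list from hostvars and merge into one de-duplicated list.
- name: Build module list for all apache consumers on this host
  ansible.builtin.set_fact:
    httpd_all_modules: >-
      {{ (httpd_all_modules | default([])
          + (hostvars[groups[item][0]]['httpd_extra_modules'] | default([])))
         | unique }}
  loop: "{{ ['horizon_all', 'keystone_all', 'skyline_all', 'repo_all'] | intersect(group_names) }}"
```

In an LXC deploy the intersection would only match the one group the container belongs to, so only that service's modules get installed; on metal it would match every collapsed service.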
[10:59] <noonedeadpunk> I guess the main question is - if it's worth it
[11:00] <jrosser> that's a difficult question
[11:01] <jrosser> because for real deployments it's really not much of an issue
[11:01] <noonedeadpunk> as having policy-rc.d is still worth it in case of any changes to any of the roles
[11:01] <jrosser> but in CI we have to have a super-high level of reliability that's otherwise unnecessary
[11:02] <noonedeadpunk> I think we still need to ensure that apache is not restarted when it's not expected. As for production deployments - keystone depends on this as well
[11:02] <noonedeadpunk> ah, but yes, the backend should be disabled I assume for production
[11:03] <jrosser> well perhaps, but in a real metal deploy there could still be a shared apache
[11:03] <noonedeadpunk> true
[11:03] <jrosser> and you could get an unexpected restart from maybe horizon, restarting * services
[11:03] <noonedeadpunk> yeah
[11:04] <jrosser> so there are different concerns maybe - we need to protect against any 503 type situations in CI for job reliability
[11:04] <jrosser> and then we also need to protect against unexpected restarts of apache for production metal deploys
[11:04] <noonedeadpunk> I'd also need to check if a restart is actually needed for new module installation. I guess it is, and just a reload won't pick up a new module
[11:04] <jrosser> ^ so this is an argument to get the modules installed up-front, for production
[11:05] <noonedeadpunk> yeah
[11:05] <jrosser> otherwise you may never have a way to deploy horizon without breaking keystone
[11:05] <noonedeadpunk> and then you don't disable/drain the keystone backend either, so it just dies on you
[11:06] <jrosser> yes that's an excellent thing to verify, if a restart is actually needed for new modules
[11:06] <noonedeadpunk> ok, fair point
[11:11] <noonedeadpunk> for now I'm thinking to add retries to the task :D
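Conceptually, the retries approach proposed for the u-c fetch (patch 939330 below) looks like the standard Ansible retry pattern; the task and variable names here are illustrative, not the patch verbatim:

```yaml
# Sketch: retry the upper-constraints download so a transient repo-server
# 503 (e.g. during an apache restart) does not fail the whole job.
- name: Retrieve upper constraints file
  ansible.builtin.get_url:
    url: "{{ horizon_upper_constraints_url }}"  # hypothetical variable
    dest: /opt/horizon-constraints.txt
  register: _uc_fetch
  retries: 5
  delay: 10
  until: _uc_fetch is success
```

This only papers over the race rather than fixing it, but it is safely backportable to stable branches.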
[11:14] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/939330
[11:14] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/939330
[11:16] <noonedeadpunk> this should be backportable and solve the CI issue at least
[12:42] <opendevreview> Andrew Bonney proposed openstack/openstack-ansible stable/2024.2: Fix inventory adjustment for legacy container naming  https://review.opendev.org/c/openstack/openstack-ansible/+/939341
[12:43] <opendevreview> Andrew Bonney proposed openstack/openstack-ansible stable/2024.1: Fix inventory adjustment for legacy container naming  https://review.opendev.org/c/openstack/openstack-ansible/+/939342
[12:50] <jrosser> hmmm 2024.1 CI is still not running the expected jobs
[12:53] <jrosser> and how does the linter pass for check and fail for gate here? https://review.opendev.org/c/openstack/openstack-ansible/+/939070?tab=change-view-tab-header-zuul-results-summary
[12:53] <jrosser> that does not make much sense
[13:06] <andrewbonney> For CI I think https://review.opendev.org/c/openstack/openstack-ansible/+/939070 needs to merge
[13:24] <mgariepy> why does only gate have this one? args[module]: Unsupported parameters for (basic.py)
[13:29] <mgariepy> failed: installed ansible-compat-25.0.0 ansible-core-2.17.7 ansible-lint-6.19.0
[13:29] <mgariepy> success: installed ansible-compat-24.10.0 ansible-lint-6.19.0
[13:29] <mgariepy> not the same test - why is the pkg selection not the same?
[13:31] <jrosser> mgariepy: which job is that?
[13:31] <mgariepy> gate vs not-gate linter
[13:31] <mgariepy> the one you posted 40 minutes ago :D
[13:32] <jrosser> oooh right
[13:32] <mgariepy> unless something merged between the runs?
[13:33] <jrosser> well also ansible-compat 25 was released yesterday
[13:35] <mgariepy> ha
[13:35] <mgariepy> ha. the new check has also failed, i guess it's the new version of ansible-compat.
[13:36] <mgariepy> yep, consistent error.
[13:36] <mgariepy> https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68
[13:37] <jrosser> i think that it's `args[module]: Unsupported parameters for (basic.py) module: get_md5` in os_swift
[13:39] <jrosser> which is fixed here https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/922283
[13:42] <jrosser> ok this is certainly a problem https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68/log/job-output.txt#1889-1898
[13:46] <jrosser> noonedeadpunk: ^ what seems to happen there is we bootstrap-ansible (correct versions of ansible) in a zuul pre job, and then later gate-check-commit.sh does it again, installing later/incorrect versions for the branch
[14:05] <jrosser> it's as if this has picked up the master branch version of test-requirements.txt https://opendev.org/openstack/openstack-ansible/src/branch/stable/2024.1/scripts/gate-check-commit.sh#L129-L132
[14:26] <alvinstarr> Is there a way to set the IP addresses on the public endpoints?
[14:26] <alvinstarr> The default seems to be the external_lb_vip_address, which in my case is behind a firewall.
[14:26] <alvinstarr> Currently I am updating the public URLs after the playbook has finished.
[14:33] <jrosser> alvinstarr: you can look in the service role defaults, like for glance here https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/main.yml#L157
[14:33] <jrosser> and https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/defaults/main.yml#L198
[14:34] <jrosser> you can override the value of that in your user_variables.yml
[14:35] <jrosser> so if you set that var for the services that you need, the public endpoints in the service catalog can be set to whatever you need
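Following that advice, a user_variables.yml override could look roughly like this (the hostnames are placeholders; check each role's defaults file for the exact variable name and default port before relying on these):

```yaml
# Example user_variables.yml overrides for public catalog endpoints,
# one variable per service role (glance shown; repeat per service).
glance_service_publicurl: "https://public.example.com:9292"
keystone_service_publicurl: "https://public.example.com:5000"
```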
[14:39] <alvinstarr> Thanks.
[14:39] <alvinstarr> Is it safe to assume that it will always be xxx_service_publicurl?
[14:41] <jrosser> anything in one of the role defaults files should be stable; the intention of all of those variables is that they are exposed for you to customise the deployment
[14:42] <jrosser> if they ever get changed there will be a release note (which you should be reading anyway at upgrade time :) )
[14:58] <noonedeadpunk> jrosser: um. so gate-check-commit does try to install from test-requirements.txt there
[14:59] <jrosser> yes it does, the code looks right
[14:59] <jrosser> what looks wrong is that it installs a test-requirements file that seems to be master?
[14:59] <noonedeadpunk> about bootstrap - it's a bit arguable I guess if it should happen at all
[14:59] <noonedeadpunk> the command is - /opt/ansible-runtime/bin/pip install --isolated --index-url http://mirror.sjc3.raxflex.opendev.org/pypi/simple --trusted-host mirror.sjc3.raxflex.opendev.org --extra-index-url http://mirror.sjc3.raxflex.opendev.org/wheel/ubuntu-22.04-x86_64 -r /home/zuul/src/opendev.org/openstack/openstack-ansible/test-requirements.txt
[14:59] <jrosser> it gets the exact version of ansible-core that we just committed to master for the molecule stuff
[15:00] <noonedeadpunk> ansible-lint-6.19.0 - I think it's for 2024.1?
[15:00] <jrosser> on 2024.1 there is no ansible-core defined in test-requirements.txt
[15:00] <noonedeadpunk> ansible-lint==24.12.2 on master
[15:00] <noonedeadpunk> we just don't specify anything in 2024.1 for ansible-core
[15:00] <jrosser> ok, well perhaps the first thing to look at is this https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68/log/job-output.txt#1889-1898
[15:00] <noonedeadpunk> so it gets latest
[15:01] <noonedeadpunk> what I see there is - `ansible-lint-6.19.0`
[15:01] <noonedeadpunk> which points to https://opendev.org/openstack/openstack-ansible/src/branch/stable/2024.1/test-requirements.txt#L14
[15:01] <noonedeadpunk> why does it drop existing ansible....
[15:01] <jrosser> that should not be happening: uninstall ansible-core-2.15.9 and install ansible-core-2.17.7
[15:02] <noonedeadpunk> I don't see anything in the ansible-lint requirements that would do that: https://github.com/ansible/ansible-lint/blob/v6.19.0/.config/requirements.in#L3-L4
[15:03] <noonedeadpunk> so while I agree, I'm not sure why this is actually happening
[15:03] <jrosser> actually i was wrong
[15:04] <noonedeadpunk> an easy thing to do is to define ansible-core in test-requirements
[15:04] <jrosser> we have 2.17.5 in test-requirements on master
[15:04] <jrosser> not 2.17.7
[15:04] <noonedeadpunk> yeah, so it gets latest
[15:04] <jrosser> no, that would be 2.18.1
[15:05] <noonedeadpunk> but 2.18 is not compatible with python 3.8?
[15:05] <noonedeadpunk> so latest for jammy is 2.17
[15:05] <jrosser> ah right
[15:06] <jrosser> ok so here https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68/log/job-output.txt#1809-1810
[15:07] <jrosser> so is it that we are not constraining ansible-core in test-requirements.txt to match what scripts/bootstrap-ansible.sh defines as the version for that branch?
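The fix being converged on is to pin ansible-core in each branch's test-requirements.txt so the linter venv matches what scripts/bootstrap-ansible.sh installs for that branch, rather than letting pip resolve to the newest compatible release. On master, per the versions quoted above, that fragment would look something like:

```
# test-requirements.txt fragment (master): pin ansible-core to the same
# version bootstrap-ansible.sh installs, so linter and deploy venvs agree.
ansible-core==2.17.5
ansible-lint==24.12.2
```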
[15:13] <lowercase> Hey guys, working on an issue. Rabbitmq has fallen behind and I'm looking to push it forward. The course of action that has been decided is to stand up a second set of rabbitmq containers, provision them, and then flip over. The issue with this approach is that currently I'm not able to double-list rabbit servers. Do you guys have any suggestions?
[15:14] <lowercase> https://paste.centos.org/view/c839108d
[15:18] <lowercase> rabbitmq servers are hosted on 3 dedicated servers, which have 3 rabbitmq lxc containers. I would like to increase it to 6, and then delete the original three after I have moved all the openstack services to the new rabbitmq servers.
[15:18] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible stable/2024.2: Use the same version of ansible for linters as is used for deployments.  https://review.opendev.org/c/openstack/openstack-ansible/+/939356
[15:19] <jrosser> lowercase: is there a reason you are not treating it like an OSA major version upgrade would, which also moves forward the version of the rabbitmq install?
[15:20] <jrosser> regarding your paste, that would add extra nodes to the existing rabbitmq cluster, which does not sound like what you describe
[15:20] <lowercase> I looked at the rabbitmq upgrade procedure. Rabbitmq recommends a step upgrade. I need to go from 3.7.7 to 3.7.18, then 3.8.x, then 3.9.x. All the way to 3.14
[15:21] <jrosser> well, you know, we test this pretty robustly in our CI, and we do not follow that approach
[15:22] <lowercase> In your experience, what upgrade path would you follow?
[15:22] <jrosser> this must be a very very old deployment?
[15:22] <lowercase> very.
[15:23] <lowercase> Everything is currently 2023.1, except for rabbit and neutron.
[15:23] <lowercase> neutron is because we are memory bound and awaiting new hardware.
[15:26] <noonedeadpunk> jrosser: to be fair - we hardly ever jumped between many rabbitmq releases at a time
[15:26] <noonedeadpunk> I kinda wonder if it's even easier to just roll out a completely new rabbitmq cluster and re-configure services to use it....
[15:27] <lowercase> That is our current thought as well
[15:27] <lowercase> my original question was around how to accomplish that in the openstack_inventory.
[15:27] <noonedeadpunk> just drop the existing containers from openstack_inventory.json I guess
[15:28] <lowercase> oh that will work?
[15:28] <noonedeadpunk> and then running the playbooks will generate a complete new set of containers
[15:28] <lowercase> one sec
[15:28] <noonedeadpunk> and configure services to use them
[15:28] <noonedeadpunk> you'd need to manually destroy the old containers once the migration is done
[15:30] <jrosser> ^ that would do it
[15:31] <jrosser> take a backup of the inventory first of course
[15:31] <lowercase> sure did.
[15:31] <lowercase> cause whatever edit i just did bombed the inventory script
[15:33] <noonedeadpunk> lowercase: you could run /opt/openstack-ansible/inventory-manage.py -r <rabbitmq_container_name>
[15:38] <lowercase> this looks like it's doing the trick
[15:46] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible stable/2024.1: Use the same version of ansible for linters as is used for deployments.  https://review.opendev.org/c/openstack/openstack-ansible/+/939362
[15:46] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible stable/2024.1: Use the same version of ansible for linters as is used for deployments.  https://review.opendev.org/c/openstack/openstack-ansible/+/939362
[15:47] <jrosser> oh lol what a mess /o\
[15:47] * jrosser steps away for fresh air
[15:52] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible stable/2024.1: Remove senlin/sahara/murano roles from required project  https://review.opendev.org/c/openstack/openstack-ansible/+/939070
[16:15] <noonedeadpunk> I'd guess that the main issue is not ansible-core
[16:15] <noonedeadpunk> but ansible-compat
[16:16] <jrosser> the job is just running for 2024.1 so we shall see
[16:16] <jrosser> but 2024.2 is a bit wtf as it doesn't run the linter job at all
[16:17] <noonedeadpunk> true....
[16:18] <jrosser> maybe it's missing from the job template or something obvious like that
[16:18] <jrosser> i've probably been looking for too-complicated reasons
[16:19] <jrosser> right, good - linters are passing on https://review.opendev.org/c/openstack/openstack-ansible/+/939070 now
[16:23] <noonedeadpunk> actually I think linters are not triggered every time
[16:23] <jrosser> i think we need to backport this to 2024.2 https://review.opendev.org/c/openstack/openstack-ansible/+/938216
[16:23] <noonedeadpunk> ok, it's not true https://opendev.org/openstack/openstack-ansible/src/branch/stable/2024.2/zuul.d/jobs.yaml#L348
[16:23] <jrosser> the job definition is wrong
[16:23] <noonedeadpunk> ah
[16:24] <jrosser> i mean the template definition includes the jammy linter job, not noble
[16:24] <noonedeadpunk> partially backport, I'd say
[16:24] <noonedeadpunk> though ansible-lint==24.12.2 was quite fine
[16:25] <jrosser> so which bit not to backport?
[16:27] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible stable/2024.2: Update test-requirements  https://review.opendev.org/c/openstack/openstack-ansible/+/939365
[16:27] <jrosser> ^ that's all of it cherry-picked, we can edit as needed
[16:41] <jrosser> linters have run and passed for 939365 too, it didn't change the ansible-core version even though we don't define it in test-requirements.txt
[16:45] <noonedeadpunk> ok, nice then
[16:46] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible stable/2024.2: Use the same version of ansible for linters as is used for deployments.  https://review.opendev.org/c/openstack/openstack-ansible/+/939356
[16:47] <jrosser> ^ then rebase this on top, and it means the version should not change in future
[16:48] <noonedeadpunk> seems a backport is also needed to 2023.2
[16:49] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible master: Bump ansible-core to 2.17.7  https://review.opendev.org/c/openstack/openstack-ansible/+/939368
[16:49] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.2: Use the same version of ansible for linters as is used for deployments.  https://review.opendev.org/c/openstack/openstack-ansible/+/939369
[16:50] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.2: Use the same version of ansible for linters as is used for deployments.  https://review.opendev.org/c/openstack/openstack-ansible/+/939369
[17:07] -opendevstatus- NOTICE: The paste service at paste.opendev.org will have a short (15-20 minute) outage momentarily to replace the underlying server.
[17:18] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible master: Molecule to respect depends-on for test-requirements update  https://review.opendev.org/c/openstack/openstack-ansible/+/939290
[17:18] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add noble to molecule testing  https://review.opendev.org/c/openstack/openstack-ansible/+/939306
[17:19] <jrosser> introduced a depends-on 939330 to those ^ as it's already failed the horizon u-c fetch once
[17:22] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Implement TLS backend coverage for Skyline  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938298
[17:29] <jrosser> noonedeadpunk: should we combine https://review.opendev.org/c/openstack/openstack-ansible/+/939369 and https://review.opendev.org/c/openstack/openstack-ansible/+/939072 ?
[17:33] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible stable/2023.2: Remove senlin/sahara/murano roles from required project  https://review.opendev.org/c/openstack/openstack-ansible/+/939072
[17:34] <jrosser> yep i've done that to make the patches look the same on 2023.2 and 2024.1
[17:39] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-lxc_container_create master: Re-introduce functional tests with molecule  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/939257
[17:50] <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-ops master: Upgrade magnum-cluster-api to 0.24.2  https://review.opendev.org/c/openstack/openstack-ansible-ops/+/936229
[18:15] <noonedeadpunk> looks like this, yes
[18:48] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2024.1: Bump SHAs for 2024.1  https://review.opendev.org/c/openstack/openstack-ansible/+/938919
[20:57] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.2: Bump SHAs for 2023.2  https://review.opendev.org/c/openstack/openstack-ansible/+/938928
[20:58] <jrosser> these are good to go now https://review.opendev.org/c/openstack/openstack-ansible/+/939070 https://review.opendev.org/c/openstack/openstack-ansible/+/939072
[21:02] <jrosser> that's interesting, repo server 503 even with retries https://zuul.opendev.org/t/openstack/build/b59a963bf94e4bfd9ac5ec1f4bf47fb0/log/job-output.txt#10394-10400

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!