opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add noble to molecule testing https://review.opendev.org/c/openstack/openstack-ansible/+/939306 | 01:04 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Move zuul preparation for role/collection bootstrap https://review.opendev.org/c/openstack/openstack-ansible/+/939151 | 01:07 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Optimize generation of required roles/collections https://review.opendev.org/c/openstack/openstack-ansible/+/939221 | 01:07 |
noonedeadpunk | so now https://zuul.opendev.org/t/openstack/build/d1f8413286d9499e813b8c5e99392e8d fails very differently at least | 01:09 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Pin ansible-compat up to 25.0.0 https://review.opendev.org/c/openstack/openstack-ansible/+/939274 | 01:09 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add httpd role to a-r-r https://review.opendev.org/c/openstack/openstack-ansible/+/938271 | 01:10 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Molecule to respect depends-on for test-requirements update https://review.opendev.org/c/openstack/openstack-ansible/+/939290 | 01:10 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add noble to molecule testing https://review.opendev.org/c/openstack/openstack-ansible/+/939306 | 01:10 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Ensure repo_server proper username https://review.opendev.org/c/openstack/openstack-ansible/+/938275 | 01:12 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Ensure repo_server proper username https://review.opendev.org/c/openstack/openstack-ansible/+/938275 | 01:15 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Return upgrade jobs back to voting https://review.opendev.org/c/openstack/openstack-ansible/+/939307 | 01:16 |
noonedeadpunk | sweet - https://zuul.opendev.org/t/openstack/build/438766d363fe4fd09d2fb16b4fd8552f | 08:04 |
noonedeadpunk | noble testing, finally | 08:04 |
noonedeadpunk | also it seems the issue was indeed a circular dependency for the repo server httpd role: https://review.opendev.org/c/openstack/openstack-ansible/+/939307 | 08:10 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Use standalone httpd role https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938297 | 08:10 |
jrosser | oh wow you did the molecule requirements file already \o/ | 08:31 |
noonedeadpunk | I had a bad sleep tonight | 08:32 |
noonedeadpunk | I'm also a bit confused about why we're getting a 503 for our cached constraints on the repo server even with https://review.opendev.org/c/openstack/openstack-ansible/+/939054? | 08:43 |
jrosser | oh i see `zuul@172.18.0.2` in molecule, that needs forcing to be root | 08:43 |
noonedeadpunk | yeah | 08:43 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-lxc_container_create master: Re-introduce functional tests with molecule https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/939257 | 08:44 |
noonedeadpunk | I wonder if we should do OOMScoreAdjust for apache to prevent this. But it makes sense to do that only with the httpd role then, I guess, as otherwise it's going to be very annoying to achieve | 08:46 |
noonedeadpunk | (at least for CI only) | 08:47 |
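A minimal sketch of the OOMScoreAdjust idea, assuming it would ship as a systemd drop-in written by an Ansible task; the drop-in path, score value, and handler name are illustrative, not the actual httpd role implementation:

```yaml
- name: Create apache2 systemd drop-in directory
  ansible.builtin.file:
    path: /etc/systemd/system/apache2.service.d
    state: directory
    mode: "0755"

# Make apache2 a less attractive OOM-killer target (the CI-only idea above).
- name: Lower the apache2 OOM score via a drop-in
  ansible.builtin.copy:
    content: |
      [Service]
      OOMScoreAdjust=-500
    dest: /etc/systemd/system/apache2.service.d/oom.conf
    mode: "0644"
  notify: Reload systemd  # handler assumed to run `systemctl daemon-reload`
```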
jrosser | do you see OOM for that failure? | 08:47 |
noonedeadpunk | I don't, but it's somehow not always obvious where to look for it on ubuntu | 08:47 |
noonedeadpunk | but also what else could it be.... | 08:47 |
noonedeadpunk | also like - `Jan 13 10:23:26 aio1 apache2: 172.29.236.100 - - [13/Jan/2025:10:23:26 +0000] "HEAD /healthcheck HTTP/1.0" 204 312 "-" "osa-haproxy-healthcheck"` and then 503 is at 10:23:23 | 08:50 |
noonedeadpunk | https://zuul.opendev.org/t/openstack/build/94c00dc5f3c243609f798b18ffb531ed/log/logs/host/syslog.txt#68734-68748 | 08:51 |
noonedeadpunk | oh | 08:51 |
noonedeadpunk | so it's actually a race condition | 08:51 |
jrosser | we don't have some restart vs reload error somewhere? | 08:52 |
jrosser | because we shouldn't need to actually restart it, perhaps | 08:53 |
noonedeadpunk | so this error is at the exact same time the u-c is fetched: https://zuul.opendev.org/t/openstack/build/94c00dc5f3c243609f798b18ffb531ed/log/job-output.txt#11110-11111 | 08:53 |
noonedeadpunk | there's only a haproxy reload around that time | 08:53 |
noonedeadpunk | so I'd stick with OOM :) | 08:54 |
jrosser | welll | 08:55 |
jrosser | the line immediately before https://zuul.opendev.org/t/openstack/build/94c00dc5f3c243609f798b18ffb531ed/log/logs/host/syslog.txt#68734-68748 | 08:55 |
jrosser | `Jan 13 10:23:21 aio1 libapache2-mod-wsgi-py3: apache2_invoke: Enable module wsgi` | 08:56 |
jrosser | that's pretty suspicious, no? | 08:56 |
noonedeadpunk | So you mean that during package installation, apt does restart the service | 08:57 |
noonedeadpunk | as its timestamp is around `Install distro packages` | 08:57 |
jrosser | well i'm not sure - is that the point at which the package is installed? | 08:57 |
noonedeadpunk | and - https://opendev.org/openstack/openstack-ansible-os_horizon/src/branch/master/vars/debian.yml#L31 | 08:57 |
noonedeadpunk | so wsgi is something that gets installed | 08:57 |
noonedeadpunk | I would not expect it to be enabled on installation | 08:58 |
jrosser | ok so this is all in the current world with many things competing for managing apache | 09:00 |
jrosser | with the new role we would make it such that `libapache2-mod-wsgi-py3` is installed up front the first time the role is run; then there should be no need to restart apache for that when we get to the horizon role | 09:02 |
jrosser | this seems to be a problem specific to metal jobs where all use of apache is collapsed onto the same instance | 09:03 |
jrosser | it would not have this race for an lxc job | 09:03 |
noonedeadpunk | I think it depends... | 10:07 |
noonedeadpunk | We can actually prevent it from restarting during package installation | 10:07 |
noonedeadpunk | like we do for... rabbitmq? | 10:07 |
jrosser | well - i was just seeing that we don't have a patch yet to make horizon use the new apache role | 10:09 |
jrosser | and perhaps once that is done it makes this issue just go away? | 10:09 |
noonedeadpunk | yeah, could be for sure | 10:09 |
noonedeadpunk | I'm actually not 100% sure how to handle extra modules installation. I was thinking that roles that need extra modules will supply a list, and those will be installed | 10:10 |
jrosser | if the first time apache is deployed it knows that module is needed somehow, then it's there and ready when we get to the horizon role, so no restart is necessary | 10:10 |
noonedeadpunk | but that's not yet fully implemented in httpd role | 10:10 |
noonedeadpunk | like - the question is whether we want to install all oidc modules regardless | 10:11 |
jrosser | that sounds like a thing for group_vars perhaps, rather than putting it in the role | 10:11 |
jrosser | then we can gather some variable by prefix httpd_apache_modules_* or whatever | 10:12 |
jrosser | and then it would work out correctly with only the necessary modules | 10:12 |
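A hedged sketch of that prefix-gathering idea, assuming a `httpd_extra_modules_*` naming convention (combining the prefix jrosser suggests with the variable noonedeadpunk mentions below); the names are illustrative:

```yaml
# group_vars sketch: merge every httpd_extra_modules_* list that is in scope
# for the current host into one deduplicated module list.
httpd_extra_modules: >-
  {%- set modules = [] -%}
  {%- for name in lookup('ansible.builtin.varnames', '^httpd_extra_modules_.+', wantlist=True) -%}
  {%-   set _ = modules.extend(lookup('ansible.builtin.vars', name)) -%}
  {%- endfor -%}
  {{ modules | unique }}
```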
noonedeadpunk | I made a `httpd_extra_modules` and was thinking to pass the list to extend during httpd role includes | 10:12 |
jrosser | i.e just the ones needed for horizon in a lxc horizon container | 10:12 |
noonedeadpunk | ie L20 https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938297/2/tasks/skyline_apache.yml | 10:13 |
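A hedged sketch of that pattern, in the spirit of the skyline patch linked above; the role name matches the discussion, the module names are placeholders:

```yaml
# The consuming role passes its extra module list when including the httpd role.
- name: Configure apache for skyline
  ansible.builtin.include_role:
    name: httpd
  vars:
    httpd_extra_modules:
      - proxy
      - proxy_http
```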
noonedeadpunk | I think we can just disable service restarts during module installation, like we do in some other places | 10:14 |
noonedeadpunk | but ofc probably we can do that through a vars lookup as well | 10:14 |
jrosser | and we have a handler to do it in a controlled way? | 10:14 |
* noonedeadpunk trying to find | 10:26 |
noonedeadpunk | ah. this thing: https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/b09ac39cbca11f7a5a14731e583246fe6a6c420e/tasks/rabbitmq_upgrade_prep.yml#L16-L24 | 10:29 |
noonedeadpunk | so in the role we can add policy-rc.d, install the extra modules, drop the file, execute the handler | 10:32 |
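A hedged sketch of that sequence, mirroring the linked rabbitmq_server tasks; the package list variable and handler name are assumptions for illustration:

```yaml
# Block any service (re)starts apt would trigger from maintainer scripts.
- name: Prevent apache restarts during module installation
  ansible.builtin.copy:
    content: |
      #!/bin/sh
      exit 101
    dest: /usr/sbin/policy-rc.d
    mode: "0755"

- name: Install extra apache module packages
  ansible.builtin.apt:
    name: "{{ httpd_extra_module_packages }}"  # illustrative variable name
    state: present
  notify: Restart apache  # handler assumed to exist in the role

- name: Remove policy-rc.d again
  ansible.builtin.file:
    path: /usr/sbin/policy-rc.d
    state: absent

# Run the queued restart handler at a controlled point.
- name: Flush handlers
  ansible.builtin.meta: flush_handlers
```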
noonedeadpunk | but ofc we can do as you say with collecting all required modules based on groups | 10:40 |
jrosser | we'd just have to be sure that idea works | 10:40 |
jrosser | as in the play you'd be targeting repo_all or something first | 10:41 |
noonedeadpunk | yup | 10:41 |
jrosser | and would want to also have all the vars for the other services in scope, in a metal AIO | 10:41 |
noonedeadpunk | so it has to be in extra-vars | 10:41 |
jrosser | and i'm not sure if they would be | 10:41 |
noonedeadpunk | oh well. | 10:41 |
jrosser | yeah, so that would not be so helpful for LXC i suppose | 10:42 |
jrosser | as we'd get all the modules everywhere which is not ideal | 10:42 |
jrosser | so it does sound simpler to do it like we did with rabbitmq then | 10:42 |
noonedeadpunk | so we can place all httpd vars in group_vars/all, and then like `httpd_extra_modules_skyline: "{{ groups['skyline_all'] | ternary(skyline_modules, []) }}"` | 10:43 |
noonedeadpunk | but it adds complexity, sure | 10:43 |
noonedeadpunk | as then, outside of say the keystone role, we have to know the decision choices | 10:45 |
jrosser | there is likely still some leftover race condition of restarting apache -> haproxy health check -> haproxy ready to serve | 10:46 |
jrosser | vs. whatever tasks come next | 10:46 |
noonedeadpunk | so I tried to do reloads instead of restarts on vhost changes | 10:46 |
noonedeadpunk | and reloads should be less disruptive | 10:46 |
noonedeadpunk | though still, in case of Django, they will be disruptive | 10:47 |
jrosser | perhaps another way in group vars is to use is_metal for the services with troublesome apache modules to decide if they get enabled globally or not | 10:50 |
noonedeadpunk | well.... dunno... what if each service is deployed on a separate VM? So it's kinda metal, but in that case the logic won't be great | 10:51 |
jrosser | true | 10:51 |
noonedeadpunk | I frankly just didn't know that apache would do that, as I was under the impression that on ubuntu all extra installed modules have to be explicitly enabled first | 10:55 |
noonedeadpunk | through a2enmod | 10:56 |
noonedeadpunk | so that module installation triggers a restart in the meantime - that was a bit of a surprise | 10:56 |
noonedeadpunk | btw, skyline is passing now with proper depends-on: https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938297 | 10:57 |
jrosser | we could intersect the current host with the skyline/repo/keystone/horizon host groups | 10:57 |
jrosser | and then retrieve all the modules via something derived from `{{ hostvars[groups['horizon_all'][0]]['httpd_extra_modules'] }}` | 10:58 |
jrosser | and build a list of modules for all the things using apache on the current host | 10:58 |
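A hedged group_vars sketch of that intersection idea; the per-group module map and variable names are illustrative, not the actual httpd role interface:

```yaml
# Map each apache-consuming group to the modules it needs, then keep only
# the entries for groups the current host actually belongs to.
httpd_service_modules_map:
  horizon_all: ['wsgi']
  keystone_all: ['wsgi', 'auth_openidc']
  skyline_all: ['proxy', 'proxy_http']
httpd_extra_modules: >-
  {{ httpd_service_modules_map
     | dict2items
     | selectattr('key', 'in', group_names)
     | map(attribute='value')
     | flatten
     | unique }}
```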
noonedeadpunk | I guess main question is - if it's worth it | 10:59 |
jrosser | thats a difficult question | 11:00 |
jrosser | because for real deployments it's really not much of an issue | 11:01 |
noonedeadpunk | as having policy-rc.d is still worth it in case of any changes to any of the roles | 11:01 |
jrosser | but in CI we have to have a super-high level of reliability that's otherwise unnecessary | 11:01 |
noonedeadpunk | I think we still need to ensure that apache is not restarted when it's not expected. As for production deployments - keystone depends on this as well | 11:02 |
noonedeadpunk | ah, but yes, backend should be disabled I assume for production | 11:02 |
jrosser | well perhaps, but in a real metal deploy there could still be a shared apache | 11:03 |
noonedeadpunk | true | 11:03 |
jrosser | and you could get an unexpected restart from maybe horizon, restarting * services | 11:03 |
noonedeadpunk | yeah | 11:03 |
jrosser | so there are different concerns maybe - we need to protect against any 503 type situations in CI for job reliability | 11:04 |
jrosser | and then we also need to protect against unexpected restarts of apache for production metal deploys | 11:04 |
noonedeadpunk | I'd also need to check if a restart is actually needed for new module installation. I guess it is, and just a reload won't pick up a new module | 11:04 |
jrosser | ^ so this is an argument to get the modules installed up-front, for production | 11:04 |
noonedeadpunk | yeah | 11:05 |
jrosser | otherwise you may never have a way to deploy horizon without breaking keystone | 11:05 |
noonedeadpunk | and then you don't disable/drain keystone backend either, so it just dies on you | 11:05 |
jrosser | yes, that's an excellent thing to verify - whether a restart is actually needed for new modules | 11:06 |
noonedeadpunk | ok, fair point | 11:06 |
noonedeadpunk | for now I'm thinking to add retries to the task :D | 11:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/939330 | 11:14 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Add retries to u_c fetch https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/939330 | 11:14 |
noonedeadpunk | this should be backportable and solve the CI issue at least | 11:16 |
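A hedged sketch of the retry approach in that patch; the variable names and paths are illustrative, not the exact os_horizon task:

```yaml
# Retry the upper-constraints fetch to ride out transient repo server 503s;
# url/dest values are placeholders for the real role variables.
- name: Retrieve upper constraints file
  ansible.builtin.get_url:
    url: "{{ horizon_upper_constraints_url }}"
    dest: /opt/horizon-upper-constraints.txt
    mode: "0644"
  register: _uc_download
  until: _uc_download is success
  retries: 5
  delay: 5
```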
opendevreview | Andrew Bonney proposed openstack/openstack-ansible stable/2024.2: Fix inventory adjustment for legacy container naming https://review.opendev.org/c/openstack/openstack-ansible/+/939341 | 12:42 |
opendevreview | Andrew Bonney proposed openstack/openstack-ansible stable/2024.1: Fix inventory adjustment for legacy container naming https://review.opendev.org/c/openstack/openstack-ansible/+/939342 | 12:43 |
jrosser | hmmm 2024.1 CI is still not running the expected jobs | 12:50 |
jrosser | and how does the linter pass for check and fail for gate here https://review.opendev.org/c/openstack/openstack-ansible/+/939070?tab=change-view-tab-header-zuul-results-summary | 12:53 |
jrosser | that does not make much sense | 12:53 |
andrewbonney | For CI I think https://review.opendev.org/c/openstack/openstack-ansible/+/939070 needs to merge | 13:06 |
mgariepy | why does only gate have this one? args[module]: Unsupported parameters for (basic.py) | 13:24 |
mgariepy | failed: installed ansible-compat-25.0.0 ansible-core-2.17.7 ansible-lint-6.19.0 | 13:29 |
mgariepy | success: installed ansible-compat-24.10.0 ansible-lint-6.19.0 | 13:29 |
mgariepy | not the same test - why is the pkg selection not the same? | 13:29 |
jrosser | mgariepy: which job is that? | 13:31 |
mgariepy | gate vs not gate linter | 13:31 |
mgariepy | the one you posted 40 minutes ago :D | 13:31 |
jrosser | oooh right | 13:32 |
mgariepy | unless something merged between the runs ? | 13:32 |
jrosser | well also ansible-compat 25 was released yesterday | 13:33 |
mgariepy | ha | 13:35 |
mgariepy | ha. the new check also failed, i guess it's the new version of ansible-compat. | 13:35 |
mgariepy | yep, consistent error. | 13:36 |
mgariepy | https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68 | 13:36 |
jrosser | i think that its `args[module]: Unsupported parameters for (basic.py) module: get_md5` in os_swift | 13:37 |
jrosser | which is fixed here https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/922283 | 13:39 |
jrosser | ok this is certainly a problem https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68/log/job-output.txt#1889-1898 | 13:42 |
jrosser | noonedeadpunk: ^ what seems to happen there is we bootstrap-ansible (correct versions of ansible) in a zuul pre job, and then later gate-check-commit.sh does it again installing later/incorrect versions for the branch | 13:46 |
jrosser | it's as if this has picked up the master branch version of test-requirements.yml https://opendev.org/openstack/openstack-ansible/src/branch/stable/2024.1/scripts/gate-check-commit.sh#L129-L132 | 14:05 |
alvinstarr | Is there a way to set the IP addresses on the public endpoints? | 14:26 |
alvinstarr | The default seems to be the external_lb_vip_address, which in my case is behind a firewall. | 14:26 |
alvinstarr | Currently I am updating the public URLs after the playbook has finished. | 14:26 |
jrosser | alvinstarr: you can look in the service role defaults, like for glance here https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/main.yml#L157 | 14:33 |
jrosser | and https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/defaults/main.yml#L198 | 14:33 |
jrosser | you can override the value of that in your user_variables.yml | 14:34 |
jrosser | so if you set that var for the services that you need, the public endpoints in the service catalog can be set to whatever you need | 14:35 |
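For example, a user_variables.yml override for glance following the defaults linked above; the hostname is a placeholder:

```yaml
# Public endpoint registered in the service catalog for glance.
glance_service_publicurl: "https://cloud.example.com:9292"
```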
alvinstarr | Thanks. | 14:39 |
alvinstarr | Is it safe to assume that it will always be xxx_service_publicurl? | 14:39 |
jrosser | anything in one of the role defaults files should be stable, the intention of all of those variables is that they are exposed for you to customise the deployment | 14:41 |
jrosser | if they ever get changed there will be a release note (which you should be reading anyway at upgrade time :) ) | 14:42 |
noonedeadpunk | jrosser: um. so gate-check-commit does try to install from test-requirements.txt there | 14:58 |
jrosser | yes it does, the code looks right | 14:59 |
jrosser | what looks wrong is that it installs a test-requirements file that seems to be master? | 14:59 |
noonedeadpunk | about the bootstrap - it's a bit arguable, I guess, whether it should happen at all | 14:59 |
noonedeadpunk | command is - /opt/ansible-runtime/bin/pip install --isolated --index-url http://mirror.sjc3.raxflex.opendev.org/pypi/simple --trusted-host mirror.sjc3.raxflex.opendev.org --extra-index-url http://mirror.sjc3.raxflex.opendev.org/wheel/ubuntu-22.04-x86_64 -r /home/zuul/src/opendev.org/openstack/openstack-ansible/test-requirements.txt | 14:59 |
jrosser | it gets the exact version of ansible-core that we just committed to master for the molecule stuff | 14:59 |
noonedeadpunk | ansible-lint-6.19.0 - I think it's for 2024.1? | 15:00 |
jrosser | on 2024.1 there is no ansible-core defined in test-requirements.txt | 15:00 |
noonedeadpunk | ansible-lint==24.12.2 on master | 15:00 |
noonedeadpunk | we just don't specify anything in 2024.1 for ansible-core | 15:00 |
jrosser | ok, well perhaps the first thing to look at is this https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68/log/job-output.txt#1889-1898 | 15:00 |
noonedeadpunk | so it gets latest | 15:00 |
noonedeadpunk | what I see there is - `ansible-lint-6.19.0` | 15:01 |
noonedeadpunk | which points to https://opendev.org/openstack/openstack-ansible/src/branch/stable/2024.1/test-requirements.txt#L14 | 15:01 |
noonedeadpunk | why does it drop existing ansible.... | 15:01 |
jrosser | that should not be happening, uninstall ansible-core ansible-core-2.15.9 and install ansible-core-2.17.7 | 15:01 |
noonedeadpunk | I don't see anything in the ansible-lint requirements that would do that: https://github.com/ansible/ansible-lint/blob/v6.19.0/.config/requirements.in#L3-L4 | 15:02 |
noonedeadpunk | so while I agree, I'm not sure why this is happening actually | 15:03 |
jrosser | actually i was wrong | 15:03 |
noonedeadpunk | easy thing to do, is to define ansible-core in test-requirements | 15:04 |
jrosser | we have 2.17.5 in test-requirements on master | 15:04 |
jrosser | not 2.17.7 | 15:04 |
noonedeadpunk | yeah, so it gets latest | 15:04 |
jrosser | no, that would be 2.18.1 | 15:04 |
noonedeadpunk | but 2.18 is not compatible with python 3.8? | 15:05 |
noonedeadpunk | so latest for jammy is 2.17 | 15:05 |
jrosser | ah right | 15:05 |
jrosser | ok so here https://zuul.opendev.org/t/openstack/build/be658454dd5243ea967a0849ca04bd68/log/job-output.txt#1809-1810 | 15:06 |
jrosser | so is it that we are not constraining ansible-core in test-requirements.txt to match what scripts/bootstrap-ansible.sh defines as the version for that branch? | 15:07 |
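A sketch of that fix, assuming test-requirements.txt on the stable branch gains an explicit pin matching what scripts/bootstrap-ansible.sh installs (2.15.9 for 2024.1, per the pip output quoted above):

```
# test-requirements.txt (sketch) - keep in sync with scripts/bootstrap-ansible.sh
ansible-core==2.15.9
ansible-lint==6.19.0
```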
lowercase | Hey guys, working on an issue. Rabbitmq has fallen behind and I'm looking to push it forward. The course of action that has been decided is to stand up a second pair of rabbitmq containers, provision them, and then flip over. The issue with this approach is that currently I'm not able to double-list rabbit servers, and I imagine both. Do you guys have any suggestions? | 15:13 |
lowercase | https://paste.centos.org/view/c839108d | 15:14 |
lowercase | rabbitmq servers are hosted on 3 dedicated servers, which have 3 rabbitmq lxc containers. I would like to increase it to 6, and then delete the original three after I have moved all the openstack services to the new rabbitmq servers. | 15:18 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/2024.2: Use the same version of ansible for linters as is used for deployments. https://review.opendev.org/c/openstack/openstack-ansible/+/939356 | 15:18 |
jrosser | lowercase: is there a reason you are not treating it like an OSA major version upgrade would, which also moved forward the version of the rabbitmq install? | 15:19 |
jrosser | regarding your paste, that would add extra nodes to the existing rabbitmq cluster which does not sound like what you describe | 15:20 |
lowercase | I looked at the rabbitmq upgrade procedure. Rabbitmq recommends a stepped upgrade. I need to go from 3.7.7 to 3.7.18, then 3.8.x, then 3.9.x. All the way to 3.14 | 15:20 |
jrosser | well, you know, we test this pretty robustly in our CI, and we do not follow that approach | 15:21 |
lowercase | In your experience, what upgrade path would you follow? | 15:22 |
jrosser | this must be a very very old deployment? | 15:22 |
lowercase | very. | 15:22 |
lowercase | Everything is currently 2023.1, except for rabbit and neutron. | 15:23 |
lowercase | neutron is because we are memory bound and awaiting new hardware. | 15:23 |
noonedeadpunk | jrosser: to be fair - we were hardly ever jumping between many rabbitmq releases at a time | 15:26 |
noonedeadpunk | I kinda wonder if it's even easier to just roll out a completely new rabbitmq cluster and re-configure services to use it.... | 15:26 |
lowercase | That is our current thought as well | 15:27 |
lowercase | my original question was around how to accomplish that in the openstack_inventory. | 15:27 |
noonedeadpunk | just drop existing containers from openstack_inventory.json I guess | 15:27 |
lowercase | oh that will work? | 15:28 |
noonedeadpunk | and then running playbooks will generate a complete new set of containers | 15:28 |
lowercase | one sec | 15:28 |
noonedeadpunk | and configure services to use them | 15:28 |
noonedeadpunk | you'd need to manually destroy old containers once migration is done | 15:28 |
jrosser | ^ that would do it | 15:30 |
jrosser | take a backup of the inventory first of course | 15:31 |
lowercase | sure did. | 15:31 |
lowercase | cause whatever edit i just did bombed the inventory script | 15:31 |
noonedeadpunk | lowercase: you could run /opt/openstack-ansible/inventory-manage.py -r <rabbitmq_container_name> | 15:33 |
lowercase | this looks like it's doing the trick | 15:38 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/2024.1: Use the same version of ansible for linters as is used for deployments. https://review.opendev.org/c/openstack/openstack-ansible/+/939362 | 15:46 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/2024.1: Use the same version of ansible for linters as is used for deployments. https://review.opendev.org/c/openstack/openstack-ansible/+/939362 | 15:46 |
jrosser | oh lol what a mess /o\ | 15:47 |
* jrosser steps away for fresh air | 15:47 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/2024.1: Remove senlin/sahara/murano roles from required project https://review.opendev.org/c/openstack/openstack-ansible/+/939070 | 15:52 |
noonedeadpunk | I'd guess that main issue is not ansible-core | 16:15 |
noonedeadpunk | but ansible-compat | 16:15 |
jrosser | the job is just running for 2024.1 so we shall see | 16:16 |
jrosser | but 2024.2 is a bit wtf as it doesn't run the linter job at all | 16:16 |
noonedeadpunk | true.... | 16:17 |
jrosser | maybe it's missing from the job template or something obvious like that | 16:18 |
jrosser | i've probably been looking for too-complicated reasons | 16:18 |
jrosser | right, good - linters are passing on https://review.opendev.org/c/openstack/openstack-ansible/+/939070 now | 16:19 |
noonedeadpunk | actually I think linters are not triggered every time | 16:23 |
jrosser | i think we need to backport this to 2024.2 https://review.opendev.org/c/openstack/openstack-ansible/+/938216 | 16:23 |
noonedeadpunk | ok, it's not true https://opendev.org/openstack/openstack-ansible/src/branch/stable/2024.2/zuul.d/jobs.yaml#L348 | 16:23 |
jrosser | the job definition is wrong | 16:23 |
noonedeadpunk | ah | 16:23 |
jrosser | i mean the template definition includes jammy linter job, not noble | 16:24 |
noonedeadpunk | a partial backport, I'd say | 16:24 |
noonedeadpunk | though ansible-lint==24.12.2 was quite fine | 16:24 |
jrosser | so which bit not to backport? | 16:25 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/2024.2: Update test-requirements https://review.opendev.org/c/openstack/openstack-ansible/+/939365 | 16:27 |
jrosser | ^ that's all of it cherry-picked, we can edit as needed | 16:27 |
jrosser | linters have run and passed for 939365 too; it didn't change the ansible-core version even though we don't define it in test-requirements.txt | 16:41 |
noonedeadpunk | ok, nice then | 16:45 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/2024.2: Use the same version of ansible for linters as is used for deployments. https://review.opendev.org/c/openstack/openstack-ansible/+/939356 | 16:46 |
jrosser | ^ then rebase this on top, and it means the version should not change in future | 16:47 |
noonedeadpunk | seems the backport is also needed for 2023.2 | 16:48 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Bump ansible-core to 2.17.7 https://review.opendev.org/c/openstack/openstack-ansible/+/939368 | 16:49 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.2: Use the same version of ansible for linters as is used for deployments. https://review.opendev.org/c/openstack/openstack-ansible/+/939369 | 16:49 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.2: Use the same version of ansible for linters as is used for deployments. https://review.opendev.org/c/openstack/openstack-ansible/+/939369 | 16:50 |
-opendevstatus- NOTICE: The paste service at paste.opendev.org will have a short (15-20 minute) outage momentarily to replace the underlying server. | 17:07 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Molecule to respect depends-on for test-requirements update https://review.opendev.org/c/openstack/openstack-ansible/+/939290 | 17:18 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add noble to molecule testing https://review.opendev.org/c/openstack/openstack-ansible/+/939306 | 17:18 |
jrosser | introduced a depends-on to 939330 for those ^ as it's already failed the horizon u-c fetch once | 17:19 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Implement TLS backend coverage for Skyline https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/938298 | 17:22 |
jrosser | noonedeadpunk: should we combine https://review.opendev.org/c/openstack/openstack-ansible/+/939369 and https://review.opendev.org/c/openstack/openstack-ansible/+/939072 ? | 17:29 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/2023.2: Remove senlin/sahara/murano roles from required project https://review.opendev.org/c/openstack/openstack-ansible/+/939072 | 17:33 |
jrosser | yep i've done that to make the patches look the same on 2023.2 and 2024.1 | 17:34 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-lxc_container_create master: Re-introduce functional tests with molecule https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/939257 | 17:39 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-ops master: Upgrade magnum-cluster-api to 0.24.2 https://review.opendev.org/c/openstack/openstack-ansible-ops/+/936229 | 17:50 |
noonedeadpunk | looks like this, yes | 18:15 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2024.1: Bump SHAs for 2024.1 https://review.opendev.org/c/openstack/openstack-ansible/+/938919 | 18:48 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.2: Bump SHAs for 2023.2 https://review.opendev.org/c/openstack/openstack-ansible/+/938928 | 20:57 |
jrosser | these are good to go now https://review.opendev.org/c/openstack/openstack-ansible/+/939070 https://review.opendev.org/c/openstack/openstack-ansible/+/939072 | 20:58 |
jrosser | that's interesting - repo server 503 even with retries https://zuul.opendev.org/t/openstack/build/b59a963bf94e4bfd9ac5ec1f4bf47fb0/log/job-output.txt#10394-10400 | 21:02 |