opendevreview | Kevin Carter proposed openstack/openstack-ansible-lxc_hosts master: Add option to disable lxc interface management https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861676 | 00:09 |
opendevreview | OpenStack Proposal Bot proposed openstack/openstack-ansible master: Imported Translations from Zanata https://review.opendev.org/c/openstack/openstack-ansible/+/861697 | 04:14 |
*** ysandeep|out is now known as ysandeep | 05:47 | |
jrosser_ | morning | 07:17 |
*** ysandeep is now known as ysandeep|lunch | 08:16 | |
noonedeadpunk | o/ | 08:19 |
damiandabrowski | hi! | 08:19 |
noonedeadpunk | have you seen the glance security advisory that was sent yesterday? | 08:20 |
noonedeadpunk | https://wiki.openstack.org/wiki/OSSN/OSSN-0090 | 08:22 |
noonedeadpunk | quite old thing though, but we do have https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/templates/glance-api.conf.j2#L27-L29 | 08:26 |
noonedeadpunk | so sounds like we should do smth with that | 08:28 |
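For context, the role template linked above appears to render exactly the options OSSN-0090 is about when rbd is in use. A quick way to see what a deployed node actually has — sketched here against a sample file, since the real path may differ per deployment:

```shell
# Sample standing in for a real /etc/glance/glance-api.conf on a deployed node
cat > /tmp/glance-api.conf.sample <<'EOF'
[DEFAULT]
show_image_direct_url = True
show_multiple_locations = True
EOF

# The two options OSSN-0090 flags; on a real node point grep at the real file
grep -E '^show_(image_direct_url|multiple_locations)' /tmp/glance-api.conf.sample
```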
PTO | I have some issues on ussuri/latest with cinder volume service, which goes down: volume service is down. (host: os-infra-b-01.srv.aau.dk@RBD) - any clue why this happens? | 08:30 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Replace ifupdown with native ip-link https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/828120 | 08:32 |
noonedeadpunk | PTO: no, not without logs from cinder-volume | 08:34 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Replace ifupdown with native ip-link https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/828120 | 08:35 |
PTO | cinder-volume: https://pastebin.com/YMyAvGuU | 08:37 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Replace ifupdown with native ip-link https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/828120 | 08:37 |
noonedeadpunk | PTO: there are too many connection errors to rabbitmq. | 08:41 |
noonedeadpunk | is rabbit cluster healthy? | 08:41 |
damiandabrowski | noonedeadpunk: interesting, kolla just informs about the potential security risk: https://docs.openstack.org/kolla-ansible/xena/reference/storage/external-ceph-guide.html | 08:42 |
damiandabrowski | glance docs also consider enabling show_multiple_locations a "GRAVE SECURITY RISK" | 08:42 |
noonedeadpunk | yup | 08:42 |
damiandabrowski | so maybe we just want to do the same - update docs? | 08:42 |
noonedeadpunk | well, if you read OSSN - there's a better proposal actually | 08:43 |
noonedeadpunk | start up 2 glance-api instances with different configs | 08:43 |
jrosser_ | ^ we should do that i think | 08:43 |
noonedeadpunk | for internal and public usage. | 08:43 |
noonedeadpunk | What I'm not sure about - how exactly to do that including upgrade path | 08:44 |
jrosser_ | ceph is a common enough use case that we should cover this properly | 08:44 |
damiandabrowski | i read that, i just wonder if we want to add more complexity because of this | 08:44 |
noonedeadpunk | eventually we would need to define extra haproxy frontend/backend set | 08:44 |
jrosser_ | i was thinking we should run two glance services in the glance container (maybe systemd smarts can help us here) | 08:45 |
damiandabrowski | but ceph is not affected, right? | 08:45 |
noonedeadpunk | oh, yes, nice idea! | 08:45 |
jrosser_ | on different ports, then external endpoint gets one set of backends and internal endpoint gets a different set | 08:45 |
jrosser_ | so the upgrade should be pretty seamless | 08:45 |
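One way the two-service idea could look, sketched as a hypothetical systemd template unit — the unit name, venv path, and per-instance config files are all assumptions, not what the role currently ships:

```ini
# /etc/systemd/system/glance-api@.service (hypothetical), instantiated as
# glance-api@internal and glance-api@external; each instance gets its own
# config file, which sets its own bind port and location options
[Unit]
Description=glance-api (%i endpoint)

[Service]
ExecStart=/openstack/venvs/glance/bin/glance-api \
  --config-file /etc/glance/glance-api-%i.conf

[Install]
WantedBy=multi-user.target
```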
noonedeadpunk | damiandabrowski: well... not in theory, but enabling web-upload is also quite common and then you're in trouble | 08:45 |
jrosser_ | damiandabrowski: https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/defaults/main.yml#L85-L86 | 08:46 |
noonedeadpunk | I still wish that was solved with policies tbh | 08:47 |
jrosser_ | i think we can also be smart and not need to have two templates | 08:47 |
noonedeadpunk | And here I was thinking how to do that and I'm kind of clueless | 08:48 |
noonedeadpunk | except apply some default_rbd_overrides | 08:48 |
jrosser_ | like combine with glance_show_image_direct_url: {} to remove the key in one of them | 08:48 |
damiandabrowski | do we need to enable glance_show_multiple_locations for ceph as we do nowadays? ceph docs don't say anything about that: https://docs.ceph.com/en/latest/rbd/rbd-openstack/ | 08:48 |
jrosser_ | so we use the default value in one of the services | 08:48 |
noonedeadpunk | Actually, I was also wondering if we do. I haven't tried that | 08:48 |
noonedeadpunk | But was thinking it's worth checking out | 08:48 |
noonedeadpunk | but I guess show_image_direct_url is needed regardless | 08:49 |
damiandabrowski | btw. what is this web upload you mention? | 08:51 |
noonedeadpunk | jrosser_: we still will have 2 tasks/iterations for placing config - so we indeed can combine there. But I'd rather add things when they're needed than remove | 08:51 |
noonedeadpunk | damiandabrowski: sorry, web-download :D | 08:52 |
noonedeadpunk | https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html#configuring-the-web-download-method | 08:52 |
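Per the interoperable-image-import docs linked above, enabling web-download is a one-line glance-api.conf change, which is why it so easily ends up enabled next to an rbd backend (fragment for illustration):

```ini
[DEFAULT]
# allow "web-download" imports, where glance itself fetches the image from a URL
enabled_import_methods = [web-download]
```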
noonedeadpunk | the only thing I haven't checked yet is if the haproxy role will allow us to do that easily | 08:56 |
noonedeadpunk | as we need the same backends but with a different port on the backend, and the same one on the frontend | 08:56 |
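Roughly what that means on the haproxy side — a hedged sketch with two frontends over the same host but different glance-api ports (all names, addresses, and ports here are made up for illustration):

```
frontend glance-external
    bind *:9292
    default_backend glance-api-external
frontend glance-internal
    bind 172.29.236.9:9292
    default_backend glance-api-internal

# same hosts in both backends, but each backend targets a different
# glance-api port (one instance with locations enabled, one without)
backend glance-api-external
    server infra1 172.29.236.10:9293 check
backend glance-api-internal
    server infra1 172.29.236.10:9292 check
```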
damiandabrowski | noonedeadpunk: thanks, is there any document describing how enabling web-download makes ceph vulnerable? | 08:58 |
PTO | noonedeadpunk: Yes, the rabbit cluster is stable and I can't find any issues with it | 08:59 |
noonedeadpunk | It does not make ceph vulnerable - it makes web-download vulnerable | 08:59 |
noonedeadpunk | or any other backend added alongside ceph for $reasons | 08:59 |
noonedeadpunk | PTO: do you use TLS for rabbit? | 09:00 |
PTO | noonedeadpunk: https://ctrl.vi/i/XFE820BiL | 09:00 |
PTO | noonedeadpunk: Yes, i use TLS | 09:00 |
PTO | noonedeadpunk: I had 5 rabbitmq & galera nodes, but I have scaled it down to 3 as galera had slow writes due to sync. | 09:00 |
noonedeadpunk | ah, but I guess services are still configured to have 5 backends? | 09:01 |
PTO | noonedeadpunk: I have re deployed everything and openstack services only use 3 nodes now | 09:02 |
PTO | noonedeadpunk: The problem was the same before | 09:02 |
noonedeadpunk | if you attach to cinder-volume journald (ie journalctl -f -u cinder-volume) and restart the service - won't it show some more meaningful error about failure to initialize the backend? | 09:03 |
PTO | noonedeadpunk: cinder-volume comes up fine (all five of them) and it works, but from time to time, block device mapping fails and cinder-volume needs a restart | 09:04 |
PTO | is 5 instances of cinder-volume too much? | 09:04 |
noonedeadpunk | and what backend do you use? | 09:05 |
damiandabrowski | well, probably I still don't understand it :D This sentence may suggest that disabling show_multiple_locations solves the issue: | 09:05 |
damiandabrowski | "So far the focus has been on 'show_multiple_locations'. When that setting is disabled in Glance, it is not possible to manipulate the locations via the OpenStack Images API" | 09:05 |
damiandabrowski | So if we don't need it for ceph (worth checking, but we probably don't), then for most cases (I assume ceph is the most common storage backend) we won't really need multiple glance-apis. | 09:05 |
damiandabrowski | Or maybe I missed something... | 09:05 |
PTO | noonedeadpunk: rbd | 09:05 |
noonedeadpunk | damiandabrowski: well yeah, I don't really know. Might be worth bugging them, but they have PTG same time as we do... | 09:06 |
damiandabrowski | :/ | 09:06 |
noonedeadpunk | PTO: oh, well. I think it might be coordination issue ( ͡~ ͜ʖ ͡~) | 09:07 |
noonedeadpunk | damiandabrowski: the thing is they were quite explicit about the recommended way.... | 09:07 |
PTO | noonedeadpunk: Can I scale it down to 2-3 instances without breaking it? | 09:07 |
noonedeadpunk | PTO: eventually, for RBD and an active/active setup a coordination service like etcd or zookeeper is needed. And we are aware of the issue with default setups now | 09:08 |
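The coordination mentioned here is cinder's tooz integration; a minimal sketch of what it looks like in cinder.conf, assuming an etcd cluster already exists (the URL is an example, not from this deployment):

```ini
[coordination]
# tooz backend used by cinder-volume to take distributed locks
# when multiple workers serve the same RBD backend active/active
backend_url = etcd3+http://172.29.236.10:2379
```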
noonedeadpunk | Yes, you can scale down, but before that you will need to migrate volumes to other services | 09:09 |
PTO | noonedeadpunk: Would it be better to run a single instance of cinder-volume? | 09:09 |
noonedeadpunk | `cinder-manage volume update_host --currenthost <current host> --newhost <new host>` | 09:09 |
noonedeadpunk | after that you will be able to delete some services `cinder-manage service remove <service> <host>` | 09:10 |
noonedeadpunk | cinder-manage can be found in cinder-api/volume container inside venv | 09:10 |
PTO | noonedeadpunk: I don't understand why the migration is needed? Can't I just stop the extra cinder-volume services and remove the backend services? | 09:10 |
noonedeadpunk | PTO: well, depends if you care about redundancy | 09:10 |
noonedeadpunk | No, you can't? | 09:10 |
PTO | noonedeadpunk: oh | 09:11 |
noonedeadpunk | As if you check as admin `openstack volume show` you will notice that the volume is "bound" to some service | 09:11 |
PTO | noonedeadpunk: Seems pretty risky then | 09:11 |
noonedeadpunk | though it's "cluster", but the tool and DB won't allow dropping a service that is still referenced | 09:11 |
PTO | noonedeadpunk: So just to be clear, you suggest to create a new backend and migrate all volumes to it? | 09:12 |
noonedeadpunk | I'd say it's not very risky. But yes, needs to be careful :) | 09:12 |
PTO | noonedeadpunk: the cluster have currently 1200 volumes :-( | 09:13 |
noonedeadpunk | Um, no. What I meant is that for scaling down you need to 1. stop the cinder-volume you want to drop 2. move volumes from it to any other active service that you want to keep 3. delete the service of choice | 09:13 |
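The three steps above as a dry-run sketch — hostnames are placeholders, and the `run` wrapper only echoes the commands, so nothing is executed until you swap it out:

```shell
# Dry-run the scale-down procedure: 1) stop, 2) re-home volumes, 3) remove service
run() { echo "+ $*"; }   # replace the echo with "$@" to actually execute

run systemctl stop cinder-volume
run cinder-manage volume update_host \
    --currenthost old-host@RBD --newhost kept-host@RBD
run cinder-manage service remove cinder-volume old-host
```

As noted below in the log, cinder-manage lives inside the venv in the cinder-api/volume container.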
PTO | noonedeadpunk: I guess ceph would need to copy the data to another pool? | 09:13 |
noonedeadpunk | no? | 09:13 |
noonedeadpunk | or we're talking about different things here :) | 09:14 |
PTO | noonedeadpunk: Got it now! It's only the volumes attached to the cinder-volume services which are to be deleted ( os-vol-host-attr:host) | 09:14 |
PTO | will this issue be fixed in a newer openstack version? | 09:15 |
noonedeadpunk | I guess we will discuss it today on PTG | 09:15 |
noonedeadpunk | at least it's on agenda | 09:15 |
PTO | noonedeadpunk: The cluster has been working for 2 years with the 5 cinder-volume configuration, but lately it has gone very bad and failed a lot of times | 09:16 |
noonedeadpunk | I'm not 100% sure it's about that. but it can be related at very least | 09:17 |
noonedeadpunk | What in fact confuses me is the number of connection issues to rabbitmq in the logs | 09:17 |
PTO | noonedeadpunk: Do you think scaling to three cinder-volume services will improve the stability or should it be lower? | 09:17 |
noonedeadpunk | and it feels like this is more likely to be an issue | 09:17 |
PTO | I am no rabbit expert, but I have spent a lot of time digging in the logs | 09:18 |
noonedeadpunk | Well, we have 3 copies and more volumes spread across them without much trouble. But I think we might not see some intermittent ones due to absence of coordination | 09:19 |
PTO | The best I have found is: 2022-10-18 09:17:13.583 [error] <0.16539.21> closing AMQP connection <0.16539.21> (172.21.212.14:55116 -> 172.21.213.23:5671 - cinder-volume:1420715:653738af-7cc3-4343-8538-684d094bd8e6): | 09:19 |
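Those closing-AMQP errors encode the client -> server addresses, which makes it easy to tally which services keep dropping connections; using the sample line above:

```shell
# Sample "closing AMQP connection" line from the rabbit log quoted above
line='2022-10-18 09:17:13.583 [error] <0.16539.21> closing AMQP connection <0.16539.21> (172.21.212.14:55116 -> 172.21.213.23:5671 - cinder-volume:1420715:653738af-7cc3-4343-8538-684d094bd8e6):'

# pull out the "client -> server" address pair
echo "$line" | grep -oE '[0-9.]+:[0-9]+ -> [0-9.]+:[0-9]+'
# -> 172.21.212.14:55116 -> 172.21.213.23:5671
```

Against a live node, feed the whole rabbit log through the same grep and append `| awk '{print $1}' | sort | uniq -c | sort -rn` to rank clients by dropped connections.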
PTO | noonedeadpunk: Will rabbit tell if it's out of connections or resources? | 09:20 |
noonedeadpunk | in logs? no idea | 09:21 |
PTO | maybe raising the connection limit will help? | 09:22 |
noonedeadpunk | why it's connecting to 172.21.213.79 twice? | 09:22 |
noonedeadpunk | don't you happen to have same host mentioned twice in a row in cinder.conf? | 09:23 |
PTO | I have quite some "closing AMQP connection" in the logs (last 500): https://pastebin.com/JAFVzidy | 09:24 |
noonedeadpunk | yeah and all these are "error" | 09:25 |
PTO | noonedeadpunk: So, do you know how to increase connections on rabbitmq? | 09:29 |
PTO | noonedeadpunk: cinder.conf https://pastebin.com/NDZRCYMp | 09:30 |
noonedeadpunk | I never tuned that so can't say for sure, except to look in the docs | 09:36 |
PTO | noonedeadpunk: According to rabbitmq status, I have very far from any limit :-/ | 09:38 |
PTO | noonedeadpunk: I am planning an upgrade of ussuri and hopefully this will help | 09:39 |
noonedeadpunk | have you checked if some queue in some vhost has tons of unread messages? | 09:39 |
noonedeadpunk | ok, and what are you running now? :) | 09:40 |
noonedeadpunk | as just some rabbit versions are quite shitty | 09:40 |
noonedeadpunk | Like 3.7 iirc | 09:41 |
noonedeadpunk | can't recall about 3.8 | 09:41 |
PTO | noonedeadpunk: All channels are empty or have a few messages. | 09:41 |
PTO | noonedeadpunk: I am running Ussuri 21.2.10 latest | 09:42 |
PTO | noonedeadpunk: I am running rabbit 3.8.2 | 09:42 |
noonedeadpunk | ok. | 09:43 |
noonedeadpunk | well, I can't really recall about beginning of 3.8 branch | 09:44 |
noonedeadpunk | iirc it had some improvements over previous ones, but not sure if improvements came in at the very beginning | 09:44 |
PTO | noonedeadpunk: What are your suggestions moving forward? Troubleshooting on rabbit, reducing cinder-volume or upgrading to a newer OS release? | 09:46 |
noonedeadpunk | tbh whenever I have some weird issues with rabbit I just run openstack-ansible rabbitmq-install.yml -e rabbitmq_upgrade=true without changing any variables - it re-installs rabbit and re-builds the cluster. In 90% it just solves the issue and I don't spend any more time on investigation | 09:48 |
noonedeadpunk | jrosser_: we have a problem with rabbit on U which I recently brought into the code ;( | 09:48 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/stable/ussuri/vars/debian.yml#L17 | 09:49 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible-rabbitmq_server/commit/768000373edde8e472bb9d64960325cad6ef2839 brought a too extreme bump of the rabbit version... | 09:50 |
noonedeadpunk | and now it's higher than in V or W | 09:50 |
noonedeadpunk | Though it was done after moving U to EM... | 09:51 |
noonedeadpunk | So maybe we can move that back to smth reasonable without much hassle... | 09:51 |
noonedeadpunk | But not sure if it's worth bumping U back or updating V/W.... | 09:58 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/ussuri: Bump rabbitmq version back https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/861729 | 10:06 |
PTO | noonedeadpunk: Thank you very much for your advice! It was very helpful! I will try and deploy rabbit again and bump the OS version. Which OS release would you aim for in a production cluster? | 10:08 |
noonedeadpunk | I guess under OS you mean Operating System, not OpenStack ? :D | 10:09 |
noonedeadpunk | Well, we currently run Ubuntu 20.04. Debian 11 should be a really good choice as well IMO. If you like rhel derivatives - I would suggest Rocky Linux | 10:10 |
noonedeadpunk | That is available only since Yoga though | 10:10 |
* noonedeadpunk trying to fix EM branches | 10:15 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_rally stable/ussuri: Move rally details to constraints https://review.opendev.org/c/openstack/openstack-ansible-os_rally/+/861730 | 10:20 |
PTO | noonedeadpunk: I was referring to OpenStack | 10:21 |
noonedeadpunk | Any that is supported? :D | 10:22 |
noonedeadpunk | Yoga is actually a good candidate, as it allows you to upgrade the OS and will have an upgrade path to Antelope then | 10:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHA for galera, rabbitmq and rally roles https://review.opendev.org/c/openstack/openstack-ansible/+/853029 | 10:25 |
*** ysandeep|lunch is now known as ysandeep | 10:37 | |
anskiy | noonedeadpunk: hey! so, this thing: https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829. I've recreated my comment here: https://paste.opendev.org/show/bHiaspEIZBCahvDiwELe/ | 11:07 |
noonedeadpunk | anskiy: sorry I indeed didn't have time to look into that but I do remember about it. I saw you added it also to the PTG etherpad so it's good | 11:18 |
noonedeadpunk | what I don't like about neutron_needs_openvswitch is that it not only controls package installation but also whether ovs services are started | 11:19 |
noonedeadpunk | So if we change condition as mentioned in first option - ovsdb will be started on northd which we want to avoid | 11:20 |
noonedeadpunk | maybe last one is closest to what I was thinking | 11:21 |
anskiy | noonedeadpunk: okay, I'll try to think that option through more thoroughly | 11:22 |
noonedeadpunk | So basically control installation and startup kind of independently | 11:23 |
noonedeadpunk | Nasty part is indeed that we need that for quite a small use case... | 11:23 |
noonedeadpunk | One annoying corner case... | 11:24 |
anskiy | noonedeadpunk: yeah, got your point. Could you please leave a comment for me in the change? Or I can do it myself if you want. | 11:24 |
noonedeadpunk | However, with that https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/857652 it might be more unified | 11:24 |
noonedeadpunk | Like we can install ovs with current logic, but leave service disabled/masked for ovn | 11:25 |
noonedeadpunk | yeah, sure | 11:29 |
noonedeadpunk | anskiy: eventually, why not just mask the service here on northd https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829/3/tasks/providers/ovn_config.yml#49 ? | 11:34 |
anskiy | noonedeadpunk: this sounds better than policy-rc.d, TBH, as the latter one would look a little bit off for this kind of logic | 11:36 |
anskiy | thank you! | 11:37 |
noonedeadpunk | it will be coupled with rc.d | 11:37 |
noonedeadpunk | as rc.d will prevent initial startup of the service after package install | 11:37 |
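For context, policy-rc.d is just a script that Debian/Ubuntu maintainer scripts consult before starting a service during package operations; exit code 101 means "deny". A self-contained demo of the mechanism, using a temp dir instead of the real /usr/sbin/policy-rc.d:

```shell
tmpdir=$(mktemp -d)
cat > "$tmpdir/policy-rc.d" <<'EOF'
#!/bin/sh
# deny every service action requested during package installation
exit 101
EOF
chmod +x "$tmpdir/policy-rc.d"

# simulate the query a package's postinst would make
rc=0
"$tmpdir/policy-rc.d" openvswitch-switch start || rc=$?
echo "policy-rc.d exit code: $rc"   # prints 101: the start request is denied
```

Deployed for real, the script would live at /usr/sbin/policy-rc.d for the duration of the package install, so ovs/ovn packages can be installed without their services starting.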
anskiy | ah, yes, okay :) | 11:38 |
noonedeadpunk | and ovs packages are kind of still needed - they're just brought as dependencies... | 11:38 |
noonedeadpunk | but yes, for deb there will be some extra ones... | 11:38 |
*** dviroel|out is now known as dviroel | 11:41 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Switch linters job to focal https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742 | 11:51 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_rally stable/ussuri: Move rally details to constraints https://review.opendev.org/c/openstack/openstack-ansible-os_rally/+/861730 | 11:52 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Use legacy image retrieval for CentOS 7 https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861744 | 12:00 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0 https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742 | 12:10 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Use legacy image retrieval for CentOS 7 https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861744 | 12:14 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0 https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742 | 12:19 |
*** ysandeep is now known as ysandeep|away | 12:20 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHA for galera, rabbitmq and rally roles https://review.opendev.org/c/openstack/openstack-ansible/+/853029 | 12:36 |
* noonedeadpunk wonders why he spends time on that actually | 12:36 | |
noonedeadpunk | it's soooooo circular.... | 12:39 |
noonedeadpunk | Really thinking just to drop all jobs, and make another patch that will depend on everything to see if it's safe to land | 12:39 |
noonedeadpunk | Or we should just EOL all that is not passing... | 12:40 |
jrosser_ | i think there is a danger of spending a huuuuge amount of time on this | 13:03 |
jrosser_ | better maybe to fix up a bunch of stuff locally and push it all with jobs dropped | 13:04 |
jrosser_ | then we can revert that when the known broken things are merged | 13:04 |
opendevreview | Merged openstack/openstack-ansible-galera_server stable/yoga: Bump mariadb version to 10.6.10 https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/860655 | 13:31 |
opendevreview | Merged openstack/openstack-ansible-plugins master: Use ansible_facts[] instead of injected fact vars https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/860424 | 13:33 |
noonedeadpunk | It's actually not _that_ bad in the end... | 13:49 |
jrosser_ | one hour till ptg, isn't it? | 14:02 |
mgariepy | now ? | 14:02 |
mgariepy | 14 utc is now, isn't it? | 14:03 |
jrosser_ | oh yes you're right | 14:03 |
mgariepy | at least in my TZ :P | 14:03 |
noonedeadpunk | It's now, yes:) | 14:03 |
noonedeadpunk | #link https://www.openinfra.dev/ptg/rooms/essex | 14:04 |
opendevreview | Merged openstack/openstack-ansible-lxc_container_create master: Replace usage of networkd template with role https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/861391 | 14:28 |
*** ysandeep|away is now known as ysandeep | 14:31 | |
*** dviroel is now known as dviroel|dr_appt | 14:52 | |
anskiy | regarding the distro support (I've never tried it): from what I've heard, it looks like it should be on some separate release cycle and only for stable branches?.. | 15:26 |
*** ysandeep is now known as ysandeep|out | 15:44 | |
noonedeadpunk | nah, it's the same release cycle. but we can try them only once it's about time for us to release as well, which leaves very little time to test them | 15:53 |
opendevreview | Merged openstack/openstack-ansible-os_neutron master: Avoid ovs restart during package upgrade https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/857652 | 16:16 |
jrosser_ | https://github.com/rabbitmq/rabbitmq-server/releases | 16:48 |
*** dviroel|dr_appt is now known as dviroel | 16:49 | |
jrosser_ | https://launchpad.net/~rabbitmq/+archive/ubuntu/rabbitmq-erlang | 16:52 |
jrosser_ | mgariepy: noonedeadpunk: here's what we do to avoid updates https://paste.opendev.org/show/bZQ0YcCWTEtnvUw1ZXxa/ | 17:13 |
jrosser_ | give the URL to the precise snap to install rather than have it work anything out | 17:13 |
noonedeadpunk | hm. is there any way to discover it relatively easily? | 17:14 |
jrosser_ | not sure, i'd have to dig a bit | 17:15 |
jrosser_ | but likley not | 17:15 |
noonedeadpunk | as we could put that url into local facts and update it only when some flag is provided | 17:16 |
noonedeadpunk | like lxd_update=true | 17:16 |
noonedeadpunk | the question is how to find new url though | 17:16 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0 https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742 | 17:21 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0 https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742 | 17:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0 https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742 | 18:00 |
noonedeadpunk | as eventually passing a url is not as nasty as a non-existent proxy | 19:06 |
mgariepy | it's different indeed ;) | 19:06 |
noonedeadpunk | fwiw ussuri is almost fixed with a minimum of temporarily disabled jobs. Though it's stuck on upgrade from Train which becomes a bit annoying | 19:08 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Use legacy image retrieval for CentOS 7 https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861744 | 19:36 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHA for galera, rabbitmq and rally roles https://review.opendev.org/c/openstack/openstack-ansible/+/853029 | 19:36 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Return CentOS 7 jobs to voting https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861788 | 19:38 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Return CentOS 7 jobs to voting https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861789 | 19:39 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_rally stable/ussuri: Return jobs to voting https://review.opendev.org/c/openstack/openstack-ansible-os_rally/+/861790 | 19:41 |
noonedeadpunk | I wonder how long we should maintain upgrade jobs.... As they make things really annoying for EM branches | 19:44 |
noonedeadpunk | And eventually we will get to the EOL one some day, which we won't be able to fix anyway | 19:45 |
noonedeadpunk | Might make sense to get rid of upgrade jobs with transition to EM | 19:46 |
noonedeadpunk | as basically that would be the last "release" and everything that comes later is best effort | 19:46 |
jrosser_ | sounds like a good idea | 19:46 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/ussuri: Bump rabbitmq version back https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/861729 | 19:59 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/train: Use cloudsmith repo for rabbit and erlang https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/861794 | 20:10 |
NeilHanlon | tw | 20:24 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!