Tuesday, 2022-10-18

00:09 <opendevreview> Kevin Carter proposed openstack/openstack-ansible-lxc_hosts master: Add option to disable lxc interface management  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861676
04:14 <opendevreview> OpenStack Proposal Bot proposed openstack/openstack-ansible master: Imported Translations from Zanata  https://review.opendev.org/c/openstack/openstack-ansible/+/861697
05:47 *** ysandeep|out is now known as ysandeep
07:17 <jrosser_> morning
08:16 *** ysandeep is now known as ysandeep|lunch
08:19 <noonedeadpunk> o/
08:19 <damiandabrowski> hi!
08:20 <noonedeadpunk> have you seen glance security advisory that was sent yestarday?
08:22 <noonedeadpunk> https://wiki.openstack.org/wiki/OSSN/OSSN-0090
08:26 <noonedeadpunk> quite old thing though, but we do have https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/templates/glance-api.conf.j2#L27-L29
08:28 <noonedeadpunk> so sounds like we should do smth with that
08:30 <PTO> I have some issues on ussuri/latest with cinder volume service, which goes down:  volume service is down. (host: os-infra-b-01.srv.aau.dk@RBD) - any clue why this happens?
08:32 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Replace ifupdown with native ip-link  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/828120
08:34 <noonedeadpunk> PTO no without logs from cinder-volume
08:35 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Replace ifupdown with native ip-link  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/828120
08:37 <PTO> cinder-volime: https://pastebin.com/YMyAvGuU
08:37 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts master: Replace ifupdown with native ip-link  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/828120
08:41 <noonedeadpunk> PTO: there're too much connection errors to rabbitmq.
08:41 <noonedeadpunk> is rabbit cluster healthy?
08:42 <damiandabrowski> noonedeadpunk: interesting, kolla just informs about the potential security risk: https://docs.openstack.org/kolla-ansible/xena/reference/storage/external-ceph-guide.html
08:42 <damiandabrowski> glance docs also consider revealing show_multiple_locations as a "GRAVE SECURITY RISK"
08:42 <noonedeadpunk> yup
08:42 <damiandabrowski> so maybe we just want to do the same - update docs?
08:43 <noonedeadpunk> well, if you read OSSN - there's a better proposal actually
08:43 <noonedeadpunk> startup 2 glance-api isntances with different config
08:43 <jrosser_> ^ we should do that i think
08:43 <noonedeadpunk> for internal and public usage.
08:44 <noonedeadpunk> What I'm not sure about - how exactly to do that including upgrade path
08:44 <jrosser_> ceph is common enough use case that we should cover this properly
08:44 <damiandabrowski> i read that, i just wonder if we want to add more complexity because of this
08:44 <noonedeadpunk> eventually we would need to define extra haproxy frontend/backend set
08:45 <jrosser_> i was thinking we should run two glance services in the glance container (maybe systemd smarts can help us here)
08:45 <damiandabrowski> but ceph is not affected, right?
08:45 <noonedeadpunk> oh, yes, nice idea!
08:45 <jrosser_> on different ports, then external endpoint gets one set of backends and internal endpoint gets a different set
08:45 <jrosser_> so the upgrade should be pretty seamless
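The OSSN-0090 mitigation being discussed here (two glance-api instances sharing a store, but exposing different settings to internal services and end users) could be sketched roughly as below. The directory, ports, and file names are illustrative assumptions, not what the os_glance role actually deploys:

```shell
# Hypothetical sketch of the two-config layout from OSSN-0090: an
# internal-only glance-api with the location APIs enabled, and a
# public-facing one that keeps the dangerous options at their defaults.
# CONF_DIR and the port numbers are made-up values for illustration.
CONF_DIR=${CONF_DIR:-./glance-demo}
mkdir -p "$CONF_DIR"

# Internal instance: trusted services (nova, cinder) talk to this one.
cat > "$CONF_DIR/glance-api-internal.conf" <<'EOF'
[DEFAULT]
bind_port = 9293
show_image_direct_url = True
show_multiple_locations = True
EOF

# Public instance: end users hit this one; location APIs stay disabled.
cat > "$CONF_DIR/glance-api-public.conf" <<'EOF'
[DEFAULT]
bind_port = 9292
show_image_direct_url = False
show_multiple_locations = False
EOF

echo "wrote configs to $CONF_DIR"
```

Each config would then back its own systemd service in the glance container, which is the "systemd smarts" idea above: same code, same store, different port and options per unit.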
08:45 <noonedeadpunk> damiandabrowski: well... no in teory, but enabling web-pload is also quite common and then you're in trouble
08:46 <jrosser_> damiandabrowski: https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/defaults/main.yml#L85-L86
08:46 <noonedeadpunk> *theory / web-upload
08:47 <noonedeadpunk> I still wish that was solved with policies tbh
08:47 <jrosser_> i think we can also be smart and not need to have two templates
08:48 <noonedeadpunk> And here I was thinking how to do that and I'm kind of clueless
08:48 <noonedeadpunk> except apply some default_rbd_overrides
08:48 <jrosser_> like combine with glance_show_image_direct_url: {} to remove the key in one of them
08:48 <damiandabrowski> do we need to glance_show_multiple_locations enable for ceph as we do nowadays? ceph docs doesn't say anything about that: https://docs.ceph.com/en/latest/rbd/rbd-openstack/
08:48 <jrosser_> so we use the default value in on of the services
08:48 <damiandabrowski> to enable glance_show_multiple_locations  *
08:48 <noonedeadpunk> Actually, I was also wondering if we do. I haven't tried that
08:48 <noonedeadpunk> But was thinking it's worth checking out
08:49 <noonedeadpunk> but I guess show_image_direct_url is needed regardless
08:51 <damiandabrowski> btw. what is this web upload you mention?
08:51 <noonedeadpunk> jrosser_: we still will have 2 tasks/iterations for placing config - so we indeed can combine there. But I'd rather add things when they needed then remove
08:52 <noonedeadpunk> damiandabrowski: sorry, web-download :D
08:52 <noonedeadpunk> https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html#configuring-the-web-download-method
08:56 <noonedeadpunk> the only thing I haven't checked yet if haproxy role will allow us to do that easily
08:56 <noonedeadpunk> as we need same backends but with different port on backend, but same on frontend
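The haproxy shape being asked about ("same backends but with different port on backend, but same on frontend") would look something like the fragment generated below. All frontend names, IPs, and ports are invented for illustration; this is not output of the haproxy_server role:

```shell
# Sketch of the split-endpoint haproxy layout: the same infra hosts appear
# in two backends, differing only in the server-side port, while both
# frontends listen on the standard glance port 9292 on their own VIPs.
CFG=${CFG:-./haproxy-glance-demo.cfg}
cat > "$CFG" <<'EOF'
frontend glance-public
    bind 203.0.113.10:9292
    default_backend glance-api-public

frontend glance-internal
    bind 192.0.2.10:9292
    default_backend glance-api-internal

# Same three hosts in both backends; only the target port differs.
backend glance-api-public
    server infra1 192.0.2.11:9292 check
    server infra2 192.0.2.12:9292 check
    server infra3 192.0.2.13:9292 check

backend glance-api-internal
    server infra1 192.0.2.11:9293 check
    server infra2 192.0.2.12:9293 check
    server infra3 192.0.2.13:9293 check
EOF
echo "generated $CFG"
```

Because both frontends keep port 9292 on the VIPs, existing endpoint URLs in the keystone catalog would not change, which is what makes the upgrade path "pretty seamless".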
08:58 <damiandabrowski> noonedeadpunk: thanks, is there any document describing how enabling web-download makes ceph vulnerable?
08:59 <PTO> noonedeadpunk: Yes, the rabbit cluster is stable and I cant find any issues with it
08:59 <noonedeadpunk> It does not make ceph vulnerable - it makes web-download vulnerable
08:59 <noonedeadpunk> or any other backend added to ceph for $reasons
09:00 <noonedeadpunk> PTO: do you use TLS for rabbit?
09:00 <PTO> noonedeadpunk: https://ctrl.vi/i/XFE820BiL
09:00 <PTO> noonedeadpunk: Yes, i use TLS
09:00 <PTO> noonedeadpunk: I had 5 rabbitmq & galera nodes, but I have scaled it down to 3 as galera had slow writes due to sync.
09:01 <noonedeadpunk> ah, but I guess services still configured to have 5 backends?
09:02 <PTO> noonedeadpunk: I have re deployed everything and openstack services only use 3 nodes now
09:02 <PTO> noonedeadpunk: The problem where the same before
09:03 <noonedeadpunk> if you attach to cinder-volume journald (ie journalctl -f -u cinder-volume) and restart service - won't it show some more meaningful error about failure to initilize backend?
09:04 <PTO> noonedeadpunk: cinder-volume comes up fine (all five of them) and it works, but from time to time, block device mapping fails and cinder-volume needs a restart
09:04 <PTO> is 5 instances of cinder-volime too much?
09:05 <noonedeadpunk> and what backend you use?
09:05 <damiandabrowski> well, probably I still don't understand it :D This sentence may suggest that disabling show_image_locations solves the issue:
09:05 <damiandabrowski> "So far the focus has been on 'show_multiple_locations'. When that setting is disabled in Glance, it is not possible to manipulate the locations via the OpenStack Images API"
09:05 <damiandabrowski> So if we don't need it for ceph(worth checking, but we probably don't, then for most cases(i assume ceph is most common storage backend) we won't really need multiple glance-apis.
09:05 <damiandabrowski> Or maybe I missed something...
09:05 <PTO> noonedeadpunk: rbd
09:06 <noonedeadpunk> damiandabrowski: well yeah, I don't really know. Might be worth bugging them, but they have PTG same time as we do...
09:06 <damiandabrowski> :/
09:07 <noonedeadpunk> PTO: oh, well. I think it might be coordination issue ( ͡~ ͜ʖ ͡~)
09:07 <noonedeadpunk> damiandabrowski: the things is they were quite explicit about recommended way....
09:07 <PTO> noonedeadpunk: Can I scale it down to 2-3 instances without breaking it?
09:08 <noonedeadpunk> PTO: eventually, for RBD and active/active setup corrdination service like etcd or zookepeer is needed. And we aware of the issue with default setups now
09:09 <noonedeadpunk> Yes, you can scale down, but before that you will need to migrate volumes to other services
09:09 <PTO> noonedeadpunk: Would it be better to run a single instance of cinder-volume?
09:09 <noonedeadpunk> `cinder-manage volume update_host --currenthost <current host> --newhost <new host>`
09:10 <noonedeadpunk> after that you will be able to delete some servies `cinder-manage service remove <service> <host>`
09:10 <noonedeadpunk> cinder-manage can be found in cinder-api/volume container inside venv
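The scale-down procedure spelled out piecemeal above can be sketched as one dry-run script. The host names are placeholders, and with `DRY_RUN=1` (the default) the commands are only printed, since `cinder-manage` has to really be run from inside the cinder venv on a cinder-api/volume container:

```shell
# Dry-run sketch of scaling down cinder-volume services. OLD_HOST and
# NEW_HOST are invented example values in the <host>@<backend> form.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

OLD_HOST=${OLD_HOST:-os-infra-b-01@RBD}   # service being removed
NEW_HOST=${NEW_HOST:-os-infra-b-02@RBD}   # surviving service

{
  # 1. stop the cinder-volume instance you want to drop
  run systemctl stop cinder-volume
  # 2. re-home its volumes onto a service you are keeping (DB update only;
  #    with a shared rbd pool no data moves between pools)
  run cinder-manage volume update_host --currenthost "$OLD_HOST" --newhost "$NEW_HOST"
  # 3. only then can the now-unreferenced service be deleted
  run cinder-manage service remove cinder-volume "$OLD_HOST"
} | tee scale-down-plan.txt
```

Step 2 is why "migration" is needed before removal: each volume's `os-vol-host-attr:host` references a service, and the DB will not let a still-referenced service be dropped.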
09:10 <PTO> noonedeadpunk: I dont understand why the migration is needed? Cant i just stop the extra cinder-volume services and remove the backend services?
09:10 <noonedeadpunk> PTO: well, depends if you care about redundancy
09:10 <noonedeadpunk> No, you can't?
09:11 <PTO> noonedeadpunk: oh
09:11 <noonedeadpunk> As if you check as admin `openstack volume show` you will notice that volume is "binded" to some service
09:11 <PTO> noonedeadpunk: Seems pretty risky then
09:11 <noonedeadpunk> though it's "cluster" but tool and DB won't allow to drop service that is being referenced
09:12 <PTO> noonedeadpunk: So just to be clear, you suggest to create a new backend and migrate all volumes to it?
09:12 <noonedeadpunk> I'd say it's not very risky. But yes, needs to be careful :)
09:13 <PTO> noonedeadpunk: the cluster have currently 1200 volumes :-(
09:13 <noonedeadpunk> Um, no. What I meant is that for scaling down you need 1. stop cinder-volume you want to drop 2. move volumes from it to any other active service that you want to keep 3. delete service of choice
09:13 <PTO> noonedeadpunk: I guess ceph would need to copy the data to another pool?
09:13 <noonedeadpunk> no?
09:14 <noonedeadpunk> or we're talking about different things here :)
09:14 <PTO> noonedeadpunk: Got it not! Its only the volumes attached to the cinder-volumes which are to be deleted ( os-vol-host-attr:host)
09:15 <PTO> will this issue be fixed in a newer openstack version?
09:15 <noonedeadpunk> I guess we will discuss it today on PTG
09:15 <noonedeadpunk> at least it's on agenda
09:16 <PTO> noonedeadpunk: The cluster have been working for 2 years with the 5 cinder-volume configuration, but lately it gone very bad and it had failed allot of times
09:17 <noonedeadpunk> I'm not 100% sure it's about that. but it can be related at very least
09:17 <noonedeadpunk> What in fact very confuses me a lot of connection issues to rabbitmq in logs
09:17 <PTO> noonedeadpunk: Do you thing sacling to three cinder-volume will improve the stability or should it be lower?
09:17 <noonedeadpunk> and it feels like this is more likely to be an issue
09:18 <PTO> I am no rabbit expert, but i have spent allot of time digging in the logs
09:19 <noonedeadpunk> Well, we have 3 copies and more volumes spread across them without much troubles. But I think we might not see some intermittent ones due to absence of coordination
09:19 <PTO> The best I have found is: 2022-10-18 09:17:13.583 [error] <0.16539.21> closing AMQP connection <0.16539.21> (172.21.212.14:55116 -> 172.21.213.23:5671 - cinder-volume:1420715:653738af-7cc3-4343-8538-684d094bd8e6):
09:20 <PTO> noonedeadpunk: Will rabbit tell, if its out of connections or resources?
09:21 <noonedeadpunk> in logs? no idea
09:22 <PTO> maybe raising the connection limit will help?
09:22 <noonedeadpunk> why it's connecting to 172.21.213.79 twice?
09:23 <noonedeadpunk> don't you happen to have same host mentioned twice in a row in cinder.conf?
09:24 <PTO> I have quite some "closing AMPQ connection" in the logs (last 500): https://pastebin.com/JAFVzidy
09:25 <noonedeadpunk> yeah and all these are "error"
09:29 <PTO> noonedeadpunk: So you know how to increase connections on rabbitmq?
09:30 <PTO> noonedeadpunk: cinder.conf https://pastebin.com/NDZRCYMp
09:36 <noonedeadpunk> I never tuned that so can't say for sure except look in docs
09:38 <PTO> noonedeadpunk: According to rabbitmq status, I have very far from any limit :-/
09:39 <PTO> noonedeadpunk: I am planning an upgrade of ussuri and hopefully this will help
09:39 <noonedeadpunk> have you checked if some queue in some vhost have tons of unread messages?
09:40 <noonedeadpunk> ok, and what you're running now?:)
09:40 <noonedeadpunk> as jsut some rabbit versions are quite shitty
09:41 <noonedeadpunk> Like 3.7 iirc
09:41 <noonedeadpunk> can't recall about 3.8
09:41 <PTO> noonedeadpunk: All channels are empty or hav a few messages.
09:42 <PTO> noonedeadpunk: I am running Ussuri 21.2.10 latest
09:42 <PTO> noonedeadpunk: I am running rabbit 3.8.2
09:43 <noonedeadpunk> ok.
09:44 <noonedeadpunk> well, I can't really recall about beginning of 3.8 branch
09:44 <noonedeadpunk> iirc it had some improvements over previous ones, but not sure if improvements came in at the very beginning
09:46 <PTO> noonedeadpunk: What are your suggestions moving forward? Troubleshooting on rabbit, reducing cinder-volume og upgrade to newer OS release?
09:48 <noonedeadpunk> tbh whenever I have some weird issues with rabbit I just run openstack-ansible rabbitmq-install.yml -e rabbitmq_upgrade=true without changing any vriables - it re-installs rabbit and re-builds cluster. In 90% it jsut solves the issue and I don't spend any more time on investigation
09:48 <noonedeadpunk> jrosser_: we have a problem with rabbit on U which I brought recently in code ;(
09:49 <noonedeadpunk> https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/stable/ussuri/vars/debian.yml#L17
09:50 <noonedeadpunk> https://opendev.org/openstack/openstack-ansible-rabbitmq_server/commit/768000373edde8e472bb9d64960325cad6ef2839 brought too extreme bump of rabbit version...
09:50 <noonedeadpunk> and now it's higher then in V or W
09:51 <noonedeadpunk> Though it was done after moving to U to EM...
09:51 <noonedeadpunk> So maybe we can move that back to smth reasonable without much hussle...
09:58 <noonedeadpunk> But not sure if it's worth bump back U or update V/W....
10:06 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/ussuri: Bump rabbitmq version back  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/861729
10:08 <PTO> noonedeadpunk: Thank you very much for you advice! It was very helpful! I will try and deploy rabbit again and bump the OS verion. Which OS release would you aim for in a production cluster?
10:09 <noonedeadpunk> I guess under OS you mean Operating System, not OpenStack ? :D
10:10 <noonedeadpunk> Well, we currently run Ubuntu 20.04. Debian 11 should be really good choice as well IMO. IF you like rhel derivatives - would suggest Rocky Linux
10:10 <noonedeadpunk> That is available only since Yoga though
10:15 * noonedeadpunk trying to fix EM branches
10:20 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_rally stable/ussuri: Move rally details to constraints  https://review.opendev.org/c/openstack/openstack-ansible-os_rally/+/861730
10:21 <PTO> noonedeadpunk: I was reffering to OpenStack
10:22 <noonedeadpunk> Any that is supported? :D
10:23 <noonedeadpunk> Yoga is actually good candidate, that allows you to upgrade OS and will have upgrade path to Antelope then
10:25 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHA for galera, rabbitmq and rally roles  https://review.opendev.org/c/openstack/openstack-ansible/+/853029
10:37 *** ysandeep|lunch is now known as ysandeep
11:07 <anskiy> noonedeadpunk: hey! so, this thing: https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829. I've recreated my comment here: https://paste.opendev.org/show/bHiaspEIZBCahvDiwELe/
11:18 <noonedeadpunk> anskiy: sorry I indeed didn't have time to look into that but I do remember about it. I saw you added it also to the PTG etherpad so it's good
11:19 <noonedeadpunk> what I don't like about neutron_needs_openvswitch is that it not only controls package isntallation but also if ovs services are started
11:20 <noonedeadpunk> So if we change condition as mentioned in first option - ovsdb will be started on northd which we want to avoid
11:21 <noonedeadpunk> maybe last one is closest to what I was thinking
11:22 <anskiy> noonedeadpunk: okay, I'll try to think that option more thoroughly
11:23 <noonedeadpunk> So basically control installation and startup kind of independently
11:23 <noonedeadpunk> Nasty part is indeed that we need that for quite small usecase...
11:24 <noonedeadpunk> One annoying corner case...
11:24 <anskiy> noonedeadpunk: yeah, got your point. Could you please leave a comment for me in the change? Or I can do it myself if you want.
11:24 <noonedeadpunk> However, with that https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/857652 it might be more unified
11:25 <noonedeadpunk> Like we can install ovs with current logic, but leave service disabled/masked for ovn
11:29 <noonedeadpunk> yeah, sure
11:34 <noonedeadpunk> anskiy: eventually, why just not to mask service here on northd https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829/3/tasks/providers/ovn_config.yml#49 ?
11:36 <anskiy> noonedeadpunk: this sounds better, than policy-rc.d, TBH, as the later one would look a little bit off for this kind of logic
11:37 <anskiy> thank you!
11:37 <noonedeadpunk> it will be coupled with rc.d
11:37 <noonedeadpunk> as rc.d will prevent inital startup of service after package install
11:38 <anskiy> ah, yes, okay :)
11:38 <noonedeadpunk> and ovs packages are kind of still needed - they're jsut brought as dependencies...
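The two mechanisms being combined for the OVN northd case can be shown side by side: a Debian `policy-rc.d` script stops dpkg from auto-starting openvswitch right after package install, and a systemd mask keeps it from being started later. The sketch below writes into a scratch directory (`ROOT`) instead of `/`, so the paths are assumptions for demonstration only:

```shell
# policy-rc.d + systemd mask demo, kept inside a scratch dir so it can be
# tried unprivileged; on a real host these land in /usr/sbin and /etc.
ROOT=${ROOT:-./northd-demo}
mkdir -p "$ROOT/usr/sbin" "$ROOT/etc/systemd/system"

# policy-rc.d: exit code 101 means "action forbidden" to invoke-rc.d, so
# services pulled in as package dependencies are not started on install.
cat > "$ROOT/usr/sbin/policy-rc.d" <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x "$ROOT/usr/sbin/policy-rc.d"

# Masking a unit is a symlink to /dev/null; `systemctl mask ovsdb-server`
# would create the same link on a real host.
ln -sf /dev/null "$ROOT/etc/systemd/system/ovsdb-server.service"

"$ROOT/usr/sbin/policy-rc.d" || echo "policy-rc.d exited $?"
```

This matches the conclusion above: the ovs packages stay installed (they are dependencies anyway), but the services never run on northd nodes.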
11:38 <noonedeadpunk> but yes, for deb there will be some extra ones...
11:41 *** dviroel|out is now known as dviroel
11:51 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Switch linters job to focal  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742
11:52 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_rally stable/ussuri: Move rally details to constraints  https://review.opendev.org/c/openstack/openstack-ansible-os_rally/+/861730
12:00 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Use legacy image retrieval for CentOS 7  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861744
12:10 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742
12:14 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Use legacy image retrieval for CentOS 7  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861744
12:19 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742
12:20 *** ysandeep is now known as ysandeep|away
12:36 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHA for galera, rabbitmq and rally roles  https://review.opendev.org/c/openstack/openstack-ansible/+/853029
12:36 * noonedeadpunk wonders why he spends time on that actually
12:39 <noonedeadpunk> it's soooooo circular....
12:39 <noonedeadpunk> Really thinking jsut to drop all jobs, and make another patch that will depend on everything to see if it's safe to land
12:40 <noonedeadpunk> Or we should just EOL all that is not passing...
13:03 <jrosser_> i think there is a danger of spending a huuuuge amount of time on this
13:04 <jrosser_> better maybe to fix up a bunch of stuff locally and push it all with jobs dropped
13:04 <jrosser_> then we can revert that when the know broken things are merged
13:31 <opendevreview> Merged openstack/openstack-ansible-galera_server stable/yoga: Bump mariadb version to 10.6.10  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/860655
13:33 <opendevreview> Merged openstack/openstack-ansible-plugins master: Use ansible_facts[] instead of injected fact vars  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/860424
13:49 <noonedeadpunk> It actually not _that_ bad at the end...
14:02 <jrosser_> one hour till ptg isnt is?
14:02 <mgariepy> now ?
14:03 <mgariepy> 14 utc is not ? isn't it ?
14:03 <mgariepy> is now**
14:03 <jrosser_> oh yes your right
14:03 <mgariepy> at least in my TZ :P
14:03 <noonedeadpunk> It's now, yes:)
14:04 <noonedeadpunk> #link https://www.openinfra.dev/ptg/rooms/essex
14:28 <opendevreview> Merged openstack/openstack-ansible-lxc_container_create master: Replace usage of networkd template with role  https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/861391
14:31 *** ysandeep|away is now known as ysandeep
14:52 *** dviroel is now known as dviroel|dr_appt
15:26 <anskiy> regarding the distro support (I've never tried it): from what I've heard, it looks like it should be on some separate release cycle and only for stable branches?..
15:44 *** ysandeep is now known as ysandeep|out
15:53 <noonedeadpunk> nah, it's ame release cycle. but we can try them only once it's about time for us to release as well, which leaves very little time to test them
16:16 <opendevreview> Merged openstack/openstack-ansible-os_neutron master: Avoid ovs restart during package upgrade  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/857652
16:48 <jrosser_> https://github.com/rabbitmq/rabbitmq-server/releases
16:49 *** dviroel|dr_appt is now known as dviroel
16:52 <jrosser_> https://launchpad.net/~rabbitmq/+archive/ubuntu/rabbitmq-erlang
17:13 <jrosser_> mgariepy: noonedeadpunk: here what we do to avoid updates https://paste.opendev.org/show/bZQ0YcCWTEtnvUw1ZXxa/
17:13 <jrosser_> give the URL to the precise snap to install rather than have it work anything out
17:14 <noonedeadpunk> hm. is there any way do discover it relatively easily?
17:14 <noonedeadpunk> *to
17:14 <noonedeadpunk> *to discover
17:15 <jrosser_> not sure, i'd have to dig a bit
17:15 <jrosser_> but likley not
17:16 <noonedeadpunk> as we could put that url to local facts and update only when some flag is provided
17:16 <noonedeadpunk> like lxd_update=true
17:16 <noonedeadpunk> the question is how to find new url though
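One way the local-facts idea could work is sketched below: resolve the snap download URL once, cache it in an Ansible local fact, and only refresh it when a flag like the `lxd_update=true` mentioned above is set. The snapcraft info endpoint is real, but the exact JSON handling, the fact layout, and the stubbed URL are assumptions for illustration:

```shell
# Cache a pinned snap URL in an Ansible local fact file. On a real host
# FACTS_DIR would be /etc/ansible/facts.d; here it defaults to a scratch
# dir so the flow can be tried unprivileged.
FACTS_DIR=${FACTS_DIR:-./facts.d}
mkdir -p "$FACTS_DIR"

LXD_UPDATE=${LXD_UPDATE:-false}
if [ ! -f "$FACTS_DIR/lxd.fact" ] || [ "$LXD_UPDATE" = "true" ]; then
    # Discovering the current URL would need network access, e.g.:
    #   curl -s -H 'Snap-Device-Series: 16' \
    #     https://api.snapcraft.io/v2/snaps/info/lxd
    # and picking the stable channel's download url from the JSON.
    # Here the result is stubbed with a placeholder value.
    SNAP_URL="https://example.invalid/lxd_pinned.snap"
    printf '{"snap_url": "%s"}\n' "$SNAP_URL" > "$FACTS_DIR/lxd.fact"
fi

# Later plays read ansible_local.lxd.snap_url and install exactly that file.
cat "$FACTS_DIR/lxd.fact"
```

Because the fact file persists between runs, reruns without the flag keep installing the same pinned snap, which is the "avoid updates" behaviour from the paste.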
17:21 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742
17:22 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742
18:00 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Restrict pyOpenSSL to less then 20.0.0  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861742
19:06 <noonedeadpunk> as eventually passing url is not that nasty as non-existent proxy
19:06 <mgariepy> it's different indeed ;)
19:08 <noonedeadpunk> fwiw ussuri is almost fixed with minimum temporary disabled jobs. Though it's stuck on upgrade from Train which becomes a bit annoying
19:36 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Use legacy image retrieval for CentOS 7  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861744
19:36 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHA for galera, rabbitmq and rally roles  https://review.opendev.org/c/openstack/openstack-ansible/+/853029
19:38 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/ussuri: Return CentOS 7 jobs to voting  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/861788
19:39 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Return CentOS 7 jobs to voting  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/861789
19:41 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_rally stable/ussuri: Return jobs to voting  https://review.opendev.org/c/openstack/openstack-ansible-os_rally/+/861790
19:44 <noonedeadpunk> I wonder how long we should maintain upgrade jobs.... As they make things really annoying for EM branches
19:45 <noonedeadpunk> And eventually we will gate to the EOL one day which we won't be able to fix anyway
19:45 <noonedeadpunk> *we will get to the EOL one
19:46 <noonedeadpunk> Might make sense to get rid of upgrade jobs with transition to EM
19:46 <noonedeadpunk> as basically that would be last "release" and everything that will come later is best effort
19:46 <jrosser_> sounds like a good idea
19:59 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/ussuri: Bump rabbitmq version back  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/861729
20:10 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/train: Use cloudsmith repo for rabbit and erlang  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/861794
20:24 <NeilHanlon> tw

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!