Tuesday, 2024-01-09

opendevreviewMerged openstack/ansible-role-systemd_networkd stable/2023.2: Fix defenition of multiple static routes for network  https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/90375400:35
opendevreviewMerged openstack/openstack-ansible-ceph_client stable/2023.2: Add backwards compatibility of ceph_components format  https://review.opendev.org/c/openstack/openstack-ansible-ceph_client/+/90484900:52
opendevreviewMerged openstack/openstack-ansible-lxc_hosts master: Fix resolved config on Debian  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/90482800:55
opendevreviewMerged openstack/openstack-ansible stable/2023.1: Skip installing curl for EL  https://review.opendev.org/c/openstack/openstack-ansible/+/90484602:52
opendevreviewMerged openstack/openstack-ansible stable/zed: Skip installing curl for EL  https://review.opendev.org/c/openstack/openstack-ansible/+/90484703:08
opendevreviewAndrew Bonney proposed openstack/openstack-ansible-lxc_hosts stable/2023.2: Fix resolved config on Debian  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/90508108:40
jrossergood morning08:41
noonedeadpunk\o/09:09
noonedeadpunkaccording to the bug tracker, we have a couple of new folks rolling out deployments right now09:09
signed@noonedeadpunk For bug #2048751, I attached the correct log (the log attached at first was the other bug's one)09:34
noonedeadpunksigned: aha, thanks09:46
noonedeadpunkquestion - have you tried to run the playbook I suggested?09:46
noonedeadpunk`openstack-ansible playbooks/lxc-hosts-setup.yml -e lxc_image_cache_refresh=true`?09:46
noonedeadpunkAs I kinda wonder if it's trying to run against old LXC image or smth like that09:47
noonedeadpunkas you're right - I also don't see a reason why the package is not available09:48
signednoonedeadpunk: Saw your answer after destroying the test VM to test AIO (currently in the deploy phase, we're kinda time-limited to roll a test instance), if AIO is validated, I'll try09:48
noonedeadpunkBut what is concerning is that apt update should list repos in the output09:48
noonedeadpunkAnd there's none09:48
signedYeah, this is definitely weird09:49
noonedeadpunkSo the output I'd expect would be like https://paste.ubuntu.com/p/ZGRzfyhg4C/09:50
noonedeadpunk(or smth)09:50
noonedeadpunk(just provided from localhost)09:50
noonedeadpunkSo it felt like there're no repos configured at all for the container09:51
signedShouldn't that fail?09:51
noonedeadpunkI think no....09:51
signedGonna try in Docker 2s09:51
signedDoesn't fail09:53
signedWith no repos set09:53
noonedeadpunkAnd like no output for apt update?09:54
signedhttps://paste.ubuntu.com/p/KNQ73Mrb97/09:54
signedI find that bug puzzling09:55
noonedeadpunkah, it also does `apt-get update` which will provide slightly less output09:55
noonedeadpunkSo yeah, I assume there simply were no repos configured09:56
signedRunning `apt-get update` on my Docker container returns no repos09:56
noonedeadpunkhuh09:56
signedReturns the same thing*09:56
signedas the bug09:56
noonedeadpunkaha, ok09:56
signedAnd my AIO deployment failed09:58
signed2024-01-09 09:58:38.007 61410 ERROR cinder.service [-] Manager for service cinder-volume aio1@lvm is reporting problems, not sending heartbeat. Service will appear "down".09:58
noonedeadpunkOk, this is more or less "known" I think09:59
signedAny leads on how to fix it? My SCENARIO was "aio_metal"09:59
noonedeadpunkOr at least it's not the first time I'm seeing it09:59
signedBut that leads to a 50309:59
noonedeadpunk503 on which request?10:00
signedUpdate driver status failed: (config name lvm) is uninitialized.*10:00
noonedeadpunkand what release are you running?10:00
signedMaster10:00
signedae6e59d1e5318cd909771b9bb3a198d321e6a03b10:00
signedI have had so many problems with OpenStack on our env. I am hoping I can get those resolved because I feel it's such a great product10:01
noonedeadpunkWhat's in `journalctl -u cinder-volume.service`? There must be some related error I believe10:02
noonedeadpunkThough I'd assume this issue should be gone on master...10:02
signedTable 'cinder.services' doesn't exist10:03
noonedeadpunkUm, can you kindly paste the error from the Ansible run then?10:03
signedAnsible succeeded10:03
noonedeadpunkThen scroll down the log to latest error :)10:04
noonedeadpunkOr let me try to reproduce it10:04
signedRunning jctl -f shows only cinder shitting itself10:04
signedopenstack-ansible setup-openstack.yml ran just fine10:05
signedWithout failures10:05
noonedeadpunkI assume including tempest tests then10:05
signedShould I try os-cinder-install.yml?10:06
signedWould that fix the database?10:06
noonedeadpunkwhat's the output of `openstack volume service list`?10:06
noonedeadpunk503?10:07
noonedeadpunk`source /root/openrc` first10:08
noonedeadpunksigned:  ^10:08
signedSucceeds, I see a cinder-scheduler up and a cinder-volume down10:08
noonedeadpunkfwiw, I don't see failures for cinder-volume in recently spawned AIOs of mine :(10:09
noonedeadpunkThough they're mainly LXC ones, not metal...10:09
jrosseryou have to me *much* more careful about getting addressing correct for a metal deploy10:09
noonedeadpunkor using ceph...10:09
jrosser*be10:09
signedI didn't modify anything in the var files, so maybe my lvm config is botched10:09
signed(The host is installed on LVM)10:10
signedGonna try redeploy10:10
jrossersigned: the AIO deploy should "just work" without any modification in a VM, that's how we do our CI10:10
noonedeadpunkworks on metal as well....10:11
jrosserfor both metal and lxc scenarios10:11
noonedeadpunkbut that was quite old aio...10:11
noonedeadpunkwell, we don't always test cinder in CI with tests.10:11
signedYeah, I feel that... What does the LVM partitioning need to look like?10:11
noonedeadpunk./scripts/bootstrap_aio.sh does LVM partitioning in general10:12
signedAll deps start from a clean 22.04 install with a 128GB disk10:12
signedDo I need to shrink Ubuntu PV?10:12
jrosser"it depends"10:12
signedBecause, for all my deps so far I just used full-disk partitioning10:13
noonedeadpunkfrom the loopback device...10:13
jrosserthere is no prescribed way that OSA works, it is more like a toolbox10:13
jrosserthe AIO provides a known reference but you are able to construct your production deployment really any way you like10:13
noonedeadpunkHere's the part that configures "volumes" for AIO: https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/tasks/prepare_loopback_cinder.yml#L57-L7610:13
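(For context: the linked tasks build a loopback-backed LVM volume group for cinder. A minimal sketch of that idea, assuming the cinder-volumes name the AIO expects - the linked file is the authoritative version:)

```yaml
# Sketch only, not the exact role code: back the cinder-volumes VG with a
# loopback file so no dedicated disk is needed.
- name: Prepare a loopback-backed cinder-volumes volume group
  hosts: localhost
  become: true
  tasks:
    - name: Create a sparse file to back the volume group
      ansible.builtin.command: truncate -s 10G /openstack/cinder.img
      args:
        creates: /openstack/cinder.img

    - name: Attach the file to a free loop device
      ansible.builtin.command: losetup --find --show /openstack/cinder.img
      register: cinder_loop
      changed_when: true   # simplified; the real tasks handle idempotency

    - name: Create the cinder-volumes VG on the loop device
      community.general.lvg:
        vg: cinder-volumes
        pvs: "{{ cinder_loop.stdout }}"
```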
signedYeah, I feel that. It's so weird that I have so many issues, whether with OSA or with a traditional deployment10:14
signedDevStack left me with no VM networking at all, traditional deployment also had network issues (might very well be my config)10:14
noonedeadpunkWell, I still feel that the first two issues were pretty much related to the cdrom actually10:15
signedProbably10:15
noonedeadpunkBut networking is really a thing10:15
noonedeadpunkYou must very well understand how you want networking to look for your deployment10:15
noonedeadpunkotherwise it will be a mess10:15
signedTbf, it's probably my very bad understanding of it10:15
jrosseri think that this is my point about "toolbox"10:16
jrosserwith OSA pretty much anything is possible, but you have to know what you want10:16
jrosserand then express that in the config10:16
noonedeadpunkimo, networking is one of the most complex things in openstack10:16
jrosserrather than expect a magical installer to make all those choices for you10:16
signedI don't really want anything for now, it's really a PoC for a company project10:16
signedWe're debating between Hypervisors10:17
jrosserbut you have to make some basic decisions about storage, networking and approach to H/A as a minimum10:17
noonedeadpunkyeah, AIO should generally work. For this purpose I usually have VMs with a 100GB disk, 4-8 vCPUs and 12+ GB of RAM.10:17
noonedeadpunk(and nested virt)10:18
noonedeadpunkfwiw, you should be also able to spawn instances even without cinder, just using nova ephemeral drives10:18
signedYeah I know10:18
signedBut Horizon spat a 503 at me10:19
noonedeadpunkwell, you can drop the cinder installation as a whole then I guess... But I wonder if this 503 is related to cinder10:19
noonedeadpunkor rather we have an issue with the horizon deployment...10:19
jrosserwould be useful to look at hatop output10:20
signedShould I try on point release instead of Master?10:20
jrossermaster is the current development branch for the next release10:20
jrosser2023.2 is the most recently made stable release for OpenStack Bobcat10:20
noonedeadpunkfrankly speaking I haven't looked into our horizon deployment for a while since we don't use it internally10:21
signedI have a lot of space in the ubuntu PV actually (Ubuntu only stretches its VG to 64G)10:22
signedSo space is not an issue10:22
jrosseryou can see how that works in the AIO here https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/templates/user_variables.aio.yml.j2#L33710:23
jrosser`volume_group: cinder-volumes`10:23
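(Roughly, the override in the linked template has the following shape - reproduced from memory, so treat the linked user_variables.aio.yml.j2 as authoritative:)

```yaml
# Approximate shape of the AIO's LVM backend definition for cinder; the exact
# keys/values in the linked template may differ.
cinder_backends:
  lvm:
    volume_group: cinder-volumes
    volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name: LVM_iSCSI
```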
signedDestroyed that VM, if that fails again, gonna hit you up10:24
opendevreviewMerged openstack/openstack-ansible-os_magnum stable/2023.2: Add missing magnum octavia client configuration  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90450810:42
hamidlotfi_Hi there,10:46
hamidlotfi_What does the value of the `revision_number` field in the port information section indicate?10:46
hamidlotfi_It went from 1 to 500 within half an hour, and also I have this message on the Neutron server:10:46
hamidlotfi_`Successfully bumped revision number for resource 46436042-db65-4c6b-9f15-4b2f8e9ee942 (type: ports) to 500`?10:46
signednoonedeadpunk: With the LXC install on AIO I get actual apt update output10:48
jrossersigned: how do you install your operating system - we're likely all doing test environments with ubuntu cloud image based VMs10:49
signedFrom ISO10:49
noonedeadpunkhamidlotfi_: I assume it's bumped by any update of port information, like attach/detach/move between hosts10:51
noonedeadpunkthough it's better to ask neutron folks here as they may know more10:52
signedBut it then fails somehow at the same step10:52
noonedeadpunkCan you kindly paste sources.list with cdrom record so I could try to reproduce that?10:53
signedGod, this one is extra weird10:54
opendevreviewMerged openstack/openstack-ansible-rabbitmq_server stable/zed: Add ability to add custom configuration for RabbitMQ  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/90094110:54
signedIt's getting a broken pipe10:54
noonedeadpunkfor ansible connection? Hard to understand what is the error without seeing what you're talking about10:55
signedhttps://paste.ubuntu.com/p/r77jpJYr8m/10:55
signedAnd it magically succeeded10:56
signedIt tried 20+ times and then succeeded10:56
noonedeadpunkThere could be a reconnect issued afterwards. This is potentially related to SSH connection persistence10:56
noonedeadpunkOr well...10:56
noonedeadpunk20+ times is too much10:56
signedWent way too fast for me to see how many but I've seen 10 tries10:57
signedSo might be more10:57
jrosserit's very helpful to have some wider context in debug pastes so that we can see which tasks are involved10:57
noonedeadpunk++10:58
signedAfter this dep I'll do a `script` record of the ansible output10:59
signedIn a clone of the base VM10:59
noonedeadpunkthere's a log stored in /openstack/log/ansible-logging/ansible.log just in case11:03
signednoonedeadpunk: https://paste.ubuntu.com/p/smw7P4Vyqz/11:10
noonedeadpunksigned: that is expected to retry11:11
signedYeah, I know, but it succeeded, so no sources.list issue on AIO11:12
noonedeadpunkwe do some tasks in an async way, and then check the state of the async job. We have like 300 retries for this specific task11:12
signedOh okay11:12
noonedeadpunkIt usually succeeds within 40 or so for a reasonably fast connection/CPU11:13
noonedeadpunkin systemd_mount we even have a task that is usually expected to fail, but is rescued, so you won't see any failure in the result of the play11:14
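(The pattern being described is Ansible's async fire-and-poll: start a long task without waiting, then poll its status with a large retry count, so the "FAILED - RETRYING" lines in the paste are expected. An illustrative sketch, not the exact openstack-ansible task:)

```yaml
# Illustrative only; task names and the command are invented for the example.
- name: Start a long-running job without blocking
  ansible.builtin.command: /usr/local/bin/some-long-job
  async: 7200     # let it run for up to two hours
  poll: 0         # don't wait here
  register: long_job

- name: Poll the async job until it finishes
  ansible.builtin.async_status:
    jid: "{{ long_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 300    # many retries are normal for slow hosts
  delay: 10
```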
opendevreviewMerged openstack/openstack-ansible stable/2023.2: Modify RGW client format  https://review.opendev.org/c/openstack/openstack-ansible/+/90485511:24
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_hosts stable/2023.1: Fix resolved config on Debian  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/90508211:51
noonedeadpunkfwiw, blazar patch depends on openstack_resources implementation (the one that handles aggregate creation)11:53
noonedeadpunkhttps://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/90487611:53
jrossergiven we are early in the cycle we should review/merge the openstack_resources stuff asap to get it tested11:56
noonedeadpunkthe only thing that fails is somehow TLS upgrade job for magnum: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90118511:58
noonedeadpunkI kinda have little idea about reasons to be frank11:58
noonedeadpunk`Cluster type (vm, Unset, kubernetes) not supported`11:58
noonedeadpunkbut how tls is different from non-tls is mistereos for me11:59
noonedeadpunk* mysterious11:59
jrosserthat should be the OS type, ubuntu/coreos or whatever magnum wants11:59
jrosserso perhaps the issue is actually with the image upload not setting the OS metadata on the image12:00
noonedeadpunkoh, it issues http request to keystone12:00
noonedeadpunkyeah, indeed. 12:01
noonedeadpunkbut regular upgrade job passes at the same time :(12:01
noonedeadpunkBut yeah, I see what you mean...12:02
jrosserdoes the upgrade switch to TLS mode or start that way12:02
noonedeadpunkI think it switches to tls....12:02
noonedeadpunkdamiandabrowski: do you recall?:) ^12:02
jrosserhttp vs https for keystone could be magnum.conf error / not restarted service using old settings?12:03
noonedeadpunksad part is that we somehow don't have /etc in upgrade jobs logs12:04
jrosseri was just going to say, where are they?!12:05
jrosserwe did merge my patch that refactored the log collection12:07
jrosserthat would be no.1 suspect /o\12:07
jrosserlooks like we have the /etc/ logs for non-upgrade jobs12:08
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_heat master: Deprecate and remove heat_deferred_auth_method variable  https://review.opendev.org/c/openstack/openstack-ansible-os_heat/+/90510912:10
damiandabrowskinoonedeadpunk: I fixed similar issue some time ago: https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/89752612:10
damiandabrowskiso I'm surprised it showed up again :/12:11
jrosseroh well does the change to add openstack_resources role remove that wait?12:12
jrossershould be ok https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90118512:12
noonedeadpunkdamiandabrowski: I think it's different actually, as things pass down to tempest12:15
noonedeadpunkAnd only during tempest run magnum fails to create a cluster12:15
damiandabrowskiah, you're right12:15
noonedeadpunkBut in tempest log I do see it asks keystone over http12:16
noonedeadpunkbut it gets a reply as well...12:16
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: Move insecure param to keystone_auth section  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90511012:17
noonedeadpunk^ it's unrelated fwiw12:17
noonedeadpunkwill try to reproduce I guess...12:22
signedAIO LXC deployment succeeded and I have Horizon working12:24
noonedeadpunkactually... you might need to explicitly specify Horizon in SCENARIO when doing a metal installation... As I guess we still don't have it as "base" for metal12:28
signedI don't know, I'm pretty sure it installed (AFAIK it's the only service that listens on HTTP)12:30
noonedeadpunkwell, haproxy always listens on http these days12:31
noonedeadpunkbut it does not have backends12:31
noonedeadpunkit's also there for serving security.txt and some extra mappings if needed12:32
noonedeadpunkSo the fact that smth is listening on 80/443 doesn't mean that horizon is installed12:33
signedYou know better than me, anyway I have a working cluster, thanks a lot12:33
opendevreviewMerged openstack/openstack-ansible stable/2023.1: Modify RGW client format  https://review.opendev.org/c/openstack/openstack-ansible/+/90485612:57
opendevreviewMerged openstack/openstack-ansible-os_magnum stable/2023.1: Add missing magnum octavia client configuration  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/90450913:01
opendevreviewMerged openstack/openstack-ansible-os_blazar master: Fix Blazar authentication and endpoints definition  https://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/90479113:38
noonedeadpunk#startmeeting openstack_ansible_meeting15:00
opendevmeetMeeting started Tue Jan  9 15:00:16 2024 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.15:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:00
opendevmeetThe meeting name has been set to 'openstack_ansible_meeting'15:00
noonedeadpunk#topic rollcall15:00
noonedeadpunko/15:00
jrossero/ hello15:01
noonedeadpunk#topic bug triage15:05
noonedeadpunkSo, we have quite some bugs. Some are already patched, but some I'd love to get some input on15:05
noonedeadpunk#link https://bugs.launchpad.net/openstack-ansible/+bug/204865915:06
jrossersounds reasonable15:07
noonedeadpunkso I'm thinking about what we want to do here15:07
noonedeadpunkLike removing cdrom from the lxc container repos is the bare minimum15:08
noonedeadpunkBut should we also try to remove that from host repos?15:08
jrosserthere wasn't an example with the cdrom attached to the bug?15:08
noonedeadpunknope15:08
NeilHanlono/ hiya15:09
noonedeadpunkSo likely need to get CDROM installation somewhere/somehow....15:09
noonedeadpunkto get smth to parse15:09
noonedeadpunkgoing next15:13
noonedeadpunk#link https://bugs.launchpad.net/openstack-ansible/+bug/204820915:13
noonedeadpunkIn fact not sure what's going wrong here and was not able to reproduce the issue15:14
jrosser`add-apt-repository --remove <whatever> || true` or something15:14
jrosseranyway15:14
noonedeadpunkI think we should do that only on containers? Not touch metal hosts?15:15
jrosseryeah15:16
jrosserso the prep script could do that if we know the thing to remove15:16
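(One possible shape for that, assuming the container cache prep simply edits the image's APT sources - a sketch, not an agreed implementation:)

```yaml
# Drop any cdrom entries from the cached container image's APT sources.
# In practice the path would point at the container rootfs; /etc/apt/sources.list
# is used here just for illustration.
- name: Remove cdrom entries from APT sources
  ansible.builtin.lineinfile:
    path: /etc/apt/sources.list
    regexp: '^deb\s+cdrom:'
    state: absent
```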
jrosserregarding the magnum roles thing15:17
jrosserandrewbonney has made a magnum test environment yesterday and i don't think we ran into that15:17
jrosserand i agree that this looks like a bug that was fixed long long ago15:17
andrewbonneyYeah, ran with latest 2023.1 tag15:18
noonedeadpunkI will suggest to run a test playbook then that will do pretty much the same logic15:18
jrosserwas there a bug where we handled single roles / lists of roles incorrectly?15:20
jrosserthis https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/89601715:21
jrosserhmm that's not the same at all15:21
noonedeadpunknah, I was thinking about different issue15:22
jrossercould be the SDK version in the utility container somehow messed up15:22
noonedeadpunkI was thinking about this: https://opendev.org/openstack/openstack-ansible-plugins/commit/a4357fbb9a43f44bfee72b01db219f080268fbe715:23
noonedeadpunkbut yeah, what confuses me is that the data the plugin claims is missing is in the args15:23
noonedeadpunkso it pretty much looks like some collection issue15:23
noonedeadpunkthough it was reported as reproducible error15:24
jrosserbecause the error actually is from the SDK not the collection15:24
noonedeadpunkYeah, could be actually15:24
noonedeadpunkthat is the good point15:24
jrosserwell what i mean is consistency between the collection on the controller and the SDK on the utility host needs to be correct15:24
noonedeadpunkI think in 27.3.0 I did bump the collection version....15:25
damiandabrowskihi! (sorry, my meeting delayed)15:25
noonedeadpunknah..... only plugins and config_template https://opendev.org/openstack/openstack-ansible/commit/80f717a1d82bd5561d2a71a820ee4999d7abf87d15:26
noonedeadpunkand u-c seems to be left intact as well....15:26
noonedeadpunkOk, next15:27
noonedeadpunk#link https://bugs.launchpad.net/openstack-ansible/+bug/204828415:28
noonedeadpunkthis is very good one15:28
jrossersuggest to rebuild utility container i think15:28
noonedeadpunkwhich will take us quite some effort to implement I believe15:28
noonedeadpunkSo the GPG definition to install should be part of the repo stanza I assume. 15:29
noonedeadpunkThen I guess we in fact need a module to do apt_key basically15:30
noonedeadpunkexcept it won't use apt_key15:30
noonedeadpunkbut will be able to convert to gpg format when the key is not in it15:30
noonedeadpunkand download/paste regardless of source (data/url/file/etc)15:31
noonedeadpunkAnd I guess we need to cover that sooner rather than later, as I guess apt-key will be absent in 24.0415:31
jrosserwell i did see in the latest debian that the structure of the /etc/apt directory is very much changed15:34
jrosserit would need looking at but this might make it easier to manage config fragments than before15:34
jrosserexample https://2b411714dff7c938c230-49130948639ed40b079dd8450de896f5.ssl.cf5.rackcdn.com/878794/33/check/openstack-ansible-deploy-infra_lxc-debian-bookworm/c4d34b5/logs/etc/host/apt/trusted.gpg.d/15:36
noonedeadpunkso, I guess idea now, is to have an explicit path to GPG key per repo15:40
noonedeadpunkand have `signed-by`15:41
noonedeadpunkie https://www.digitalocean.com/community/tutorials/how-to-handle-apt-key-and-add-apt-repository-deprecation-using-gpg-to-add-external-repositories-on-ubuntu-22-04#option-1-adding-to-sources-list-directly15:41
noonedeadpunkor are you saying that it should be just enough to put the key into trusted.gpg.d?15:42
jrosserwell actually i wonder if we should migrate to this everywhere https://docs.ansible.com/ansible/latest/collections/ansible/builtin/deb822_repository_module.html15:45
jrossermaybe the `signed_by` parameter allows us enough flexibility15:46
jrosserif we were to change anything at all it would be worth migrating to the modern ansible module and regularising the data around that15:46
noonedeadpunkAh, I fully missed the existence of this module15:47
jrosser`Either a URL to a GPG key, absolute path to a keyring file, one or more fingerprints of keys either in the trusted.gpg keyring or in the keyrings in the trusted.gpg.d/ directory, or an ASCII armored GPG public key block.`15:47
jrosseris that good enough?15:47
noonedeadpunkyes, should be totally fine actually15:48
jrossercool15:48
noonedeadpunkthat really slipped my attention15:48
noonedeadpunk(module name is also slightly cumbersome)15:48
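(For reference, a sketch of what using that module could look like; the repository details are placeholders, not something the roles do today:)

```yaml
# ansible.builtin.deb822_repository (ansible-core >= 2.15) writes a deb822-style
# .sources file and can fetch/convert the signing key via signed_by.
- name: Add an external repository together with its key
  ansible.builtin.deb822_repository:
    name: example-repo                          # placeholder name
    types: deb
    uris: https://example.com/apt               # placeholder URI
    suites: "{{ ansible_facts['distribution_release'] }}"
    components: main
    signed_by: https://example.com/apt/gpg.key  # URL, keyring path or armored key block
    state: present
```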
noonedeadpunk#topic office hours15:48
noonedeadpunkI guess we're about out of time for bugs, despite there being a couple still around (less important)15:49
jrosserso i did want to discuss cluster-api a bit15:49
jrosserandrewbonny and i are putting quite some effort into the patches for the vexxhost driver15:50
jrosserand we need to be sure that we do the right thing by putting it in the ops repo15:51
jrosserthe integration for an 'easy' AIO is pretty wild https://review.opendev.org/c/openstack/openstack-ansible-ops/+/90217815:52
jrosserand this is only the start as it will need overridable playbook hooks putting in a bunch of the existing openstack-ansible/playbooks/*15:53
noonedeadpunkSuggest placing all that in the integrated repo from the beginning?15:55
jrosserwell perhaps15:55
jrossereven today there is talk in #openstack-containers of keeping the other driver out of tree15:56
jrosserthough there are probably downsides to putting it in os_magnum as well15:57
jrossermaybe we don't always want to be carrying collection dependencies on the k8s stuff15:57
jrosserthough correct use of include vs import might make that not be an issue15:58
noonedeadpunkas well as installing a bunch of vexxhost collections for everyone...15:59
jrosserright15:59
noonedeadpunkfrankly speaking I don't know at this point16:00
jrosseri am a bit sad about it all tbh16:00
jrosserbecause for the first time ever magnum "just worked" in our environment16:01
noonedeadpunkOn one side I do fully understand and share your concerns. On the other, I don't want to support or be in any way "responsible" for having that as a supported part of the project....16:01
noonedeadpunkAnd at the same time I will very likely install one of the CAPI drivers this year as well16:01
noonedeadpunkso pretty much interested in a good outcome as well16:01
jrosseri can continue with what i'm doing and add a bunch of playbook hooks16:04
jrosserthat might be a good feature anyway16:04
noonedeadpunkIf you think it's unreasonably cumbersome - we in fact may indeed prefer adding all in tree somehow16:07
jrosserthis is why i'd really like someone else to have a go with it16:07
noonedeadpunkmaybe through naming/comments/documentation we can remove liability for the feature from ourselves....16:07
jrosseri think my tolerance of a complex setup is quite high16:07
jrosserbut that might not be good for everyone16:08
noonedeadpunkthen it should not be me lol16:08
jrosserand ideally it should be trivial to make an AIO16:08
jrosserand trivial to make a CI job16:08
noonedeadpunkyeah....16:08
jrossereventually for a production deployment we are managing really fine with it out of tree, in a collection in the ops repo16:09
jrosserbut making a CI job in os_magnum is pretty challenging16:10
jrosseras we have to "call out" to the collection16:10
jrosserat some point after magnum and before tempest16:11
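(A purely hypothetical sketch of the "playbook hook" idea - the variable and hook file names are invented, and import_playbook resolves its target at parse time so the override would have to come from extra-vars:)

```yaml
# In something like openstack-ansible's setup-openstack.yml, a deployer-overridable
# hook could be imported between the magnum and tempest plays.
- import_playbook: os-magnum-install.yml

# Hook point: deployers could point this at their cluster-api bootstrap playbook.
- import_playbook: "{{ post_magnum_hook | default('hooks/noop.yml') }}"

- import_playbook: os-tempest-install.yml
```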
noonedeadpunk#endmeeting16:28
opendevmeetMeeting ended Tue Jan  9 16:28:09 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:28
opendevmeetMinutes:        https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-01-09-15.00.html16:28
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-01-09-15.00.txt16:28
opendevmeetLog:            https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-01-09-15.00.log.html16:28
