Tuesday, 2024-03-05

opendevreviewJimmy McCrory proposed openstack/openstack-ansible-galera_server master: Additional TLS configuration options  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/911009 03:24
noonedeadpunkmornings08:05
jrosserhello08:24
jrosseris there anything we should do on v / w / x branches ahead of them becoming unmaintained/<...> ?09:03
noonedeadpunkyes, for sure09:10
noonedeadpunkI was actually going today to spend some time on sorting things out09:10
noonedeadpunkand we're already quite overdue to create new releases for the stable branches as well09:10
noonedeadpunkthat was one thing I really wanted to backport, as it messes up 2023.2 quite a lot: https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/910384 09:13
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ovn-bgp-agent to source install requirements  https://review.opendev.org/c/openstack/openstack-ansible/+/909694 09:15
noonedeadpunklet's quickly land https://review.opendev.org/c/openstack/openstack-ansible/+/908499 to unblock master branch and renos09:16
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: reno: Update master for unmaintained/yoga  https://review.opendev.org/c/openstack/openstack-ansible/+/908499 09:17
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Do not use underscores in container names  https://review.opendev.org/c/openstack/openstack-ansible/+/905433 09:17
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Allow env.d to contain underscores in physical_skel  https://review.opendev.org/c/openstack/openstack-ansible/+/905438 09:17
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Update upstream SHAs  https://review.opendev.org/c/openstack/openstack-ansible/+/910386 09:18
jrosseryou think we can get some of the older branches working again? i've not really had time to look at them much recently09:22
noonedeadpunkI guess yes... They can't be terribly borked, but also time is super limited :(09:35
noonedeadpunkMost likely smth rabbit, smth erlang...09:36
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Drop upgrade jobs for Zed  https://review.opendev.org/c/openstack/openstack-ansible/+/911050 09:44
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Bump SHAs for Zed  https://review.opendev.org/c/openstack/openstack-ansible/+/911051 09:46
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/zed: Bump ansible-core to 2.12.8  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/911064 09:49
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-systemd_networkd stable/zed: Use OriginalName instead of Name in systemd.link  https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/908815 09:49
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.1: Bump SHAs for 2023.1 (Antelope)  https://review.opendev.org/c/openstack/openstack-ansible/+/911054 09:55
opendevreviewMerged openstack/openstack-ansible master: reno: Update master for unmaintained/yoga  https://review.opendev.org/c/openstack/openstack-ansible/+/908499 10:01
jrossernoonedeadpunk: did you make an AIO for magnum tls upgrade?11:00
jrosseri am not quite sure how to run that locally as it needs a depends-on for the upgrade11:01
jrosserthis https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/901185 11:01
noonedeadpunkugh, well 50% of it...11:15
jrosseri can do it manually, but i was attempting to replicate the gate-check-commit script and got a bit stuck11:23
noonedeadpunknah, you can't do that easily through gate-check-commit11:26
opendevreviewMerged openstack/openstack-ansible-openstack_hosts master: Drop task that deletes old UCA repo  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/907433 13:56
g3t1nf0hey there, what services do I need before deploying openstack? DNS, DHCP, NTP, LDAP is what I've found in the docs. Am I missing something? Also, can I install it with selinux and firewalld enabled if I'm willing to configure them for the failing steps?14:33
jrosserg3t1nf0: assuming we are talking about an openstack-ansible deployment, LDAP is optional, DHCP is at your choice about how you configure networking, selinux is entirely untested and will almost certainly break stuff, and firewalld also will be troublesome14:35
g3t1nf0so in other words it's way better to move to Debian than insisting on using CentOS14:36
jrosseroh well there are other reasons not to use centos14:36
g3t1nf0like ? I'm using it because of selinux 14:36
jrosseryou will get to experience all the things that will be in RHEL at some point in the future, regardless of if they are working / break everything14:37
jrosserremember that centos these days is the precursor to RHEL, not a rebuild of it14:37
g3t1nf0yeah I know but still you could go the Rocky way which is a clone 14:38
jrosserwell you did say centos :)14:38
g3t1nf0 so back to the question: just DNS and NTP. DHCP and LDAP are optional14:39
jrosserif selinux is mandatory then you will need to put in some work, i think14:39
jrosseropenstack-ansible does not really care about how you configure your networking, so long as the requirements are met14:39
jrosserand keystone can use LDAP, if you choose to configure it that way, else you don't need it at all14:40
g3t1nf0Then my info is wrong. From my understanding I should always have a dedicated LDAP to connect keystone with and not rely just on keystone14:40
jrosserit's entirely up to you as a design decision for your deployment14:41
jrosserout of the box, openstack-ansible will default to users in keystone14:41
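[editor's note] For illustration only: openstack-ansible's keystone role exposes a keystone_ldap variable that maps a keystone domain onto an LDAP backend, so the optional LDAP path discussed above is just configuration. A rough sketch, with a made-up domain name, server and credentials (check the os_keystone role docs for the exact option names):

    # /etc/openstack_deploy/user_variables.yml -- illustrative sketch, values are placeholders
    keystone_ldap:
      CorpUsers:                                    # keystone domain backed by LDAP
        url: "ldap://ldap.example.com"
        user: "cn=keystone,dc=example,dc=com"
        password: "super-secret"
        user_tree_dn: "ou=Users,dc=example,dc=com"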
g3t1nf0how does openstack ansible differentiate baremetal and lxc ?14:43
jrosserby default, some things run on the bare metal of the hosts and some things are in lxc containers14:44
jrosserif you want it all on bare metal, you can do that optionally14:44
g3t1nf0so I don't need any support infra to launch a cluster?14:44
g3t1nf0just the one deployment host14:45
jrosserat a minimum yes. some way to delete / reprovision the cluster hosts quickly can be very useful when testing14:45
jrosseri always recommend starting with an all-in-one deployment https://docs.openstack.org/openstack-ansible/2023.2/user/aio/quickstart.html 14:46
jrosserit can be very beneficial to have that as a reference to compare to first steps on making a production deployment14:47
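[editor's note] For reference, the linked quickstart boils down to roughly the following steps on the target VM; the branch name is just an example, so verify the exact commands against the doc for the release you deploy:

    # rough sketch of the AIO quickstart, run as root
    git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    git checkout stable/2023.2              # or whichever branch/tag you want to try
    scripts/bootstrap-ansible.sh            # install ansible plus required roles/collections
    scripts/bootstrap-aio.sh                # generate the AIO configuration
    cd playbooks
    openstack-ansible setup-everything.yml  # hosts + infrastructure + openstack in one run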
g3t1nf0haven't thought of doing that but it makes sense14:48
jrosserif you have any way to get an 8G ram / 80G disk VM then it would really be time well spent14:48
jrosserthere are some differences in the way the networking is set up, as it hides everything behind one interface, one IP and a bit of NAT14:49
jrosserbut for everything else it will be quite close to a non-HA production deployment14:50
g3t1nf0can the AIO be used in production to utilize the hardware at 100%? 14:50
jrosserif you have one server as a homelab and you don't mind if you break your own workload, maybe14:51
jrosserbut for any degree of high availability then a multinode deployment is needed14:51
g3t1nf0guess I'm not explaining what I want correctly ... So 3 servers with the Control Plane for everything, and when I add a new node it is used for everything like nova, cinder and glance14:56
g3t1nf0something like hyperconverged infra 14:56
noonedeadpunk#startmeeting openstack_ansible_meeting15:01
opendevmeetMeeting started Tue Mar  5 15:01:54 2024 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.15:01
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
opendevmeetThe meeting name has been set to 'openstack_ansible_meeting'15:01
noonedeadpunk#topic rollcall15:01
noonedeadpunko/15:03
NeilHanlonmorning :) 15:04
jrossero/ hello15:05
noonedeadpunk#topic office hours15:07
noonedeadpunkI was mostly occupied with the OVN BGP Agent implementation last week, frankly speaking15:08
noonedeadpunkWhile it's not in scope/a prio for the project, I have quite harsh deadlines internally nowadays15:09
noonedeadpunkI also wanted to look today at implementing new variables to control these improvements and push patches towards all repos: https://review.opendev.org/q/topic:bug-2031497 15:10
noonedeadpunkalso would be good to see what is broken...15:10
NeilHanlonyeah15:11
noonedeadpunkjrosser: do we have more capi things to be merged? Or we're almost done with the topic?15:12
jrosserwell now really it turns to what is wrong with the magnum tls upgrade jobs15:12
jrossercapi is basically done for something we can say is "experimental" for next cycle15:12
noonedeadpunkyeah, ok, I will get aio done today after all15:12
jrosseri am still trying15:12
noonedeadpunkoh, I think it's almost deadline for the cycle-highlights. Should we discuss what to include there?15:13
noonedeadpunkI guess openstack-resources role is one thing?15:13
noonedeadpunkThen capi as experimental15:13
jrosseryeah, vpnaas and bgp agent are pretty good new features too15:14
noonedeadpunk++15:14
noonedeadpunkwhat else we did...15:14
noonedeadpunkpush for collectification?15:14
noonedeadpunkquorum queues as default?15:14
jrosserdid we make octavia ovn provider work?15:15
noonedeadpunkyup15:15
jrosserand bookworm, is that this cycle?15:15
noonedeadpunktested it in our ovn sandbox and cleaned-up the patch15:15
noonedeadpunkI think bookworm was the previous one15:16
noonedeadpunkovn driver for octavia: https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/868462 15:16
jrosserhmm i wonder if there is any kind of test for that15:16
noonedeadpunknope. not yet. 15:17
noonedeadpunkI guess this relates to overall making octavia work for ovn15:17
noonedeadpunkor well. there could be some tempest settings...15:17
noonedeadpunkI need to look at it15:17
opendevreviewDmitriy Rabotyagov proposed openstack/ansible-role-systemd_networkd stable/zed: Use OriginalName instead of Name in systemd.link  https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/908815 15:26
noonedeadpunkIt's pretty disappointing that I was not able to work on anything that I planned during the last PTG though :(15:28
noonedeadpunkbut we have what we have I guess.15:28
noonedeadpunkalso this seems to be needed to fix functional jobs for Zed: https://review.opendev.org/c/openstack/openstack-ansible-tests/+/911064 15:31
noonedeadpunkand this is what I want to include in the 2023.2 bump for the 28.1.0 tag: https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/910384 15:33
noonedeadpunkanything else we wanna discuss?15:37
NeilHanlonnot from me.. sorry I have not been as active lately.. hoping to be more involved this spring15:37
noonedeadpunkI also hope to be more involved starting April....15:40
noonedeadpunkOk, then I will wrap this up early15:42
noonedeadpunk#endmeeting15:42
opendevmeetMeeting ended Tue Mar  5 15:42:25 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:42
opendevmeetMinutes:        https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-03-05-15.01.html 15:42
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-03-05-15.01.txt 15:42
opendevmeetLog:            https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-03-05-15.01.log.html 15:42
opendevreviewMerged openstack/openstack-ansible master: Update upstream SHAs  https://review.opendev.org/c/openstack/openstack-ansible/+/910386 16:07
g3t1nf0jrosser, guess I'm not explaining what I want correctly ... So 3 servers with the Control Plane for everything, and when I add a new node it is used for everything like nova, cinder and swift, something like hyperconverged infra?16:09
noonedeadpunkg3t1nf0: you can do that, yes16:15
noonedeadpunkthough I guess it's good to define what backend storage you use for cinder16:16
noonedeadpunkas if it's going to be ceph - then you don't need cinder on computes (and don't need swift per se, as ceph rgw has some swift compatibility)16:17
noonedeadpunkbasically you manage which components are to be installed, and whether they run on metal or in containers, with the group definitions in the openstack_user_config.yml file16:17
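[editor's note] A minimal sketch of what that can look like; the hostnames and IPs are invented, the group names are the usual openstack-ansible ones, and no_containers is the per-host switch for running services on bare metal:

    # /etc/openstack_deploy/openstack_user_config.yml -- illustrative fragment only
    shared-infra_hosts:            # control-plane hosts; most services land in LXC containers
      infra1:
        ip: 172.29.236.11
    compute_hosts:                 # nova-compute and neutron agents run on the metal here
      compute1:
        ip: 172.29.236.21
    storage_hosts:                 # cinder-volume; no_containers forces it onto bare metal
      infra1:
        ip: 172.29.236.11
        no_containers: true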
g3t1nf0I'm using 25x ssd hp 380 gen9 servers  and I want to utilize them to the fullest with the cpu and the storage so I want every node to be used for nova, with ceph as the main storage for the cinder and swift 16:17
g3t1nf0then I haven't understood how cinder and swift are working. I thought they use ceph as the backend 16:19
noonedeadpunkg3t1nf0: so, do you want Swift itself, or object storage which can be provided by Ceph RadosGateway?16:20
noonedeadpunkAnd have it compatible with the Swift and S3 APIs?16:20
g3t1nf0I want to have ceph in the backend and allow my users to have the s3 api exactly16:20
g3t1nf0but also block storage16:21
noonedeadpunkok, yeah, then you probably don't need swift per se :) it's just quite a difference in architecture16:21
g3t1nf0so just ceph then ?16:21
noonedeadpunkSo, your cinder-api and cinder-volume should be running just on control plane - no need to have any of cinder service on a compute16:21
noonedeadpunkand cinder-volume can be running in a container then16:22
noonedeadpunkBasically what you'd need to run on computes are - nova-compute, some neutron (ovn? ovs?) agents, and ceph-osds16:22
opendevreviewMerged openstack/openstack-ansible stable/zed: Drop upgrade jobs for Zed  https://review.opendev.org/c/openstack/openstack-ansible/+/911050 16:22
opendevreviewMerged openstack/openstack-ansible stable/zed: Bump SHAs for Zed  https://review.opendev.org/c/openstack/openstack-ansible/+/911051 16:22
opendevreviewMerged openstack/openstack-ansible stable/2023.1: Bump SHAs for 2023.1 (Antelope)  https://review.opendev.org/c/openstack/openstack-ansible/+/911054 16:22
noonedeadpunkg3t1nf0: ceph can provide block storage and object storage (and even shared FS)16:23
g3t1nf0exactly what I was thinking 16:23
noonedeadpunkand you can integrate ceph object storage with openstack as well16:23
opendevreviewJimmy McCrory proposed openstack/openstack-ansible master: Add check_hostname option to db healthcheck tasks  https://review.opendev.org/c/openstack/openstack-ansible/+/911150 16:23
noonedeadpunkso then you don't need Swift16:23
g3t1nf0okay so to run all that on the CP what kind of resources will I need ?16:24
g3t1nf0I have 48vcpus and 768G ram16:24
opendevreviewJimmy McCrory proposed openstack/openstack-ansible-galera_server master: Additional TLS configuration options  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/911009 16:24
g3t1nf0on all servers but I can lower the ram if it's overkill and use it on an extra server as compute16:25
noonedeadpunkso, I'd suggest having not less than 128G for the control plane, but that depends on the amount of openstack services you'll include16:27
noonedeadpunkI guess I'd go with 256 though16:29
g3t1nf0is the cpu enough? also should I use the drives on the CP as well as Ceph OSDs ?16:30
noonedeadpunkvcpus meaning HT enabled? So like 24cpu 1 socket?16:32
g3t1nf012cpus 2 sockets16:32
noonedeadpunkwell, you might want to tune the thread counts for services, as they're slightly maxed out by default16:33
noonedeadpunkbut you should be fine16:33
g3t1nf0usually what is recommended for a CP?16:33
noonedeadpunkThis is a very, very tricky question, as it all depends on the anticipated API load / amount of interactions with the API for your cluster16:35
g3t1nf0services I'd like to run except the storage which will be with ceph as discussed earlier, murano, freezer, senlin, nova, magnum, zun, horizon, trove, designate, ec2-api, keystone, watcher, barbican, octavia, zaqar, neutron, tacker, heat, placement, ceilometer, glance (manila, cinder, swift)16:40
noonedeadpunkwell. this is pretty much optimistic, I would say....16:42
noonedeadpunkiirc ec2-api is already deprecated, murano is not going to be released for 2024.1 and its future is unclear, freezer has not been maintained nicely for the last.... 4 cycles? pretty much the same for zun.16:43
noonedeadpunkso unless you want to step into maintenance of these projects... I would double-check if you _really_ want them16:44
g3t1nf0https://docs.openstack.org/2023.2/install/ so this page is not updated then?16:47
noonedeadpunkI guess it depends on the perspective....16:50
noonedeadpunkbut also - why would you need ec2-api?16:51
noonedeadpunkg3t1nf0: but I guess I was more referring to contribution metrics for the projects - i.e. Freezer: https://www.stackalytics.io/?module=freezer-group&metric=commits 16:52
noonedeadpunkor zun: https://www.stackalytics.io/?module=zun-group&metric=commits 16:52
g3t1nf0those are scary to look at, why are there not as many commits as before? are companies moving away from openstack?16:55
noonedeadpunkI would not say from openstack. But some projects get less love/usage than others16:56
noonedeadpunkSome are just getting stable and feature complete16:56
noonedeadpunkbut they should still be maintained, though the orgs behind these projects may neglect this need16:56
noonedeadpunkfrankly - I have no idea why freezer has that much love16:57
noonedeadpunkbut it's also worth mentioning that the main arena for openstack is private cloud after all, which has all sorts of internal solutions for backups16:57
g3t1nf0yeah true, but companies like metallb offer it so they should give you an option for backup, no?16:58
noonedeadpunkand talking about number of openstack deployments - they just grow. But mostly they're limited to some core components16:58
noonedeadpunkfrankly - I already know 2-3 organizations that were looking at freezer, but then got scared by the number of its maintainers and moved to commercial products like Trilio17:00
noonedeadpunkwhile they could have picked up basic maintenance tasks and kept the gates alive. Divided among those 3 companies it would have been a really small involvement...17:01
g3t1nf0I get what you mean 17:01
jrosserI agree your list of services is optimistic17:02
jrosseryou should start with the minimum core stuff and build up later17:02
jrosserand have a test lab for the lesser used ones17:02
noonedeadpunkthese ones are in a good shape though: nova, magnum, horizon, designate, keystone, barbican, octavia, neutron, heat, placement, ceilometer, glance, cinder, manila.17:04
jrosserthings like designate require you to make some design decisions to suit your deployment, and the same increasingly goes for magnum17:05
noonedeadpunk(and for manila)17:05
jrosseralso Barbican, to hsm or not is the question there17:05
jrosserand business requirements will perhaps dictate the answer there17:06
noonedeadpunkand actually you can say that about each service, except placement :D17:06
noonedeadpunkand maybe ceilometer, but arguable17:06
jrosserso really it’s best to view all of this as a toolbox, and each tool likley has multiple pluggable backends in some way17:06
noonedeadpunkstill need to choose datastore for it17:06
jrosserso everyone’s deployment ends up somehow different17:06
noonedeadpunk++17:07
noonedeadpunkyeah, so it is a very tough question when it comes to anything recommended or suggested17:07
noonedeadpunklike it all very much depends17:07
jrossereven hyper-converged would be too much for me17:08
g3t1nf0so from all the services that jrosser said, I have trove, watcher and zaqar left over 17:08
jrossermanaging upgrades would just be too much of a nightmare17:08
noonedeadpunkyeah, hyper-converged is not a good idea when talking about day217:09
jrossergreat example is ceph for that17:09
g3t1nf0trove would be nice for the DBs; watcher and zaqar are not that high a priority17:09
noonedeadpunkit is very appealing/cheap on day 0, but will quickly become an utter mess with plenty of exceptions17:09
jrosserceph releases, openstack releases and OS releases all have unique release cadences17:09
noonedeadpunktrove should work, in general. clusterization might need some love there17:10
jrosserand this backs you very quickly into some sort of upgrade deadlock17:10
g3t1nf0or you use katello for repository lock and upgrade at your own pace 17:10
noonedeadpunknot to mention scaling up, where storage/compute resources grow differently17:10
noonedeadpunkand the osds also consume a significant amount of memory, which could otherwise be "sold" as VMs...17:11
g3t1nf0jrosser: so you would recommend that I skip ceph and go for manila, cinder and swift?17:11
noonedeadpunkSo profit in price is not as significant as you might think17:11
jrosserg3t1nf0: I am saying that by hyper converging you are making your post deployment life tough17:12
noonedeadpunkmanila and cinder require some storage backend17:12
jrosserbut absolutely you should use ceph17:12
noonedeadpunkand this can be ceph17:12
g3t1nf0so we are back again then on the update deadlock that you mentioned earlier17:12
jrosserManila is kind of an api/orchestrator for file systems17:12
noonedeadpunkso it's not cinder or ceph - it's what to use instead of ceph for cinder17:12
jrosserwhere you get that file system from is another thing17:13
g3t1nf0iscsi would be one idea for block 17:13
jrosserg3t1nf0: you would have less upgrade deadlock, imho, by having dedicated storage nodes17:13
jrosserwe are currently stepping through antelope to bobcat, ceph17:14
jrosserQuincy to reef17:14
jrosserand focal to jammy17:14
jrosserand it’s pretty involved sequencing to make that happen nicely17:14
g3t1nf0then I'll be losing 75 drive trays just on the CP17:14
noonedeadpunkwell, I mean, given you already have hardware, there's probably not much choice.17:15
jrosserlike some weird HP raid controller?17:16
jrosserthat needs some thought too17:17
g3t1nf0yeah they are on raid controller but I've flashed them for HBA17:17
g3t1nf0the raid controllers :)17:17
jrossernoonedeadpunk makes a good point about how you manage ram on the compute nodes too17:18
jrosserunder error conditions ceph can need lots of ram17:19
jrosserand how you separate the ram needs of your VMs and the OSDs is tricky17:19
g3t1nf0768 should be fine if I leave 128 always free 17:19
jrosseras the OOM killer is sitting ready, which should it choose?17:19
jrosseranyway, I think we’re both saying hyper-converged is pretty advanced usage17:22
jrosserboth for resource management and day-2 operations17:22
jrosserceph is never about performance so if you don’t need the Tb of the control plane node disks, just ignore them17:23
g3t1nf0so you would recommend for a one man show just to keep it simple and start this way, because when shit hits the fan it's only me to blame 17:24
noonedeadpunkyeah, reserving 128G should do the trick indeed. There's a parameter in nova which allows you to mark this space as reserved in placement17:25
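[editor's note] This presumably refers to nova's reserved_host_memory_mb; in an openstack-ansible deployment it could be pushed out through a config override, roughly like the sketch below. The 131072 value is just the ~128G mentioned above, not a recommendation:

    # /etc/openstack_deploy/user_variables.yml -- sketch, adjust to your own sizing
    nova_nova_conf_overrides:
      DEFAULT:
        reserved_host_memory_mb: 131072   # keep ~128G back for the host OS and ceph OSDs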
noonedeadpunkand I don't think of HCI as advanced usage, but more as trying to save startup costs and getting a bag full of shit right after it...17:27
noonedeadpunkAs the operational cost overtakes the potential profit relatively fast17:27
noonedeadpunkbut anyway, if hardware is already there....17:28
noonedeadpunkyou can move things around once you grow :)17:28
g3t1nf0my thought exactly: start as small as possible and, when I grow, separate everything as expected, with a ceph cluster, a CP just with openstack, plus the worker nodes17:35
g3t1nf0hmm what are the network nodes needed for? Should I have separate network nodes? Isn't it enough if I keep neutron on my CP and worker nodes?17:47
jrosserg3t1nf0: it is your decision again :)17:50
jrossermine are separate as the public networks are direct on the internet and we have some policy stuff around separating that17:51
g3t1nf0too many decisions :D how to keep it simple when I can take a shitton of decisions :D17:51
g3t1nf0I see, so no firewall before openstack?17:51
jrosserwell mine is like a private(public) cloud, so no17:52
jrosserbut your use case might be different17:52
jrossermost people make the network nodes also the infra nodes17:53
jrosserand put the api loadbalancer also there17:53
jrosserbut really OSA is a toolbox, not a shrink wrap installer that decides everything for you17:54
jrosserthere’s a reference architecture, and that’s what you get if you don’t tell it otherwise17:54
jrosserbut ultimately pretty much anything is possible17:54
g3t1nf0guess I need a bit more reading before deploying anything 17:55
jrosserand that makes the learning curve kind of steep, for the benefit of flexibility17:55
jrosserif you went to a large vendor of a commercial solution, there may be no choice but to take their defined architecture17:55
jrosserso many ways to achieve an openstack, really17:56
g3t1nf0yeah but as a starting point, as you've said, pretty steep learning curve 17:57
jrosserwell that brings us back full circle to the all-in-one17:58
jrosserit makes it’s own config for you, and deploys the defaults automatically17:58
g3t1nf0okay I do see the point in the network node, should it be a single node or again 3x ?17:59
g3t1nf0I can put the LB, DNS and the whole network there so it makes sense 18:00
noonedeadpunkI think what we mean here by network node is how external access to VMs will be provided.18:17
noonedeadpunkThis potentially brings some limitations, like VMs not being able to have direct connectivity to public nets, only through a private network and floating IPs / src nat and an L3 router18:18
noonedeadpunkand then with ovn having separate net nodes makes sense if you for some reason do not want to pass public vlans to computes18:23
noonedeadpunkas with OVS it was also handling L3/DHCP/ namespaces which are not needed today with OVN18:24
noonedeadpunkjrosser: I did reproduce the issue in the sandbox with magnum fwiw18:34
noonedeadpunkI do see 2 images for magnum. different checksums, both having fedora-coreos tag18:37
jrosserI wonder if we don’t differentiate between the images sufficiently across upgrades18:42
jrosserthe cluster template needs to point to the correct things for the upgraded environment18:43
noonedeadpunkoh18:44
noonedeadpunkhow this works at all... https://opendev.org/openstack/magnum-tempest-plugin/src/branch/master/magnum_tempest_plugin/common/datagen.py#L269 18:44
noonedeadpunkor why it's called...18:45
noonedeadpunkor why it's not failing otherwise....18:47
noonedeadpunkand it indeed sends the request with null in it: https://paste.openstack.org/show/bFSBYMJ2JOcSE3ZZjhwI/ 18:56
noonedeadpunkI _think_ cluster_distro should not be null.....18:57
noonedeadpunkor image_id should not be "fedora-coreos-latest"....18:58
noonedeadpunkyeah, ok, so magnum goes for the image19:01
noonedeadpunkhttps://paste.openstack.org/show/bpFRctqvXvTWsFKzSImc/ 19:01
noonedeadpunkbut indeed there are 2 images... not sure what's making trouble though...19:07
noonedeadpunklike they obviously both have the correct property...19:08
noonedeadpunkhm, but once I've hidden 1 image - it's passing....19:09
jrossercalling it -latest is probably not great either, I assume it’s versioned19:10
noonedeadpunkand yes, any image can be hidden for tests to pass /o\19:10
noonedeadpunkI think it's still better...19:10
noonedeadpunkLike latest will always be latest...19:10
noonedeadpunkbetter than failing on the user?19:10
jrosseroh right it’s not totally clear what those two images actually are though?19:10
noonedeadpunkBut why is there no failure on non-tls upgrades....19:11
noonedeadpunkso, during upgrade image is switched from 35.20220424.3.0 to 38.20230806.3.019:13
noonedeadpunkbasically... I guess openstack_resources rotation should drop old image with the same name?19:13
noonedeadpunkbut that is also a valid thing to fix in magnum....19:13
noonedeadpunkand also it should not fetch deactivated images, for instance...19:15
noonedeadpunkso this is actually where the image is being asked for, but nothing obvious is raised in the result https://opendev.org/openstack/magnum/src/branch/master/magnum/api/attr_validator.py#L32-L44 19:43
noonedeadpunkmnasiadka: what's our opinion of magnum returning `Cluster type (vm, None, kubernetes) not supported` when there are simply 2 images with os_distro=fedora-coreos?19:44
noonedeadpunkShould it try getting only the first one and ignore deactivated images? or just fail in a more meaningful way?19:45
noonedeadpunkok, I guess I got why only TLS is failing... as it's the only job applying master aio vars on upgrade....20:27
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Enable image rotation for Magnum  https://review.opendev.org/c/openstack/openstack-ansible/+/911377 20:39
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: Adopt for usage openstack_resources role  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/901185 20:40
noonedeadpunk*hopefully* this will fix it20:40
noonedeadpunkalternatively, we should be able to rename the images as well here https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/openstack_resources/tasks/image_rotate.yml#L23 - by appending some suffix20:41
noonedeadpunkbut I would need to think a bit on the implementation, as a couple of things do depend on the image name there....20:42
spatelnoonedeadpunk hey! 21:51
spatelthere21:51
spatelI have a stupid NFS question: I have configured cinder-volume with NFS, but how do the compute node VMs talk to NFS? If the compute nodes need access to NFS, then at what path should I mount them? 21:52
jrosserspatel: it's all here i think https://docs.openstack.org/cinder/latest/admin/nfs-backend.html 22:06
spatelHmm! I am on the same page and confused :(22:22
spatelHow does kvm know where to pick up the NFS share?22:24
spatelbecause I have configured cinder-volume on the controller and created a 10G volume. I have attached it to a VM but my VM can't see that 10G volume :(22:26
spatelTrying to understand how all these pieces are glued together 22:27
jrosserdon't you mount the nfs in some known place and tell cinder about that with nfs_shares_config22:29
jrosserthen you get a file backing the volume in there?22:29
spatelI did mount NFS on the controller nodes and configured the cinder-volume service. everything is working and I am able to create volumes etc.. 22:30
spatelI can see volume-XXXXXX files on NFS share 22:30
spatelbut the question is how does the compute know about NFS and those volume files?22:30
spatelWho will tell libvirt to talk to NFS and attach the volume file to the VM, and how?22:32
jrosserdoesn't nova-compute do that?22:35
spatelI don't know, and none of the docs say anything about compute config22:36
jrosseri think you should look at the nova-compute logs when you try to attach the volume22:38
jrosseryou need the nfs-common package i would expect and also there needs to be correct permissions to mount/read/write the share22:39
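[editor's note] For context, a rough sketch of how the NFS backend is usually declared in an openstack-ansible deployment; the backend name, paths and shares file below are invented, and the authoritative option list is in the cinder NFS doc linked above. When a volume is attached, nova-compute mounts the same share on the compute node itself (via its NFS connector), which is why the compute only needs the NFS client bits and correct export permissions, not any extra cinder service:

    # container_vars for a storage host in openstack_user_config.yml -- illustrative only
    cinder_backends:
      nfs_volume:
        volume_backend_name: NFS_VOLUME1
        volume_driver: cinder.volume.drivers.nfs.NfsDriver
        nfs_shares_config: /etc/cinder/nfs_shares     # file with one host:/export per line
        nfs_mount_point_base: /var/lib/cinder/nfs     # where cinder-volume mounts the shares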
spatelLooking22:42
opendevreviewJimmy McCrory proposed openstack/openstack-ansible master: Add check_hostname option to db healthcheck tasks  https://review.opendev.org/c/openstack/openstack-ansible/+/911150 23:07
