Monday, 2025-04-28

opendevreviewsean mooney proposed openstack/project-config master: create openstack/grian-ui git repo  https://review.opendev.org/c/openstack/project-config/+/94824910:36
opendevreviewsean mooney proposed openstack/project-config master: Add jobs for openstack/grian-ui repo  https://review.opendev.org/c/openstack/project-config/+/94825010:36
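[editor's note: for context, a repo-creation change like 948249 normally adds an entry to gerrit/projects.yaml in openstack/project-config. A minimal sketch follows; the description wording and ACL path are illustrative assumptions, not copied from the actual change.]
    - project: openstack/grian-ui
      description: Web UI for OpenStack telemetry (illustrative wording)
      acl-config: /home/gerrit2/acls/openstack/grian-ui.config   # assumed ACL path convention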
dpawlikfungi tonyb hey folks o/ Do you think that this change is possible to merge: https://review.opendev.org/c/opendev/system-config/+/948330 ?14:41
clarkbdpawlik: until we boot centos stream 10 nodes I'm not sure that is a priority14:44
clarkbthen we need to check https://grafana.opendev.org/d/9871b26303/afs?orgId=1&from=now-6h&to=now&timezone=utc , determine how much space we expect centos 10 stream to consume, check that it will fit on the afs volumes, and bump quotas if necessary14:45
clarkbbut we don't have centos 10 stream nodes yet due to the hardware requirements so I don't think we should mirror the packages yet14:45
gboutryHello infra, we have 1 job using `ubuntu-noble-32GB` and fails to request any nodes, we get the following errors: NODE_FAILURE Node(set) request 300-0026902261 failed in 0s14:47
gboutryIs that a temporary failure or is that a longer term outage?14:47
dpawlikack. Thanks clarkb for clarification.14:47
clarkbgboutry: only one cloud provides that size of node, so if there is any problem with the cloud you would see those errors. I can check what the specific issue is in a bit14:48
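[editor's note: a job opts into that large-memory flavor by pointing its nodeset at the label; a minimal zuul sketch, with illustrative nodeset and node names.]
    - nodeset:
        name: ubuntu-noble-32gb        # illustrative nodeset name
        nodes:
          - name: primary              # illustrative node name
            label: ubuntu-noble-32GB   # the large label only one donor cloud supplies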
dpawlikclarkb about: "but we don't have centos 10 stream nodes " => I asked the CentOS admin and they said "It's the same host so if you already have access on the ACL for the ::centos-stream-full module, everything under it is good to go :)"14:51
fungidpawlik: no, we currently can't boot centos 10 in our donor cloud providers because centos 10 requires x86_64-v314:53
fungiclarkb is saying it doesn't make much sense for us to mirror packages for a distro we're unable to boot14:53
fungibecause nothing will use the mirrored packages14:54
dpawliknow I get it. Thank you for the explanation 14:54
fungiany chance you have pull with the centos 10 maintainers and can convince them to roll back to supporting older hardware again?14:54
clarkbwe know our largest cloud donor cannot boot centos 10 stream. It is unclear to us how many of the other cloud donors we have can boot it14:55
clarkbI own at least one personal machine that isn't that old that also can't boot it14:55
clarkbbecause i bought it with a celeron cpu14:55
clarkbso not old but also not supporting all the necessary cpu features14:56
clarkbI think the very first step here is figuring out where we can and can't boot it14:56
clarkbthen deciding if we have enough capacity to justify maintaining an image for where we can boot it14:56
dpawliknot enough power here fungi :( 14:56
clarkbthen if we decide to do that we would mirror the packages14:56
dpawlikI was not aware of that TBH. Need to read the mailing list to be more up to date14:59
*** gmaan_pto is now known as gmaan14:59
clarkbok I need to reboot for local updates then I'll look at gboutry's node failures15:01
fungidpawlik: yeah, the architectures section of https://www.centos.org/centos10/ links to a red hat document that talks about impact for hypervisors with restricted cpu flags exposed to better support live migration, for example15:02
clarkbLast exception: vIOMMU is not supported by Current machine type pc (Architecture: x86_64).15:13
clarkbgboutry: ^ I think this is an error introduced by a change requested by openstack nova that didn't take effect until we rebuilt our ubuntu noble images recently (they were paused due to a kernel bug that got fixed)15:14
clarkbdansmith: ^ fyi I don't think the functionality you expected to work is working in vexxhost15:14
dansmithugh15:14
dansmithoh15:15
dansmithno,15:15
dansmiththat's because it's not using q35 I imagine15:15
dansmithso the default machine type for those must still be pc (i.e. the super old machine type with the ISA stuff)15:15
fungiis that something we can control through image properties?15:16
dansmithyeah15:16
dansmithopenstack image set --property hw_machine_type=q35 $IMAGE15:16
dansmithis this blocking something such that we need to fix it immediately?15:16
clarkbdansmith: I don't know what gboutry is attempting to do with the 32GB memory nodes in vexxhost (that is the only place where we can boot those large nodes)15:18
clarkbbut it's been broken since I unpaused the image, which has been about a week15:18
dansmithq35 is our recommended default for several years now15:18
clarkbso probably ok to fix properly and request an image rebuild15:18
clarkbdoes q35 apply to amd backed resources?15:19
clarkbI think vexxhost has amd cpus15:19
opendevreviewDan Smith proposed openstack/project-config master: Make viommu disk image use q35 machine type  https://review.opendev.org/c/openstack/project-config/+/94833515:20
dansmithI'm sure it does.. q35 is a set of system chipset resources, not so much related to the CPU AFAIK15:20
clarkbgot it15:20
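[editor's note: the fix in 948335 sets the machine type on the image rather than per instance. A minimal sketch of how that can look in a nodepool provider diskimage config, assuming the glance image property is applied through the `meta` field; the diskimage name and the viommu property are illustrative and may not match the actual change.]
    diskimages:
      - name: ubuntu-noble-viommu        # illustrative diskimage name
        meta:
          hw_machine_type: q35           # boot uploads of this image with the q35 chipset
          hw_viommu_model: intel         # illustrative; whichever viommu property the jobs rely on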
*** elodille1 is now known as elodilles15:31
clarkbfungi: can you quickly check 948335? I can issue an image rebuild once that lands15:34
fungion a call for a few more minutes, but sure, shortly15:36
fungiapproved it15:42
clarkbthanks15:42
opendevreviewMerged openstack/project-config master: Make viommu disk image use q35 machine type  https://review.opendev.org/c/openstack/project-config/+/94833515:52
clarkbthat has been deployed and https://nb06.opendev.org/ubuntu-noble-cbcac8cb4e8c4cb2840383bfee906444.log is the requested image build16:00
clarkbafter building it has to upload to the various clouds so it may still be a few minutes before you can recheck, gboutry, but things are on their way16:00
dansmithcool, hopefully that works... my bad for not checking that the default was q35 there (or just setting it to be sure)16:03
gboutryThank you very much!16:10
sean-k-mooneyi think even in qemu the default might still be i440fx but in things like virt-manager they have started to move16:13
sean-k-mooneywe haven't made that change upstream in nova yet (we might never change it)16:13
sean-k-mooneybut we did change the default downstream to q35 some time ago so that might be part of the confusion16:14
sean-k-mooneywe have been trying to be very conservative with changing nova's actual default but we probably should talk about that again at some point16:15
sean-k-mooneyfungi: regarding https://review.opendev.org/c/openstack/project-config/+/948249 do we need to wait for the rollcall-vote on https://review.opendev.org/c/openstack/governance/+/948252 before we proceed or are they both independent?17:48
fungithey can proceed independently, looks like the telemetry ptl has +1'd the governance change, so that's sufficient for us to approve the repo creation17:50
fungier, other way around, +1'd the project creation change, but anyway that's enough17:52
sean-k-mooneyack, ya i pinged them this morning but with the power outages in europe i was not sure if they would be about today17:52
fungii've approved it now17:53
sean-k-mooneythanks17:53
fungiclarkb: ^ not sure if we've done a project creation since the gerrit server replacement, so we should probably check behind the deploy for that17:54
clarkbI don't think we have17:54
sean-k-mooneythe other person who i'm going to be working with on this is on pto for the next two weeks. so i'm hoping in that time i can bootstrap the repo with basic zuul jobs and so on, so she can come back to a working repo, but realistically i likely won't do much more than stub it out until they are back17:55
fungiwe've seen evidence that manage-projects is working correctly in the no-op case, which means everything's probably fine, but... just in case17:55
clarkbya I would've expected ssh errors (host key verification or authentication problems) already if they exist. But agreed, let's check after17:55
sean-k-mooneyonce the gerrit groups have been created can i be added to both and i'll bootstrap in others as needed17:55
sean-k-mooneyonce the repo is created i'll recheck the governance and zuul job patches but i'll likely do that tomorrow17:57
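[editor's note: bootstrapping a brand-new repo usually means a first in-repo .zuul.yaml change; a minimal sketch under the assumption the project starts with placeholder jobs. Real templates (for example openstack-python3-jobs) would replace noop once tox environments exist.]
    - project:
        check:
          jobs:
            - noop    # placeholder so early changes can merge before real jobs exist
        gate:
          jobs:
            - noop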
sean-k-mooneyfungi: clarkb as an aside the automation currently creates an empty github repo for new projects https://github.com/openstack/devstack-plugin-prometheus17:59
sean-k-mooneyi assume that's because we don't mirror by default but we create the repo anyway in case it's needed?17:59
sean-k-mooneyis that intentional?18:00
opendevreviewMerged openstack/project-config master: create openstack/grian-ui git repo  https://review.opendev.org/c/openstack/project-config/+/94824918:00
clarkbsean-k-mooney: all of that happens through zuul jobs. The repo isn't created in github until governance says you are an openstack project18:00
sean-k-mooneyah ok18:00
sean-k-mooneybut it's expected to be empty?18:01
clarkbthis leads to problems where the new repo is deployed in gerrit and you merge a change or two, then a few weeks later the github repo is created and nothing has merged yet to actually sync the data over18:01
clarkbyou have to merge something in the repo to run the mirror job after the governance update triggers the creation18:01
sean-k-mooneyi think we have18:01
clarkbalso you have to make sure you have the github mirror job configured18:01
sean-k-mooneybut im not sure we have the mirror job configured18:01
sean-k-mooneythat or we don't have the governance patch for the devstack plugin18:02
clarkbmanage-projects is running now18:02
clarkbsean-k-mooney: you don't get a github repo until governance is updated so I think that part is done18:02
clarkband I think that url would 40418:02
sean-k-mooneyok we don't actually need the mirror, i was just wondering about the current state18:03
clarkbyou haven't configured the mirror job18:03
sean-k-mooneyyep18:03
clarkbthere is no devstack-plugin-prometheus in openstack/project-config/zuul.d/projects.yaml18:03
sean-k-mooneyso chandan didn't create the second patch18:04
sean-k-mooneyonly the first to create the repo18:04
sean-k-mooneyand the governace patch18:04
sean-k-mooneyok ill add that to my todolist to go fix18:05
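[editor's note: the missing piece for the GitHub mirror is an entry for the repo in openstack/project-config's zuul.d/projects.yaml; a minimal sketch, assuming the official-openstack-repo-jobs template is still what carries the post-merge mirror job.]
    - project:
        name: openstack/devstack-plugin-prometheus
        templates:
          - official-openstack-repo-jobs   # assumed: runs the GitHub mirror job after merges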
clarkbmanage-projects reports success18:05
sean-k-mooneyhttps://opendev.org/openstack/grian-ui18:05
sean-k-mooneyyep18:05
clarkbhttps://review.opendev.org/admin/repos/openstack/grian-ui,general too18:06
clarkbfungi: ^ this looks like it is working to me18:06
fungiyeah, and https://opendev.org/openstack/grian-ui lgtm too18:06
sean-k-mooneycan i be added to https://review.opendev.org/admin/groups/9df40d91fd8faf45962105507ce09182420fc66f and https://review.opendev.org/admin/groups/d5e713116bab8e1ad13712811545f3bbf651e8d818:06
funginot that i expected it to break, just didn't want to not notice if it did18:06
sean-k-mooneyya it looks like it all worked fine with the new instance18:07
fungisean-k-mooney: technically we add the ptl for new openstack repos, is it okay if i put juan in and then he adds you and whoever else he wants?18:07
sean-k-mooneyok sure i can ping them tomorrow18:07
sean-k-mooneyonce the follow-up patch to add the zuul import is merged i can use speculative execution to bootstrap the repo anyway18:08
sean-k-mooneywell i can create the reviews either way but it's nice to get the zuul jobs set up as soon as possible.18:08
fungii've added juan to grian-ui-core and grian-ui-release now18:09
sean-k-mooneyfungi ++18:09
simondodsleyfrickler: Possibly your change https://review.opendev.org/c/openstack/requirements/+/948285 has caused the `openstack-tox-functional-py39` job to start failing.18:41
simondodsleyFor example: https://zuul.opendev.org/t/openstack/build/a73d3e5359f24b5ca30e1a98f7d5a6d018:41
frickleryes, that was to be expected18:42
simondodsleybut these are voting jobs18:43
fricklerso cinder needs to either drop those or to invest into maintaining upper-constraints for py3918:44
simondodsleyok - i'll speak to the cores for cinder and glance18:45
simondodsleythanks18:45
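[editor's note: if the projects decide to stop gating on py39 rather than maintain py39 upper-constraints, the usual change is to the project's own .zuul.yaml; a minimal sketch of the non-voting variant, with job placement illustrative. The job should also be removed from the gate pipeline.]
    - project:
        check:
          jobs:
            - openstack-tox-functional-py39:
                voting: false   # keep the job visible without blocking merges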
