opendevreview | sean mooney proposed openstack/project-config master: create openstack/grian-ui git repo https://review.opendev.org/c/openstack/project-config/+/948249 | 10:36 |
---|---|---|
opendevreview | sean mooney proposed openstack/project-config master: Add jobs for openstack/grian-ui repo https://review.opendev.org/c/openstack/project-config/+/948250 | 10:36 |
dpawlik | fungi tonyb hey folks o/ Do you think this change can be merged: https://review.opendev.org/c/opendev/system-config/+/948330 ? | 14:41 |
clarkb | dpawlik: until we boot centos stream 10 nodes I'm not sure that is a priority | 14:44 |
clarkb | then we need to check https://grafana.opendev.org/d/9871b26303/afs?orgId=1&from=now-6h&to=now&timezone=utc , determine how much space we expect centos 10 stream to consume, check that it will fit on the afs volumes, and bump quotas if necessary | 14:45 |
clarkb | but we don't have centos 10 stream nodes yet due to the hardware requirements so I don't think we should mirror the packages yet | 14:45 |
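A minimal sketch of the capacity check described above, using standard OpenAFS client commands; the volume paths and the quota value are assumptions for illustration, not the exact opendev paths:

```shell
# Check current usage and quota on the mirror volume (path is an assumption).
fs listquota /afs/openstack.org/mirror/centos-stream

# If the new content will not fit, an admin raises the quota on the
# read-write volume (value is in KB and purely illustrative).
fs setquota /afs/.openstack.org/mirror/centos-stream -max 500000000
```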
gboutry | Hello infra, we have 1 job using `ubuntu-noble-32GB` that fails to request any nodes; we get the following error: NODE_FAILURE Node(set) request 300-0026902261 failed in 0s | 14:47 |
gboutry | Is that a temporary failure or is that a longer term outage? | 14:47 |
dpawlik | ack. Thanks clarkb for clarification. | 14:47 |
clarkb | gboutry: only one cloud provides that size of node, so if there is any problem with the cloud you would see those errors. I can check what the specific issue is in a bit | 14:48 |
dpawlik | clarkb about: "but we don't have centos 10 stream nodes " => I asked the CentOS admins and they said "it's the same host so if you already have access on the ACL for the ::centos-stream-full module, everything under it is good to go :)" | 14:51 |
fungi | dpawlik: no, we currently can't boot centos 10 in our donor cloud providers because centos 10 requires x86_64-v3 | 14:53 |
fungi | clarkb is saying it doesn't make much sense for us to mirror packages for a distro we're unable to boot | 14:53 |
fungi | because nothing will use the mirrored packages | 14:54 |
dpawlik | now I get it. Thank you for the explanation | 14:54 |
fungi | any chance you have pull with the centos 10 maintainers and can convince them to roll back to supporting older hardware again? | 14:54 |
clarkb | we know our largest cloud donor cannot boot centos 10 stream. It is unclear to us how many of the other cloud donors we have can boot it | 14:55 |
clarkb | I own at least one personal machine that isn't that old that also can't boot it | 14:55 |
clarkb | because i bought it with a celeron cpu | 14:55 |
clarkb | so not old but also not supporting all the necessary cpu features | 14:56 |
clarkb | I think the very first step here is figuring out where we can and can't boot it | 14:56 |
clarkb | then deciding if we have enough capacity to justify maintaining an image for where we can boot it | 14:56 |
dpawlik | not enough power here fungi :( | 14:56 |
clarkb | then if we decide to do that we would mirror the packages | 14:56 |
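A quick way to figure out whether a given host or guest exposes the x86-64-v3 CPU features CentOS Stream 10 requires; the loader check is a sketch that assumes a reasonably recent glibc (2.33 or later) and the usual 64-bit loader path:

```shell
# glibc's dynamic loader reports which x86-64 microarchitecture levels it
# detects on this CPU (look for "x86-64-v3 (supported, searched)").
/lib64/ld-linux-x86-64.so.2 --help | grep 'x86-64-v'

# Alternatively, spot-check a few of the flags v3 requires (avx2, bmi2, fma).
grep -m1 -o -w -E 'avx2|bmi2|fma' /proc/cpuinfo | sort -u
```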
dpawlik | I was not aware of that, TBH. I need to read the mailing list to be more up to date | 14:59 |
*** gmaan_pto is now known as gmaan | 14:59 | |
clarkb | ok I need to reboot for local updates then I'll look at gboutry's node failures | 15:01 |
fungi | dpawlik: yeah, the architectures section of https://www.centos.org/centos10/ links to a red hat document that talks about the impact for hypervisors that expose restricted cpu flags to better support live migration, for example | 15:02 |
clarkb | Last exception: vIOMMU is not supported by Current machine type pc (Architecture: x86_64). | 15:13 |
clarkb | gboutry: ^ I think this is an error introduced by a change requested by openstack nova that didn't take effect until we rebuilt our ubuntu noble images recently (they were paused due to a kernel bug that got fixed) | 15:14 |
clarkb | dansmith: ^ fyi I don't think the functionality you expected to work is working in vexxhost | 15:14 |
dansmith | ugh | 15:14 |
dansmith | oh | 15:15 |
dansmith | no, | 15:15 |
dansmith | that's because it's not using q35 I imagine | 15:15 |
dansmith | so the default machine type for those must still be pc (i.e. the super old machine type with the ISA stuff) | 15:15 |
fungi | is that something we can control through image properties? | 15:16 |
dansmith | yeah | 15:16 |
dansmith | openstack image set --property hw_machine_type=q35 $IMAGE | 15:16 |
dansmith | is this blocking something such that we need to fix it immediately? | 15:16 |
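For completeness, a hedged sketch of applying and then verifying that property with standard openstackclient commands; `$IMAGE` is whatever image name or UUID applies in the target cloud:

```shell
# Set the machine type property on the existing image, then confirm it stuck.
openstack image set --property hw_machine_type=q35 "$IMAGE"
openstack image show -c properties "$IMAGE"
```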
clarkb | dansmith: I don't know what gboutry is attempting to do with the 32GB memory nodes in vexxhost (that is the only place where we can boot those large nodes) | 15:18 |
clarkb | but it's been broken since I unpaused the image, which has been about a week | 15:18 |
dansmith | q35 is our recommended default for several years now | 15:18 |
clarkb | so probably ok to fix properly and request an image rebuild | 15:18 |
clarkb | does q35 apply to amd backed resources? | 15:19 |
clarkb | I think vexxhost has amd cpus | 15:19 |
opendevreview | Dan Smith proposed openstack/project-config master: Make viommu disk image use q35 machine type https://review.opendev.org/c/openstack/project-config/+/948335 | 15:20 |
dansmith | I'm sure it does.. q35 is a set of system chipset resources, not so much related to the CPU AFAIK | 15:20 |
clarkb | got it | 15:20 |
*** elodille1 is now known as elodilles | 15:31 | |
clarkb | fungi: can you quickly check 948335? I can issue an image rebuild once that lands | 15:34 |
fungi | on a call for a few more minutes, but sure, shortly | 15:36 |
fungi | approved it | 15:42 |
clarkb | thanks | 15:42 |
opendevreview | Merged openstack/project-config master: Make viommu disk image use q35 machine type https://review.opendev.org/c/openstack/project-config/+/948335 | 15:52 |
clarkb | that has been deployed and https://nb06.opendev.org/ubuntu-noble-cbcac8cb4e8c4cb2840383bfee906444.log is the requested image build | 16:00 |
clarkb | after building it has to upload to the various clouds, so it may still be a few minutes before you can recheck, gboutry, but things are on their way | 16:00 |
dansmith | cool, hopefully that works... my bad for not checking that the default was q35 there (or just setting it to be sure) | 16:03 |
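One way to confirm from inside a rebuilt node that the new machine type actually took effect, since QEMU encodes it in the DMI product name; the vIOMMU check is a looser assumption and may additionally depend on the guest kernel command line:

```shell
# A q35 guest typically reports "Standard PC (Q35 + ICH9, 2009)";
# the legacy type reports "Standard PC (i440FX + PIIX, 1996)".
sudo dmidecode -s system-product-name

# With a working vIOMMU (and e.g. intel_iommu=on in the guest), IOMMU
# groups should appear here; an empty directory suggests it is not active.
ls /sys/kernel/iommu_groups/
```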
gboutry | Thank you very much! | 16:10 |
sean-k-mooney | i think even in qemu the default might still be i440fx but in things like virt-manager they have started to move | 16:13 |
sean-k-mooney | we haven't made that change upstream in nova yet (we might never change it) | 16:13 |
sean-k-mooney | but we did change the default downstream to q35 some time ago so that might be part of the confusion | 16:14 |
sean-k-mooney | we have been trying to be very conservative with changing nova's actual default, but we probably should talk about that again at some point | 16:15 |
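For context on the operator-side knob referenced here: nova's `[libvirt]/hw_machine_type` option sets the per-architecture default machine type when an image does not specify one. A rough sketch of setting it on a compute node, assuming crudini is available and the usual config path and service name (both vary by deployment):

```shell
# Make q35 the default machine type for x86_64 guests on this compute node;
# an image-level hw_machine_type property still overrides this default.
crudini --set /etc/nova/nova.conf libvirt hw_machine_type x86_64=q35
systemctl restart nova-compute   # service name differs across distros/deployments
```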
sean-k-mooney | fungi: regarding https://review.opendev.org/c/openstack/project-config/+/948249 do we need to wait for the rollcall-vote on https://review.opendev.org/c/openstack/governance/+/948252 before we proceed or are they both independent? | 17:48 |
fungi | they can proceed independently, looks like the telemetry ptl has +1'd the governance change, so that's sufficient for us to approve the repo creation | 17:50 |
fungi | er, other way around, +1'd the project creation change, but anyway that's enough | 17:52 |
sean-k-mooney | ack, ya i pinged them this morning but with the power outages in europe i was not sure if they would be about today | 17:52 |
fungi | i've approved it now | 17:53 |
sean-k-mooney | thanks | 17:53 |
fungi | clarkb: ^ not sure if we've done a project creation since the gerrit server replacement, so we should probably check behind the deploy for that | 17:54 |
clarkb | I don't think we have | 17:54 |
sean-k-mooney | the other person I'm going to be working with on this is on PTO for the next two weeks. so I'm hoping in that time I can bootstrap the repo with basic zuul jobs and so on, so she can come back to a working repo, but realistically I likely won't do much more than stub it out until they are back | 17:55 |
fungi | we've seen evidence that manage-projects is working correctly in the no-op case, which means everything's probably fine, but... just in case | 17:55 |
clarkb | ya I would've expected ssh errors (host key verification or authentication problems) already if they exist. But agreed lets check after | 17:55 |
sean-k-mooney | once the gerrit groups have been created, can I be added to both? I'll bootstrap in others as needed | 17:55 |
sean-k-mooney | once the repo is created i'll recheck the governance and zuul job patches but i'll likely do that tomorrow | 17:57 |
sean-k-mooney | fungi: clarkb as an aside, the automation currently creates an empty github repo for new projects https://github.com/openstack/devstack-plugin-prometheus | 17:59 |
sean-k-mooney | i assume that's because we don't mirror by default but we create the repo anyway in case it's needed? | 17:59 |
sean-k-mooney | is that intentional? | 18:00 |
opendevreview | Merged openstack/project-config master: create openstack/grian-ui git repo https://review.opendev.org/c/openstack/project-config/+/948249 | 18:00 |
clarkb | sean-k-mooney: all of that happens through zuul jobs. The repo isn't created in github until governance says you are an openstack project | 18:00 |
sean-k-mooney | ah ok | 18:00 |
sean-k-mooney | but it's expected to be empty? | 18:01 |
clarkb | this leads to problems where the new repo is deployed in gerrit and you merge a change or two, then a few weeks later the github repo is created and nothing has merged yet to actually sync the data over | 18:01 |
clarkb | you have to merge something in the repo to run the mirror job after the governance update triggers the creation | 18:01 |
sean-k-mooney | i think we have | 18:01 |
clarkb | also you have to make sure you have the github mirror job configured | 18:01 |
sean-k-mooney | but im not sure we have the mirror job configured | 18:01 |
sean-k-mooney | that or we don't have the governance patch for the devstack plugin | 18:02 |
clarkb | manage-projects is running now | 18:02 |
clarkb | sean-k-mooney: you don't get a github repo until governance is updated so I think that part is done | 18:02 |
clarkb | and I think that url would 404 | 18:02 |
sean-k-mooney | ok we don't actually need the mirror, i was just wondering about the current state | 18:03 |
clarkb | you haven't configured the mirror job | 18:03 |
sean-k-mooney | yep | 18:03 |
clarkb | there is no devstack-plugin-prometheus in openstack/project-config/zuul.d/projects.yaml | 18:03 |
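A quick sketch of checking that yourself from a fresh clone of project-config; the repo URL and file path are the ones mentioned above, and the grep pattern is just illustrative:

```shell
git clone https://opendev.org/openstack/project-config
# No output means the repo has no Zuul project entry in projects.yaml,
# so no mirror job or template runs for it in that tenant config.
grep -n -A6 'openstack/devstack-plugin-prometheus' project-config/zuul.d/projects.yaml
```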
sean-k-mooney | so chandan didnt create the second patch | 18:04 |
sean-k-mooney | only the first to create the repo | 18:04 |
sean-k-mooney | and the governance patch | 18:04 |
sean-k-mooney | ok ill add that to my todolist to go fix | 18:05 |
clarkb | manage-projects reports success | 18:05 |
sean-k-mooney | https://opendev.org/openstack/grian-ui | 18:05 |
sean-k-mooney | yep | 18:05 |
clarkb | https://review.opendev.org/admin/repos/openstack/grian-ui,general too | 18:06 |
clarkb | fungi: ^ this looks like it is working to me | 18:06 |
fungi | yeah, and https://opendev.org/openstack/grian-ui lgtm too | 18:06 |
sean-k-mooney | can i be added to https://review.opendev.org/admin/groups/9df40d91fd8faf45962105507ce09182420fc66f and https://review.opendev.org/admin/groups/d5e713116bab8e1ad13712811545f3bbf651e8d8 | 18:06 |
fungi | not that i expected it to break, just didn't want to not notice if it did | 18:06 |
sean-k-mooney | ya it looks like it all worked fine with the new instance | 18:07 |
fungi | sean-k-mooney: technically we add the ptl for new openstack repos, is it okay if i put juan in and then he adds you and whoever else he wants? | 18:07 |
sean-k-mooney | ok sure i can ping them tomorrow | 18:07 |
sean-k-mooney | once the follow-up patch to add the zuul import is merged i can use speculative execution to bootstrap the repo anyway | 18:08 |
sean-k-mooney | well i can create the reviews either way but it's nice to get the zuul jobs set up as soon as possible. | 18:08 |
fungi | i've added juan to grian-ui-core and grian-ui-release now | 18:09 |
sean-k-mooney | fungi ++ | 18:09 |
simondodsley | frickler: Possibly your change https://review.opendev.org/c/openstack/requirements/+/948285 has caused the `openstack-tox-functional-py39` job to start failing. | 18:41 |
simondodsley | For example: https://zuul.opendev.org/t/openstack/build/a73d3e5359f24b5ca30e1a98f7d5a6d0 | 18:41 |
frickler | yes, that was to be expected | 18:42 |
simondodsley | but these are voting jobs | 18:43 |
frickler | so cinder needs to either drop those jobs or invest in maintaining upper-constraints for py39 | 14:44 |
simondodsley | ok - i'll speak to the cores for cinder and glance | 18:45 |
simondodsley | thanks | 18:45 |
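For anyone wanting to reproduce that failure mode locally before deciding, a rough sketch; the constraints URL is the published master-branch upper-constraints redirect, and requirements.txt is the project's own file:

```shell
# Try resolving the project's requirements under python3.9 against current
# master upper-constraints; failures here mirror what the py39 jobs hit.
python3.9 -m venv /tmp/py39-check
/tmp/py39-check/bin/pip install \
  -c https://releases.openstack.org/constraints/upper/master \
  -r requirements.txt
```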