Wednesday, 2025-05-14

<opendevreview> Michal Nasiadka proposed opendev/zuul-providers master: Add Rocky 8/9 builds  https://review.opendev.org/c/opendev/zuul-providers/+/949696  [04:45]
<opendevreview> Michal Nasiadka proposed opendev/zuul-providers master: Add Rocky 8/9 builds  https://review.opendev.org/c/opendev/zuul-providers/+/949696  [04:48]
<opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: ensure-dib: Add podman  https://review.opendev.org/c/zuul/zuul-jobs/+/949697  [04:50]
<opendevreview> Michal Nasiadka proposed opendev/zuul-providers master: Add Rocky 8/9 builds  https://review.opendev.org/c/opendev/zuul-providers/+/949696  [04:50]
<opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: ensure-dib: Add newuidmap and podman  https://review.opendev.org/c/zuul/zuul-jobs/+/949697  [04:58]
<opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: ensure-dib: Add newuidmap and podman  https://review.opendev.org/c/zuul/zuul-jobs/+/949697  [05:00]
<opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: ensure-dib: Add podman and rootlesskit  https://review.opendev.org/c/zuul/zuul-jobs/+/949697  [05:01]
*** ralonsoh_out is now known as ralonsoh\  [05:46]
*** ralonsoh\ is now known as ralonsoh  [05:47]
<opendevreview> Benjamin Schanzel proposed zuul/zuul-jobs master: Add a meta log upload role with a failover mechanism  https://review.opendev.org/c/zuul/zuul-jobs/+/795336  [07:16]
<mnasiadka> clarkb: we need a new DIB release to switch rocky-container to quay.io (https://opendev.org/openstack/diskimage-builder/commit/cdaf45b9e00af4f4f29f80439abe11e55f18306f) - how do I "orchestrate" this?  [07:34]
<mnasiadka> IIRC the release team does not do DIB releases  [07:34]
<frickler> mnasiadka: there's a dib IRC channel, not sure who still hangs out there  [07:39]
<mnasiadka> Well, usually ianw did the releases - I'm happy to help with that, but I guess I would need some additional rights to push tags to the DIB repo :)  [07:39]
<mnasiadka> Or we can "onboard" DIB into the openstack/releases repo as cycle-independent  [07:40]
<mnasiadka> but yes, let's move the discussion to the DIB channel  [07:41]
<frickler> I'm not there fwiw and it also doesn't seem to get logged. Ping here again if you can't make any progress  [07:41]
<opendevreview> ribaudr proposed openstack/project-config master: Add team IRC ops for #openstack-nova  https://review.opendev.org/c/openstack/project-config/+/949707  [08:36]
*** ralonsoh_ is now known as ralonsoh  [09:21]
<opendevreview> Merged openstack/diskimage-builder master: Remove qemu-debootstrap from debootstrap element  https://review.opendev.org/c/openstack/diskimage-builder/+/946550  [10:38]
<opendevreview> Merged openstack/diskimage-builder master: Remove the usage of pkg_resource  https://review.opendev.org/c/openstack/diskimage-builder/+/933324  [12:28]
<opendevreview> Merged openstack/project-config master: Add team IRC ops for #openstack-nova  https://review.opendev.org/c/openstack/project-config/+/949707  [13:05]
<opendevreview> Merged openstack/project-config master: to create a new repo for a cfn new launched sub-group heterogeneous distributed training framework  https://review.opendev.org/c/openstack/project-config/+/949555  [13:29]
<fungi> https://superuser.openinfra.org/articles/opendev-and-rackspace-building-stronger-open-infrastructure-together/  [13:31]
<frickler> does ^^ mean that IPv6 is ready now? *scnr*  [13:37]
<fungi> i read it as "real soon now" ;)  [13:39]
<fungi> dan_with might know how close they are to dual-stack global networking  [13:39]
<opendevreview> Michal Nasiadka proposed opendev/zuul-providers master: Add Rocky 8/9 builds, labels and provider config  https://review.opendev.org/c/opendev/zuul-providers/+/949696  [13:52]
<Clark[m]> mnasiadka: we don't run dib releases through the OpenStack releases process because of chicken-and-egg concerns. But yeah, someone in the release group needs to push a tag. I can do it if I ever dig my gpg key out of cold storage. fungi did one recently too iirc.  [13:52]
<opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: ensure-dib: Add podman and rootlesskit  https://review.opendev.org/c/zuul/zuul-jobs/+/949697  [13:52]
<mnasiadka> Clark: ianw sorted it out on #openstack-dib - thanks :)  [13:53]
<Clark[m]> Oh cool  [13:53]
<fungi> yeah, there was a fair bit of discussion over there about the release process  [13:53]
<mnasiadka> Well, I think there were a couple of important patches in DIB since Dec 2024 (the previous release) - so user-experience-wise it would be good to release things more often :)  [13:54]
<mnasiadka> at least rocky builds now use quay.io instead of docker hub  [13:54]
<fungi> sure, also our nodepool/zuul deployments use released dib, so we can't take advantage of any changes until there's a release (which is why infra-core is included in diskimage-builder-release)  [14:00]
<opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: ensure-dib: Add podman and rootlesskit  https://review.opendev.org/c/zuul/zuul-jobs/+/949697  [14:02]
<opendevreview> Michal Nasiadka proposed zuul/zuul-jobs master: ensure-dib: Add podman and rootlesskit  https://review.opendev.org/c/zuul/zuul-jobs/+/949697  [14:02]
<clarkb> I'll approve the gitea 1.23.8 update in a few minutes if there are no objections  [14:42]
<clarkb> I don't see anything in scrollback or email that would indicate I should not do this, but let me know if I missed something  [14:43]
<fungi> please do, i'm around all day  [14:47]
<clarkb> done, it is on its way in (which will probably take about an hour or maybe even a little more)  [14:48]
<mnasiadka> Ok then, the niz-rocky builds are at the image conversion stage so they will be good to go when that finishes - I'm off the hook :)  [14:49]
<clarkb> mnasiadka: thanks again for getting those images sorted out  [14:56]
<mnasiadka> no problem, happy to help - a nice difference from chasing breakages in the Kolla world ;-)  [15:05]
<mnasiadka> Question for some more Gerrit-knowledgeable people - in Kolla we have Review-Priority and Backport-Candidate labels - is there a way for a vote on such a label to override other votes? As in, person A votes RP+1, another person wants to change that to -1 - and we end up with one +1 and one -1. I'd like that to be more of a "label" than a vote with only one value...  [15:18]
<mnasiadka> well, rather than a vote with multiple values from multiple people  [15:19]
<clarkb> I think hashtags are better suited to this problem  [15:19]
<clarkb> I forget why others have said that won't work for review priority though. Maybe it is because anyone can set hashtags if we open them up globally (we haven't yet, but that is the idea)  [15:20]
<clarkb> but no, in general individuals own their votes. The actions you can take on those votes can be scoped to specific groups or users, but I don't think we can generically say this vote goes away if someone else votes something different  [15:20]
<mnasiadka> I think we have an ACL in Kolla that allows them only for core reviewers - let me check  [15:20]
<mnasiadka> well, hashtags sound like a better suited solution for that than the standard voting mechanism  [15:22]
<corvus> only other thing i'd say about votes is there are different options for calculating the winner (max with and without blocking, for example).  not sure if that's flexible enough to accommodate what you want.  but also, hashtags ftw.  [15:50]
<fungi> right... it's in that category of technical solutions to social challenges: document the hashtags your project intends to use for specific situations, if someone misuses them then have a chat with that person about it, and if they're unreasonable then escalate it to project leadership/platform admins  [15:54]
<fungi> micro-managing per-project access to set and remove hashtags is almost certainly an overoptimization  [15:55]
<fungi> if a user starts abusing access by going around randomly setting or removing hashtags on projects, i have no qualms about disabling their account immediately  [15:56]
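
For context on why hashtags are so lightweight compared with label votes: they can be added or removed through Gerrit's REST API without any voting involved. A minimal Python sketch, assuming an account with Gerrit HTTP credentials; the change number and hashtag below are purely illustrative:

    import json
    import requests

    GERRIT = "https://review.opendev.org"
    AUTH = ("my-username", "my-http-password")  # assumed HTTP credentials

    # Add a hashtag to a change (change number is illustrative only).
    resp = requests.post(
        f"{GERRIT}/a/changes/949696/hashtags",
        auth=AUTH,
        json={"add": ["backport-candidate"]},
    )
    resp.raise_for_status()

    # Gerrit prefixes JSON responses with )]}' to defeat XSSI; strip that line.
    hashtags = json.loads(resp.text.split("\n", 1)[1])
    print(hashtags)  # the change's current hashtags
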
<opendevreview> Clark Boylan proposed opendev/system-config master: DNM Forced fail on Gerrit to test 3.11 upgrade and downgrade  https://review.opendev.org/c/opendev/system-config/+/893571  [16:04]
<opendevreview> Clark Boylan proposed opendev/system-config master: Update Gerrit images to 3.10.6 and 3.11.3  https://review.opendev.org/c/opendev/system-config/+/949778  [16:04]
<clarkb> there is a newer gerrit release for 3.10 and 3.11. I figure getting those updated is a good step 0 before we start testing upgrade stuff. Then the second change there has a couple of holds in place to make testing of the upgrade easy  [16:06]
<fungi> agreed  [16:07]
<clarkb> gitea change should merge in a minute or two. It's uploading logs  [16:38]
<opendevreview> Merged opendev/system-config master: Update to gitea 1.23.8  https://review.opendev.org/c/opendev/system-config/+/949544  [16:39]
<clarkb> and it is deploying now  [16:41]
<clarkb> https://gitea09.opendev.org:3081/opendev/system-config/ is up and reports the expected version. The page rendered how I expected it to. I'll do a clone test next  [16:44]
<fungi> lgtm!  [16:44]
<clarkb> clone works  [16:45]
<fungi> 09 seems to be working  [16:45]
<clarkb> ya so far it looks happy. We're through at least gitea11 at this point  [16:48]
<clarkb> https://zuul.opendev.org/t/openstack/build/5585152355814cc089d9c1fdde0e2138 success, and my checks of individual backends look good  [16:54]
<clarkb> infra-root: checking the giteas I notice that gitea10-gitea14 report no space left on device for /var/log/syslog and the journals on May 10 (from dmesg -T). df -h reports plenty of disk now and we do seem to be writing to syslog and maybe the journal as well. Gitea seems to have up to date content too and / is not mounted ro. Also gitea09 doesn't seem to have been hit by this  [17:02]
<clarkb> not sure what is going on there, but it is weird enough that it may be worth someone else doing a quick check that there isn't anything terribly wrong we need to intervene for  [17:02]
<clarkb> I'm beginning to suspect some temporary blip in storage for those servers, and when storage resumed normal operations so did our servers  [17:03]
<clarkb> I think if things were persistently sad the upgrade we just did would have failed (due to being unable to fetch and store new docker images)  [17:04]
<clarkb> infra-root: https://104.130.253.194/q/status:open+-is:wip is a held Gerrit 3.11 which we can use to interact with it and see that it works as expected. I also held a 3.10 node and that is the node I'll use to test the upgrade and downgrade process  [17:30]
<clarkb> basically this 3.11 node should be safe to use at any time as a "what does 3.11 look like" check, then the other node is where things will go up and down and change versions  [17:30]
<clarkb> visually this doesn't seem all that different  [17:31]
<fungi> the free space is enough that i doubt it dipped into root-only overhead during rotation  [17:44]
<fungi> also 09 and 10 have basically identical utilization at the moment  [17:45]
<clarkb> ya, and it should rotate more regularly than only on the 10th and not since  [17:45]
<clarkb> this is why I suspect something on the underlying cloud  [17:45]
<fungi> agreed  [17:45]
<corvus> clarkb: could check cacti graphs  [17:46]
<clarkb> ah yup  [17:46]
<clarkb> corvus: that was a great idea. Gitea10 does seem to have run out of disk on the 10th  [17:47]
<clarkb> it was very sawtooth  [17:47]
<clarkb> suddenly I'm reminded of the tarball generation problem and i think that must be what happened here  [17:47]
<fungi> okay, so maybe rotation related after all  [17:47]
<clarkb> we run a cron to prune those daily and that wasn't keeping up  [17:47]
<fungi> oh! yes, tarballs  [17:48]
<clarkb> fungi: ya, not log rotation but rotation of the tarball artifacts  [17:48]
<fungi> agreed, that would definitely explain it  [17:48]
<fungi> and 09 just got lucky i guess  [17:48]
<corvus> while /bin/true; rm tarballs; end  [17:48]
<fungi> you forgot the "do" ;)  [17:48]
<clarkb> ya, I think we can probably just update the cron to run hourly or twice a day or similar  [17:48]
<clarkb> I'll prep a change for that so we have it if it becomes useful again  [17:48]
<clarkb> (right now things seem stable for the last few days)  [17:49]
<opendevreview> Clark Boylan proposed opendev/system-config master: Run gitea tarball cleanup every 6 hours  https://review.opendev.org/c/opendev/system-config/+/949790  [17:56]
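
The change above only adjusts the cadence of the existing prune cron. Purely as an illustration of the idea (this is not the actual system-config implementation, and the archive path is a placeholder), pruning generated repo archives older than six hours could look like:

    import time
    from pathlib import Path

    # Placeholder path; the real location of gitea's generated repo archives
    # depends on the deployment.
    ARCHIVE_DIR = Path("/var/gitea/data/repo-archive")
    MAX_AGE = 6 * 60 * 60  # six hours, matching the proposed cron cadence

    cutoff = time.time() - MAX_AGE
    for path in ARCHIVE_DIR.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
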
<clarkb> in related news, github announced anonymous request rate limit changes as even they are being crushed by the bots on the internet vying for AI supremacy  [17:57]
<clarkb> looks like prior to the 10th it would get close to the limit but not exceed it  [18:01]
<clarkb> then on the 10th we got "lucky"  [18:01]
<clarkb> so ya, I think 949790 should help mitigate this in the future if we don't have better ideas (pretty sure I checked and we cannot disable this feature entirely, otherwise I would)  [18:02]
<opendevreview> Merged zuul/zuul-jobs master: Add a meta log upload role with a failover mechanism  https://review.opendev.org/c/zuul/zuul-jobs/+/795336  [18:16]
<corvus> i think we could adapt ^ for use in our environment... but then we might not notice cloud storage failures....  [18:18]
<corvus> something to think about  [18:18]
<clarkb> I guess we could generate a random order for the swift backends then pass that entire list to this new role?  [18:20]
<clarkb> then as long as any one of them succeeds we would avoid job failures. With the current backends we use it's likely that 3 fail or 2 fail, given that 3 belong to one cloud and 2 to another. So we probably need at least 4 options, and at that point you may as well use all 5  [18:21]
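
A rough sketch of that suggestion (backend names here are placeholders, not the real provider list): shuffle the configured backends once per build and hand the whole ordered list to the failover role, so the upload only fails if every backend is down.

    import random

    # Placeholder backend names; the real role would receive the actual
    # object storage endpoints configured for log uploads.
    swift_backends = [
        "cloud-a-region-1",
        "cloud-a-region-2",
        "cloud-a-region-3",
        "cloud-b-region-1",
        "cloud-b-region-2",
    ]

    # A random order per build avoids always hammering the same backend first.
    upload_order = random.sample(swift_backends, k=len(swift_backends))
    print(upload_order)
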
<fungi> reminiscent of (mike pondsmith/r. talsorian games) cyberpunk lore, where the old net was overrun by rogue ai systems so they had to build the blackwall to keep them from leaking into the new reconstructed net after the datakrash of 2022. even the timeline isn't too far off  [18:22]
<mnasiadka> fungi: if hashtags are limited to core reviewers we should be fine ;)  [18:28]
<mnasiadka> Ok - both niz-rocky patches have passed Zuul and are good to go https://review.opendev.org/q/topic:%22niz-rocky%22  [18:29]
<corvus> hashtags are very useful for non-core reviewers, i would encourage not limiting them.  if it's important enough to restrict, then it should probably be a label (and then look into the submit rules)  [18:34]
<fungi> mnasiadka: yeah, maybe you misread me. i said limiting hashtags to core reviewers is a wasteful overoptimization at best. if people misuse them (core or otherwise) then talk to them. if they won't listen, talk to us  [18:38]
<fungi> people problems require people solutions  [18:38]
<ildikov> Hi y'all, I'm reaching out about an error I got on my patch to update a StarlingX doc page: https://review.opendev.org/c/starlingx/governance/+/949746  [19:02]
<ildikov> If I read the logs correctly, it seems like a library issue and not an error in my patch. But I might've missed smth. Can someone please help me confirm one way or the other?  [19:03]
<fungi> looking  [19:04]
<Clark[m]> I think that is the problem that occurs when you run old flake8 on modern python (likely due to the nodeset being updated to noble)  [19:06]
<Clark[m]> If you update flake8 it should fix the problem  [19:07]
<ildikov> @Clark[m] great, that confirms my suspicion  [19:12]
<ildikov> @Clark[m] sooo, how can I update flake8? :)  [19:13]
<fungi> yeah, current flake8 per https://pypi.org/project/flake8/ is 7.2.0, that job seems to have installed 3.8.4 (from 2020) according to the log, it's probably using an old constraints file but i'm digging into that now  [19:15]
<fungi> the log says it ran `pip install -U -r /home/zuul/src/opendev.org/starlingx/governance/test-requirements.txt`  [19:17]
<fungi> so it's coming in as an indirect dependency via https://opendev.org/starlingx/governance/src/branch/master/test-requirements.txt#L1  [19:17]
<ildikov> ah, ok  [19:17]
<ildikov> I checked that file, among other possible ones, and then realized I might need help figuring the dependency out  [19:18]
<fungi> the current hacking version is 7.0.0  [19:18]
<fungi> just for reference  [19:18]
<fungi> but i'd probably try to match whatever version of hacking other starlingx projects are using  [19:19]
<ildikov> ok, cool, I'll check  [19:19]
<slampton> Hi, I'm an e-mail admin, troubleshooting the issue of not being able to receive e-mail from Gerrit (162.253.55.233). The thing is, we're receiving e-mail from openstack.org from that IP, but not from opendev.org. That suggests to me that you have a difference in either reverse DNS, SPF, DKIM, or DMARC records between those domains.  [19:28]
<slampton> Proofpoint says they have removed the IP block, even though the lookup tool now gives an error. Can you provide smtp logs from your end, when sending from opendev.org to windriver.com?  [19:28]
<fungi> slampton: i'll get those now, just a sec  [19:30]
<slampton> thx  [19:30]
<fungi> for the record, i requested removal of that address from proofpoint weeks ago and received no response, they seem to be completely unstaffed or otherwise defunct  [19:30]
<slampton> that's ridiculous, they seem to be staffed just fine if you are a paying customer  [19:33]
<fungi> yeah, maybe they just ignore anything put into their removal request form. anyway i do see messages getting accepted right now in the log... (redacted) 2025-05-14 19:29:51 1uFHna-00000006wlT-0tv7 -> REDACTED@windriver.com R=dnslookup T=remote_smtp H=mxb-0064b401.gslb.pphosted.com [205.220.178.238] X=TLS1.2:ECDHE_SECP256R1__RSA_SHA256__AES_256_GCM:256 CV=yes C="250 2.0.0 46mbc8sm08-1  [19:36]
<fungi> Message accepted for delivery"  [19:36]
<fungi> i'm looking to see when the last refusal was  [19:37]
<mnasiadka> fungi: makes sense, after giving it some thought we're probably going to switch backport-candidate from review labels to hashtags, but for review priority I would probably still stick with review labels (but think about giving the PTL/cores the option to delete RP votes - because we sometimes end up with a semi-active person setting review priority, which then needs to be lowered)  [19:37]
<fungi> slampton: most recent rejection i see from pphosted.com in our logs was 2025-05-08 08:22:35 utc, almost a week ago now  [19:39]
<fungi> there was an incident around 2025-05-12 02:38-04:00 with problems looking up the domain in dns, but that only resulted in messages being temporarily deferred  [19:41]
<fungi> slampton: if messages aren't getting through for the past week, then my only guess is proofpoint has started 250/2.0.0 accepting the messages but then dumping them in the bitbucket  [19:42]
<ildikov> I was finally able to find a hacking version that worked, thanks fungi for the help in figuring that out  [19:52]
<fungi> ildikov: Clark[m] too, but you're welcome!  [19:59]
<ildikov> yes, thanks Clark[m] too!  [19:59]
<slampton> fungi: thanks, that correlates with the removal of the IP block. They still seem to be silently dropped, because I don't see anything in our logs from @opendev.org. I've asked Proofpoint to investigate further  [19:59]
<clarkb> fungi: slampton: looking at my personal inbox I think the deliveries may be coming from review@openstack.org  [20:02]
<fungi> yes, we don't have e-mail set up for the opendev.org domain for $reasons, so we still use @openstack.org for the addresses  [20:02]
<fungi> we want messages to be deliverable, therefore we only send from addresses at a domain that actually accepts smtp deliveries itself  [20:03]
<fungi> confirmed, the sender (smtp "mail from") on those messages should be review@openstack.org  [20:06]
<fungi> the rfc 822 "from:" header will also be review@openstack.org  [20:07]
<fungi> though with varying names  [20:08]
<mnaser> OSError: [Errno 39] Directory not empty: '/opt/stack/requirements/.eggs/pbr-6.1.1-py3.10.egg/pbr-6.1.1.dist-info' -> '/opt/stack/requirements/.eggs/pbr-6.1.1-py3.10.egg/EGG-INFO'  [20:28]
<mnaser> Running setup.py install for openstack_requirements: finished with status 'error'  [20:28]
<mnaser> waaah, do we just keep catching issues lol  [20:28]
<mnaser> https://github.com/vexxhost/magnum-cluster-api/actions/runs/15030117460/job/42240444170#step:8:6705  [20:30]
<mnaser> https://github.com/pypa/setuptools/releases  [20:31]
<mnaser> an hour ago a job passed  [20:31]
<mnaser> trying to look at https://github.com/pypa/setuptools/compare/v80.5.0...v80.7.0  [20:32]
<slampton> fungi: aha, so we may be good then  [20:33]
<mnaser> job passed with setuptools 80.4.0  [20:34]
<mnaser> so a bunch happened here https://github.com/pypa/setuptools/compare/v80.4.0...v80.7.0  [20:35]
<clarkb> mnaser: the code that raised the exception is 7 years old  [20:35]
<fungi> but maybe not a code path we used to hit  [20:35]
<clarkb> fungi: ya, or something was previously ensuring the directory on the dst side was empty and now it isn't  [20:36]
<clarkb> os.rename says you can't rename foo/ into bar/ if bar/ is not empty  [20:36]
<fungi> also possible  [20:36]
<mnaser> https://github.com/vexxhost/magnum-cluster-api/actions/runs/15029256236/job/42237574746#step:8:6576 installed just fine an hour ago  [20:37]
<mnaser> where setuptools was 80.4.0  [20:37]
<clarkb> which is what I think happened here. Something had already written to bar/ (/opt/stack/requirements/.eggs/pbr-6.1.1-py3.10.egg/) and kaboom because it already had an EGG-INFO in it  [20:37]
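
A minimal reproduction of the os.rename() behaviour being described, with throwaway directory names (run it in an empty scratch directory):

    import os

    os.makedirs("pbr-6.1.1.dist-info", exist_ok=True)  # the rename source
    os.makedirs("EGG-INFO", exist_ok=True)             # the rename target
    with open(os.path.join("EGG-INFO", "leftover.txt"), "w") as f:
        f.write("written by an earlier pass\n")

    try:
        # Renaming a directory onto a non-empty directory fails on Linux.
        os.rename("pbr-6.1.1.dist-info", "EGG-INFO")
    except OSError as exc:
        print(exc)  # [Errno 39] Directory not empty: ... -> 'EGG-INFO'
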
<fungi> i wonder if that's pbr's metadata writer for the pbr.json file  [20:38]
<mnaser> is setuptools pinned in opendev?  [20:39]
<fungi> depends on the project (whether it uses a pyproject.toml), phase (build vs install) and invoking tool (e.g. tox may install specific setuptools versions)  [20:40]
<fungi> so the answer is "sometimes"?  [20:41]
<mnaser> oh..  [20:41]
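
For reference, the "pinned when there is a pyproject.toml" case works through the build-system table, which pip installs into an isolated build environment. A generic fragment along these lines (version bounds are illustrative, not a recommendation for any particular repo):

    [build-system]
    # pip installs these into the isolated build environment, so a broken
    # setuptools release on PyPI never reaches the build if it is capped here.
    requires = ["pbr>=6.0", "setuptools>=64,<80.7"]
    build-backend = "setuptools.build_meta"
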
<mnaser> i think the flood gates of failure are about to open  [20:41]
<mnaser> https://zuul.opendev.org/t/openstack/status see 949800,1 -- failed with same reason  [20:41]
<mnaser> https://zuul.opendev.org/t/openstack/build/13528f8c1f99421391600a49673e66df  [20:41]
<clarkb> fungi: I think that goes into a different file, not EGG-INFO. Looks like it goes to pbr.json. However, it is defined as an egg_info.writers entrypoint  [20:42]
<mnaser> im running a quick pip install in a docker container here to see whats in the folder  [20:44]
<mnaser> https://paste.openstack.org/show/bvXJuXBeCevAgxY48IgH/  [20:46]
<mnaser> it looks like everything is identical except for a requires.txt in the EGG-INFO folder  [20:47]
<mnaser> which contains `setuptools`  [20:47]
<mnaser> so it feels like something is already generating everything in EGG-INFO and that copy is useless (and problematic)  [20:47]
<clarkb> mnaser: thinking out loud here about how to debug this: I think you want to figure out what creates the EGG-INFO dir with that content and see if that can be moved to later, or if we can stop doing the rename as it may be redundant  [20:48]
<mnaser> i'm trying to probe at this but it is very much out of my knowledge scope  [20:49]
<mnaser> oh interesting  [20:50]
<mnaser> i noticed `× python setup.py egg_info did not run successfully.` and when i ran it inside the repo, it fails  [20:50]
<mnaser> if i `rm -rfv /opt/stack/requirements/.eggs` and run it, it works, but the second time around, it fails  [20:50]
<mnaser> so its almost like maybe that function is meant to be idempotent (or maybe not meant to run more than once?) -- one or the other  [20:51]
<clarkb> seems like it would be a bug for it not to be idempotent, as you can't control whether or not people prebuild random things  [20:51]
<mnaser> i can actually reproduce this same thing with the zuul/zuul repo.. now they both use pbr, im trying to find a non-pbr project to see if its maybe pbr related  [20:52]
<clarkb> mnaser: note you need a package that uses setuptools but not pbr  [20:53]
<mnaser> ah right yes  [20:53]
<clarkb> (its possible to not use setuptools at all these days)  [20:53]
<clarkb> I think doc8 may fit the bill but I haven't double checked to see if it is still using setuptools  [20:54]
<mnaser> they got rid of setup.py so i cant replicate that  [20:54]
<mnaser> found https://github.com/fabric/fabric i can try on  [20:55]
<mnaser> yea.. issue is not there  [20:55]
<clarkb> its possible that the egg_info.writers entrypoint is side effecting things in ways we don't want  [20:56]
<clarkb> mnaser: what you can do is clone pbr and remove the egg_info.writers entry from setup.cfg, then build a wheel from that using itself. Then install that wheel and try to install something to see if it works  [20:56]
<mnaser> lets try  [20:57]
<clarkb> EGG-INFO doesn't show up in pbr at all though, so my hunch is it is related to that entrypoint somehow  [20:57]
<mnaser> yeah i searched the code for that too and indeed, silent  [20:57]
<clarkb> oh wait  [20:57]
<clarkb> we have LocalEggInfo too  [20:57]
<mnaser> its still happening but i wonder if pip is doing an isolated build  [20:58]
<mnaser> sorry nvm i am not using pip  [20:58]
<clarkb> ok, looking at your traceback it is specifically failing to install pbr as a setup_requires  [21:02]
<mnaser> oh, you're right  [21:02]
<clarkb> bindep uses pbr via pyproject.toml and pep517. It's possible that would work while installing setup_requires doesn't  [21:02]
<clarkb> mnaser: if installing setup_requires is the issue you can try to preinstall pbr first  [21:02]
<clarkb> something like venv/bin/pip install pbr && venv/bin/pip install requirements or whatever you want to install  [21:03]
<clarkb> I wonder if the problem here is that specific setup_requires path and not really pbr at fault, as much as some issue with that system (it is still entirely possible that pbr is tickling some bug though)  [21:03]
<mnaser> with pbr preinstalled it also blows up  [21:04]
<clarkb> is the traceback different?  [21:04]
<mnaser> https://www.irccloud.com/pastebin/kbRG91J1/  [21:04]
<mnaser> same ol'  [21:04]
<clarkb> huh, why is it trying to install setup_requires if they are already present  [21:05]
<mnaser> https://github.com/pypa/setuptools/commits/main/setuptools/installer.py there were some changes  [21:06]
<mnaser> i wonder if its the avoidance of pkg_resources or something  [21:06]
<clarkb> I was able to install bindep in a venv with setuptools 8.7.0 without preinstalling anything else  [21:06]
<mnaser> what about requirements?  [21:06]
<clarkb> *setuptools 80.7.0  [21:06]
<clarkb> https://github.com/pypa/setuptools/commit/7379eaa957aaf4f2a01438066afb1674a64545f4 this does look suspicious  [21:07]
<fungi> pbr did add setuptools as an explicit install_requires in a recent release, though that was months ago still  [21:08]
<mnaser> actually as an exercise let me try the different setuptools versions and see which one fails  [21:08]
<clarkb> mnaser: ^ that commit isn't in 80.6.0  [21:08]
<mnaser> might help narrow it down  [21:08]
<clarkb> mnaser: ++ I was just going to suggest trying 80.6.0  [21:08]
<mnaser> ok, with 80.6.0 i can run egg_info twice with no issues  [21:09]
<mnaser> (or more than twice)  [21:09]
<mnaser> 80.7.0 breaks  [21:10]
<mnaser> https://github.com/pypa/setuptools/compare/v80.6.0...v80.7.0 so 13 commits to look through  [21:10]
<clarkb> mnaser: the one I linked touches functions in your traceback, which is why I called it suspicious  [21:11]
<mnaser> https://github.com/pypa/setuptools/commit/7cb4c76468735ae69450a3693bed56217afe902c  [21:11]
<clarkb> ya, there is also that one  [21:12]
<clarkb> unfortunately with 30 character commit messages it is hard to evaluate what the intent was...  [21:12]
<opendevreview> James E. Blair proposed opendev/zuul-providers master: Build images daily  https://review.opendev.org/c/opendev/zuul-providers/+/949806  [21:14]
<clarkb> mnaser: theory: 7379eaa changes the lookups for existing egg infos. That lookup would short-circuit if an existing egg was found and return that existing info rather than writing new ones  [21:15]
<mnaser> well i guess one way to find out  [21:16]
<clarkb> mnaser: I suspect that the code that was updated to find the egg info is buggy and it isn't finding the existing egg anymore, so it ends up writing a new one later when it shouldn't  [21:16]
<clarkb> and then it goes kaboom  [21:16]
<mnaser> check out setuptools locally, revert that patch and install  [21:16]
<mnaser> lemme try  [21:16]
<clarkb> mnaser: ya, you may also be able to just edit the setuptools code and see if it falls through around line 90ish here https://github.com/pypa/setuptools/commit/7379eaa957aaf4f2a01438066afb1674a64545f4  [21:16]
<clarkb> specifically line 93 on the new side, return egg_dist. I don't think that is happening anymore and thus it falls through to line 121 on the new side, which is what ultimately explodes  [21:17]
<clarkb> My hunch is the old code was returning on line 93 because it found the egg and not falling through  [21:17]
<mnaser> https://github.com/pypa/setuptools/commit/7379eaa957aaf4f2a01438066afb1674a64545f4 conflicts to revert, so i reverted https://github.com/pypa/setuptools/commit/7cb4c76468735ae69450a3693bed56217afe902c first and no go  [21:18]
<mnaser> so doing the second one now  [21:18]
<mnaser> https://github.com/pypa/setuptools/commit/7cb4c76468735ae69450a3693bed56217afe902c fixed it  [21:19]
<mnaser> gonna try to revert that one _alone_ since i had to do both in order to avoid conflict  [21:19]
<clarkb> mnaser: your second revert is the same hash as the first. Did you mean 7379eaa?  [21:19]
<mnaser> sorry, yes, that's right, the second revert fixed it  [21:20]
<clarkb> fwiw I strongly suspect that code is simply buggy, allowing it to fall through then explode, and I don't think that would be a pbr issue  [21:20]
<clarkb> its just that pbr is tripping over it  [21:20]
<clarkb> (because it is a setup_requires?)  [21:20]
<mnaser> yeah, reverting 7379eaa957aaf4f2a01438066afb1674a64545f4 alone fixes it  [21:24]
<mnaser> so.. now what :)  [21:24]
<mnaser> clarkb: lol https://github.com/pypa/setuptools/issues/4998  [21:25]
<fungi> patch or revert as a setuptools pr  [21:25]
<fungi> or at least that, yep  [21:25]
<fungi> parallel discovery and research is also great confirmation  [21:25]
<clarkb> if that issue wasn't already up I'd suggest that we edit the function to confirm it is falling through on the second pass  [21:25]
<clarkb> then file an issue. But since there is already an issue, I think it's time for a drink and doing something not python related?  [21:26]
<mnaser> i dont know if we want to give some sort of heads up so people dont recheck and eat up endless ci resources?  [21:28]
<clarkb> we could do something like #status notice Setuptools issue 4998 is affecting python CI jobs. Hold off on rechecks until setuptools 80.7.0 is pulled or an 80.8.0 includes a fix.  [21:28]
<fungi> i'm about to sit down to dinner, but sgtm yep  [21:29]
<clarkb> fungi: that notice specifically is good?  [21:29]
<fungi> yes, though also now the setuptools release has been yanked  [21:29]
<mnaser> oh nice  [21:29]
<mnaser> yeah i was just gonna say that  [21:29]
<fungi> breaking news!  [21:29]
<fungi> ;)  [21:29]
<fungi> (pun intended)  [21:30]
<clarkb> in that case maybe #status notice Setuptools 80.7.0 broke python package installs for many CI jobs. That release has been yanked and it should be safe to recheck failed changes.  [21:31]
<fungi> wfm  [21:51]
<clarkb> #status notice Setuptools 80.7.0 broke python package installs for many CI jobs. That release has been yanked and it should be safe to recheck failed changes.  [21:59]
<opendevstatus> clarkb: sending notice  [21:59]
-opendevstatus- NOTICE: Setuptools 80.7.0 broke python package installs for many CI jobs. That release has been yanked and it should be safe to recheck failed changes.  [21:59]
<opendevstatus> clarkb: finished sending notice  [22:02]
<JayF> https://bugs.launchpad.net/pbr/+bug/2107732 was never triaged and I saw it referenced in a setuptools issue  [22:45]
<JayF> probably worth a look  [22:45]
<JayF> it's nice that they are reaching out  [22:45]
<clarkb> JayF: oslo is responsible for pbr fwiw  [22:46]
<JayF> Ah. Makes sense. I'll link it over there.  [22:46]
