mnasiadka | clarkb: yeah, that's why I moved back to irccloud (with the across-TZ interactions it's easier for me sometimes to reply on a mobile - didn't find another good mobile client for a bouncer) | 04:58 |
*** ralonsoh_out is now known as ralonsoh | 05:25 |
frickler | regarding the discussion of moving into the matrix: my main concern is still the lack of decentralization, if matrix.org ever went away (or became non-free), the whole thing would blow up. | 05:43 |
frickler | lack of feasible clients is another concern, but I admit that I'm likely special in that regard, seeing tmux+irssi as the ideal solution | 05:45 |
frickler | infra-root: there's an error message in the zuul export-keys cron jobs, looks like the exports proceed anyway, so might be just cosmetic? https://paste.opendev.org/show/bk2Gv0kKpSihKknpQVEc/ | 05:52 |
*** jroll04 is now known as jroll0 | 07:17 |
mnasiadka | who doesn't like irssi, but they could actually be a bit better on the Matrix plugin front | 07:34 |
mnasiadka | but I guess there's a small risk matrix.org goes away | 07:34 |
opendevreview | Michal Nasiadka proposed opendev/glean master: Add support for CentOS 10 keyfiles https://review.opendev.org/c/opendev/glean/+/941672 | 08:05 |
*** sfinucan is now known as stephenfin | 11:19 |
mnasiadka | Any reason glean doesn't have distro in requirements.txt and has its own version under _vendor/ ? | 12:56 |
fungi | frickler: there are a few console-based clients listed on matrix.org, though since i switched from irssi to weechat over a decade ago i've opted to use a weechat plugin (weechat-matrix-rs, basic chat functionality is working but more advanced features are still wip) | 13:06 |
fungi | mnasiadka: i'd usually consult git history, we're sticklers for thorough commit messages | 13:07 |
fungi | mnasiadka: https://review.openstack.org/545314 added it initially "as not to have external dependencies installed" | 13:09 |
fungi | you'll notice there is no requirements.txt in glean | 13:09 |
frickler | fungi: I tested some, some years ago, and they were essentially nonfunctional, might be time to revisit | 13:16 |
corvus | frickler: wow, the odds of hitting that race condition are fantastic. https://review.opendev.org/950536 should fix it. | 13:26 |
corvus | regarding matrix federation; there are a number of for-profit companies that host homeservers for low cost (i realize that is not sufficient, but it does indicate some ecosystem health). some open source communities host their own homeservers for the use of their constituents (fedora is one). | 13:31 |
corvus | running the server is the easy part, dealing with authentication and abuse is the harder part, which is likely why there aren't a ton of generally available homeservers. but there are some; it looks like this is a list from folks who want to see less concentration on matrix.org: https://servers.joinmatrix.org/ | 13:32 |
fungi | though the same could be said for irc servers | 13:33 |
fungi | dealing with authentication and abuse is why there are whole communities built up around operating irc networks. those problems aren't unique to matrix | 13:34 |
corvus | yep, and of course, we have dealt with irc networks ... let's say "becoming unavailable" :) | 13:34 |
fungi | very much so. if memory serves, that situation was the impetus or at least an accelerant for zuul's community moving fully to matrix when it did | 13:35 |
corvus | yeah, and a repeat of that if matrix.org became unavailable would "only" affect the matrix.org users, not others. that could be most users if we recommend matrix.org, so it's definitely a concern, but how much of a concern depends on our collective choices, and there are options for recovery that don't involve rebuilding everything from scratch. | 13:37 |
mnasiadka | fungi: yeah, after asking that question I found that by myself, if lack of requirements.txt is the end goal - fine by me ;-) | 14:03 |
clarkb | frickler: re decentralization I also like to think about what would happen if the fear comes to pass. We'd change homeservers, move back to IRC, or use another tool (mattermost etc, I dunno) | 14:47 |
clarkb | frickler: given the risk seems low and there are viable alternatives I'm not sure that is a major concern to me. Yes it is annoying to change tooling but it's not the first time and unlikely to be the last despite our best efforts | 14:47 |
clarkb | to me this is similar to the concern with granian vs uwsgi. uwsgi is basically unmaintained at this point, so it has no maintainers. Granian has one maintainer. That is still a risk but we're in a better place if we switch because 1>0 | 14:52 |
clarkb | with matrix vs IRC the benefits are more nebulous and a lot of it depends on opinion, but I do think that matrix is going to be a better experience for those who have struggled with IRC (due to persistent scrollback and web based account signups) | 14:53 |
clarkb | fungi: looking at lists the files have not rotated yet. I seem to recall the first time logrotate runs it makes note of ages of things then you have to wait for the next pass for it to actually do the work? | 15:05 |
clarkb | according to syslog logrotate did run `May 21 00:01:05 lists01 systemd[1]: Finished Rotate log files.` | 15:06 |
fungi | huh... | 15:09 |
fungi | looking | 15:09 |
clarkb | I want to say this is expected behavior and things will rotate on the next run. But ya good to double check | 15:09 |
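The behavior clarkb recalls matches how logrotate keeps per-file state: on its first pass over a newly managed log it only records the file's date in its state file (/var/lib/logrotate/status on Debian/Ubuntu), and actual rotation happens on a later pass once the configured interval has elapsed since that recorded date. A hypothetical fragment for illustration (the path and values here are made up, not the actual lists01 config):

```
# /etc/logrotate.d/mailman-example -- hypothetical fragment
/var/log/mailman/*.log {
    weekly        # first run just records today's date in the state file;
    rotate 4      # rotation happens on a later run, once a week has
    compress      # elapsed relative to the recorded timestamp
    missingok
}
```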
clarkb | then I'd like to proceed with https://review.opendev.org/c/opendev/system-config/+/949778 if there are no objections to update to gerrit 3.10.6 with the plan of restarting gerrit today and triggering an online reindex of changes post restart | 15:14 |
johnsom | Just an observation, but zuul seems to be a lot slower recently. Like I put in a search for the job "octavia-v2-dsvm-cinder-amphora-v2" and I'm sitting on the spinning circle for a while... | 15:17 |
fungi | what spinning circle where? | 15:17 |
fungi | what's the url where you see it? | 15:18 |
johnsom | https://zuul.openstack.org/builds?job_name=octavia-v2-dsvm-cinder-amphora-v2 | 15:19 |
johnsom | The job list just isn't coming up | 15:19 |
fungi | try scoping it by project too | 15:20 |
fungi | non-project-scoped queries can take many times longer to return | 15:20 |
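The scoping fungi suggests is just extra query parameters on the builds page; each filter becomes another predicate that narrows the database query. A sketch of how such URLs are composed (the job and project names are the ones from this conversation; the helper function is hypothetical, not part of Zuul):

```python
from urllib.parse import urlencode

def builds_url(base, **filters):
    """Compose a Zuul builds-page URL; every filter narrows the
    underlying SQL query, so more filters generally means faster."""
    return base + "?" + urlencode(filters)

# Unscoped: the database may scan builds across every project.
slow = builds_url("https://zuul.openstack.org/builds",
                  job_name="octavia-v2-dsvm-cinder-amphora")
# Project-scoped: an additional indexed column cuts the candidate rows down.
fast = builds_url("https://zuul.openstack.org/builds",
                  job_name="octavia-v2-dsvm-cinder-amphora",
                  project="openstack/octavia")
print(fast)
```

The same filters are what the top-left drop-down on the builds page adds for you.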
fungi | i think this has come up in the past with query optimization for the builds page | 15:21 |
clarkb | there have been improvements to the db schemas and queries to make things faster, if I had to guess some update in a recent deployment may have undone some of that effort inadvertently. But I haven't looked at the zuul commit log | 15:21 |
johnsom | Hmm, not sure I know how to do that from the jobs page. It seems to only have a filter for job name | 15:21 |
clarkb | fungi: ya it seems like things go back and forth as features are added to the web and migrations are done to the database | 15:21 |
fungi | the url you showed is the builds page, there's a drop-down in the top-right to add filters | 15:22 |
clarkb | top left not right but yes | 15:22 |
fungi | er, top-left sorry | 15:22 |
johnsom | Ack, thanks. Two pages deeper than I was looking | 15:23 |
johnsom | https://zuul.openstack.org/builds?job_name=octavia-v2-dsvm-cinder-amphora-v2&project=openstack%2Foctavia&skip=0 | 15:23 |
johnsom | Hmm, still no luck | 15:23 |
johnsom | I know the job has been running (though timing out I think) as I see it ran on a patch on 5/13 | 15:25 |
johnsom | https://zuul.opendev.org/t/openstack/build/c9202aea67994e2280044565b42a4b65 | 15:26 |
johnsom | I was hoping to get an idea of what shape that job is in... I'm guessing it needs some love | 15:26 |
clarkb | hrm I'm not seeing any zuul web server or zuul sql driver updates in the last month or so | 15:29 |
clarkb | the database is up and running, as the default "give me the last 50 rows" query is returning data (as are build specific pages) | 15:30 |
fungi | normally the job history link from the top of the build page should get you back something useful | 15:30 |
fungi | er build history | 15:30 |
johnsom | Does that URL work for you? Is it just something on my side? | 15:30 |
fungi | and that's returning https://zuul.opendev.org/t/openstack/builds?job_name=octavia-v2-dsvm-cinder-amphora&project=openstack/octavia | 15:30 |
fungi | which does have results | 15:30 |
fungi | aha, your first url had a bad job name, extra -v2 on the end? | 15:31 |
clarkb | https://zuul.opendev.org/t/openstack/builds?job_name=octavia-v2-dsvm-cinder-amphora&project=openstack%2Foctavia&branch=master&pipeline=check&skip=0 this url works for me | 15:32 |
clarkb | https://zuul.opendev.org/t/zuul/builds?job_name=octavia-v2-dsvm-cinder-amphora&project=openstack%2Foctavia&branch=master&pipeline=check&skip=0 this does not | 15:32 |
fungi | the dashboard doesn't know whether job names it's queried for exist in the database or not, without querying the database to find out, since they can be since-deleted from configuration or ephemeral from changes that never merged too | 15:32 |
clarkb | oh ha I'm in the wrong tenant no wonder | 15:32 |
clarkb | johnsom: ^ double check you didn't do what I did | 15:32 |
johnsom | Ah, interesting, yeah, the extra -v2..... I searched for "octavia-v2-dsvm-cinder-amphora" in jobs and it's last on the list. I clicked that. | 15:32 |
fungi | so that's probably a job which hasn't run in a very long time (or ever) and the first 50 results are so far back in the database (if they even exist) that finding them is taking too long | 15:33 |
johnsom | Some point in history we must have had a bad job... | 15:33 |
clarkb | but ya a job that doesn't exist (either because the name is wrong or you're in the wrong tenant) will produce this behavior | 15:33 |
johnsom | Thank you for the help! I have what I need. | 15:34 |
clarkb | fungi: any thoughts on proceeding with https://review.opendev.org/c/opendev/system-config/+/949778 today (will need a restart of gerrit and we should do the online reindexing after startup) | 15:36 |
clarkb | I guess I can check with the openstack release team to make sure they don't have anything going on | 15:36 |
fungi | yeah, that seems like a good one to knock out. i'm around all day today | 15:39 |
clarkb | ok I'll approve it | 15:40 |
fungi | was going to run some errands, but decided to push them off to after my morning meetings tomorrow instead | 15:40 |
clarkb | I want to get it done so that we can do testing of latest versions of things | 15:40 |
clarkb | as part of 3.11 upgrade prep | 15:40 |
clarkb | change is approved | 15:41 |
fungi | excellent | 15:41 |
mnasiadka | Is there some mirror sync issue with Ubuntu Jammy? Getting "ERROR:kolla.common.utils.keystone-base:libldap-dev : Depends: libldap-2.5-0 (= 2.5.18+dfsg-0ubuntu0.22.04.3) but 2.5.19+dfsg-0ubuntu0.22.04.1 is to be installed" | 15:49 |
clarkb | mnasiadka: if you look at https://mirror.dfw.rax.opendev.org/ubuntu/ there is a timestamp file that is usually good first indication if we haven't updated recently | 15:49 |
fungi | have a build result link? | 15:49 |
clarkb | and yes that timestamp looks old | 15:49 |
mnasiadka | https://5554f32d93e8516e6111-eb074e4d266b226b76de64a5b482d977.ssl.cf1.rackcdn.com/openstack/452259b2e5f34e5f82cca695ee8a9d2d/kolla/build/000_FAILED_keystone-base.log | 15:50 |
fungi | https://grafana.opendev.org/d/9871b26303/afs | 15:50 |
fungi | last updated a month ago but the volume isn't full at least | 15:51 |
fungi | https://static.opendev.org/mirror/logs/reprepro/ubuntu-jammy.log.1 is very long, i wonder if it got stuck on a timeout from a huge mirror content update | 15:52 |
fungi | checking for stale locks now | 15:53 |
fungi | The lock file '/afs/.openstack.org/mirror/ubuntu/db/lockfile' already exists. There might be another instance with the same database dir running. To avoid locking overhead, only one process can access the database at the same time. Do not delete the lock file unless you are sure no other version is still running! | 15:56 |
fungi | i did this: k5start -t -f /etc/reprepro.keytab service/reprepro -- bash -c "rm /afs/.openstack.org/mirror/ubuntu/db/lockfile" | 15:56 |
fungi | now i've got an ubuntu mirror refresh underway in a root screen session on the mirror-update server, will monitor it to make sure it gets caught up asap | 15:57 |
clarkb | fungi: and there was no month old sync running I guess? | 15:57 |
clarkb | and thanks! | 15:57 |
fungi | there was not, no | 15:57 |
fungi | i'm sure it got culled by the timeout | 15:57 |
fungi | but i checked the process list just in case | 15:58 |
fungi | looking at the log i linked earlier, it must have been running for at least an hour because the log was rotated during the run and there's about an hour worth of timestamps in there. unfortunately the previous log is gone due to how long this has been | 16:00 |
clarkb | ya we'll just have to roll forward and figure it out from where things are at today | 16:00 |
fungi | a month's worth of updates will likely take a while | 16:01 |
opendevreview | Michal Nasiadka proposed opendev/glean master: Add support for CentOS 10 keyfiles https://review.opendev.org/c/opendev/glean/+/941672 | 16:17 |
fungi | clarkb: 949778 has hit a post_failure on system-config-upload-image-gerrit-3.11 | 16:20 |
fungi | should we manually dequeue/enqueue to speed things up or just wait and recheck? | 16:20 |
clarkb | fungi: it looks like an rsync data transfer problem not an issue with docker hub apis. Given that I think re-enqueuing should be fine | 16:21 |
clarkb | fungi: did you want to do that or should I? | 16:21 |
fungi | i can, just a sec i'll get it going | 16:21 |
clarkb | thanks | 16:22 |
fungi | okay, it's back in the gate | 16:23 |
clarkb | if it were a docker hub api issue then waiting a bit before retrying can sometimes be helpful | 16:23 |
fungi | i'm going to step away from the screen for a few minutes to put lunch together, since this'll be a while still | 16:24 |
clarkb | enjoy | 16:24 |
fungi | brb | 16:25 |
opendevreview | Michal Nasiadka proposed opendev/glean master: Add support for CentOS 10 keyfiles https://review.opendev.org/c/opendev/glean/+/941672 | 16:29 |
opendevreview | Michal Nasiadka proposed opendev/glean master: Add support for CentOS 10 keyfiles https://review.opendev.org/c/opendev/glean/+/941672 | 16:34 |
clarkb | for post gerrit restart reindexing I think we do this `ssh -p 29418 fooadmin@review gerrit index start changes --force` the --force will be necessary because we're already going to be running the latest index version. This seems to match what we do after renaming repos too so I think this should be what we want | 16:43 |
clarkb | if we want to be extra paranoid we could reindex all the indexes too. But since the change index is the one we've noticed a problem with I figure we can scope to that for now | 16:46 |
opendevreview | Michal Nasiadka proposed opendev/glean master: Add support for CentOS 10 keyfiles https://review.opendev.org/c/opendev/glean/+/941672 | 16:47 |
clarkb | mnasiadka: looking at ^ that latest patchset I think those output files indicate we're writing to /etc/sysconfig/network-scripts and not the network manager keyfiles? | 16:49 |
clarkb | the mock for is_keyfile makes sense to me though so I'm not sure why that is happening. Unless maybe those files were just copied over and still need editing for the keyfile content | 16:50 |
clarkb | as an alternative we could run https://gerrit-review.googlesource.com/Documentation/rest-api-changes.html#index-change the problem with that is we'd want to list changes with recent updates. However, to do that you're relying on the index so I think there is a bit of a chicken and egg to generate that list | 16:58 |
clarkb | We could monitor stream events for a few minutes prior to restarting (I'm not sure if stream events is triggered off of index updates or more directly) | 16:59 |
mnasiadka | clarkb: trying to sort that out, I think it was writing to proper files a patchset or two before - had some weird local outputs (like /etc/NetworkManager/system-connections/eth0 read only) so that's why I pushed it for Zuul to test | 16:59 |
clarkb | anyway thats a long way to say I think just reindexing all changes is simplest | 16:59 |
clarkb | mnasiadka: ack | 16:59 |
fungi | the reindex command looks correct to me | 17:13 |
clarkb | we should be about 5 minutes away from merging | 17:14 |
clarkb | so restart process is now: note old image, pull new image, stop gerrit, move replication waiting queue aside, delete large caches, start gerrit, wait for general functionality then trigger change reindexing | 17:20 |
* clarkb grumbles something about how most of this should be handled by gerrit itself | 17:20 |
clarkb | how about #status notice Gerrit is being updated to the latest 3.10 bugfix release as part of early prep work for an eventual 3.11 upgrade. Gerrit will be offline momentarily while it restarts on the new version. | 17:23 |
opendevreview | Merged opendev/system-config master: Update Gerrit images to 3.10.6 and 3.11.3 https://review.opendev.org/c/opendev/system-config/+/949778 | 17:23 |
clarkb | I've started a root screen on review03 | 17:24 |
clarkb | we're still waiting for the images to promote | 17:24 |
clarkb | opendevorg/gerrit 3.10 ca3978438207 3 weeks ago 691MB <- this is the current image | 17:25 |
fungi | attaching | 17:25 |
*** darmach0 is now known as darmach | 17:27 |
fungi | the promotes completed | 17:28 |
fungi | and deploy is done now | 17:28 |
clarkb | fungi: those two caches I've listed appear to be the largest. I'll reuse your command from last time which clears those two out | 17:28 |
fungi | sgtm | 17:28 |
fungi | lgtm too | 17:29 |
clarkb | fungi: cool I updated the waiting queue move to use today's date and everything else should be the same as last time | 17:29 |
clarkb | that rm command in there looks good? | 17:29 |
fungi | yes | 17:30 |
clarkb | ok I'll do a pull now and then we can probably send the alert and run the command? | 17:30 |
fungi | want me to put the status notice together? | 17:31 |
clarkb | fungi: I drafted one above if it looks good to you at 17:23:49 | 17:31 |
fungi | lgtm, sorry i missed it earlier | 17:31 |
fungi | juggling several unrelated conversations at once, as usual | 17:32 |
clarkb | I think I'm ready if you're ready. Want to send the notice then I'll run the queued up command once it is completed? | 17:32 |
fungi | yep, on it | 17:32 |
fungi | #status notice Gerrit is being updated to the latest 3.10 bugfix release as part of early prep work for an eventual 3.11 upgrade. Gerrit will be offline momentarily while it restarts on the new version. | 17:32 |
opendevstatus | fungi: sending notice | 17:32 |
-opendevstatus- NOTICE: Gerrit is being updated to the latest 3.10 bugfix release as part of early prep work for an eventual 3.11 upgrade. Gerrit will be offline momentarily while it restarts on the new version. | 17:33 |
opendevstatus | fungi: finished sending notice | 17:35 |
fungi | looks like we're gtg | 17:36 |
clarkb | alright I'll run the command now | 17:36 |
clarkb | suddenly I'm worried about signal handling again. Arg | 17:36 |
clarkb | we ran it last time a second time just to confirm it would work and it seemed to but this seems to be stopping slower than I would expect | 17:37 |
fungi | mmm | 17:37 |
fungi | maybe it just takes some time to wrap up? | 17:37 |
clarkb | usually its a few seconds | 17:37 |
fungi | since it's been running a while | 17:37 |
clarkb | like 3-5 I think. But yes we can let it run a while | 17:37 |
fungi | okay, this is verging on "i think it's not going to stop" time | 17:39 |
clarkb | the log does indicate it stopped sshd which is what we saw last time too | 17:39 |
clarkb | where it seems to half stop but the process doesn't die | 17:40 |
fungi | so maybe our signal stuff still isn't quite right | 17:40 |
clarkb | ya it's possible that sigint isn't as equivalent to sighup as I hope. Next time we could manually issue the sighup out of band to docker compose | 17:40 |
corvus | you mean gerrit stopped the mina ssh server? | 17:40 |
clarkb | corvus: ya [2025-05-21T17:36:55.348Z] [ShutdownCallback] INFO com.google.gerrit.sshd.SshDaemon : Stopped Gerrit SSHD | 17:40 |
clarkb | corvus: so it is processing the signal but then not fully shutting down like it did when we used sighup | 17:41 |
clarkb | it's about to get the sigkill due to the timeout | 17:41 |
clarkb | and the command exited non zero so it short circuited | 17:41 |
* fungi grumbles | 17:42 |
clarkb | I'm manually rerunning the commands one by one as a result | 17:42 |
clarkb | [2025-05-21T17:43:27.439Z] [main] INFO com.google.gerrit.pgm.Daemon : Gerrit Code Review 3.10.6-12-gf8d9e0470a-dirty ready | 17:43 |
clarkb | next time we restart gerrit I think we can issue a kill -HUP $gerritpid and see if that is better. If it is we may just have to live with that for a bit? I dunno will have to think about it | 17:44 |
clarkb | I can see my personal dashboard and diffs are loading for the one change I pulled up (they didn't load when I first pulled up the change but load now) | 17:44 |
fungi | yeah, that seems like the next step for figuring it out | 17:44 |
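Whether SIGINT behaves like SIGHUP depends entirely on which handlers the target process installs, which is why a daemon can "half stop" on one signal and exit cleanly on the other. A self-contained illustration (plain Python, not Gerrit) of a child process that ignores SIGINT but shuts down cleanly on SIGHUP:

```python
import signal, subprocess, sys, textwrap, time

# Child process: ignores SIGINT, exits cleanly on SIGHUP.
child_src = textwrap.dedent("""
    import signal, sys, time
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    signal.signal(signal.SIGHUP, lambda signum, frame: sys.exit(0))
    while True:
        time.sleep(0.1)
""")
child = subprocess.Popen([sys.executable, "-c", child_src])
time.sleep(1.0)                   # give the child time to install handlers

child.send_signal(signal.SIGINT)  # analogous to a stop path sending INT
time.sleep(1.0)
print("alive after SIGINT:", child.poll() is None)

child.send_signal(signal.SIGHUP)  # the out-of-band `kill -HUP` approach
child.wait(timeout=10)
print("exit code after SIGHUP:", child.returncode)
```

If the process treats the two signals differently like this, an out-of-band `kill -HUP $pid` succeeds where the compose stop path times out and falls through to SIGKILL.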
corvus | gertty seems happy | 17:45 |
opendevreview | James E. Blair proposed opendev/zuul-providers master: Move periodic image builds to correct pipeline https://review.opendev.org/c/opendev/zuul-providers/+/950591 | 17:45 |
clarkb | `ssh -p 29418 cboylan.admin@review03.opendev.org gerrit show-queue -w` also looks good | 17:45 |
fungi | yeah, everything's looking right to me | 17:45 |
fungi | and the reindex? | 17:46 |
corvus | there's a change you can try opening and reviewing. :) | 17:46 |
clarkb | https://opendev.org/opendev/zuul-providers/commit/a3118c472a5b760a2313ee8ae86fea0d37ea4f2b seems to have replicated | 17:46 |
clarkb | fungi: yup I wanted to check replication ^ but I'll run ssh -p 29418 review03 gerrit index start changes --force now | 17:47 |
opendevreview | James E. Blair proposed opendev/zuul-providers master: Move periodic image builds to correct pipeline https://review.opendev.org/c/opendev/zuul-providers/+/950591 | 17:47 |
fungi | cool, just making sure we don't forget | 17:47 |
clarkb | the show-queue output shows all of that is in the task queue now | 17:47 |
fungi | confirmed | 17:47 |
clarkb | corvus: any reason to not approve https://review.opendev.org/c/opendev/zuul-providers/+/949896 at this point to remove max-ready-age from zuul-providers? | 17:55 |
clarkb | I think we're about 1/3 of the way done with reindexing | 17:56 |
corvus | clarkb: i think that's ready to merge | 17:57 |
corvus | haven't seen any leaky bits recently, so we're either okay to remove that because it's not needed, or, removing that will help us see if anything else more subtle is wrong | 17:58 |
clarkb | ack I'll approve once reindexing is done | 17:58 |
clarkb | looks like the child change should be mergeable too (the one getting image formats from zuul) | 17:58 |
clarkb | as well as the periodic pipeline change | 17:59 |
corvus | yep; the image formats change should be effectively self-testing | 18:00 |
clarkb | oh I have an idea | 18:07 |
clarkb | what if the cache pruning is taking longer than that timeout when it is trying to shut down | 18:07 |
clarkb | I'm wondering if upstream h2 changes have made that cache pruning more effective. Remember we tried it and it seemed to noop? I'm just thinking out loud here wondering if it isn't nooping anymore | 18:08 |
clarkb | that would explain why the second restart we did last time was quick and fine | 18:08 |
opendevreview | Douglas Viroel proposed opendev/system-config master: Add statusbot to openstack-watcher channel https://review.opendev.org/c/opendev/system-config/+/950593 | 18:08 |
fungi | certainly a possibility | 18:08 |
clarkb | since we had manually cleared out the large caches by that point | 18:08 |
clarkb | maybe we should back that out since we're relying on cache deletion | 18:09 |
clarkb | that would also explain why it works in CI which uses sigint in a number of restart locations (particularly with the upgrade testing) | 18:09 |
clarkb | I think we may run the rename playbook somewhere which also restarts things | 18:10 |
clarkb | this is my best guess at the moment. And we saw the two caches were 20GB and 15GB respectively so pruning them down may have been expensive | 18:10 |
clarkb | once things settle down and I've had lunch I'll work up a revert change | 18:12 |
clarkb | hrm we only set that value to 15 seconds. There are 17 caches on disk and 15*17 is 255 seconds which is less than our 300 second timeout so this may not be the answer | 18:16 |
clarkb | but it may be contributing if, say, shutdown wants to take longer than 45 seconds for some reason and we're processing things serially? | 18:16 |
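clarkb's back-of-the-envelope numbers can be checked directly; the cache count and timeout values below are the ones quoted in the conversation:

```python
caches = 17          # h2 cache databases on disk
max_compact = 15     # seconds allowed per cache via h2.maxCompactTime
stop_timeout = 300   # container stop grace period before SIGKILL, seconds

worst_case = caches * max_compact
print(worst_case)    # 255: compaction alone fits inside the 300s timeout
assert worst_case < stop_timeout
```

So serial compaction alone does not exceed the timeout, which is why this can only be a contributing factor rather than the whole explanation.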
opendevreview | Clark Boylan proposed opendev/system-config master: Revert "Set h2.maxCompactTime to 15 seconds" https://review.opendev.org/c/opendev/system-config/+/950595 | 18:22 |
clarkb | [2025-05-21T18:21:00.084Z] [Reindex changes v86-v86] INFO com.google.gerrit.server.index.OnlineReindexer : Reindex changes to version 86 complete | 18:24 |
clarkb | there were three changes that failed. These look like ones we've seen before: "ps 3 of change 19316 failed." "ps 1 of change 19321 failed." "ps 2 of change 11544 failed." | 18:25 |
fungi | yeah, those seem familiar | 18:25 |
fungi | very low numbers | 18:25 |
opendevreview | Merged opendev/zuul-providers master: Remove max-ready-age https://review.opendev.org/c/opendev/zuul-providers/+/949896 | 18:26 |
opendevreview | Merged opendev/zuul-providers master: Move periodic image builds to correct pipeline https://review.opendev.org/c/opendev/zuul-providers/+/950591 | 18:27 |
clarkb | corvus: ^ fyi | 18:27 |
opendevreview | Jeremy Stanley proposed openstack/project-config master: Temporarily require Signed-Off-By in the sandbox https://review.opendev.org/c/openstack/project-config/+/950596 | 18:38 |
opendevreview | Jeremy Stanley proposed openstack/project-config master: Temporarily require Signed-Off-By in the sandbox https://review.opendev.org/c/openstack/project-config/+/950596 | 18:42 |
opendevreview | Jeremy Stanley proposed openstack/project-config master: Allow use of receive.requireSignedOffBy in ACLs https://review.opendev.org/c/openstack/project-config/+/950597 | 18:42 |
clarkb | fungi: I'm about to pop out for lunch. I've disconnected from the screen if you want to look it over and shut it down if you are satisfied all si well I think that is fine at this point | 18:54 |
clarkb | I've approved the acl linter update since that seems non-controversial. I'll let you decide if/when there has been enough feedback on updating sandbox's acls | 18:55 |
clarkb | I guess I didn't confirm that you've got the syntax correct for the acl but worst case gerrit will reject it and we'll fix it (I think it is correct though) | 18:56 |
clarkb | actually I checked on the held 3.11 node where I modified x/test-project and it put `requireSignedOffBy = true` under `[receive]` so I think we're good | 18:57 |
fungi | yeah, screen content lgtm, closing it out now | 18:58 |
corvus | my main concern with sandbox would be confusing new users that are using that as part of training.... maybe set a time limit ahead of time? | 18:58 |
fungi | yeah, that was my main concern with the change as well | 18:58 |
clarkb | looking at https://review.opendev.org/q/project:opendev/sandbox updates are infrequent but steady. I think we can probably update it this afternoon and revert tomorrow with minimal impact | 18:59 |
fungi | do you think an inline comment in the acl is warranted for that (can we do comments in acls?) or just in the commit message? would 3 days be too long? | 18:59 |
clarkb | I don't think an inline comment in the acl helps much since those acls are pretty opaque to end users | 19:00 |
clarkb | the commit message is probably sufficient given that | 19:00 |
clarkb | and 3 days is probably on the long end just looking at the change list for the project. But doable | 19:00 |
fungi | 2 days then? 20z friday? | 19:01 |
clarkb | wfm | 19:01 |
opendevreview | Jeremy Stanley proposed openstack/project-config master: Temporarily require Signed-Off-By in the sandbox https://review.opendev.org/c/openstack/project-config/+/950596 | 19:01 |
clarkb | though as a heads up I'm considering taking friday afternoon off (the weather looks great and I'm suffering some cabin fever) | 19:01 |
clarkb | but +A'ing a change like that should be fine | 19:01 |
fungi | i expect to be around | 19:01 |
opendevreview | Merged openstack/project-config master: Allow use of receive.requireSignedOffBy in ACLs https://review.opendev.org/c/openstack/project-config/+/950597 | 19:02 |
fungi | and i know i said i was planning to be around all day, but now it sounds like i'm popping out for a quick early dinner around the corner. shouldn't be long | 19:03 |
opendevreview | Michal Nasiadka proposed opendev/glean master: Add support for CentOS 10 keyfiles https://review.opendev.org/c/opendev/glean/+/941672 | 19:33 |
mnasiadka | clarkb: ^^ that should work - I can rework the testing so we don't change the distro variable name if needed, but tests should pass now. | 19:34 |
leonardo-serrano | Hi, I'm trying to download a code tar.gz from one of the opendev.org repos and getting an "invalid server response 500". Is this a known issue? | 19:38 |
leonardo-serrano | More specifically, trying to download this link: https://opendev.org/starlingx/apt-ostree/archive/b22d528742c41e37618176f1f09dc1b862444316.tar.gz | 19:39 |
Clark[m] | leonardo-serrano: yes unfortunately that feature in gitea has been broken/flaky for some time now | 19:44 |
Clark[m] | As an alternative you can clone the repo and run git archive locally or do a shallow clone and use the content directly depending on what you are trying to accomplish | 19:45 |
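The `git archive` approach Clark[m] mentions can be exercised entirely locally; this sketch builds a throwaway repository (names made up) and produces the same kind of snapshot tarball the web archive link would serve:

```shell
# Hypothetical local demo; against a real remote you'd first do a
# shallow clone, e.g. `git clone --depth 1 <url>`, then archive HEAD.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
echo "hello" > README
git add README
git -c user.email=dev@example.com -c user.name=Dev commit -qm "add README"
# Produce a tar.gz of the worktree at HEAD, like the web UI's archive link:
git archive --format=tar.gz -o "$tmp/demo.tar.gz" HEAD
tar -tzf "$tmp/demo.tar.gz"
```

Unlike a downloaded snapshot, the clone also keeps the git metadata, which matters for the pbr case fungi raises below where the sdist version is derived from tags.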
leonardo-serrano | I see. Thanks for the suggestions. I'll adjust accordingly | 19:47 |
fungi | yes, we don't encourage fetching tarballs of git repos directly anyway as you lose the metadata and in many cases (e.g. openstack's projects) people assume they're installable when they aren't. it's better if projects explicitly publish distribution tarballs | 20:02 |
fungi | i wish we could completely disable or hide the links to that gitea feature | 20:04 |
fungi | leonardo-serrano: keep in mind that starlingx/apt-ostree uses pbr, so needs the actual git clone in order to be able to make a proper python sdist for it | 20:10 |
fungi | a naive tarball of the worktree is insufficient | 20:10 |
fungi | infra-root: unless there's immediate objection in the next few minutes, i'm self-approving https://review.opendev.org/950596 and scheduling a reminder to myself to revert it at the time mentioned in the commit message | 20:29 |
clarkb | wfm. I'm going to pop out for a bike ride and probably some yard work but should be back in a couple hours | 20:30 |
clarkb | this rain/sun/rinse/repeat cycle means the grass has grown more in the last week or two than it has since last fall | 20:30 |
clarkb | I made the mistake of looking outside during lunch today | 20:30 |
fungi | yeah, i won't have a chance to mow my lawn until sometime next week, and am dreading the inevitable slog that's going to be | 20:32 |
opendevreview | Merged openstack/project-config master: Temporarily require Signed-Off-By in the sandbox https://review.opendev.org/c/openstack/project-config/+/950596 | 20:52 |
opendevreview | Merged opendev/system-config master: Add statusbot to openstack-watcher channel https://review.opendev.org/c/opendev/system-config/+/950593 | 21:20 |
fungi | the ubuntu mirror update completed a little while ago, i'm rerunning now in the same screen session just to make sure it's approximately a no-op the second time through | 21:22 |
Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!