*** dtantsur has joined #openstack-release | 01:03 | |
*** dtantsur|afk has quit IRC | 01:05 | |
*** dtantsur has quit IRC | 01:08 | |
*** dtantsur has joined #openstack-release | 01:12 | |
*** dtantsur has quit IRC | 01:24 | |
*** armax has quit IRC | 01:25 | |
*** dtantsur has joined #openstack-release | 01:29 | |
*** armax has joined #openstack-release | 02:13 | |
*** diablo_rojo has quit IRC | 02:38 | |
*** pmannidi has quit IRC | 03:07 | |
*** vishalmanchanda has joined #openstack-release | 03:43 | |
*** armax has quit IRC | 04:01 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-release | 04:33 | |
*** ykarel has joined #openstack-release | 04:44 | |
*** ricolin_ has joined #openstack-release | 05:15 | |
*** ricolin_ has quit IRC | 05:26 | |
openstackgerrit | Rico Lin proposed openstack/releases master: Release stable branches for heat services https://review.opendev.org/756007 | 05:31 |
ykarel | Tarball missing for 2.1.1 release of heat-agents: https://tarballs.opendev.org/openstack/heat-agents/?C=M;O=D | 05:32 |
ykarel | noticed during package build of it https://review.rdoproject.org/r/#/c/29923/ | 05:32 |
ykarel | release job passed though https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_432/df7a05a48ca0a0a99e442c19849577d0e4fbe2de/release/release-openstack-python/43254e4/job-output.txt | 05:32 |
ykarel | fungi, can you check if the issue from a few days back reappeared ^ | 05:33 |
*** jtomasek has joined #openstack-release | 05:37 | |
*** ykarel has quit IRC | 05:59 | |
*** ykarel has joined #openstack-release | 06:00 | |
*** icey has quit IRC | 06:39 | |
*** icey has joined #openstack-release | 06:41 | |
*** icey has quit IRC | 06:45 | |
*** icey has joined #openstack-release | 06:48 | |
*** vishalmanchanda has quit IRC | 07:02 | |
*** icey has quit IRC | 07:04 | |
*** slaweq has joined #openstack-release | 07:05 | |
*** e0ne has joined #openstack-release | 07:07 | |
*** slaweq has quit IRC | 07:09 | |
*** haleyb has quit IRC | 07:10 | |
*** haleyb has joined #openstack-release | 07:11 | |
*** sboyron has joined #openstack-release | 07:11 | |
*** vishalmanchanda has joined #openstack-release | 07:12 | |
*** slaweq has joined #openstack-release | 07:14 | |
*** icey has joined #openstack-release | 07:18 | |
*** tosky has joined #openstack-release | 07:41 | |
*** rpittau|afk is now known as rpittau | 07:41 | |
*** ykarel_ has joined #openstack-release | 07:43 | |
*** ykarel has quit IRC | 07:44 | |
*** ykarel__ has joined #openstack-release | 07:44 | |
ttx | ykarel_: nice catch | 07:46 |
ttx | we definitely need that up and working before we launch the final release steps | 07:47 |
*** ykarel_ has quit IRC | 07:47 | |
*** ykarel__ is now known as ykarel | 07:52 | |
*** icey has quit IRC | 07:57 | |
*** icey has joined #openstack-release | 08:00 | |
*** tosky_ has joined #openstack-release | 08:12 | |
*** tosky is now known as Guest98160 | 08:13 | |
*** tosky_ is now known as tosky | 08:13 | |
*** Guest98160 has quit IRC | 08:15 | |
*** jbadiapa has quit IRC | 08:32 | |
*** ykarel_ has joined #openstack-release | 09:03 | |
*** ykarel has quit IRC | 09:03 | |
*** ykarel_ is now known as ykarel | 09:25 | |
*** ykarel has quit IRC | 10:50 | |
*** ykarel has joined #openstack-release | 10:50 | |
*** jbadiapa has joined #openstack-release | 11:20 | |
fungi | ykarel: ttx: there was another server hung in rackspace we had to forcibly reboot a couple of days ago (we suspect it's a problem with live migration on their xen deployment but hard to know). looks like it's left things in an inconsistent state again such that one of the read/write volumes isn't getting successfully released to the read-only replicas, but the script which periodically releases | 11:28 |
fungi | them all seems to be short-circuiting before it reaches the tarballs volume. i've manually released the tarballs replicas for now while i pinpoint the problem volume | 11:28 |
fungi | aha, that one was simpler, turns out we just had a hung vos release script still running from two days ago, never terminated after things failed over to the redundant fileserver | 11:33 |
fungi | so it was still holding our safety lockfile | 11:33 |
fungi | yep, looks like that's gotten everything back on track again | 11:36 |
fungi | ykarel: thanks for bringing it to my attention! | 11:36 |
ykarel | fungi, Thanks | 11:43 |
ttx | thanks fungi! | 12:19 |
fungi | any time! | 12:23 |
*** jtomasek has quit IRC | 12:37 | |
*** jtomasek has joined #openstack-release | 12:41 | |
*** sboyron_ has joined #openstack-release | 13:03 | |
*** sboyron has quit IRC | 13:03 | |
*** vishalmanchanda has quit IRC | 13:03 | |
*** evrardjp has quit IRC | 13:04 | |
*** ttx has quit IRC | 13:04 | |
*** evrardjp has joined #openstack-release | 13:04 | |
*** vishalmanchanda has joined #openstack-release | 13:04 | |
*** ttx has joined #openstack-release | 13:09 | |
openstackgerrit | Merged openstack/releases master: Release stable branches for heat services https://review.opendev.org/756007 | 13:10 |
*** ttx has quit IRC | 13:16 | |
*** ttx has joined #openstack-release | 13:17 | |
openstackgerrit | Bernard Cafarelli proposed openstack/releases master: Neutron stable releases (Stein, Train, Ussuri) https://review.opendev.org/755203 | 13:22 |
openstackgerrit | Merged openstack/releases master: Neutron stable releases (Stein, Train, Ussuri) https://review.opendev.org/755203 | 14:23 |
*** icey has quit IRC | 14:36 | |
*** icey has joined #openstack-release | 14:36 | |
hberaud | fungi: not sure but I think that heat-dashboard faced a similar issue https://624a3940f1199870a766-186f218c38a8ba58edb43244f86e77cb.ssl.cf1.rackcdn.com/42ce3d15f0cbe40d530419678d40701903bd5445/tag/publish-openstack-releasenotes-python3/667bfd0/job-output.txt | 14:37 |
fungi | hberaud: similar to what? | 14:38 |
hberaud | fungi: your previous discussion on this channel | 14:38 |
smcginnis | rsync: failed to set permissions on "/afs/.openstack.org/docs/releasenotes...." | 14:39 |
fungi | the tarballs.opendev.org site was serving stale content for ~2.5 days, so anything released during that time didn't appear on the site until a couple hours ago | 14:39 |
smcginnis | Looks like some sort of AFS issue. | 14:39 |
smcginnis | Starting at "2020-10-07 13:25:37.407447" | 14:39 |
fungi | yeah, doesn't sound at all similar | 14:39 |
hberaud | ack | 14:39 |
fungi | but i'll look into it | 14:40 |
hberaud | thanks | 14:40 |
*** armax has joined #openstack-release | 14:40 | |
fungi | for the record, the zuul build result pages are far easier to use to get to the bottom of job failures; if someone gives me a link to the raw logs i always wind up having to hunt for the zuul build id in them so i can get to the build result view | 14:42 |
hberaud | ack, noted | 14:44 |
fungi | looking at that error, the most likely cause would be two different concurrent builds updating the heat-dashboard release notes at the same time since they're relying on in-tree tempfiles and use the delete option of rsync, one could easily remove the other's tempfile | 14:47 |
fungi | i'll see if i can find two runs, maybe triggered on different branches, which were running rsync at the same time | 14:47 |
fungi | yeah, looks like there were publish-openstack-releasenotes-python3 jobs running simultaneously for heat-dashboard 1.5.1, 2.0.2 and 3.0.1 tags | 14:51 |
smcginnis | Something we've run into with publishing normal docs. | 14:56 |
smcginnis | We added a semaphore for that to prevent it, or at least reduce the chances. | 14:57 |
smcginnis | Not sure if we would need to do the same here. | 14:57 |
*** priteau has joined #openstack-release | 14:58 | |
*** slaweq_ has joined #openstack-release | 14:58 | |
*** slaweq has quit IRC | 14:58 | |
*** ykarel is now known as ykarel|away | 15:00 | |
fungi | https://zuul.opendev.org/t/openstack/build/667bfd0d8f4843ee8710f0d54285538c/log/job-output.txt#3335-3360 | 15:09 |
fungi | https://zuul.opendev.org/t/openstack/build/1be889bccad047aeacff3b5d56bce393/log/job-output.txt#3333-3334 | 15:09 |
fungi | confirmed, those were trying to rsync the same tree at the same time, one succeeded and one did not because the other one removed a tempfile | 15:09 |
clarkb | I can't recall, is there a reason we aren't using supercedent pipelines for those jobs? | 15:10 |
clarkb | I'm guessing because each job needs to run for every ref? | 15:11 |
fungi | the release notes job may not need to | 15:11 |
fungi | though we'd likely want a special tag-triggered supercedent pipeline if we were going to take that route | 15:12 |
fungi | and that might be the only job it runs | 15:12 |
*** ykarel|away has quit IRC | 15:32 | |
smcginnis | fungi, clarkb: Would be easy enough to do this for the publish-openstack-releasenotes-python3 job too: https://review.opendev.org/#/c/724727/2/zuul.d/jobs.yaml | 15:48 |
smcginnis | I think the only concern at the time was adding delays if the jobs were not actually related, but I think it runs infrequently enough that it doesn't really make much of a difference. | 15:49 |
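For readers unfamiliar with the mechanism being discussed: a job-level semaphore in Zuul configuration might look roughly like the sketch below. This is only an illustration under assumptions; the semaphore name, max value, and trimmed job stanza are made up here and are not the contents of the change linked above.

```yaml
# Hypothetical zuul.d/jobs.yaml excerpt: serialize release-notes publication
# so two concurrent builds cannot delete each other's rsync tempfiles.
# The semaphore name "releasenotes-publish" is an illustrative assumption.
- semaphore:
    name: releasenotes-publish
    max: 1

- job:
    name: publish-openstack-releasenotes-python3
    # Only one build holding this semaphore runs at a time; any other
    # builds of the job wait for the semaphore before starting.
    semaphore: releasenotes-publish
```

With max set to 1, Zuul queues additional builds rather than running them concurrently, which is the delay trade-off discussed in the surrounding conversation.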
fungi | and they don't take long to run, generally | 16:05 |
*** rpittau is now known as rpittau|afk | 16:06 | |
*** slaweq has joined #openstack-release | 16:06 | |
*** slaweq_ has quit IRC | 16:07 | |
fungi | smcginnis: that should work, with the caveat that a bunch of builds of that job for different projects could queue up behind one another during mass releases | 16:07 |
fungi | clarkb: rethinking, i don't expect the supercedent idea to work because it only collapses items for the same project+branch, and even with branch guessing these tags were for different branches | 16:08 |
clarkb | oh good point | 16:09 |
fungi | i'm not actually certain what supercedent would do with tag events, but generally i think it would end up being equivalent to independent | 16:09 |
clarkb | for some reason I thought it was supercedent per project (branch didn't matter) | 16:09 |
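For context on the alternative being discussed: a tag-triggered supercedent pipeline might be declared roughly as in the sketch below (the pipeline name and gerrit trigger details are illustrative assumptions, not an actual opendev pipeline). As fungi notes above, the supercedent manager only collapses queue items for the same project and branch, so it would not have serialized the three heat-dashboard tag builds, each of which mapped to a different branch.

```yaml
# Hypothetical pipeline definition; name and trigger details are assumptions.
- pipeline:
    name: release-notes-tag
    manager: supercedent
    trigger:
      gerrit:
        # Run on tag pushes; items for the same project+branch supersede
        # each other, but different branches still run in parallel.
        - event: ref-updated
          ref: ^refs/tags/.*$
```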
smcginnis | This happens (the failure) rarely enough that it probably isn't too critical. But I'll propose a patch similar to the above so we don't have to figure it out again every time it actually does run into this. | 16:09 |
clarkb | I think the biggest issue is that every time it happens we (opendev/infra) get asked to debug it :) | 16:11 |
hberaud | smcginnis: wouldn't too many queued things be an issue for us? (cf. fungi's comment) | 16:11 |
smcginnis | clarkb: Yep. ;) If we can avoid that, then I think it's good to have to live with a potentially small delay. | 16:11 |
hberaud | smcginnis: from my POV I don't think that this is an issue | 16:11 |
smcginnis | hberaud: Usually not. The main issue is when multiple jobs are running for the same repo. So like the case today where one patch was releasing three stable branches. | 16:12 |
fungi | smcginnis: an optimization which was proposed was to add a special kind of semaphore (or special flag for a semaphore) which limited its conflict scope so it would only block subsequent builds if triggered by events for the same project | 16:12 |
smcginnis | So it's really not a big deal. As long as one of those passes, then it will at least get published. | 16:12 |
hberaud | got it | 16:12 |
smcginnis | fungi: Is that a mechanism that is present in zuul today? Or would that require a larger effort to enable? | 16:13 |
fungi | i don't think anyone has written an implementation yet | 16:13 |
smcginnis | If a standard semaphore looks like it causes unreasonable delays, then maybe it might be worth doing that. | 16:15 |
smcginnis | But for now, I don't think it would be an issue to just limit concurrency. | 16:15 |
hberaud | +1 | 16:15 |
clarkb | smcginnis: do you think release day next week will create enough tags that a semaphore like that will be problematic in that specific situation? | 16:17 |
clarkb | I agree during the rest of a release cycle it should be a non-issue, but I'm not sure about when we make a ton of releases all in one short window | 16:17 |
hberaud | clarkb: They will be on different repos | 16:19 |
hberaud | clarkb: so I don't think it will be an issue during the final release | 16:20 |
smcginnis | Yeah, if it really is only when the same repo is being tagged, then the coordinated release should be fine. | 16:20 |
smcginnis | And I don't think we ever hit this in the past; I think it didn't start happening until fairly recently. | 16:20 |
smcginnis | But it would have been before the ussuri release, and I don't recall hitting any issues there. | 16:21 |
hberaud | maybe because release note syncs are now on several branches | 16:21 |
*** tosky has quit IRC | 16:21 | |
hberaud | so the likelihood of facing this has increased | 16:23 |
clarkb | got it | 16:24 |
*** dtantsur is now known as dtantsur|afk | 16:24 | |
hberaud | In other words, the likelihood of triggering a similar scenario increases each time a cycle finishes and a new stable branch is created. PTLs and liaisons often release their stable branches through a single patch, so each new stable branch increases the probability of triggering this scenario | 16:34 |
*** ykarel has joined #openstack-release | 17:10 | |
*** ykarel has quit IRC | 17:18 | |
*** e0ne has quit IRC | 17:23 | |
*** armstrong has joined #openstack-release | 18:15 | |
*** vishalmanchanda has quit IRC | 18:22 | |
armstrong | Hello | 18:46 |
armstrong | I ran the command (at line 469: ./tools/list_rc_updates.sh) due for this week (Wednesday) and got some output, which I need some help interpreting. Can anyone help? | 19:09 |
*** priteau has quit IRC | 19:12 | |
smcginnis | armstrong: Hey! If you can put the output into a paste, I can take a look and walk through it with you. | 19:13 |
smcginnis | Or etherpad if we want to make any notes on them. | 19:16 |
smcginnis | Bullet item 4 has the full instructions for the task: https://releases.openstack.org/reference/process.html#r-1-week-final-rc-deadline | 19:16 |
smcginnis | The key thing being the second sub-bullet: "Propose patches creating a new RC for those that have unreleased bugfixes or updated translations" | 19:17 |
smcginnis | So what we are looking for is if any repos have merged changes that either address bugs or include translations. | 19:17 |
smcginnis | We can ignore all of the "update .gitreview for stable/victoria" and related patches. | 19:17 |
smcginnis | Once we look through the list and see which repos have merged anything of interest, we can use that list of projects to generate release requests. | 19:18 |
smcginnis | Examples from the last cycle can be found here: https://review.opendev.org/#/q/topic:r1-final-rc-deadline+(status:open+OR+status:merged) | 19:18 |
smcginnis | Tomorrow this can be run to get the final list of interesting projects. Then the new-release command can be run, or you could put the list of repos in a text file and use that to run tools/propose_auto_releases.sh to iterate through the list and get new RC releases proposed. | 19:20 |
*** tosky has joined #openstack-release | 19:26 | |
armstrong | ok doing that right now | 19:27 |
armstrong | https://usercontent.irccloud-cdn.com/file/wlwWRPpB/command_output.md | 19:37 |
armstrong | smcginnis: can you open the file? | 19:37 |
smcginnis | armstrong: Can you just use paste.openstack.org or etherpad.openstack.org? | 19:39 |
armstrong | ok | 19:40 |
armstrong | http://paste.openstack.org/show/798806/ | 19:43 |
smcginnis | Scanning through the output. | 19:43 |
smcginnis | Looks like cinder has some significant changes merged. | 19:44 |
smcginnis | Maybe cloudkitty. | 19:44 |
smcginnis | heat-dashboard has translations. | 19:44 |
smcginnis | Ah, here's a tricky part. Looks like it includes cycle-trailing deliverables too. | 19:44 |
smcginnis | So we can ignore kayobe. | 19:45 |
smcginnis | And kayobe-config-dev. | 19:45 |
smcginnis | And kolla-ansible. | 19:45 |
smcginnis | kuryr-kubernetes has something. | 19:46 |
smcginnis | A few with CI related changes. Those can be ignored. | 19:46 |
smcginnis | neutron has something. | 19:47 |
armstrong | ok, how do we remove them? | 19:47 |
smcginnis | octavia-dashboard has translations. | 19:47 |
smcginnis | Remove which? | 19:47 |
armstrong | those to ignore | 19:47 |
johnsom | Yeah, I will post an RC2 for the translations here in an hour or so | 19:47 |
smcginnis | johnsom: Awesome, thanks! | 19:47 |
smcginnis | armstrong: We just won't propose releases for them. | 19:48 |
armstrong | ok | 19:48 |
smcginnis | senlin-dashboard has translations. | 19:48 |
smcginnis | Trove has some things. | 19:48 |
smcginnis | Not bad, looks like that's it. | 19:49 |
smcginnis | armstrong: So does it make sense what we are looking for? | 19:49 |
armstrong | ok | 19:49 |
armstrong | Yes it does, but I'm just wondering if we have to do something with the result. | 19:50 |
smcginnis | armstrong: Did you read through bullet 4 in the process doc that I mentioned: https://releases.openstack.org/reference/process.html#r-1-week-final-rc-deadline | 19:51 |
slaweq | smcginnis: hi, may I ask You about one thing, regarding ovsdbapp | 19:51 |
smcginnis | slaweq: Sure! | 19:52 |
slaweq | smcginnis: it seems that we need to bump minimum version of ovs there, patch is here https://review.opendev.org/#/c/756329/ | 19:52 |
armstrong | Oh, sorry smcginnis i missed that | 19:52 |
armstrong | smcginnis: just saw it, thanks | 19:52 |
slaweq | can we do that now, or should we maybe wait until victoria is released and do it later? | 19:53 |
openstackgerrit | Michael Johnson proposed openstack/releases master: Release octavia-dashboard 6.0.0rc2 https://review.opendev.org/756581 | 19:54 |
smcginnis | slaweq: Requirements for stable/victoria are frozen until the coordinated release goes out. | 19:54 |
johnsom | Well, I had a break between meetings, so there we are. That should be it for Octavia. No other RC2 plans | 19:54 |
smcginnis | So I think it's fine for you to raise the lower-constraints in the repo. But we should probably wait until after next Wednesday to release that so we can update global requirements right away. | 19:55 |
smcginnis | johnsom: Perfect, thanks! | 19:55 |
slaweq | smcginnis: ok, so the cherry-pick can be merged, but don't do a new release of ovsdbapp until next Wednesday at least | 19:56 |
slaweq | understood | 19:56 |
slaweq | thx a lot | 19:56 |
smcginnis | slaweq: 👍 | 19:57 |
smcginnis | armstrong: So back to the process doc, tomorrow it would be good to run the script again and check for any final updates. | 19:57 |
smcginnis | armstrong: Then right after getting that list, identify which ones look like they should have an RC2 and get new releases proposed. | 19:57 |
armstrong | Smcginnis: Sure, I will do that tomorrow | 19:58 |
smcginnis | armstrong: That can be done either one by one using the new-release script, or you can add a list of them to a local text file as you go through (makes it easy to edit the list) and use propose_auto_releases.sh to get patches up. | 19:58 |
armstrong | ok | 19:59 |
smcginnis | Oops, wrong name of the second script: process_auto_releases.sh | 19:59 |
smcginnis | https://releases.openstack.org/reference/using.html#toos-process-auto-releases-sh | 19:59 |
armstrong | smcginnis: OK, noted | 20:00 |
openstackgerrit | Slawek Kaplonski proposed openstack/releases master: Release Neutron RC2 for Victoria https://review.opendev.org/756582 | 20:03 |
*** slaweq has quit IRC | 20:14 | |
*** gmann is now known as gmann_lunch | 20:22 | |
*** slaweq has joined #openstack-release | 20:23 | |
*** mgoddard has quit IRC | 20:31 | |
*** mgoddard has joined #openstack-release | 20:32 | |
*** jbadiapa has quit IRC | 20:33 | |
*** slaweq has quit IRC | 20:58 | |
*** gmann_lunch is now known as gmann | 21:04 | |
*** openstackgerrit has quit IRC | 22:07 | |
*** sboyron_ has quit IRC | 22:16 | |
*** tosky has quit IRC | 22:22 | |
*** openstackgerrit has joined #openstack-release | 23:22 | |
openstackgerrit | Brian Rosmaita proposed openstack/releases master: Proposing victoria RC2 for cinder https://review.opendev.org/756599 | 23:22 |