opendevreview | Elod Illes proposed openstack/openstack-manuals master: [www] Set Train series state as End of Life https://review.opendev.org/c/openstack/openstack-manuals/+/909509 | 08:31 |
opendevreview | Elod Illes proposed openstack/openstack-manuals master: [www] Set Train series state as End of Life https://review.opendev.org/c/openstack/openstack-manuals/+/909509 | 12:59 |
opendevreview | Elod Illes proposed openstack/openstack-manuals master: [www] Set Train series state as End of Life https://review.opendev.org/c/openstack/openstack-manuals/+/909509 | 13:00 |
elodilles | frickler: i've updated the patch ^^^ | 13:02 |
opendevreview | Merged openstack/openstack-manuals master: [www] Set Stein series state as End of Life https://review.opendev.org/c/openstack/openstack-manuals/+/904720 | 13:19 |
opendevreview | Riccardo Pittau proposed openstack/election master: Adding Riccardo Pittau candidacy for Ironic PTL https://review.opendev.org/c/openstack/election/+/909534 | 13:36 |
opendevreview | Takashi Kajinami proposed openstack/governance master: Retire PowerVMStacker SIG https://review.opendev.org/c/openstack/governance/+/909540 | 13:59 |
opendevreview | Merged openstack/openstack-manuals master: [www] Set Train series state as End of Life https://review.opendev.org/c/openstack/openstack-manuals/+/909509 | 14:16 |
opendevreview | Takashi Kajinami proposed openstack/governance master: Retire PowerVMStacker SIG https://review.opendev.org/c/openstack/governance/+/909540 | 14:16 |
fungi | tc-members: ^ not sure what the actual intended implementation is when repos get retired twice (once as part of a project team when moving to a sig, then when the sig is retired) | 15:09 |
fungi | but also, it reminds me that we probably haven't done a good job of making sure legacy.yaml is updated when deliverables or entire projects get retired, so could be worth a quick audit to make sure we're not missing any | 15:09 |
fungi | unrelated, rosmaita's recent messaging around unmaintained-core has been a resounding success! https://review.opendev.org/c/openstack/project-config/+/908911 "Neutron team decided not to create now this new group. Anyone is able to join the um general group if needed to merge Neutron related patches." | 15:11 |
rosmaita | \o/ | 15:12 |
fungi | oh, also, for anyone interested in discussion of ai/machine-generated code contributions and such, in case you're not following the public openinfra-board ml, there's a call coming up 15:00 utc thursday which i'm told is open for participation from anyone who wants to help: | 15:21 |
fungi | https://lists.openinfra.dev/archives/list/foundation-board@lists.openinfra.dev/message/JJCH2CRGQ5HFWQ46LVE4N6GG6Q7YMF5B/ | 15:21 |
dansmith | fungi: double ugh | 15:35 |
opendevreview | Elod Illes proposed openstack/openstack-manuals master: [www] Set Ussuri series state as End of Life https://review.opendev.org/c/openstack/openstack-manuals/+/909602 | 17:19 |
opendevreview | Elod Illes proposed openstack/openstack-manuals master: [www] Set Yoga series state as Unmaintained https://review.opendev.org/c/openstack/openstack-manuals/+/909603 | 17:25 |
JayF | tc-members: meeting in ~10 minutes | 17:49 |
fungi | irc this week, not voice? i never can remember, ml announcement didn't say either way | 17:53 |
JayF | irc | 17:54 |
dansmith | day(date) < 7 && voice || irc | 17:54 |
fungi | thanks! | 17:54 |
JayF | sorry for omitting that, I was late getting the agenda out so I did it from memory instead of my template | 17:54 |
* fungi engraves that formula in his eyeballs | 17:54 |
clarkb | fungi: that sounds worse than floaters | 17:57 |
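dansmith's one-liner, read literally (and assuming the convention it encodes is that the meeting falling in the first week of each month is a voice call, with every other week on IRC), amounts to something like:

```python
# A literal reading of dansmith's joke formula, under the assumption
# noted above; not an official schedule rule.
from datetime import date

def tc_meeting_format(today: date) -> str:
    return "voice" if today.day < 7 else "irc"

# Feb 20 is well past the first week, hence today's meeting is on IRC.
assert tc_meeting_format(date(2024, 2, 20)) == "irc"
```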
JayF | .startmeeting tc | 18:00 |
JayF | #startmeeting tc | 18:00 |
opendevmeet | Meeting started Tue Feb 20 18:00:44 2024 UTC and is due to finish in 60 minutes. The chair is JayF. Information about MeetBot at http://wiki.debian.org/MeetBot. | 18:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 18:00 |
opendevmeet | The meeting name has been set to 'tc' | 18:00 |
JayF | Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct. | 18:00 |
JayF | Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee. | 18:00 |
JayF | #topic Roll Call | 18:01 |
JayF | o/ | 18:01 |
dansmith | o/ | 18:01 |
gmann | o/ | 18:01 |
spotz[m] | o/ | 18:01 |
frickler | \o | 18:01 |
rosmaita | o/ | 18:01 |
knikolla | O/ | 18:01 |
slaweq | o/ | 18:01 |
JayF | #info One indicated absence on the agenda: jamespage | 18:01 |
JayF | And everyone else is here! | 18:01 |
JayF | #topic Followup on Action Items | 18:02 |
JayF | rosmaita appears to have sent the email as requested last meeting, thanks for that | 18:02 |
rosmaita | ;) | 18:02 |
JayF | Any comments about that action item to send an email inviting people to global unmaintained group? | 18:02 |
JayF | moving on | 18:02 |
JayF | #topic Gate Health Check | 18:02 |
frickler | the promised followup from tonyb and elodilles is still pending | 18:02 |
JayF | ack | 18:02 |
JayF | #action JayF to reach out to Tony and Elod about follow-up to unmaintained group email | 18:03 |
JayF | I'll reach out to them | 18:03 |
JayF | Anything on the gate? | 18:03 |
dansmith | nova just merged a change to our lvm-backed job to bump swap to 8G. I know that's sort of controversial because "we'll just consume the 8G now", but | 18:03 |
dansmith | we've spent a LOT of time looking at OOM issues | 18:03 |
dansmith | I think we should consider just bumping the default on all jobs to 8G | 18:03 |
dansmith | we've also got the zswap bit up and available and I think we should consider testing that in more jobs as well | 18:04 |
clarkb | note that will make jobs run slower as we can no longer use fallocated sparse files for swap files | 18:04 |
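For context on clarkb's point: a fallocated swap file is created almost instantly, but swapon rejects files with holes on many filesystems, so a full-size swap file generally has to be written out with real zeros, and at 8G that costs noticeable job setup time. A rough sketch of the two paths, assuming a plain dd/mkswap setup rather than devstack's actual roles:

```python
import subprocess

def make_swapfile(path: str, size_mb: int, sparse: bool) -> None:
    if sparse:
        # Nearly instant: reserves the range without writing data.
        subprocess.run(["fallocate", "-l", f"{size_mb}M", path], check=True)
    else:
        # Writes every block; setup time grows linearly with size_mb.
        subprocess.run(
            ["dd", "if=/dev/zero", f"of={path}", "bs=1M", f"count={size_mb}"],
            check=True,
        )
    subprocess.run(["chmod", "600", path], check=True)
    subprocess.run(["mkswap", path], check=True)
    subprocess.run(["swapon", path], check=True)  # needs root
```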
JayF | I think Ironic is one of the most sensitive projects to I/O performance, so I should be able to see whether it impacts our jobs; treat us as a bit of a canary. | 18:04 |
dansmith | speaking of, JayF I assume ironic is pretty tight on ram? | 18:04 |
JayF | (Ironic CI, I mean) | 18:04 |
fungi | the zuul update over the weekend dropped ansible 6 support. i haven't seen anyone mention being impacted (it's not our default ansible version) | 18:04 |
dansmith | JayF: see how zswap impacts you mean? | 18:04 |
JayF | more swap usage generally | 18:05 |
dansmith | okay | 18:05 |
slaweq | IIRC we tried that some time ago and yes, it helps with OOM but indeed, as clarkb said, it makes many jobs slower and they time out more often | 18:05 |
JayF | I suspect zswap would help in all cases | 18:05 |
rosmaita | dansmith: which jobs are running zswap now? | 18:05 |
dansmith | rosmaita: I think just nova-next | 18:05 |
rosmaita | ok | 18:05 |
dansmith | slaweq: which zswap? | 18:05 |
dansmith | er, s/which/which,/ | 18:06 |
slaweq | no, with swap bump to 8GB or something like that | 18:06 |
dansmith | ah, well, a number of our jobs are already running 8G, like the ceph one because there's no way it could run without it | 18:06 |
JayF | dansmith: https://zuul.opendev.org/t/openstack/build/fec0dc577f204ea09cf3ee37bec9183f is an example Ironic job if you know exactly what to look for w/r/t memory usage; it's a good question I can't quickly answer during meeting | 18:06 |
dansmith | zswap will not only make more swap available, but it also sort of throttles the IO, which is why I'm interested in ironic's experience with it | 18:07 |
JayF | zswap will also lessen the actual I/O hitting the disk | 18:07 |
gmann | I think monitoring nova-next first will be good to see how slow it gets. I am worried about doing it on slow-test/multinode jobs | 18:07 |
slaweq | with zswap it may be better indeed | 18:07 |
fungi | at the expense of cpu i guess? | 18:07 |
JayF | we could also experiment with zram, I suspect | 18:07 |
dansmith | gmann: doing which, 8G or zswap? | 18:07 |
JayF | fungi: yep, but I/O is the long pole in every Ironic job I've seen so far | 18:07 |
gmann | dansmith: 8 gb | 18:07 |
fungi | makes sense. i agree i/o bandwidth seems to be the scarcest resource in most jobs | 18:08 |
dansmith | fungi: yes, but we're starting to think that IO bandwidth is the source of a lot of our weird failures that don't seem to make sense | 18:08 |
dansmith | gmann: okay | 18:08 |
clarkb | which is made worse by noisy neighbors swapping | 18:08 |
fungi | which in many cases are also our jobs ;) | 18:08 |
JayF | So if we can reduce actual disk I/O through things like zswap (I'm really curious about zram, too), it should have an improvement | 18:08 |
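For readers following along, turning zswap on at the node level is a matter of poking the standard kernel knobs under /sys/module/zswap; a hypothetical sketch is below (the real nova-next change is devstack/Zuul configuration, not a script like this):

```python
# Hypothetical sketch: zswap keeps a compressed pool in RAM in front of
# the swap device, so pages are compressed before any disk I/O happens.
# Parameter names are the standard kernel zswap module parameters; the
# chosen compressor must be available in the running kernel.
ZSWAP = "/sys/module/zswap/parameters"

def enable_zswap(max_pool_percent: int = 20, compressor: str = "lz4") -> None:
    for param, value in (
        ("enabled", "1"),
        ("max_pool_percent", str(max_pool_percent)),
        ("compressor", compressor),
    ):
        with open(f"{ZSWAP}/{param}", "w") as f:  # requires root
            f.write(value)
```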
dansmith | clarkb: I'm totally open to other suggestions, but a lot of us have spent a lot of time debugging these failures lately, | 18:09 |
dansmith | including guest kernel crashes that seem to never manifest on anything locally, even with memory pressure applied | 18:09 |
dansmith | honestly, I'm pretty burned out on this stuff | 18:10 |
clarkb | dansmith: I haven't looked recently but every time I have in the past there is at least one openstack service that is using some unreasonable (in my opinion) amount of memory | 18:10 |
clarkb | the last one I saw doing that was privsep | 18:10 |
dansmith | and trying to care when it seems other people don't, or just want to say "no it's not that" | 18:10 |
clarkb | and ya I totally get the others not caring making it difficult to care yourself | 18:10 |
clarkb | I think it would be useful for openstack to actually do a deeper analysis of memory utilization (I think bloomberg built a tool for this?) and determine if openstack needs a diet | 18:11 |
clarkb | if not then proceed with adjusting the test env. But I strongly suspect that we can improve the software | 18:11 |
clarkb | memray is the bloomberg tool | 18:11 |
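memray is Bloomberg's open-source allocation profiler. A minimal sketch of the kind of measurement clarkb is suggesting; the service entry point is a hypothetical placeholder, and pointing it at a real API server process would be the actual experiment:

```python
import memray

def start_service():
    # Hypothetical stand-in for an OpenStack service entry point.
    ...

# Record every allocation (including native ones) while the service runs.
with memray.Tracker("service-allocations.bin", native_traces=True):
    start_service()

# Afterwards, "memray flamegraph service-allocations.bin" renders an
# HTML flamegraph showing where the memory was allocated.
```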
JayF | clarkb: I think the overriding question is: who is going to do this work? Where does it fall in priority amongst the dozens of high priority things we're working on as well? | 18:11 |
clarkb | JayF: I mean priority wise I would say issues affecting most projects in a continuous fashion should go right to the top | 18:12 |
JayF | The real resources issue isn't in CI; it's not having enough humans with the ability to dig deep and fix CI issues to tag-out those who have been doing that for a long time | 18:12 |
dansmith | JayF: right, I think the implication is that we haven't tried dieting, and I resent said implication :) | 18:12 |
clarkb | I don't know how to find volunteers | 18:12 |
dansmith | JayF: we're all trying to make changes we think will be the best bang for the buck | 18:12 |
JayF | yeah, exactly | 18:12 |
dansmith | because those of us that sell this software have a definite interest in it being efficient and high-quality | 18:12 |
dansmith | so anyone acting like we don't care about those things is a real turn-off | 18:13 |
JayF | dansmith: Can you take an action to email a summary of the memory pressure stuff to the list, and include a call for help in it? Especially if we have a single project (e.g. privsep?) that we can point a finger at | 18:13 |
dansmith | JayF: TBH, I don't want to do that because I feel like I'm the one that has _been_ doing that and I'm at 110% burnout level | 18:13 |
JayF | In the meantime I think it's exceedingly reasonable to adjust the swap size, look into environmental helps such as zswap | 18:14 |
dansmith | (not necessarily via email lately, it's been a year or so) | 18:14 |
JayF | Is there any TC member with intimate knowledge of the CI issues willing to send such an email? | 18:14 |
knikolla | I can take a stab at profiling memory usage. I'm not planning to run for reelection to the TC this round, and doing a deep technical dive on something is exactly what I'm itching for to try to cure burnout. | 18:14 |
clarkb | my main concern with going the swap route is that I'm fairly confident that the more we swap the worse overall reliability gets due to noisy neighbor phenomena. I agree that zswap is worth trying to see if we can mitigate some of that | 18:14 |
dansmith | and I'll add, because I'm feeling pretty defensive and annoyed right now and so I shouldn't be writing such emails.. sorry, and I hate to not volunteer, but.. I'm just not the right person to do that right this moment. | 18:15 |
rosmaita | we should have this discussion at the PTG and take time to gather some data ... i have seen a bunch of OOMs, but don't see an obvious culprit | 18:16 |
JayF | clarkb: the other side of that is also; if we can get the fire out in the short term it should free up folks to do deeper dives. There's a balance there and we don't always find it, I know, but it's clear folks are trying -- at least the folks engaged enough to be at a TC meeting. | 18:16 |
clarkb | I think we also have the problem of a 1000 cuts rather than one big issue we can address directly | 18:16 |
clarkb | which makes the coordination and volunteer problem that much more difficult | 18:16 |
fungi | i also get the concern with increasing available memory/swap. openstack is like a gas that expands to fill whatever container you put it in, so once there's more resources available on a node people will quickly merge changes that require it all, plus a bit more, and we're back to not having enough available resources for everything | 18:16 |
dansmith | clarkb: that's exactly why this is hard, but you also shouldn't assume that those of us working on this have not been looking for cuts to make | 18:16 |
clarkb | dansmith: I don't think I am | 18:16 |
dansmith | if you remember, I had some stats stuff to work on performance.json, but it's shocking how often the numbers from one (identical, supposedly) run to the next vary by a lot | 18:17 |
slaweq | fungi I think that what you wrote applies to any software, not only OpenStack :) | 18:17 |
dansmith | fungi: I also think it's worth noting that even non-AIO production deployments have double-digit memory requirements these days | 18:17 |
gmann | many of us have tried our best on those things, and the reality is that we hardly get enough support from the project side on debugging/solving the issues. very few projects worry about gate stability and most are just happy with rechecksssss | 18:18 |
clarkb | my suggestion is more that this should be a priority for all of openstack and project teams should look into it. More so than that the few who have been poking at it are not doing anything | 18:18 |
JayF | In lieu of other volunteers to send the email; I'll send it and ask for help. | 18:18 |
JayF | #action JayF to email ML with a summary of gate ram issues and do a general call for assistance | 18:19 |
slaweq | gmann we should introduce some quota of rechecks for teams :) | 18:19 |
gmann | I remember we have sent a lot of emails on those. a few of them were regular emails with an explicit [gate stability] tag etc but I did not see much outcome from that | 18:19 |
clarkb | dansmith: related we've discovered that the memory leaks affect our inmotion/openmetal cloud | 18:19 |
gmann | slaweq: ++ :) | 18:19 |
JayF | gmann: I know, and I have very minor faith it will do something, but if I'm able to make a good case while also talking to people privately, like I did for Eventlet, maybe I can shake some new resources free | 18:20 |
clarkb | we had to remove nova compute services from at least one node in order to ensure things stop trying to schedule there because there is no memory | 18:20 |
gmann | sure, that will be great if we get help | 18:20 |
clarkb | and we have like 128GB of memory or something like that | 18:20 |
slaweq | I can try to find some time to add to my "rechecks" script a way to collect the reasons for rechecks and then do some analysis of them - maybe there will be some pattern there | 18:21 |
slaweq | but I can't say it will be for next week | 18:21 |
slaweq | I will probably need some time to finish some internal stuff and find cycles for that there | 18:22 |
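slaweq's actual "rechecks" script isn't shown here; a rough sketch of the aggregation he describes, pulling change messages from the opendev Gerrit REST API and counting whatever text follows a bare "recheck" (the query options and field names are standard Gerrit REST, but treat the details as assumptions):

```python
import json
import re
import urllib.request
from collections import Counter

GERRIT = "https://review.opendev.org"

def recheck_reasons(project: str, limit: int = 100) -> Counter:
    # o=MESSAGES includes each change's review comments in the response.
    url = (f"{GERRIT}/changes/?q=project:{project}+status:merged"
           f"&o=MESSAGES&n={limit}")
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    changes = json.loads(body[4:])  # Gerrit prefixes JSON with )]}'
    reasons = Counter()
    for change in changes:
        for msg in change.get("messages", []):
            m = re.search(r"^\s*recheck\b[: ]*(.*)$", msg["message"],
                          re.IGNORECASE | re.MULTILINE)
            if m:
                reasons[m.group(1).strip() or "<no reason given>"] += 1
    return reasons

# Example: recheck_reasons("openstack/nova").most_common(10)
```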
JayF | I'm going to quickly summarize the discussion here to make sure it's all been marked down: | 18:22 |
JayF | knikolla offered to do some deep dive memory profiling. Perhaps he'd be a good person for clarkb /infra people to reach out to if/when they find another memory-leaking-monster. | 18:22 |
JayF | slaweq offered to aggregate data on recheck comments and look for patterns | 18:22 |
JayF | JayF (I) will be sending an email/business case to the list about the help we need getting OpenStack to run in less ram | 18:23 |
JayF | and meanwhile, nova jobs are going to bump to 8G swap throughout, and begin testing zswap | 18:23 |
JayF | Did I miss or misrepresent anything in terms of concrete actions? | 18:23 |
dansmith | that last one is not quite accurate | 18:23 |
JayF | What's a more accurate way to restate it? | 18:23 |
dansmith | we probably won't mess with the standard templates, but we might move more of our custom jobs to 8G | 18:23 |
dansmith | like, the ones we inherit from the standard template we'll leave alone | 18:24 |
JayF | ack; so nova jobs are going to move to 8G swap as-needed and begin testing zswap | 18:24 |
dansmith | but, if gmann is up for testing zswap on some standard configs, that'd be another good action | 18:24 |
JayF | I would be very curious to see how that looks in an Ironic job, if you all get to the point where it's easy to enable/depends-on a change I can vet it from that perspective | 18:24 |
gmann | sure, we do have heavy jobs in periodic or in tempest gate and we can try that | 18:24 |
dansmith | JayF: it's available, I'll link you later | 18:25 |
JayF | ack; thanks | 18:25 |
JayF | I'm going to move on for now, it sounds like we have some actions to try, one of which will possibly attempt to start a larger effort. | 18:25 |
JayF | Is there anything additional on gate health topic? | 18:25 |
JayF | #topic Implementation of Unmaintained branch statuses | 18:26 |
JayF | Is there an update on the effort to unmaintain things? :DD | 18:26 |
rosmaita | don't think so ... fungi mentioned that last week's email about the global group helped | 18:27 |
frickler | yoga is quite far along, doing another round of branch deletions right now | 18:27 |
JayF | ack; and I'll reach out to Elod/Tony about them refreshing that | 18:27 |
knikolla | awesome | 18:27 |
JayF | frickler: ack, thanks for that | 18:27 |
JayF | Is there anything the TC needs to address or assist with in this transition that's not already assigned? | 18:28 |
frickler | some PTLs did not respond at all | 18:28 |
frickler | we did some overrides for the train + ussuri eols already, may need to do that for yoga, too | 18:29 |
JayF | if no PTLs respond, that means they get the default behavior: the project team is no longer responsible and it passes to the global-unmaintained-core (?) group, right? | 18:29 |
frickler | well currently the automation requires either a ptl response for release patches or an override from the release team | 18:30 |
gmann | we can go with the policy, keep it open for a month or so then EOL | 18:30 |
frickler | to eol someone would need to create another patch | 18:31 |
JayF | gmann: AIUI, we already have volunteers to unmaintain these projects -- someone volunteered at a top level to maintain everything back to Victoria | 18:31 |
JayF | frickler: ack, do you want me as a TC rep to vote as PTL in those cases? Or is there a technical change we need to make? | 18:31 |
frickler | JayF: let me discuss this with the release team and get back to you | 18:32 |
knikolla | thanks for turning unmaintain into a verb JayF | 18:32 |
JayF | I've been doing it in private messages/irl conversations for months; I'm surprised this is the first leak to an official place :D | 18:32 |
fungi | i had hoped that the phrasing of the unmaintained branch policy made it clear that the tc didn't have to adjudicate individual project branch choices | 18:33 |
gmann | JayF: you mean the global group? or specific volunteer for specific projects? | 18:33 |
JayF | gmann: I recall an email to the mailing list, some group volunteered to maintain the older branches back to victoria, I'm having trouble finding the citation now, a sec | 18:34 |
frickler | well the gerrit group currently is elodilles + tonyb, I don't remember anyone else | 18:34 |
rosmaita | elodilles volunteered to maintain back to victoria | 18:35 |
gmann | yeah, I also do not remember anyone else volunteering / being interested in keeping them alive even with no response from the PTL/team | 18:35 |
JayF | rosmaita: ack; I'll try to find that email specifically | 18:36 |
gmann | rosmaita: ok, even though there is no response/confirmation from the PTL on whether anyone needs those or not | 18:36 |
frickler | fungi: this is about moving from stable to unmaintained, so not quite yet covered by unmaintained policy. or only halfway maybe? | 18:36 |
frickler | might need to be specified more clearly and communicated with the release team, too? | 18:36 |
rosmaita | JayF: not sure there was an email, may be in the openstack-tc or openstack-release logs | 18:36 |
gmann | anyways, I think the actual cases will be known when we go into the explicit OPT-IN period | 18:37 |
JayF | #action JayF Find something written about who volunteered to keep things un-maintained status back to Victoria | 18:37 |
gmann | at that time we will see more branches filtered out and moved to EOL | 18:37 |
JayF | I'll do it offline, and reach out to elod if it was them (I think it was) | 18:37 |
JayF | frickler: ack; I think you're right we may have a gap | 18:39 |
JayF | https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html doesn't mention the normal supported->unmaintained flow | 18:39 |
JayF | oh, wait, here it is | 18:40 |
JayF | > By default, only the latest eligible Unmaintained branch is kept. When a new branch is eligible, the Unmaintained branch liaison must opt-in to keep all previous branches active. | 18:40 |
JayF | So every project gets one unmaintained branch for free, keeping it alive after the 1 year is what has to be opted into | 18:40 |
gmann | and i think the latest unmaintained branch also needs a +1 from the PTL/liaison, otherwise it will go to EOL after a month or so | 18:41 |
gmann | we have that also in policy/doc somewhere | 18:41 |
JayF | gmann: that is the opposite of the policy | 18:41 |
JayF | > The PTL or Unmaintained branch liaison are allowed to delete an Unmaintained branch early, before its scheduled branch deletion. | 18:41 |
JayF | so you can opt-into getting an early delete | 18:41 |
JayF | nothing in the policy, as written, permits us to close it early without the PTL or Liason opt-in | 18:41 |
gmann | yeah I am not talking about delete. I mean those will not always be opted in automatically | 18:41 |
gmann | let me find where we wrote those case | 18:42 |
gmann | 'The patch to EOL the Unmaintained branch will be merged no earlier than one month after its proposal.' | 18:43 |
gmann | ah, this was the one: keep those open for a month | 18:43 |
gmann | https://docs.openstack.org/project-team-guide/stable-branches.html#unmaintained | 18:43 |
JayF | that reflects my interpretation of the policy ++ | 18:44 |
gmann | so we can close them after one month and no response | 18:44 |
JayF | only if it's not the *latest* UM branch though | 18:44 |
JayF | from the top of that: | 18:44 |
JayF | >By default, only the latest eligible Unmaintained branch is kept open. To prevent an Unmaintained branch from automatically transitioning to End of Life once a newer eligible branch enters the status, the Unmaintained branch liaison must manually opt-in as described below for each branch. | 18:45 |
JayF | So I think that's reasonably clear. Is there any question or dispute? | 18:45 |
gmann | one question | 18:46 |
gmann | so in the current case, should we apply this policy only to yoga, the latest unmaintained branch, or to xena through victoria as well? as we agreed to make victoria onwards unmaintained at the initial level | 18:47 |
rosmaita | I think what applies to pre-yoga is in the resolution, not the project team guide: https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html#transition | 18:47 |
spotz[m] | Yeah I would think anything older than the release in question should have already been EOL'd or be included in this | 18:48 |
gmann | so all 4 v->y will be kept until they are explicitly proposed as EOL | 18:48 |
frickler | also a note that the policy is nice, but afaict a lot of that still needs implementation / automation | 18:48 |
gmann | yeah, and that is a lot of work. now i am having second thoughts and think we could keep only stable/yoga moving to unmaintained and EOL all the previous ones | 18:49 |
JayF | Well, changing our course after announcing, updating documentation, and setting user expectations is also pretty expensive, too :/ | 18:49 |
gmann | yeah, I agree not to change it now but I feel we added a lot of work for the release team | 18:50 |
JayF | I'm going to move on, we have a lot of ground to cover and 10m left | 18:50 |
JayF | frickler: If you can document what work needs to be done (still), it'd be helpful, I know this is a form of offloading work to releases team and I apologize for that :/ | 18:50 |
JayF | #topic Testing Runtime for 2024.2 release | 18:50 |
JayF | #link https://review.opendev.org/c/openstack/governance/+/908862 | 18:51 |
gmann | I need to check other review comments/discussion in gerrit but we can discuss about py3.12 testing. | 18:51 |
JayF | This has extremely lively discussion in gerrit; I'd suggest we keep most of the debate in gerrit. | 18:51 |
gmann | I am not very confident about adding it as voting next cycle, considering there might be more breaking changes in py3.12 and we have not had any cycle testing it as non-voting so that we know the results before making it voting. | 18:51 |
gmann | agree with frickler comment in gerrit on that | 18:51 |
frickler | yes, another case of "policy needs to be backed by code" | 18:52 |
JayF | Well, the eventlet blocker that caused this to be completely undoable in most repos is gone now | 18:52 |
JayF | are there further blockers to turning on that -nv testing? | 18:52 |
frickler | no distro with py3.12 as default | 18:53 |
gmann | we never tested it as -nv right? so we do not know what other blockers we have | 18:53 |
frickler | that at least blocks devstack testing afaict | 18:53 |
JayF | gmann: that's sorta what I was thinking, too | 18:53 |
gmann | blocker/how-much-work | 18:53 |
JayF | frickler: ack | 18:54 |
frickler | doing tox testing should be relatively easy | 18:54 |
fungi | tonyb was going to look into the stow support in the ensure-python role as an option for getting 3.12 testing on a distro where it's not part of their usual package set | 18:54 |
gmann | we usually add a job as non-voting at least one cycle in advance before we make it voting | 18:54 |
gmann | I think right path is 1. add -nv in 2024.2 2. based on result, discuss to make it voting in 2025.1 | 18:55 |
JayF | I'm going to move on from this topic and suggest all tc-members review 908862 intensively with "how are we going to do this?" in mind as well. We have other topics to hit and I do not want us to get too deep on a topic where most of the debate is already async. | 18:55 |
JayF | I'm skipping our usual TC Tracker topic for time. | 18:56 |
JayF | #topic Open Discussion and Reviews | 18:56 |
JayF | I wanted to raise https://review.opendev.org/c/openstack/governance/+/908880 | 18:56 |
JayF | and the general question of Murano | 18:56 |
JayF | IMO, we made a mistake in not marking Murano inactive before C-2. | 18:56 |
JayF | Now the question is if the better path forward is to release it in a not-great state, or violate our promises to users/release team/etc that we determine what's in the release by C-2. | 18:57 |
JayF | frickler: I saw your -1 on 908880, I'm curious what you think the best path forward is? | 18:58 |
gmann | replied in gerrit on release team deadline | 18:58 |
rosmaita | i suspect from the release team point of view, it's ok to drop, but not add? | 18:59 |
JayF | I am strongly biased towards whatever action takes the least human effort so we can free up time to work on projects that are active. | 18:59 |
gmann | releasing broken source code is the more dangerous option; marking them inactive without any new release is less so | 18:59 |
slaweq | From previous experience I can say that if we send an email to the community saying that we are planning not to include it in the release, it is more likely that new volunteers will step up to help with the project | 18:59 |
rosmaita | slaweq: ++ | 18:59 |
slaweq | so I would say - send an email to warn about it, and do it at any time if the project doesn't meet the criteria to be included in the release | 18:59 |
gmann | we did that in the murano case and we found a volunteer; let's see how they progress on this | 19:00 |
slaweq | if nobody raises a hand, then maybe nobody really needs it | 19:00 |
frickler | if we do that, why do we have the C-2 cutoff point at all? | 19:00 |
gmann | but this is not just the murano case; it can be any project going inactive after m-2, so we should cover that in our process | 19:00 |
slaweq | if there is volunteer I would keep it and give people chance to fix stuff | 19:00 |
gmann | frickler: we do mark any inactive project as soon as we detect it before m-2, but what if we detect it, or it becomes inactive, after m-2? | 19:01 |
slaweq | frickler maybe we should remove that cutoff | 19:01 |
JayF | frickler: I honestly don't know. That's why I asked for a release team perspective. I suspect it was meant more to be a process of "we mark you inactive in $-1, you have until $-2 to fix it" and instead we are marking them late | 19:01 |
frickler | so I'd say mark inactive but still keep in the release | 19:01 |
gmann | our best effort will always be check before m-2 | 19:01 |
fungi | if a project isn't in good shape before milestone 2, then it should be marked inactive at that time. if it's in good shape through milestone 2 then releasing it in that state seems fairly reasonable | 19:01 |
gmann | release even with broken gate/code? | 19:02 |
JayF | We should finish this debate in gerrit, and we're over time. | 19:02 |
slaweq | frickler what if a project has e.g. broken gates and it happened after m-2? How would we then keep it in the release? | 19:02 |
JayF | It's a hard decision with downsides in every direction :( | 19:02 |
gmann | slaweq: exactly | 19:02 |
slaweq | IMHO we should write there something like: each case will be discussed individually by the TC before making a final decision | 19:02 |
slaweq | it's hard to describe all potential cases in the policy | 19:03 |
JayF | I think that'd be a good revision to 908880 | 19:03 |
JayF | but still doesn't settle the specific case here | 19:03 |
JayF | but we need to let folks go to their other meetings | 19:03 |
JayF | #endmeeting | 19:03 |
opendevmeet | Meeting ended Tue Feb 20 19:03:24 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 19:03 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/tc/2024/tc.2024-02-20-18.00.html | 19:03 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/tc/2024/tc.2024-02-20-18.00.txt | 19:03 |
opendevmeet | Log: https://meetings.opendev.org/meetings/tc/2024/tc.2024-02-20-18.00.log.html | 19:03 |
JayF | Sorry, not my best show of topic:time management in that meeting | 19:03 |
spotz[m] | Thanks all | 19:03 |
slaweq | thanks all, and good night :) | 19:03 |
slaweq | o/ | 19:03 |