-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 944155: Add AWS spot support https://review.opendev.org/c/zuul/zuul/+/944155 | 00:05 | |
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 944156: Add a postConfig hook to endpoints https://review.opendev.org/c/zuul/zuul/+/944156 | 01:34 | |
@yodakv:matrix.org | Hello everyone, | 05:40 |
---|---|---|
PTAL: https://review.opendev.org/c/zuul/nodepool/+/933545, https://review.opendev.org/c/zuul/nodepool/+/933030 | ||
Please, it's not great to have to patch every time before an upgrade. My company follows a strict verification process, so please take a look at the requests. | ||
It would be great to have Azure spot instance and ephemeral OS disk support in the Azure nodepool driver. | ||
-@gerrit:opendev.org- Simon Westphahl proposed: [zuul/zuul] 944967: Run a deduplicated job if it matched any change https://review.opendev.org/c/zuul/zuul/+/944967 | 07:30 | |
-@gerrit:opendev.org- Simon Westphahl proposed: [zuul/zuul] 944967: Run a deduplicated job if it matches any change https://review.opendev.org/c/zuul/zuul/+/944967 | 07:31 | |
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 944157: Add fleet support to aws driver https://review.opendev.org/c/zuul/zuul/+/944157 | 08:58 | |
@morucci:matrix.org | Hi folks, could we schedule the landing of https://review.opendev.org/c/zuul/zuul-jobs/+/943586 ? Anything I could do to help with this ? | 10:43 |
@morucci:matrix.org | corvus: Hi, regarding the GitLab driver, would it be possible to land one or the other of these patches: https://review.opendev.org/c/zuul/zuul/+/889431 and https://review.opendev.org/c/zuul/zuul/+/943325? IMHO both seem to solve the issue and I'm fine with either approach. We hit the issue from time to time. | 10:49 |
@jim:acmegating.com | fbo: i vote 431; approved. :) i'd like to see https://review.opendev.org/927582 reviewed by someone else and merged before 943586, principally so that 586 doesn't get used unnecessarily as a workaround for 582. i can approve 586 on saturday if no one else does before then. | 13:42 |
@clarkb:matrix.org | 582 is on my list to review but I keep getting distracted. I should have time today | 14:57 |
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 889431: Fix null dereference in gitlab https://review.opendev.org/c/zuul/zuul/+/889431 | 15:10 | |
@dpawlik:matrix.org | Hello folks o/ Is it somehow possible to have the same label available in a few providers, with Zuul doing round robin between them? Right now it seems that until the first one is "full", it will not use another provider | 15:46 |
@clarkb:matrix.org | dpawlik: it should roughly round robin by default. There isn't explicit coordination; instead each launcher thread iterates through the list, marking requests it can't fulfill as rejected, and moves along until it finds one it can fulfill and then operates on it | 16:05 |
@clarkb:matrix.org | tristanC: corvus I -1'd https://review.opendev.org/c/zuul/zuul-jobs/+/927582 because of the delta between the test cases, but I had a couple other thoughts/suggestions too (I don't think those need changing before I would +2 though) | 16:05 |
@clarkb:matrix.org | dpawlik: I think you can see that behavior here: https://grafana.opendev.org/d/6d29645669/nodepool3a-rackspace-flex?orgId=1&from=now-12h&to=now-9h&timezone=utc&var-region=$__all basically both regions are not at capacity but both are supplying nodes | 16:08 |
@clarkb:matrix.org | dpawlik: one explanation for the behavior you see might be that you have a very fast provider and launcher that is able to fulfill requests much more quickly than the others | 16:10 |
@clarkb:matrix.org | if that is the case then the lack of coordination and simply grabbing the next request would end up weighting that provider and launcher more heavily | 16:11 |
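For readers of the log, here is a minimal sketch of the situation being discussed: one label offered by two nodepool providers so that either launcher can fulfill requests for it. The provider names, label, flavor, and image are placeholders (and the referenced diskimage is assumed to be defined in a top-level `diskimages` section), not values taken from this conversation.

```yaml
# Hypothetical nodepool configuration fragment: the same label defined in
# two providers. Each provider's launcher can pick up requests for the
# label, which is what produces the rough round-robin behavior described
# above. All names here are placeholders.
labels:
  - name: example-label
    min-ready: 1

providers:
  - name: cloud-a
    driver: openstack
    cloud: cloud-a
    diskimages:
      - name: example-image
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: example-label
            flavor-name: example-flavor
            diskimage: example-image
  - name: cloud-b
    driver: openstack
    cloud: cloud-b
    diskimages:
      - name: example-image
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: example-label
            flavor-name: example-flavor
            diskimage: example-image
```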
-@gerrit:opendev.org- Zuul merged on behalf of Clark Boylan: [zuul/zuul] 943862: Setup swap in nox jobs https://review.opendev.org/c/zuul/zuul/+/943862 | 16:31 | |
@morucci:matrix.org | Thanks corvus for the merge of 431 | 16:39 |
@flaper87:matrix.org | 👋 Hi folks! | 17:28 |
We have a CI job that automatically updates the list of untrusted projects based on projects that exist in our GH organization. This works fine as the tenant config is updated, and the various containers are then restarted. However, it seems like the project won't really be picked up unless we call `smart-reconfigure` from the scheduler container. Is this expected? | ||
Is there a way to tell the scheduler to always reload these configs? I guess we could have an initContainer or something that runs `smart-reconfigure` but I'm trying to understand whether there's a reason for the current behavior | ||
@fungicide:matrix.org | flaper87: i think the main reason is that zuul isn't really aware of the correspondence between the git copy of that configuration and wherever you're deploying it. you could arrange your deployment orchestration with a handler to trigger a smart reconfig any time that file changes? | 17:32 |
@jim:acmegating.com | https://zuul-ci.org/docs/zuul/latest/releasenotes.html#upgrade-notes note #3 | 17:33 |
@fungicide:matrix.org | aha, that too | 17:34 |
@fungicide:matrix.org | i missed the point that "the various containers are then restarted" | 17:34 |
@fungicide:matrix.org | so yes, that stopped re-reading the config in 11.3.0 | 17:34 |
@fungicide:matrix.org | but on the up side, you shouldn't need to restart your containers if you're merely updating the config | 17:35 |
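To make the suggestion above concrete, a rough sketch of a deployment-side Ansible handler that runs `smart-reconfigure` only when the tenant config file actually changes; the host group, file paths, and container name are assumptions for illustration, not something from this discussion.

```yaml
# Hypothetical deployment playbook: trigger a smart reconfiguration
# whenever the tenant configuration file changes, instead of restarting
# the containers. Paths and the container name are placeholders.
- name: Deploy Zuul tenant configuration
  hosts: zuul_schedulers
  tasks:
    - name: Install the tenant configuration file
      ansible.builtin.copy:
        src: main.yaml
        dest: /etc/zuul/main.yaml
      notify: smart reconfigure

  handlers:
    - name: smart reconfigure
      ansible.builtin.command: >-
        docker exec zuul-scheduler zuul-scheduler smart-reconfigure
```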
@tristanc_:matrix.org | Clark: ha thanks, that explains why: our under-used provider is using a remote launcher process. And thank you for the zuul-jobs review, I'll update the change now | 18:07 |
-@gerrit:opendev.org- Tristan Cacqueray https://matrix.to/#/@tristanc_:matrix.org proposed: | 18:16 | |
- [zuul/zuul-jobs] 927582: Update the set-zuul-log-path-fact scheme to prevent huge url https://review.opendev.org/c/zuul/zuul-jobs/+/927582 | ||
- [zuul/zuul-jobs] 945042: Add the build and tenant to the job header https://review.opendev.org/c/zuul/zuul-jobs/+/945042 | ||
@clarkb:matrix.org | tristanC: in https://review.opendev.org/c/zuul/zuul-jobs/+/927582/8..9/test-playbooks/base-roles/emit-job-header.yaml can we drop the assertion on line 19 (line 21 supersedes it) and then add the tenant-in-url check from the other test case? | 18:18 |
@clarkb:matrix.org | tristanC: I want both test cases to have identical checks so that when we edit one and then the other we're asserting consistent behavior between the base and base-test roles | 18:19 |
@clarkb:matrix.org | and then I wonder how impactful the change may be generally. I think for opendev we will be fine as we use swift. But people using filesystems may be impacted if they are doing multiple filesystems based on the tree or something? That seems unlikely though | 18:20 |
@clarkb:matrix.org | tristanC: I posted a comment about the assertion edits | 18:22 |
@clarkb:matrix.org | might make it easier to see the difference I'm suggesting | 18:22 |
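As an illustration of the kind of check being asked for, both test cases would carry the same assertion that the emitted log URL includes the tenant. The fact name below is a placeholder, not the variable used in the actual test playbook.

```yaml
# Hypothetical assertion mirroring the tenant-in-url check described
# above; emitted_log_url is a placeholder fact name.
- name: Assert the job header URL includes the tenant
  ansible.builtin.assert:
    that:
      - zuul.tenant in emitted_log_url
```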
@jim:acmegating.com | i think anyone using sharding will still benefit from the initial 3-byte shard (including local fs) since that's staying. with that in mind, i doubt very much anyone would be affected by the change, but if you're concerned we could send out a notice. | 18:27 |
@clarkb:matrix.org | No not overly concerned. Just wanted to check if we had considered it | 18:28 |
@clarkb:matrix.org | I'm like 99% certain opendev won't notice which is a decent canary for other installations | 18:29 |
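For anyone skimming the log, a hypothetical illustration of the short-prefix sharding being discussed (not the actual scheme from change 927582): the first few characters of the build UUID become a parent directory, which keeps any single directory on filesystem-based log storage from growing too large.

```yaml
# Hypothetical example only -- not the scheme implemented by 927582.
- name: Set an example sharded log path
  ansible.builtin.set_fact:
    example_log_path: "{{ zuul.build[:3] }}/{{ zuul.build }}/"
```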
-@gerrit:opendev.org- Tristan Cacqueray https://matrix.to/#/@tristanc_:matrix.org proposed: [zuul/zuul-jobs] 927582: Update the set-zuul-log-path-fact scheme to prevent huge url https://review.opendev.org/c/zuul/zuul-jobs/+/927582 | 18:35 | |
@clarkb:matrix.org | that lgtm now +2 thanks | 18:44 |
@jim:acmegating.com | i'm happy to merge that saturday if it isn't approved by then. we expect no fallout, but i don't personally want to schedule any more surprises until then :) | 18:51 |
-@gerrit:opendev.org- James E. Blair https://matrix.to/#/@jim:acmegating.com proposed: [zuul/zuul-jobs] 944813: Add upload-image-s3 role https://review.opendev.org/c/zuul/zuul-jobs/+/944813 | 18:57 | |
@tristanc_:matrix.org | Great, thanks! | 19:01 |
@dpawlik:matrix.org | thanks Clark. Will take a look tomorrow | 20:43 |
@aureliojargas:matrix.org | Hey guys, I have had this change open for more than a month with no reviews. Should I restructure it to make it more reviewable? Break it into smaller changes? Or open a change just creating the new role and adapt the other roles in the future? I'm open to feedback. Thanks! | 21:28 |
https://review.opendev.org/c/zuul/zuul-jobs/+/941490 -- Add role: `ensure-python-command`, refactor similar roles | ||
@clarkb:matrix.org | For me I think it's mostly just been a time thing. The change improves things structurally but isn't necessarily end-user visible, so it is less urgent. Whereas 927582 fixes something that is end-user visible | 21:30 |
@clarkb:matrix.org | I've put it on my todo list for tomorrow | 21:30 |
@aureliojargas:matrix.org | Thanks Clark. No problem about the time it takes, there's no hurry. I was just worried the change was not reviewed because there was some problem with it. | 21:34 |
@jim:acmegating.com | Aurelio Jargas: count me as a second reviewer on that after Clark. i think it's an important change. | 22:08 |
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 944158: Add AZ support to AWS driver https://review.opendev.org/c/zuul/zuul/+/944158 | 23:01 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!