Wednesday, 2022-12-14

@clarkb:matrix.orgcorvus: I've just found https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#node-requirement-summary while double-checking my paranoia about python3.11 zuul executors. I think it will probably work, but they don't list it there... Also, support for 3.8 is dropped in 2.14/7, so we may have to drop that in Zuul when we add Ansible 7 support?00:03
@clarkb:matrix.orgI guess as long as Zuul supports 6 and 7 we can tell people they might have to force 6 across the board if running on python3.800:03
@jim:acmegating.comClark: yeah, we could drop 3.8 when we add ansible 7.00:07
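For illustration, "forcing 6 across the board" would be done with Zuul's Ansible version settings; a minimal sketch, assuming the tenant-level default-ansible-version and per-job ansible-version options (names from memory, so verify against the Zuul configuration docs; tenant and project names are placeholders):

```yaml
# Sketch only: pin a whole tenant to Ansible 6 while executors are still on
# Python 3.8. Tenant/project names here are placeholders.
- tenant:
    name: example-tenant
    default-ansible-version: "6"
    source:
      gerrit:
        config-projects:
          - example/zuul-config

# Or override a single job:
- job:
    name: example-job
    ansible-version: "6"
```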
@clarkb:matrix.organyway it looks like ze01 is running with python3.11 and ansible is doing stuff and I don't see explosions yet00:08
@jim:acmegating.comClark: regarding 3.11 -- we could continue with the "try it and see if it works" approach.  if something blows up, we can go back to 3.10 and add ansible 7 first.  shouldn't be a big deal.00:08
@clarkb:matrix.orgack. I'm mostly just worried we end up in a situation where zuul can't land a change to rebuild the images. But I guess when that happens we fall back to the last release00:09
@clarkb:matrix.orgfwiw https://zuul.opendev.org/t/openstack/build/89f5e85476ea49a49bc44bb848c594f7 this ran under python3.11 successfully00:09
@jim:acmegating.comClark: i'm not aware of anything in 3.11 that we'd expect to break; a break would be something esoteric. so proceeding and experimenting seems reasonable.00:10
@jim:acmegating.comand yes, revert to release should work00:10
@clarkb:matrix.orgthis is exciting though. just a few weeks after python3.11 is released we've run CI jobs on top of that python version00:11
@jim:acmegating.com++00:12
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/nodepool] 867578: Fix openstack image deletion with newer sdk https://review.opendev.org/c/zuul/nodepool/+/86757800:49
-@gerrit:opendev.org- Ian Wienand proposed: [zuul/nodepool] 867590: Dockerfile: use containernetworking-plugins https://review.opendev.org/c/zuul/nodepool/+/86759003:16
-@gerrit:opendev.org- Zuul merged on behalf of Felix Edel: [zuul/zuul] 864903: Abort job if cleanup playbook timed out https://review.opendev.org/c/zuul/zuul/+/86490307:09
-@gerrit:opendev.org- Christian von Schultz proposed: [zuul/nodepool] 867333: Fix AWS quota limits for vCPUs https://review.opendev.org/c/zuul/nodepool/+/86733309:48
@ssbarnea:matrix.orgHello from the ansible side! I've observed that the list of ansible-lint-reported issues inside the zuul-jobs repo is growing, without any activity on dealing with them, and this repo is part of the linter's own testing. Is there any desire to address these or not? If there is, I could give a hand myself, especially if I know who can help with reviews and merging those fixes. If not, I might consider removing the testing from the linter pipelines due to too much noise.10:39
@iwienand:matrix.orgssbarnea: i don't mind looking at them.  With things like https://review.opendev.org/c/zuul/zuul-jobs/+/867037, making sure we have enough context in the changelog so that we can understand what's going on a priori always helps.  when i've tackled this before i've tried to keep changes well isolated from each other, addressing one thing at a time as much as possible, and I feel like that has aided review.  so i'm sure that approach will be welcome.11:25
@ssbarnea:matrix.orgThat is implied; I would be against a change that resolves more than one rule violation, as it would make it harder to review and more risky.11:39
@ssbarnea:matrix.orgRe your question: i cannot enable schema validation if I do not also fix become, as it is the same error.11:42
@ssbarnea:matrix.orgthe other occurrences are not fixed because the files are not touched.11:43
@ssbarnea:matrix.orgbut a follow-up should fix them11:43
@ssbarnea:matrix.orgit is a bit of a chicken-and-egg problem, and we need to activate the schema check in order to prevent other changes from introducing problems afterwards. 11:44
@ssbarnea:matrix.orgYou can decide how to approach it; there is no single way. Still, I would avoid trying to touch too many files in a single change due to the number of jobs running and the risk of conflict.11:45
@rgunasekaran:matrix.orgHi everyone12:37
This is Rajesh Gunasekaran, just joined
-@gerrit:opendev.org- Christian von Schultz proposed: [zuul/nodepool] 867333: Fix AWS quota limits for vCPUs https://review.opendev.org/c/zuul/nodepool/+/86733312:53
-@gerrit:opendev.org- Christian von Schultz proposed: [zuul/nodepool] 867333: Fix AWS quota limits for vCPUs https://review.opendev.org/c/zuul/nodepool/+/86733313:15
@fungicide:matrix.org> <@ssbarnea:matrix.org> Hello from the ansible side! I've observed that the list of ansible-lint-reported issues inside the zuul-jobs repo is growing, without any activity on dealing with them, and this repo is part of the linter's own testing. Is there any desire to address these or not? If there is, I could give a hand myself, especially if I know who can help with reviews and merging those fixes. If not, I might consider removing the testing from the linter pipelines due to too much noise.14:13
also be aware that we've historically preferred to instruct linters to skip rules which we collectively find result in reduced readability or a loss of flexibility. ansible-lint seems to be primarily focused on ensuring consistency for roles published to ansible galaxy, which is not the intent for the roles contained in the zuul-jobs repository
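For context, that skipping is normally expressed in the repository's .ansible-lint file; a minimal sketch, with rule names that are only examples (the real zuul-jobs config and current ansible-lint rule ids may differ):

```yaml
# .ansible-lint (sketch) -- opt out of rules the project has decided against.
# Rule names are illustrative and vary between ansible-lint releases.
skip_list:
  - command-instead-of-module   # exec'ing git/etc. directly is intentional
  - no-changed-when             # many tasks run read-only commands
warn_list:
  - name[casing]                # report task-name casing without failing
```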
@ssbarnea:matrix.orgYeah, not an uncommon approach. Still, I have not seen any updates to the linter config for a long period of time, and some of the ignores there have notes indicating that lack of time, not lack of desire, was the reason for not addressing them.16:03
@ssbarnea:matrix.orgthe latest version gained the ability to check module args, but because zuul_return is only mocked, that feature cannot really work. It could prove useful to publish the module inside a collection, so people could lint it even outside zuul.16:04
@clarkb:matrix.orgMy apathy on the issue comes from the linter making suggestions that aren't possible (the git module instead of exec'ing git, for example) and suggestions that are actually massive performance hits (looping with ansible instead of looping in a shell script, since ansible loops are slow)16:15
@clarkb:matrix.orgI don't mind people cleaning things up, but I have a hard time being motivated myself when actually following all the rules leads to worse performance16:16
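To make the performance point concrete, here is a hedged sketch of the two approaches being contrasted; artifact_list is a hypothetical variable, and the Ansible loop pays per-item task overhead while the shell version runs once:

```yaml
# Slower on long lists: Ansible executes (and records) one task per item.
- name: Remove build artifacts one at a time
  ansible.builtin.file:
    path: "/tmp/build/{{ item }}"
    state: absent
  loop: "{{ artifact_list }}"

# Faster: a single task that loops inside the shell.
- name: Remove build artifacts in one shell pass
  ansible.builtin.shell: |
    for f in {{ artifact_list | join(' ') }}; do
      rm -f "/tmp/build/$f"
    done
```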
@iwienand:matrix.orgClark: i agree, but we don't have to implement those things.  Things like naming tasks, capital letters, and when: ordering are on the one hand trivial; but on a wide and deep codebase like zuul-jobs it is nice to open any particular file and be looking at something consistent.  So yeah, nothing to add beyond my original point: if changes come with context and explanations, it's going to help them get reviewed.20:19
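As a small illustration of the cosmetic conventions being discussed (a descriptive, capitalised task name and when: placed last) — not taken from zuul-jobs itself, and the variable is hypothetical:

```yaml
# Illustrative only: named task, capitalised name, when: as the last key.
- name: Install packages required by the role
  ansible.builtin.package:
    name: "{{ required_packages }}"   # hypothetical variable
    state: present
  become: true
  when: ansible_facts['os_family'] == 'Debian'
```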
@clarkb:matrix.orgYup and I'm happy to review changes like that too. I can also understand why people might not prioritize those items though20:24
@hanson76:matrix.org> <@hanson76:matrix.org> Trying to figure out if I can get around it and found the following.21:29
>
> "Now that the TokenRequest API has been stable since Kubernetes 1.22, it is time to do some cleaning and promote the use of this API over the old tokens.
>
> Up until now, Kubernetes automatically created a service account Secret when creating a Pod. That token Secret contained the credentials for accessing the API.
>
> Now, API credentials are obtained directly through the TokenRequest API, and are mounted into Pods using a projected volume. Also, these tokens will be automatically invalidated when their associated Pod is deleted. You can still create token secrets manually if you need it."
>
> Not sure, but feels like there is no need for the loop "Wait for the token to be created" in the driver code (provider.py:188)
> Is that code there to wait until the namespace is ready to be used? Maybe there is a better way.
> The token is never used outside of that loop.
I've spent a bit more time on this, and I actually found an existing story https://storyboard.openstack.org/#!/story/2010224 for the problem.
I'm not good at python and have never contributed, but it looks to me like a fix for this problem could be to explicitly create a secret connected to
the ServiceAccount. That should not be that hard; the question is whether it needs to be behind a check on k8s version >= 1.24 or not.
We have temporarily worked around the problem using EC2 instances instead of pods but that is not a good workaround in the long run.
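For reference, the explicit Secret being proposed is the mechanism the Kubernetes docs describe for getting a long-lived token on 1.24+: a Secret of type kubernetes.io/service-account-token annotated with the service account's name, which the control plane then populates with a token. A sketch of the equivalent manifest (names are placeholders, not what the nodepool driver actually creates):

```yaml
# Sketch: manually created long-lived token Secret bound to a service account,
# per the Kubernetes 1.24+ documentation. Names are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: zuul-worker-token
  namespace: example-namespace
  annotations:
    kubernetes.io/service-account.name: zuul-worker
type: kubernetes.io/service-account-token
```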
@clarkb:matrix.orgHanson: the CI will check if it works against <1.24. Maybe write the change and see if it works for older; if so, no check required21:30
@hanson76:matrix.org> <@clarkb:matrix.org> Hanson: the CI will check if it works against <1.24. Maybe write the change and see if it works for older; if so, no check required21:49
I'll give it a try.
@jpew:matrix.orgIs anyone doing docker image promotion (promote-docker-image from zuul-jobs) with a private registry instead of github?22:03
@jpew:matrix.org * Is anyone doing docker image promotion (promote-docker-image from zuul-jobs) with a private registry instead of dockerhub?22:03
@iwienand:matrix.org> <@clarkb:matrix.org> Hanson: the CI will check if it works against <1.24. Maybe write the change and see if it works for older; if so, no check required22:04
i also hit this with the microk8s work and had to pin that to 1.23 -- https://review.opendev.org/c/zuul/nodepool/+/866955/6/playbooks/nodepool-functional-k8s/pre.yaml. so there's also a way to replicate it if required
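For anyone wanting to reproduce the older behaviour, the pin amounts to installing microk8s from the 1.23 snap channel; a hedged sketch of such a task (the linked nodepool pre.yaml may do this differently, e.g. via a role variable):

```yaml
# Sketch only: pin microk8s to the 1.23 channel, where service-account
# token Secrets are still auto-created.
- name: Install microk8s pinned to 1.23
  become: true
  ansible.builtin.command: snap install microk8s --classic --channel=1.23/stable
```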
@jim:acmegating.comjpew: those roles may be pretty specific to how dockerhub deals with tags (in order to make it efficient and atomic)22:05
@jpew:matrix.orgcorvus: Just the promote one it appears, which is unfortunate :)22:05
@jim:acmegating.comjpew: fwiw the "docker" roles are really for docker, and "container" roles are more generic22:05
@jpew:matrix.org@corvus: Ah, I'll take a look at those22:07
-@gerrit:opendev.org- Anders Hanson proposed: [zuul/nodepool] 867744: Add support for Kubernetes 1.24+ https://review.opendev.org/c/zuul/nodepool/+/86774422:13
-@gerrit:opendev.org- Ian Wienand proposed: [zuul/nodepool] 867745: Unpin microk8s https://review.opendev.org/c/zuul/nodepool/+/86774522:18
@iwienand:matrix.orgHanson: ^ you should be able to "recheck" that for additional testing too22:18
-@gerrit:opendev.org- Anders Hanson proposed wip: [zuul/nodepool] 867744: Add support for Kubernetes 1.24+ https://review.opendev.org/c/zuul/nodepool/+/86774422:27
-@gerrit:opendev.org- Anders Hanson proposed wip: [zuul/nodepool] 867744: Add support for Kubernetes 1.24+ https://review.opendev.org/c/zuul/nodepool/+/86774422:55
-@gerrit:opendev.org- Anders Hanson proposed wip: [zuul/nodepool] 867744: Add support for Kubernetes 1.24+ https://review.opendev.org/c/zuul/nodepool/+/86774423:15
@clarkb:matrix.orgare there kubernetes users that might want to weigh in on https://review.opendev.org/c/zuul/zuul/+/867136 and https://review.opendev.org/c/zuul/zuul/+/867137/ ? I'm interested in the image size savings but I think updating the client is potentially a win too? 23:22
-@gerrit:opendev.org- Anders Hanson proposed wip: [zuul/nodepool] 867744: Add support for Kubernetes 1.24+ https://review.opendev.org/c/zuul/nodepool/+/86774423:25
