Wednesday, 2022-03-23

@clarkb:matrix.org: I'm ok with dropping 3.6 since it seems like containers are a common deployment system. Bionic and CentOS 7/8 would be affected if using distro python  00:05
@tristanc_:matrix.org: +1 to drop 3.6  00:10
-@gerrit:opendev.org- Zuul merged on behalf of Tobias Henkel: [zuul/zuul] 772695: Perform per tenant locking in getStatus https://review.opendev.org/c/zuul/zuul/+/772695  01:00
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 831430: Dequeue items that can no longer merge https://review.opendev.org/c/zuul/zuul/+/831430  01:00
@tobias.henkel:matrix.org: ++ for dropping  05:31
@westphahl:matrix.org: +1 for dropping 3.6  06:16
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 831873: Use JobData for Build result data https://review.opendev.org/c/zuul/zuul/+/831873  06:47
-@gerrit:opendev.org- Zuul merged on behalf of Benjamin Schanzel: [zuul/zuul] 829867: Report gross/total tenant resource usage stats https://review.opendev.org/c/zuul/zuul/+/829867  09:15
@frickler:freitrix.de: o.k., my scenario is that I have e.g. 3 external resources available for testing and a job needs one of them, how would I implement that?  10:36
I can use a semaphore with max=3, but how would the job know which of the resources it is to use?
if I define three different semaphores, how can I tell zuul that the job needs any one of them and not all three? and how would the job know which one it received?
is that a feature that might be worth getting added or is it already there and I can't find it?
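For context, the semaphore part of the question would look roughly like the following tenant-config sketch (names are made up). It shows why the question arises: the semaphore caps concurrency at three builds, but nothing in it tells a given build which of the three resources it may use.

```yaml
# Hypothetical Zuul tenant config sketch: a counting semaphore with max=3
# limits how many builds run at once, but Zuul does not route a build to a
# specific resource -- that "which one did I get?" part is the open question.
- semaphore:
    name: external-resource
    max: 3

- job:
    name: integration-test
    semaphore: external-resource
```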
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857  11:49
@fungicide:matrix.org: > <@frickler:freitrix.de> o.k., my scenario is that I have e.g. 3 external resources available for testing and a job needs one of them, how would I implement that?  12:10
> I can use a semaphore with max=3, but how would the job know which of the resources it is to use?
> if I define three different semaphores, how can I tell zuul that the job needs any one of them and not all three? and how would the job know which one it received?
> is that a feature that might be worth getting added or is it already there and I can't find it?
to clarify, you have a pool of resources which can be used by up to three jobs at a time? if so, that might be something nodepool could be made to manage, with a bit of coding. have jobs request whatever the resource is and then nodepool assigns and tracks them as they become available
@fungicide:matrix.org: but a semaphore alone won't be able to do traffic control and route a build to one particular available resource out of that pool  12:11
@frickler:freitrix.de: not sure what you mean by "traffic control and route". in my special case, the resource would be a preconfigured openstack tenant, but I think the scenario also would apply to things like storage systems that a Cinder job might want to test against. the important limitation is that there needs to be a 1:1 relationship between one resource from the pool and one specific job.  12:56
my job would have a clouds.yaml with three clouds defined and just need to be told whether to use cloud_a, cloud_b or cloud_c.
my longer term goal would indeed be to have nodepool create those tenants directly and then provide a nodeset built with some specific parameters within that tenant, but I was hoping that using semaphores would allow for some easy intermediate solution
currently I only see the option to have just one cloud definition and serialize the jobs using it
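The clouds.yaml mentioned above might look like this sketch (all names, URLs and credentials are placeholders). It illustrates the gap being discussed: the file defines all three tenants, and the build still needs some external signal to pick one.

```yaml
# Hypothetical clouds.yaml with three preconfigured tenants. The missing
# piece discussed above is how a build learns whether it should target
# cloud_a, cloud_b or cloud_c.
clouds:
  cloud_a:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      project_name: ci-tenant-a
      username: ci-user-a
      password: secret
  cloud_b:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      project_name: ci-tenant-b
      username: ci-user-b
      password: secret
  cloud_c:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      project_name: ci-tenant-c
      username: ci-user-c
      password: secret
```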
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857  13:46
@iselor:matrix.org: Hi,  14:14
I have an issue with the "promote" pipeline, which should run on the ref-updated Gerrit event. After updating from 4.3 to 5.0 I noticed that it doesn't start for some refs and I see errors in the logs:
zuul.web.ZuulWeb: Exception loading ZKObject <zuul.model.PipelineSummary object at 0x7f60af086670> at /zuul/tenant/myproject/pipeline/promote/status
zuul.Scheduler Exception loading ZKObject <zuul.model.PipelineChangeList object at 0x7fe628bb7070> at /zuul/myproject/omni/pipeline/promote/change_list
Do you have any hints what could be wrong?
@jim:acmegating.com: Jakub P.: possibly a bug fixed in 5.1.0; shut everything down, run 'zuul delete-state' and start up on 5.1.0  14:19
@iselor:matrix.org: thanks  14:19
@fajfer:matrix.org: lmao, that was a quick one - to anyone struggling with this, when promote is done via CLI it works alright  14:30
@fungicide:matrix.org: > <@frickler:freitrix.de> not sure what you mean by "traffic control and route". in my special case, the resource would be a preconfigured openstack tenant, but I think the scenario also would apply to things like storage systems that a Cinder job might want to test against. the important limitation is that there needs to be a 1:1 relationship between one resource from the pool and one specific job.  14:47
> my job would have a clouds.yaml with three clouds defined and just need to be told whether to use cloud_a, cloud_b or cloud_c.
> my longer term goal would indeed be to have nodepool create those tenants directly and then provide a nodeset built with some specific parameters within that tenant, but I was hoping that using semaphores would allow for some easy intermediate solution
> currently I only see the option to have just one cloud definition and serialize the jobs using it
traffic control being preventing more than one build from accessing the resource at a given time, and routing being informing a build which resource from the pool was assigned for its exclusive use
@fungicide:matrix.org: those are things which nodepool is conceptually designed to do, semaphores aren't really (at least not the way i understand them)  14:48
@fungicide:matrix.org: though a semaphore combined with some external coordination service might approximate it  14:53
@jim:acmegating.com: you could consider the static nodepool driver to start  14:54
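That suggestion might look like the following Nodepool config sketch (hostnames and usernames are made up). Each external resource is registered as a static node under a shared label: Nodepool then handles the locking (traffic control), and the build learns which resource it holds from the hostname of the node it was assigned in the Ansible inventory (routing).

```yaml
# Hypothetical nodepool.yaml fragment using the static driver. A job
# requesting one "external-resource" node gets any free one of the three,
# and can tell which it received from the assigned node's hostname.
labels:
  - name: external-resource

providers:
  - name: static-resources
    driver: static
    pools:
      - name: main
        nodes:
          - name: resource-a.example.org
            labels:
              - external-resource
            username: zuul
          - name: resource-b.example.org
            labels:
              - external-resource
            username: zuul
          - name: resource-c.example.org
            labels:
              - external-resource
            username: zuul
```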
@frickler:freitrix.de: with those definitions, I'd say that "traffic control" is exactly what a semaphore does. it would just be the "routing" part that is missing. I fail to see where nodepool is designed to deal with resources that are not (composed of) nodes  14:54
@fungicide:matrix.org: nodepool is designed to deal with arbitrary resources, it just happens to have implementation for nodes at present  14:55
@fungicide:matrix.org: but with development could be extended to other sorts of resources  14:55
@fungicide:matrix.org: semaphores don't seem like they're designed to coordinate specific resources selected from a larger pool  14:56
@fungicide:matrix.org: they're only designed to limit concurrency of builds  14:56
@fungicide:matrix.org: i think if we tried to extend the semaphore concept to coordinate locking resources from defined pools, we would in essence be reimplementing a good chunk of nodepool inside zuul  14:58
@frickler:freitrix.de: hmm. can one combine nodes from different providers in one nodeset? like if I have three fake static nodes representing my cloud resources, I would want a nodeset that combines one of those with one or more "normal" e.g. openstack nodes  15:02
@fungicide:matrix.org: frickler: i believe that's one of the use cases for the new metastatic driver: https://zuul-ci.org/docs/nodepool/latest/metastatic.html  15:09
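The linked metastatic driver is configured roughly like this (provider and label names here are made up). It requests a backing node under another label from whichever provider can supply it, then hands out a configurable number of smaller allocations on that node, which matches the "slots" behavior described below.

```yaml
# Hypothetical nodepool.yaml fragment for the metastatic driver: each
# backing "large-node" is divided into up to four "small-node" slots;
# idle backing nodes are released after the grace period expires.
providers:
  - name: meta-provider
    driver: metastatic
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: small-node
            backing-label: large-node
            max-parallel-jobs: 4
            grace-time: 600
```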
@frickler:freitrix.de: I don't understand how that driver works. how is "divide that node up into smaller nodes" implemented?  15:19
@clarkb:matrix.org: it keeps track of how many slots (configurable) are available to run on each actual node. And if jobs don't run on a node for long enough then the node is deleted entirely  15:20
@frickler:freitrix.de: so that division is just virtual and multiple jobs simply run in parallel on that master node?  15:24
@clarkb:matrix.org: yes, I think it is up to the jobs to ensure that they don't interfere with each other  15:25
@frickler:freitrix.de: o.k., so then that would not help the jobs to know which one should use which cloud config, either  15:26
@fungicide:matrix.org: right, though i think it would allow you to build a meta-static node consisting of nodes satisfied from different providers  15:29
@fungicide:matrix.org: at least i remember it coming up as one use of metastatic, though reading over the driver documentation it's not immediately apparent to me how you'd define metastatic nodes consisting of a combination of nodes from different providers/drivers  15:35
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857  15:37
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857  15:56
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files https://review.opendev.org/c/zuul/zuul/+/834857  15:57
@pedromoritz:matrix.org: Hi! I'm using Zuul 5.0.0 with the Kubernetes driver. Does anyone here have an image definition for ubuntu-bionic? I would like a properly crafted Dockerfile to run jobs that use the ubuntu-bionic label.  20:57
@clarkb:matrix.org: Pedro Moritz de Carvalho Neto: I think pods are connected to using the kubectl connection type which means you should be able to just use the ubuntu:bionic image on dockerhub? You might want to cache your repos in there too to speed things up but that is an optimization  21:06
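Wiring the stock image into the Kubernetes driver might look like this Nodepool config sketch (provider and context names are made up), assuming a pod-type label as suggested above:

```yaml
# Hypothetical nodepool.yaml fragment for the kubernetes driver: an
# "ubuntu-bionic" label backed by the stock ubuntu:18.04 (bionic) image
# from Docker Hub, launched as a pod in the configured cluster context.
labels:
  - name: ubuntu-bionic

providers:
  - name: kube-cluster
    driver: kubernetes
    context: my-kube-context
    pools:
      - name: main
        labels:
          - name: ubuntu-bionic
            type: pod
            image: ubuntu:18.04
```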
@pedromoritz:matrix.org: Great! I'll try it and get back with the outcomes.  21:09

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!