@clarkb:matrix.org | I'm ok with dropping 3.6 since it seems like containers are a common deployment system. Bionic and CentOS 7/8 would be affected if using distro python | 00:05 |
---|---|---|
@tristanc_:matrix.org | +1 to drop 3.6 | 00:10 |
-@gerrit:opendev.org- Zuul merged on behalf of Tobias Henkel: [zuul/zuul] 772695: Perform per tenant locking in getStatus https://review.opendev.org/c/zuul/zuul/+/772695 | 01:00 | |
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 831430: Dequeue items that can no longer merge https://review.opendev.org/c/zuul/zuul/+/831430 | 01:00 | |
@tobias.henkel:matrix.org | ++ for dropping | 05:31 |
@westphahl:matrix.org | +1 for dropping 3.6 | 06:16 |
-@gerrit:opendev.org- Zuul merged on behalf of James E. Blair https://matrix.to/#/@jim:acmegating.com: [zuul/zuul] 831873: Use JobData for Build result data https://review.opendev.org/c/zuul/zuul/+/831873 | 06:47 | |
-@gerrit:opendev.org- Zuul merged on behalf of Benjamin Schanzel: [zuul/zuul] 829867: Report gross/total tenant resource usage stats https://review.opendev.org/c/zuul/zuul/+/829867 | 09:15 | |
@frickler:freitrix.de | o.k., my scenario is that I have e.g. 3 external resources available for testing and a job needs one of them, how would I implement that? | 10:36 |
I can use a semaphore with max=3, but how would the job know which of the resources it is to use? | ||
if I define three different semaphores, how can I tell zuul that the job needs any one of them and not all three? and how would the job know which one it received? | ||
is that a feature that might be worth getting added or is it already there and I can't find it? | ||
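For illustration, a minimal sketch of the semaphore approach frickler describes, using Zuul's counting-semaphore syntax (all names here are hypothetical). It also shows where the approach falls short: the semaphore limits the job to three concurrent builds, but carries no payload telling a build which of the three resources it holds.

```yaml
# Hypothetical sketch: a counting semaphore caps concurrency at three
# builds, but the build is never told which resource it was granted.
- semaphore:
    name: external-resource
    max: 3

- job:
    name: test-with-external-resource
    semaphore: external-resource
```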
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857 | 11:49 | |
@fungicide:matrix.org | > <@frickler:freitrix.de> o.k., my scenario is that I have e.g. 3 external resources available for testing and a job needs one of them, how would I implement that? | 12:10 |
> I can use a semaphore with max=3, but how would the job know which of the resources it is to use? | ||
> if I define three different semaphores, how can I tell zuul that the job needs any one of them and not all three? and how would the job know which one it received? | ||
> is that a feature that might be worth getting added or is it already there and I can't find it? | ||
to clarify, you have a pool of resources which can be used by up to three jobs at a time? if so, that might be something nodepool could be made to manage, with a bit of coding. have jobs request whatever the resource is and then nodepool assigns and tracks them as they become available | ||
@fungicide:matrix.org | but a semaphore alone won't be able to do traffic control and route a build to one particular available resource out of that pool | 12:11 |
@frickler:freitrix.de | not sure what you mean by "traffic control and route". in my special case, the resource would be a preconfigured openstack tenant, but I think the scenario also would apply to things like storage systems that a Cinder job might want to test against. the important limitation is that there needs to be a 1:1 relationship between one resource from the pool and one specific job. | 12:56 |
my job would have a clouds.yaml with three clouds defined and just need to be told whether to use cloud_a, cloud_b or cloud_c. | ||
my longer term goal would indeed be to have nodepool create those tenants directly and then provide a nodeset built with some specific parameters within that tenant, but I was hoping that using semaphores would allow for some easy intermediate solution | ||
currently I only see the option to have just one cloud definition and serialize the jobs using it | ||
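A sketch of the clouds.yaml frickler describes (endpoints and credentials are made up). The missing piece is some per-build input, such as a job variable, naming which of the three clouds that particular build may use:

```yaml
# Hypothetical clouds.yaml with three preconfigured tenants; the job
# still needs to be told whether to use cloud_a, cloud_b, or cloud_c.
clouds:
  cloud_a:
    auth:
      auth_url: https://cloud.example.com
      project_name: tenant-a
      username: zuul
      password: secret
  cloud_b:
    auth:
      auth_url: https://cloud.example.com
      project_name: tenant-b
      username: zuul
      password: secret
  cloud_c:
    auth:
      auth_url: https://cloud.example.com
      project_name: tenant-c
      username: zuul
      password: secret
```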
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857 | 13:46 | |
@iselor:matrix.org | Hi, | 14:14 |
I have an issue with the "promote" pipeline, which should run on the ref-updated gerrit event. After updating from 4.3 to 5.0 I have noticed that it doesn't start for some refs, and I see errors in the logs: | ||
zuul.web.ZuulWeb: Exception loading ZKObject <zuul.model.PipelineSummary object at 0x7f60af086670> at /zuul/tenant/myproject/pipeline/promote/status | ||
zuul.Scheduler Exception loading ZKObject <zuul.model.PipelineChangeList object at 0x7fe628bb7070> at /zuul/myproject/omni/pipeline/promote/change_list | ||
Do you have any hints what could be wrong? | ||
@jim:acmegating.com | Jakub P.: possibly a bug fixed in 5.1.0; shut everything down, run 'zuul delete-state' and start up on 5.1.0 | 14:19 |
@iselor:matrix.org | thanks | 14:19 |
@fajfer:matrix.org | lmao, that was a quick one - to anyone struggling with this: when the promote is triggered via the CLI it works alright | 14:30 |
@fungicide:matrix.org | > <@frickler:freitrix.de> not sure what you mean by "traffic control and route". in my special case, the resource would be a preconfigured openstack tenant, but I think the scenario also would apply to things like storage systems that a Cinder job might want to test against. the important limitation is that there needs to be a 1:1 relationship between one resource from the pool and one specific job. | 14:47 |
> my job would have a clouds.yaml with three clouds defined and just need to be told whether to use cloud_a, cloud_b or cloud_c. | ||
> my longer term goal would indeed be to have nodepool create those tenants directly and then provide a nodeset built with some specific parameters within that tenant, but I was hoping that using semaphores would allow for some easy intermediate solution | ||
> currently I only see the option to have just one cloud definition and serialize the jobs using it | ||
traffic control being preventing more than one build from accessing the resource at a given time, and routing being informing a build which resource from the pool was assigned for its exclusive use | ||
@fungicide:matrix.org | those are things which nodepool is conceptually designed to do, semaphores aren't really (at least not the way i understand them) | 14:48 |
@fungicide:matrix.org | though a semaphore combined with some external coordination service might approximate it | 14:53 |
@jim:acmegating.com | you could consider the static nodepool driver to start | 14:54 |
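One way to read that suggestion, sketched with the static driver's documented syntax (hostnames and labels are hypothetical): register one placeholder node per tenant under a shared label, so that nodepool does both the traffic control (one build per node at a time) and the routing (the identity of the assigned node tells the build which cloud config to use).

```yaml
# Hypothetical nodepool static-driver config: three placeholder nodes,
# one per preconfigured tenant, all offered under the same label. The
# node a build receives identifies which cloud it may use.
providers:
  - name: cloud-tenants
    driver: static
    pools:
      - name: main
        nodes:
          - name: resource-a.example.com
            labels:
              - cloud-resource
            username: zuul
          - name: resource-b.example.com
            labels:
              - cloud-resource
            username: zuul
          - name: resource-c.example.com
            labels:
              - cloud-resource
            username: zuul
```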
@frickler:freitrix.de | with those definitions, I'd say that "traffic control" is exactly what a semaphore does. it would just be the "routing" part that is missing. I fail to see where nodepool is designed to deal with resources that are not (composed of) nodes | 14:54 |
@fungicide:matrix.org | nodepool is designed to deal with arbitrary resources, it just happens to have implementation for nodes at present | 14:55 |
@fungicide:matrix.org | but with development could be extended to other sorts of resources | 14:55 |
@fungicide:matrix.org | semaphores don't seem like they're designed to coordinate specific resources selected from a larger pool | 14:56 |
@fungicide:matrix.org | they're only designed to limit concurrency of builds | 14:56 |
@fungicide:matrix.org | i think if we tried to extend the semaphore concept to coordinate locking resources from defined pools, we would in essence be reimplementing a good chunk of nodepool inside zuul | 14:58 |
@frickler:freitrix.de | hmm. can one combine nodes from different providers in one nodeset? like if I have three fake static nodes representing my cloud resources, I would want a nodeset that combines one of those with one or more "normal" e.g. openstack nodes | 15:02 |
@fungicide:matrix.org | frickler: i believe that's one of the use cases for the new metastatic driver: https://zuul-ci.org/docs/nodepool/latest/metastatic.html | 15:09 |
@frickler:freitrix.de | I don't understand how that driver works. how is "divide that node up into smaller nodes" implemented? | 15:19 |
@clarkb:matrix.org | it keeps track of how many slots (configurable) are available to run on each actual node. And if jobs don't run on a node for long enough then the node is deleted entirely | 15:20 |
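A sketch of what that looks like in the metastatic driver's configuration, based on its documented options (the provider and label names are made up): each backing node is divided into a fixed number of slots, and an idle backing node is deleted after the grace period expires.

```yaml
# Hypothetical metastatic config: up to 4 builds share one backing node
# requested via the "large-node" label from another provider; a backing
# node left idle for grace-time seconds is deleted.
providers:
  - name: meta
    driver: metastatic
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: small-node
            backing-label: large-node
            max-parallel-jobs: 4
            grace-time: 600
```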
@frickler:freitrix.de | so that division is just virtual and multiple jobs simply run in parallel on that master node? | 15:24 |
@clarkb:matrix.org | yes, and I think it is up to the jobs to ensure they don't interfere with each other | 15:25 |
@frickler:freitrix.de | o.k., so then that would not help the jobs to know which one should use which cloud config, either | 15:26 |
@fungicide:matrix.org | right, though i think it would allow you to build a meta-static node consisting of nodes satisfied from different providers | 15:29 |
@fungicide:matrix.org | at least i remember it coming up as one use of metastatic, though reading over the driver documentation it's not immediately apparent to me how you'd define metastatic nodes consisting of a combination of nodes from different providers/drivers | 15:35 |
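For reference, mixing labels in a single Zuul nodeset is straightforward to express; the open question in the discussion above is whether nodepool will satisfy one request from multiple providers, since a node request is normally filled by a single provider. A hypothetical nodeset along the lines frickler asked about:

```yaml
# Hypothetical nodeset combining a placeholder "cloud resource" node
# with an ordinary test node. Whether a single provider (or the
# metastatic driver) can satisfy both labels is the unresolved question.
- nodeset:
    name: cloud-plus-worker
    nodes:
      - name: cloud-resource
        label: cloud-resource
      - name: worker
        label: ubuntu-focal
```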
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857 | 15:37 | |
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files in PR https://review.opendev.org/c/zuul/zuul/+/834857 | 15:56 | |
-@gerrit:opendev.org- Dong Zhang proposed: [zuul/zuul] 834857: Fix bug in getting changed files https://review.opendev.org/c/zuul/zuul/+/834857 | 15:57 | |
@pedromoritz:matrix.org | Hi! I'm using Zuul 5.0.0 with the Kubernetes driver. Does anyone here have an image definition for ubuntu-bionic? I would like a properly crafted Dockerfile to run jobs that use the ubuntu-bionic label. | 20:57 |
@clarkb:matrix.org | Pedro Moritz de Carvalho Neto: I think pods are connected to using the kubectl connection type, which means you should be able to just use the ubuntu:bionic image from Docker Hub. You might want to cache your repos in there too to speed things up, but that is an optimization | 21:06 |
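A sketch of what clarkb suggests, using the nodepool kubernetes driver's documented label syntax (the provider and context names are assumptions): a pod-type label backed by the stock ubuntu:bionic image from Docker Hub, with no custom Dockerfile needed to start.

```yaml
# Hypothetical nodepool kubernetes-driver config: jobs requesting the
# ubuntu-bionic label get a pod running the stock ubuntu:bionic image.
providers:
  - name: kube-cluster
    driver: kubernetes
    context: my-cluster-context
    pools:
      - name: main
        labels:
          - name: ubuntu-bionic
            type: pod
            image: ubuntu:bionic
```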
@pedromoritz:matrix.org | Great! I'll try it and get back with the outcomes. | 21:09 |