openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Add alembic.ini https://review.openstack.org/532260 | 00:02 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Fix error handling for pidfile https://review.openstack.org/532206 | 00:02 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Move CRD tests to test_gerrit_legacy_crd https://review.openstack.org/531886 | 00:04 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add skipped CRD tests https://review.openstack.org/531887 | 00:04 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: WIP: Support cross-source dependencies https://review.openstack.org/530806 | 00:04 |
*** JasonCL has quit IRC | 00:14 | |
*** JasonCL has joined #zuul | 00:15 | |
*** rlandy has quit IRC | 00:24 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: builder: do not cleanup image for driver not managing image https://review.openstack.org/516920 | 00:38 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a static driver for Nodepool https://review.openstack.org/468624 | 00:40 |
*** corvus has quit IRC | 00:44 | |
*** JasonCL has quit IRC | 01:43 | |
*** JasonCL has joined #zuul | 01:48 | |
*** harlowja has quit IRC | 02:24 | |
*** mordred has quit IRC | 02:42 | |
*** mordred has joined #zuul | 02:43 | |
*** bhavik1 has joined #zuul | 05:06 | |
*** bhavik1 has quit IRC | 05:06 | |
*** bhavik1 has joined #zuul | 05:06 | |
*** corvus has joined #zuul | 05:21 | |
*** harlowja has joined #zuul | 05:24 | |
*** bhavik1 has quit IRC | 05:24 | |
*** bhavik1 has joined #zuul | 05:35 | |
*** bhavik1 has quit IRC | 05:42 | |
tobiash | SpamapS: I also had trouble with the hangs and spent half a day debugging a similar issue | 06:15
tobiash | when I run zuul locally without any logging config I get the alembic errors, but not in my environment where I really run zuul | 06:16 |
tobiash | so this might have something to do with the log config | 06:16 |
SpamapS | tobiash: yeah that was really weird. | 06:17
SpamapS | I'm just now adding some jobs to report to SQL and wondering about cleanup tasks. | 06:17 |
SpamapS | Seems like that DB could get big on a busy system. | 06:17 |
*** Guest76 is now known as jlk | 07:02 | |
*** threestrands has quit IRC | 07:36 | |
*** harlowja has quit IRC | 07:49 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Do pep8 housekeeping according to zuul rules https://review.openstack.org/522945 | 07:52 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Use same flake8 config as in zuul https://review.openstack.org/509715 | 07:52 |
SpamapS | tobiash: with sql reporter, have you seen where post jobs can't be reported? | 07:55 |
SpamapS | for some reason mine aren't showing up in builds | 07:55 |
tristanC | SpamapS: it should, is the sqlreporter added to your post pipeline? | 07:55 |
SpamapS | tristanC: I think so. :) | 07:55 |
SpamapS | tristanC: you win the Captain Obvious award tonight. It got left off. | 07:56 |
tristanC | yay :-) | 07:57 |
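[For anyone hitting the same symptom: a quick way to confirm which pipelines the sql reporter is actually attached to is to group the builds table by pipeline; a pipeline that never reports simply shows no rows. A minimal sketch, assuming the v3-era schema with zuul_build and zuul_buildset tables and a placeholder DSN (names from memory; check your own migrations):]

```python
import sqlalchemy as sa

# Placeholder DSN; point it at the database the sql reporter writes to.
engine = sa.create_engine("mysql+pymysql://zuul:secret@localhost/zuul")

with engine.connect() as conn:
    rows = conn.execute(sa.text(
        "SELECT bs.pipeline, COUNT(*) "
        "FROM zuul_build b "
        "JOIN zuul_buildset bs ON b.buildset_id = bs.id "
        "GROUP BY bs.pipeline"
    ))
    for pipeline, count in rows:
        # A pipeline missing from this output has never reported a build.
        print(pipeline, count)
```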
SpamapS | so do we need to add a GC thread for sql reporter connections? | 08:03 |
SpamapS | something that will just wake up every few minutes and delete old builds? | 08:03 |
tristanC | SpamapS: every few minutes sounds a bit overkill, actually why would you even delete entries from the database? | 08:09
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: handler: fix support for handler without launch_manager https://review.openstack.org/524773 | 08:10 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Add a plugin interface for drivers https://review.openstack.org/524620 | 08:10 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: builder: do not cleanup image for driver not managing image https://review.openstack.org/516920 | 08:10 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a static driver for Nodepool https://review.openstack.org/468624 | 08:10 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Refactor run_handler to be generic https://review.openstack.org/526325 | 08:10 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement an OpenContainer driver https://review.openstack.org/468753 | 08:10 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a Kubernetes driver https://review.openstack.org/521356 | 08:10 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Refactor NodeLauncher to be generic https://review.openstack.org/532450 | 08:10 |
SpamapS | tristanC: a busy system could end up with millions of records. I doubt they'll be relevant after a year. | 08:11 |
SpamapS | tristanC: and every few minutes is actually less painful than batching them up | 08:11 |
SpamapS | you want to pick off a few and go back to sleep | 08:12 |
SpamapS | databases don't like big deletes. | 08:12 |
tristanC | but isn't mysql able to handle hundreds of millions of records easily? | 08:16 |
SpamapS | Sure | 08:17 |
SpamapS | eventually you'll pay the price though | 08:17 |
SpamapS | indexes overflow RAM | 08:17 |
SpamapS | bad queries walk all the rows | 08:17 |
SpamapS | if the data isn't being used, archive it, get it out of the system. | 08:18 |
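[A minimal sketch of the cleanup thread being discussed: wake up on an interval, delete a small batch of expired rows, and go back to sleep, so the database never sees one giant DELETE. Table and column names here are illustrative, not Zuul's actual schema, and DELETE ... LIMIT is a MySQL-ism:]

```python
import datetime
import threading

import sqlalchemy as sa


class BuildPruner(threading.Thread):
    """Periodically delete a small batch of expired build rows."""

    def __init__(self, dburi, max_age_days=365, batch_size=500, interval=300):
        super().__init__(daemon=True)
        self.engine = sa.create_engine(dburi)
        self.max_age = datetime.timedelta(days=max_age_days)
        self.batch_size = batch_size
        self.interval = interval
        self._stop = threading.Event()

    def run(self):
        # wait() returns True once stop() is called, ending the loop.
        while not self._stop.wait(self.interval):
            cutoff = datetime.datetime.utcnow() - self.max_age
            with self.engine.begin() as conn:
                # Pick off a few rows per pass instead of one huge delete
                # that locks the table and bloats the undo log.
                conn.execute(
                    sa.text("DELETE FROM zuul_build "
                            "WHERE end_time < :cutoff LIMIT :n"),
                    {"cutoff": cutoff, "n": self.batch_size},
                )

    def stop(self):
        self._stop.set()
```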
*** hashar has joined #zuul | 08:19 | |
tobiash | SpamapS: there is still something broken with sql :( | 08:20
tobiash | seems like I had a small bug in the migration script which rendered it effectively a noop | 08:21
tobiash | after fixing the migration script the test hangs when dropping the db because of waiting for a stale metadata lock | 08:21 |
SpamapS | It's working for me | 08:26 |
SpamapS | post and all | 08:26 |
tristanC | SpamapS: then maybe we could add an automatic build archiving thread, not convinced it should be enabled by default though | 08:39 |
tobiash | SpamapS: with github? | 08:40 |
*** jpena|off is now known as jpena | 08:45 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Really change buildset to string https://review.openstack.org/532459 | 08:58 |
*** dmellado has quit IRC | 10:06 | |
*** dmellado has joined #zuul | 10:07 | |
*** threestrands has joined #zuul | 10:21 | |
*** dmellado has quit IRC | 10:49 | |
*** hashar is now known as hasharAway | 10:51 | |
*** dmellado has joined #zuul | 11:23 | |
*** dmellado has quit IRC | 11:32 | |
*** dmellado has joined #zuul | 11:34 | |
*** jkilpatr has quit IRC | 11:44 | |
*** jkilpatr has joined #zuul | 12:16 | |
*** jpena is now known as jpena|lunch | 12:48 | |
*** dkranz has joined #zuul | 13:24 | |
*** rlandy has joined #zuul | 13:34 | |
*** jpena|lunch is now known as jpena | 13:51 | |
*** dkranz has quit IRC | 14:40 | |
electrofelix | does zuulv3 support projects turning pipeline notifications on/off for individual projects without allowing them to alter the rest of the pipeline in any way? for example, selectively enabling commenting by a post pipeline but without granting them access to a trusted config repo? | 14:52
*** pabelanger has quit IRC | 14:56 | |
*** pabelanger has joined #zuul | 14:56 | |
pabelanger | electrofelix: I don't believe so, AFAIK pipeline changes need to be done in a project-config repo | 14:58 |
tobiash | electrofelix: pabelanger is right, pipelines are defined only in trusted config repos | 14:59 |
tobiash | electrofelix: but you can have as many config repos as you want | 15:00 |
*** openstackgerrit has quit IRC | 15:01 | |
electrofelix | some teams would like to run other jobs and get some notification without blocking the change from landing, we're suggesting a post pipeline job at the moment | 15:02 |
electrofelix | some teams => some local teams | 15:02 |
electrofelix | rather than building another notification mechanism, I was hoping we could selectively allow teams to add a job to the post pipeline and decide whether to have zuul comment, without them needing to build their own pipeline to achieve this | 15:02
tobiash | electrofelix: are you using multiple tenants for the teams? | 15:03 |
tobiash | I'm separating bigger teams by tenants where each tenant has its own additional config repo where they can do such stuff | 15:03 |
electrofelix | we've not migrated just yet, and the initial plan wasn't to use separate tenants | 15:03 |
electrofelix | but this might suggest we need to do that, I guess the question then is can we still share some jobs across tenants? | 15:04 |
tobiash | electrofelix: then one way could be to give them an additional config repo and let them define a post-team-x pipeline there | 15:05 |
tobiash | electrofelix: you can share configs but have to pay attention to where you gate changes for them | 15:05
electrofelix | tobiash: you mean they would have to define the gate in their own config repo, but could they use some predefined jobs provided by us as part of that gate? | 15:06 |
tobiash | electrofelix: yes, that's possible but gating of the shared jobs must be done in a single tenant | 15:07 |
tobiash | I also have the base job in a single repo which is shared by all tenants | 15:08 |
electrofelix | ah, we aren't using a shared gate job for them, but rather run an instance of a particular job template against all projects for convenience, so more the second case | 15:09
electrofelix | tobiash: tbh, the post-team pipeline sounds like the best option, thanks | 15:10 |
tobiash | no problem | 15:10 |
electrofelix | I presume the tenant part of the spec for zuul v3 http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#tenants means that common pipelines and job definitions can be shared across tenants and they can then add to the pipelines? can they override them? | 15:22 |
electrofelix | mordred: reading about multi tenants and thinking about https://etherpad.openstack.org/p/zuulv3-jenkins-integration, does it need to consider use of job folders per tenant? | 15:25 |
*** dkranz has joined #zuul | 15:27 | |
tobiash | electrofelix: no, they cannot override them | 15:28 |
corvus | electrofelix: SpamapS has also proposed something similar that wants per-project pipeline configuration; i think after the v3.0 release we'll plan out some way of doing that and hopefully solve both use cases. | 15:35 |
*** jkilpatr has quit IRC | 15:39 | |
*** dkranz has quit IRC | 15:44 | |
electrofelix | tobiash: that's good to know | 15:48 |
electrofelix | corvus SpamapS: has that been captured anywhere or channel history only? | 15:48 |
corvus | electrofelix: in the proposed change | 15:50 |
* corvus looks | 15:50 | |
*** dkranz has joined #zuul | 15:50 | |
corvus | electrofelix: https://review.openstack.org/530521 | 15:50 |
corvus | electrofelix: we haven't talked about it -- only that we know we need to. right now we're focused only on v3.0 release. | 15:51 |
electrofelix | corvus: no worries, I'm trying to build up information that will help our migration and what things to look at helping out when some time is available | 15:52 |
*** jkilpatr has joined #zuul | 15:54 | |
*** dkranz has quit IRC | 15:57 | |
*** jkilpatr has quit IRC | 16:16 | |
mordred | electrofelix: I'm not sure - honestly, it might be nice to chat at some point about the zuulv3-jenkins-integration since you're much more up to date on more recent jenkins and have the use-case yourself | 16:25 |
SpamapS | I wonder if the simplest thing would be to allow variable expansion in pipeline configs like we do for status-url and such. | 16:27 |
*** jkilpatr has joined #zuul | 16:28 | |
SpamapS | electrofelix: anyway I want what you just said too. Some of my users are already like "hey this .. comments a lot". | 16:29 |
SpamapS | even though really it's just like "well stop writing broken patches" ;-) | 16:29 |
electrofelix | SpamapS: blame the users, it's always their fault | 16:29 |
SpamapS | precisely | 16:30 |
SpamapS | customer shmustomer | 16:31 |
electrofelix | mordred: I was thinking that since zuul can namespace jobs to a tenant, use of the same jobs in jenkins may need to be namespaced as well using folders; otherwise how do you allow two tenants to have the same job name but defined differently? even if sharing a definition, having it used by another project may cause some surprise | 16:32
mordred | electrofelix: oh - so, I was kind of thinking that the zuul job corresponding to a jenkins job would be a simple job that takes the jenkins job name as a parameter - that might get unwieldy with a lot of jobs though ... | 16:34
mordred | electrofelix: are you coming to the Dublin PTG? | 16:40 |
electrofelix | mordred: will try to make it, after all it's only 3 hours away ;-) | 16:46
*** openstackgerrit has joined #zuul | 16:51 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove need to start executor as root https://review.openstack.org/532575 | 16:51 |
*** JasonCL has quit IRC | 16:52 | |
mordred | electrofelix: :) | 16:53 |
*** clarkb has quit IRC | 16:53 | |
*** JasonCL has joined #zuul | 16:54 | |
*** clarkb1 has joined #zuul | 16:54 | |
*** clarkb1 is now known as clarkb | 16:58 | |
*** harlowja has joined #zuul | 17:16 | |
*** dkranz has joined #zuul | 17:30 | |
rcarrillocruz | heya, are we good to merge https://review.openstack.org/#/c/530265/ | 17:53 |
rcarrillocruz | corvus: ^ | 17:53 |
corvus | rcarrillocruz: yep +3 | 17:55 |
rcarrillocruz | sweet, thx! | 17:55 |
*** jpena is now known as jpena|off | 18:14 | |
*** jkilpatr has quit IRC | 18:19 | |
*** jkilpatr has joined #zuul | 18:20 | |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Add specific setup inventory https://review.openstack.org/530265 | 18:20 |
*** muhd has joined #zuul | 18:21 | |
muhd | hi | 18:21 |
*** openstack has joined #zuul | 18:33 | |
*** ChanServ sets mode: +o openstack | 18:33 | |
*** harlowja has quit IRC | 18:37 | |
*** jkilpatr has joined #zuul | 18:40 | |
leifmadsen_ | greetings and salutations | 18:45 |
*** sshnaidm is now known as sshnaidm|afk | 18:48 | |
*** xinliang has quit IRC | 18:53 | |
*** xinliang has joined #zuul | 18:53 | |
*** xinliang has quit IRC | 18:53 | |
*** xinliang has joined #zuul | 18:54 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove need to start executor as root https://review.openstack.org/532575 | 18:57 |
*** harlowja has joined #zuul | 19:13 | |
*** harlowja_ has joined #zuul | 19:16 | |
*** harlowja has quit IRC | 19:19 | |
SpamapS | hm | 19:38 |
*** rlandy has quit IRC | 19:43 | |
*** rlandy has joined #zuul | 19:48 | |
SpamapS | 2018-01-10 19:50:17.501903 | [cap] Waiting on logger | 19:51 |
SpamapS | What's this? Hrm. | 19:51 |
*** JasonCL has quit IRC | 19:54 | |
*** JasonCL has joined #zuul | 19:55 | |
clarkb | SpamapS: did the console log daemon die? | 19:56 |
SpamapS | no I forgot a security group rule | 19:57 |
SpamapS | to open everything ;) | 19:57 |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Really change patchset column to string https://review.openstack.org/532459 | 20:18 |
SpamapS | clarkb: have you done more than 2 node multi-node jobs with zuul? | 20:21 |
SpamapS | v3 I mean | 20:22 |
clarkb | yes we have 3 and 4 node jobs in tripleo and I think 3 node dvr ha for neutron | 20:22 |
SpamapS | Ok, I'm having weirdness happen with a 5 node job. | 20:23 |
pabelanger | SpamapS: I've done 5 myself, worked as I expected | 20:23 |
SpamapS | I have 2 providers in nodepool.. and it seems like it can't get enough nodes from one, even though the other has plenty of space. | 20:23 |
SpamapS | I had to delete a few from the smaller provider and then it was able to grab enough | 20:24 |
pabelanger | was is the max-servers for smaller provider? | 20:25 |
SpamapS | Predicted remaining pool quota: {'compute': {'cores': inf, 'instances': 0, 'ram': inf}} | 20:25
SpamapS | so that provider was _out_ of nodes | 20:25 |
pabelanger | oh, right, new quota stuff | 20:25 |
pabelanger | keep forgetting about that | 20:25 |
SpamapS | But it looked like it grabbed 3, and then just kept retrying | 20:25 |
SpamapS | instead of letting the other worker handle it | 20:25
SpamapS | 2018-01-10 13:22:01,425 DEBUG nodepool.driver.openstack.OpenStackNodeRequestHandler[zuul.cloud.phx3.gdg-4438-PoolWorker.a-main]: Predicted remaining pool quota: {'compute': {'cores': inf, 'instances': 45, 'ram': inf}} | 20:26 |
pabelanger | did instances fail to boot? | 20:26 |
pabelanger | the othe r2 | 20:26 |
SpamapS | Nope | 20:26 |
SpamapS | everything booted fine | 20:26 |
SpamapS | Might be a logic problem. | 20:27 |
pabelanger | will have to defer to Shrews | 20:27 |
SpamapS | Seems like the p worker should have basically said "I can't handle this request" when it couldn't get all 5 nodes. | 20:27 |
SpamapS | (my clouds are named 'a', and 'p' because we have instance name length limits) | 20:27 |
SpamapS | A = iAd and P = PHX | 20:28 |
pabelanger | no, I think p will just wait until quota is freed up | 20:28
pabelanger | IIRC | 20:28 |
SpamapS | pabelanger: but p's quota would never get freed up | 20:28 |
SpamapS | It was held by other labels. | 20:28 |
SpamapS | so you'd have to wait for another test entirely to run and complete. | 20:28 |
pabelanger | yes | 20:29 |
pabelanger | seen that in our nodepool | 20:29 |
Shrews | if a provider CAN handle a request, but doesn't have enough nodes available, it will pause until resources are free | 20:29 |
pabelanger | when we are at max quota | 20:29 |
SpamapS | Shrews: yeah thats breaking me, a lot. | 20:29 |
Shrews | there's nothing to say "go try another provider" | 20:29 |
Shrews | SpamapS: how does it break you, other than having to wait? | 20:30 |
SpamapS | Shrews: I don't run those other labels very much. | 20:30 |
pabelanger | SpamapS: do you have min-ready more than 1 for your labels, or just a lot of labels? | 20:30
SpamapS | So I could end up waiting days. | 20:30 |
pabelanger | right, I can see that, waiting days | 20:30 |
pabelanger | I do think tobiash added something to rotate out those labels however | 20:30 |
pabelanger | max-lifetime? | 20:31 |
tobiash | pabelanger: you mean max-ready-age? | 20:31 |
pabelanger | yah | 20:31 |
pabelanger | max-ready-age: 3600 | 20:31 |
SpamapS | ehhh | 20:31 |
SpamapS | no | 20:31 |
pabelanger | https://docs.openstack.org/infra/nodepool/feature/zuulv3/configuration.html#labels | 20:31 |
SpamapS | I don't think it should take a request if it's already at capacity. | 20:32 |
pabelanger | SpamapS: what if you set min-ready: 0 for the label too? It does mean waiting longer for a node to launch | 20:32
SpamapS | pabelanger: yeah, these are supposed to be quick tests. | 20:32 |
SpamapS | It's a label for tiny ubuntu nodes to run linters and stuff. | 20:32 |
tobiash | but using max-ready-age for solving this problem is a bad compromise | 20:33 |
SpamapS | So I keep a bunch just sitting there and then those tests finish fast. | 20:33 |
pabelanger | tobiash: yah, would be a workaround at best for sure | 20:33 |
tobiash | I think we might need to tune the algorithm in nodepool | 20:33 |
SpamapS | Agreed, I think the workers should stop taking reqs if they get over their quota. | 20:33 |
SpamapS | Because really I want _most_ things to end up on my 'a' cloud. | 20:34 |
SpamapS | In fact I'm going to scale back 'p' even more. | 20:34 |
Shrews | SpamapS: reading scrollback a bit more... "it can't get enough nodes from one, even though the other has plenty of space" | 20:34
pabelanger | we kinda have this issue when we get to max capacity in openstack nodepool, some requests take longer to fulfil, so I've seen what SpamapS is reporting before | 20:34
Shrews | SpamapS: it's on purpose that we do not get nodes from across multiple providers | 20:34 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Move CRD tests to test_gerrit_legacy_crd https://review.openstack.org/531886 | 20:34 |
Shrews | SpamapS: for a single job | 20:35 |
SpamapS | Shrews: yeah no I don't want cross-provider jobs | 20:35 |
tobiash | I think I had a chat with corvus about that topic back in june/july | 20:35 |
Shrews | SpamapS: then i think i've missed something | 20:35 |
tobiash | but don't find it anymore | 20:35 |
SpamapS | I want nodepool to not take reqs when it can't service them. Or maybe it has to introspect and release them. | 20:36 |
Shrews | but discussion about the node selection algorithm does keep coming up, as tobiash says. i usually defer to corvus since it's the algorithm from his spec. perhaps we should reconsider the algorithm | 20:37 |
SpamapS | I think size-of-req has to come into play when taking one. | 20:38 |
pabelanger | SpamapS: so, is (min-ready * all labels) + 5 nodes > cloud capacity? | 20:38 |
SpamapS | If I had a max-servers of 4, would it just be forever stuck? | 20:38 |
Shrews | SpamapS: but your scenario (releasing nodes it *could* get and give up on the request) could lead to a starvation of the job in a busy system | 20:39 |
Shrews | SpamapS: no, it would be rejected | 20:39 |
SpamapS | pabelanger: min-ready of all labels is 14. max-servers is 14. That was by design thinking it might avoid this fate. I was hoping some of the min-ready's would end up on 'a'. | 20:39 |
SpamapS | (and max-servers on 'a' is 50) | 20:40 |
Shrews | SpamapS: at least, it should... i'm failing to find that code now | 20:41 |
SpamapS | Shrews: the point about starvation is understood and complicates my thinking. | 20:41 |
Shrews | http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/driver/openstack/handler.py?h=feature/zuulv3#n538 | 20:41 |
Shrews | tobiash: does that code ^^^ take into account max-servers? | 20:42 |
tobiash | Shrews: it should | 20:43 |
Shrews | i see the variable referenced, so assuming so | 20:43 |
SpamapS | Shrews: so that's weird, why didn't mine decline? | 20:43 |
Shrews | SpamapS: bug? tobiash? | 20:43 |
tobiash | SpamapS: it declines if request > quota available for the whole provider | 20:44
SpamapS | I'm going to watch it for a while today. I had planned on reducing the max-servers for 'p' below the total min-ready once I had verified 'a' works. | 20:44 |
tobiash | as the provider quota is the theoretical max | 20:44 |
tobiash | it does *not* decline if request < provider quota but > currently available quota | 20:45 |
SpamapS | Oh right | 20:45 |
SpamapS | ok so that would cover if I had a max-servers of 4. | 20:45 |
tobiash | the algorithm currently takes the request and waits for nodes freed up | 20:45 |
tobiash | so that's how the current algorithm is designed | 20:46
tobiash | which is a problem if you have multiple small providers | 20:46 |
tobiash | I had this problem too but now I switched to a single large provider | 20:47 |
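[Pulling tobiash's description together, the handler's decision looks roughly like the sketch below. The names are made up for illustration; this is not Nodepool's actual code:]

```python
def handle_request(request, provider):
    """Accept, pause on, or decline a node request (illustrative only)."""
    needed = len(request.node_types)

    if needed > provider.max_servers:
        # Larger than the provider's theoretical maximum: it can never be
        # satisfied here, so decline and let another provider try.
        request.decline(provider.name)
    elif needed > provider.available_quota():
        # Fits in theory but not right now: the handler holds the request
        # and pauses until running nodes free up quota. This is the branch
        # that can starve a five-node job behind long-lived min-ready nodes.
        provider.pause_until_quota_available(request)
    else:
        provider.launch_nodes(request)
```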
Shrews | SpamapS: as to your hopes about the min-ready's being equally distributed, i could not think of a good way to do that. Current design just submits a request for that node type. Totally random as to which provider satisfies it. | 20:49 |
tobiash | decline in nodepool term means 'I will never ever be able to fulfill this request' | 20:49 |
tobiash | Shrews: maybe a random delay of 0-1s before locking the request? | 20:50
tobiash | Shrews: am I right that nodepool is notified by zookeeper about a new request? | 20:51 |
Shrews | more randomness to achieve equal distribution? O.o :) | 20:51 |
SpamapS | yes | 20:51 |
SpamapS | nodepool watches the req node | 20:51 |
tobiash | in this case maybe the notification order is deterministic and the same provider is first every time | 20:51 |
SpamapS | and when new req's are added the watch fires | 20:51 |
SpamapS | and yeah it may be that it's always first-watcher | 20:51 |
Shrews | tobiash: nodepool isn't "notified" so much as it checks for unhandled node requests in zk on a periodic basis | 20:52
SpamapS | Shrews: wait what? | 20:52 |
SpamapS | we don't use a watch? | 20:52 |
SpamapS | that's.. kind of the point of zk... | 20:52 |
SpamapS | polling is soooo mysql | 20:52 |
Shrews | we do not | 20:52 |
Shrews | SpamapS: our first implementation was "let's try this first since it's so much easier and see how it works" | 20:53 |
Shrews | so far, it has worked great. but I welcome patches to implement watches ;) | 20:53 |
SpamapS | seems legit | 20:53 |
Shrews | watches come with complications | 20:53 |
SpamapS | so as far as the algorithm goes, we need a retry state: not 'decline, it is dead to me', just 'not right now' | 20:53
Shrews | like, once a watch is triggered, you have to re-establish it again | 20:54 |
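[For comparison, a watch-based listener with kazoo might look like the sketch below; kazoo's ChildrenWatch re-registers itself after every trigger, which papers over the one-shot-watch complication Shrews mentions. The request path is an assumption, not necessarily Nodepool's actual layout:]

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

REQUEST_ROOT = "/nodepool/requests"  # assumed path, for illustration


@zk.ChildrenWatch(REQUEST_ROOT)
def on_requests_changed(children):
    # Called once at registration and again whenever the child list
    # changes; events can coalesce, so the handler must be idempotent.
    for request_id in sorted(children):
        print("pending request:", request_id)
```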
SpamapS | Yeah I wonder if even just the order of providers ends up mattering. | 20:57 |
SpamapS | Because 'p' is first, and is totally getting all the reqs. | 20:57
Shrews | SpamapS: nope. just 1 thread per provider. nothing special | 20:58 |
*** hasharAway has quit IRC | 21:38 | |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Remove need to start executor as root https://review.openstack.org/532575 | 21:48 |
*** threestrands_ has joined #zuul | 21:52 | |
*** threestrands_ has quit IRC | 21:52 | |
*** threestrands_ has joined #zuul | 21:52 | |
*** threestrands_ has quit IRC | 21:53 | |
*** threestrands_ has joined #zuul | 21:53 | |
*** threestrands_ has quit IRC | 21:53 | |
*** threestrands_ has joined #zuul | 21:53 | |
*** dkranz has quit IRC | 21:54 | |
*** threestrands has quit IRC | 21:54 | |
*** dkranz has joined #zuul | 21:55 | |
*** dkranz has quit IRC | 22:02 | |
*** corvus is now known as jeblair | 22:21 | |
*** jeblair is now known as corvus | 22:22 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Rename _useBuilder method to useBuilder https://review.openstack.org/531883 | 22:24 |
corvus | Shrews: the fingergw did not stop on zuulv3.o.o. possibly the issue is fixed already by later updates, but we should keep an eye on it next time. | 22:28
Shrews | corvus: how did you try to stop it? | 22:28 |
corvus | also, heck, it's probably okay to start and stop it repeatedly in production to test. :) | 22:28 |
corvus | Shrews: service zuul-fingergw stop | 22:28 |
Shrews | corvus: the init script totally doesn't work yet | 22:28 |
corvus | ah ok :) uh, confirmed! :) | 22:28 |
Shrews | corvus: it doesn't use the command socket. pabelanger had a change up to fix that for the other services. he said he'd add fingergw to it | 22:29 |
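[For reference, the command-socket mechanism the init script is meant to use amounts to writing a command string to the service's UNIX socket. A hypothetical sketch; the socket path and command name here are assumptions, not a documented interface:]

```python
import socket

# Hypothetical: ask a service to stop via its command socket.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/lib/zuul/fingergw.socket")  # placeholder path
sock.sendall(b"stop\n")  # assumed command string
sock.close()
```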
Shrews | some nameless coder added a non-working init script, which was approved by some nameless reviewers who didn't review closely enough ;) | 22:30
* Shrews hides in a corner | 22:30 | |
Shrews | in shame | 22:31 |
corvus | it looked like an init script! | 22:31 |
corvus | i did not realize it was actually a cryptominer | 22:31 |
Shrews | it had 'init' in it somewhere. totally looked valid | 22:31 |
*** jkilpatr has quit IRC | 22:32 | |
*** JasonCL has quit IRC | 22:34 | |
pabelanger | Shrews: sorry, I haven't pushed up that patch yet. I will this evening | 22:36 |
Shrews | pabelanger: k. i can do it in the morning if you can't get to it this evening. gotta go do dinner things now | 22:38 |
*** threestrands_ has quit IRC | 22:41 | |
*** threestrands has joined #zuul | 22:55 | |
*** threestrands has quit IRC | 22:55 | |
*** threestrands has joined #zuul | 22:55 | |
*** Guest7 has joined #zuul | 23:11 | |
*** Guest7 has quit IRC | 23:11 | |
*** threestrands has quit IRC | 23:11 | |
*** threestrands has joined #zuul | 23:20 | |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul-jobs master: Adjust check for .stestr directory https://review.openstack.org/532688 | 23:22 |
*** rlandy is now known as rlandy|bbl | 23:37 | |
*** haint has quit IRC | 23:41 |