SpamapS | jeblair: yeah I suspect it may not be talking to that project right just yet. | 00:00 |
jeblair | SpamapS: ah. | 00:00 |
SpamapS | I did just verify zuul's user has write access | 00:00 |
SpamapS | so not 100% sure yet | 00:00 |
jeblair | SpamapS: if there is no config for that project, it won't report | 00:00 |
jeblair | SpamapS: you can get it to do so by adding a project stanza in a config project with "check: {jobs: []}" | 00:01 |
jeblair | SpamapS: then zuul thinks that project is attached to check and should report config syntax errors on the initial in-repo add. | 00:01 |
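(For reference, a minimal sketch of the stanza jeblair describes; the project name is hypothetical and the stanza would live in a config project:)

```yaml
# Hypothetical project name; attaching the repo to check with an empty job
# list makes Zuul report configuration syntax errors when an in-repo
# .zuul.yaml is first added.
- project:
    name: org/some-repo
    check:
      jobs: []
```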
SpamapS | pabelanger: thanks that worked | 00:03 |
SpamapS | https://photos.app.goo.gl/IXr5GfCWrMUWxheG2 | 00:03 |
jlk | SpamapS: I hadn't seen that, but Ansible's ansibot author says he's seen it | 00:03 |
SpamapS | jeblair: it's reporting to the PR now, so maybe the old zuul.yaml had bad syntax too? | 00:03 |
pabelanger | SpamapS: yay | 00:03 |
SpamapS | jlk: my VM is running in the same DC as the github enterprise, so latency is crazy low | 00:03 |
jlk | huh, maybe we need to introduce some delay? | 00:04 |
SpamapS | jlk: or some kind of cache invalidating | 00:04 |
jlk | You'd think the etag would change | 00:04 |
SpamapS | like maybe there's an if-none-match or if-modified-since we can add | 00:04 |
SpamapS | I haven't looked at what we send for etag | 00:05 |
jlk | the etag is supposed to be an indicator that something has changed | 00:05 |
SpamapS | so we're sending if-non-match already | 00:05 |
SpamapS | none | 00:05 |
jlk | if we get the notice before the API has the event, I gotta wonder if that's a bug on the GHE side | 00:05 |
SpamapS | jeblair: ah yeah, it wasn't in any pipelines yet.. so that makes sense. | 00:05 |
SpamapS | jlk: Or maybe we're getting it _while the transaction is still uncommitted_ | 00:06 |
jlk | yeah, that seems pretty poor from GHE | 00:06 |
SpamapS | like, with txn: { send_notifications; commit_stuff } | 00:06 |
SpamapS | instead of getting the hooks after | 00:07 |
jlk | that seems dangerous | 00:07 |
SpamapS | Yeah it doesn't sound right. I'm guessing it has to do with etags. | 00:08 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: WIP: experiment with late-binding inheritance https://review.openstack.org/511352 | 00:13 |
SpamapS | jlk: the rate limiting work you did was all on the app side, yeah? | 00:16 |
SpamapS | yeah | 00:16 |
jlk | yeah. | 00:17 |
jlk | non-app chews through API faster | 00:17 |
jlk | was hoping that graphql would come online soon, and be available to apps | 00:17 |
jlk | still missing ONE API call for fetching thus far, aside from, you know, ALL the calls being available to apps. | 00:18 |
SpamapS | our version of GHE doesn't have apps yet anyway | 00:18 |
jlk | yeah, that's coming in the next release I believe | 00:18 |
SpamapS | That's what I hear | 00:18 |
jlk | oooh! I got a webhook sent through, only my request signature is wrong. Hrm. | 00:28 |
*** artgon has joined #zuul | 00:30 | |
*** artgon has left #zuul | 00:31 | |
jlk | oh sweet hot goodness. Got past that, only to have my client access token not work. Interesting. | 00:31 |
jlk | disconcerting. Github doesn't want to auth me. | 00:51 |
jlk | ah, works as a user, just not an app. I'm not sure I ever had that working. | 00:59 |
ianw | can you use with_items / with_nested on a job definition? | 04:32 |
ianw | https://review.openstack.org/#/c/512450/15/.zuul.d/jobs.yaml was what i was trying | 04:34 |
*** bhavik1 has joined #zuul | 04:42 | |
ianw | no being the answer, as i guess configloader doesn't grok that | 04:44 |
tobiash | ianw: the job definition is before ansible so that doesn't work | 04:58 |
ianw | tobiash: yeah, i thought we might have copied the behavior. might be fun to play with one day | 05:01 |
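(A sketch of the workaround under discussion, with hypothetical job names and variables: the configloader evaluates job definitions before Ansible ever runs, so loops like with_items are unavailable and each variant has to be spelled out as its own job:)

```yaml
# Hypothetical jobs; the configloader does not expand with_items, so every
# variant is written as a separate job definition.
- job:
    name: example-build-fedora
    parent: base
    vars:
      target: fedora-26

- job:
    name: example-build-centos
    parent: base
    vars:
      target: centos-7
```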
*** dmellado has quit IRC | 05:21 | |
*** dmellado has joined #zuul | 05:22 | |
*** bhavik1 has quit IRC | 05:31 | |
*** dmellado has quit IRC | 05:37 | |
*** dmellado has joined #zuul | 05:40 | |
*** yolanda has joined #zuul | 05:44 | |
openstackgerrit | Andreas Jaeger proposed openstack-infra/zuul-jobs master: Silence ansible-lint https://review.openstack.org/514526 | 06:29 |
openstackgerrit | Andreas Jaeger proposed openstack-infra/zuul-jobs master: Silence ansible-lint https://review.openstack.org/514526 | 06:37 |
*** yolanda has quit IRC | 06:40 | |
*** hashar has joined #zuul | 07:42 | |
*** yolanda has joined #zuul | 07:46 | |
SpamapS | hrm.. Depends-On not working in github. | 07:47 |
SpamapS | oh n/m, they check the body of the _pr_, not the commits inside it | 07:53 |
SpamapS | hrm.. and it leaves the gate status pending for some reason | 08:16 |
openstackgerrit | Kien Nguyen proposed openstack-infra/zuul-jobs master: [DNM] Fix generic process-test-results role https://review.openstack.org/514559 | 08:23 |
*** electrofelix has joined #zuul | 08:44 | |
openstackgerrit | Jens Harbott (frickler) proposed openstack-infra/zuul-jobs master: Follow redirects when triggering readthedocs build https://review.openstack.org/514570 | 08:58 |
*** sambetts|afk is now known as sambetts | 09:43 | |
*** yolanda has quit IRC | 10:55 | |
Shrews | clarkb: thx for the comments on 512637. left you a reply, but the short answer is "not going to change anything about the jobs themselves until after successful migration" | 11:28 |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Create job-output.txt together with JobDir https://review.openstack.org/514617 | 11:50 |
tobiash | that fixes the 'Log not found for build ID 94f4e27243a44e73a19808c328499a28' when opening the live log stream too early ^ | 11:51 |
kklimonda | dmsimard: as I said on #ansible, I have more of a workflow question - you can see a job I've written that's used to "validate" yaml files for any parser errors: https://signal.klimonda.com/single-task/ | 11:58 |
kklimonda | but it's unreadable in this format | 11:58 |
kklimonda | dmsimard: so I had this brilliant idea that I could use "include_tasks" and have automatic per-yaml file task like here: https://signal.klimonda.com/multiple-tasks-ignored-errors/ | 11:59 |
dmsimard | kklimonda: you'd prefer to have one task per file or something like that ? | 11:59 |
kklimonda | but now ansible and ARA have no way of reporting back to me what failed | 11:59 |
dmsimard | ah, that's what I was going to suggest | 11:59 |
kklimonda | so I started thinking that perhaps, in my first example, ARA should be reporting status on a per-item basis when a task is being looped over (by using with_items) | 12:00 |
dmsimard | kklimonda: in 2.4 there's include_tasks and import_tasks, each have a different behavior | 12:01 |
dmsimard | and by the way, Zuul doesn't provide 2.4(.1) yet | 12:01 |
dmsimard | kklimonda: can you try import_tasks instead of include_tasks, out of curiosity ? | 12:01 |
dmsimard | I believe import_tasks is the equivalent of 2.3 'include' with 'static: no' | 12:01 |
kklimonda | "ERROR! You cannot use loops on 'import_tasks' statements. You should use 'include_tasks' instead." | 12:02 |
dmsimard | hm | 12:03 |
dmsimard | struggling to understand the difference, haven't played with the new stuff in 2.4 a lot yet tbh | 12:04 |
dmsimard | kklimonda: in your multi-task example, the error is showing up as if you had ignore_errors: yes https://signal.klimonda.com/multiple-tasks-ignored-errors/result/857eb9fa-bacb-4c0d-be74-8173e842d6cd/ | 12:05 |
kklimonda | I'm not even sure if that approach has merit, perhaps a better way would be to generate a txt/html report, upload it to the logs server, and have the task generate the report and fail if need be | 12:05 |
kklimonda | yes, I do have ignore_errors: yes here | 12:05 |
kklimonda | https://signal.klimonda.com/multiple-tasks-with-errors/ | 12:05 |
kklimonda | this one doesn't have ignore_errors, but in that case ansible aborts prematurely | 12:05 |
kklimonda | (well, not from ansible's point of view) | 12:06 |
dmsimard | kklimonda: oh, what's the problem then ? sorry I'm not quite awake yet :D | 12:06 |
kklimonda | dmsimard: so I'd like ARA to generate a report where I can see parsing status (FAILED/OK) on a per-file basis | 12:06 |
kklimonda | but I just don't think that's possible, and that raises the question of whether I'm moving in the right direction - perhaps I should be validating those files in a single task, generating a report out of that run, and having ansible pass/fail based on the result | 12:08 |
dmsimard | hmm, just thinking out loud here | 12:09 |
dmsimard | but maybe register the result of your single task, iterate over it to print a line "file: status" | 12:10 |
dmsimard | let me get out some pseudocode | 12:13 |
dmsimard | kklimonda: that surely doesn't work but http://paste.openstack.org/raw/624460/ | 12:15 |
dmsimard | or use a template file, something, dunno | 12:16 |
dmsimard | ¯\_(ツ)_/¯ | 12:16 |
* dmsimard off to get caffeine | 12:16 | |
kklimonda | dmsimard: oh wow, can you use templates like that? | 12:39 |
dmsimard | kklimonda: inline jinja? sure http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/multi-node-firewall/tasks/main.yaml | 12:40 |
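(A rough sketch of the idea being discussed, not the content of the paste: validate every file in one looped task, register the results, then report per-file status with inline Jinja. The task names, the yaml_files list variable, and the validation command are all hypothetical:)

```yaml
# Validate each file in a single looped task; yaml_files is an assumed list.
- name: Validate yaml files
  command: python -c "import yaml; yaml.safe_load(open('{{ item }}'))"
  with_items: "{{ yaml_files }}"
  register: yaml_results
  ignore_errors: yes

# Iterate over the registered results to print "file: status" per file.
- name: Report per-file status
  debug:
    msg: >-
      {% for result in yaml_results.results %}
      {{ result.item }}: {{ 'FAILED' if result.rc != 0 else 'OK' }}
      {% endfor %}
```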
*** yolanda has joined #zuul | 12:56 | |
kklimonda | dmsimard: ara_report is failing with MODULE FAILURE, I had to add ignore_errors to the validate-yaml task as it wouldn't run the next tasks anyway | 13:25 |
kklimonda | I’ll debug it some more once I get home | 13:25 |
kklimonda | What would the result be anyway? Does ARA render variables set with ara_record differently? | 13:26 |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Add log streaming logging and exception handling https://review.openstack.org/513811 | 13:30 |
Shrews | ugh. foiled yet again by py 3.5 vs 3.6 changes | 13:30 |
dmsimard | kklimonda: it's just a way to associate arbitrary data with your report, see example here (expand "records") http://logs.openstack.org/92/511992/9/check/legacy-ara-integration-py27-latest-centos-7/e6c8e23/logs/build-playbook/ | 13:47 |
dmsimard | kklimonda: http://ara.readthedocs.io/en/latest/usage.html#using-the-ara-record-module | 13:47 |
dmsimard | kklimonda: note that it requires the ara modules to be in the ansible library path -- which isn't the case for zuul v3 | 13:58 |
clarkb | Shrews: I think the dropping of the "legacy" in the job names had me confused as to what the goal was there. I was reading it as migrate jobs to nodepool and zuulify them | 14:52 |
clarkb | but I guess that is because you can't have jobs with the same name defined in multiple places (so job name had to change) | 14:52 |
Shrews | clarkb: ah, no. sorry, should have probably been more explicit with the commit message. just a simple move into the nodepool repo. the renaming is following the "job rename" guidelines | 14:53 |
Shrews | https://docs.openstack.org/infra/manual/drivers.html#naming-with-zuul-v3 | 14:54 |
openstackgerrit | Merged openstack-infra/nodepool master: Migrate legacy jobs https://review.openstack.org/512637 | 15:10 |
Shrews | jeblair: a job defined in branch A of a project should be able to be referenced in branch B of the same project, right? | 15:25 |
Shrews | branch A == master, in this scenario | 15:25 |
jeblair | Shrews: yes | 15:32 |
Shrews | jeblair: ok. i must have something else wrong then. | 15:34 |
jeblair | Shrews: not necessarily. things are weird with branch stuff right now. | 15:35 |
jeblair | Shrews: that's what i'm working on. | 15:36 |
Shrews | jeblair: ok. https://review.openstack.org/513766 keeps failing on the situation i described. no logs, just "error" from the referenced jobs | 15:37 |
Shrews | jeblair: i'll be patient as you dig into such black magic and assume this to be a zuul bug | 15:38 |
jeblair | Shrews: i think you should copy that job and those playbooks into the feature branch | 15:38 |
Shrews | jeblair: yeah, i can do that. that was going to be my plan if you answered "no" | 15:39 |
jeblair | Shrews: i think that will work now and in the future (but not 100% on that yet) | 15:39 |
Shrews | jeblair: same job names? or should i modify them? | 15:40 |
jeblair | Shrews: same | 15:40 |
* Shrews tries | 15:40 | |
jeblair | Shrews: you might then run up against another bug or two, which are the things i'm working on now. | 15:40 |
Shrews | neat | 15:41 |
jeblair | i will not have a solution for those. | 15:41 |
jeblair | yet. | 15:41 |
Shrews | understood | 15:41 |
SpamapS | mmmm Zuul doing work for me. | 15:43 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Migrate legacy jobs for feature/zuulv3 branch https://review.openstack.org/513766 | 15:44 |
SpamapS | need to figure out this github labels issue though | 15:44 |
SpamapS | I _think_ we might just have to add a small delay handling webhooks | 15:44 |
pabelanger | SpamapS: awesome! Have you tried using any of zuul-jobs roles yet? Would love to see another zuul using them and giving feedback. I know jamielennox tried on bonny | 15:53 |
*** yolanda has quit IRC | 15:54 | |
*** hashar is now known as hasharAway | 15:54 | |
tobiash | pabelanger: I started today with a new setup using zuul-jobs | 15:56 |
SpamapS | pabelanger: I'm using a couple yes. | 15:56 |
* SpamapS looks | 15:56 | |
pabelanger | Yay | 15:56 |
pabelanger | that is awesome :) | 15:56 |
tobiash | one thing I learned is that the cached git handling is a bit scattered throughout project-config and zuul-jobs | 15:57 |
tobiash | maybe it is possible to solve this in a generic way directly in zuul-jobs | 15:58 |
SpamapS | pabelanger: I plan to contribute some as well. :) | 15:58 |
tobiash | but I have to think about it more deeply | 15:58 |
SpamapS | Currently working on a thing that validates ansible playbooks for people. | 15:58 |
*** jeblair has quit IRC | 16:00 | |
*** jeblair has joined #zuul | 16:01 | |
SpamapS | and kubernetes resources as well | 16:03 |
SpamapS | pabelanger: are there any specific roles in there you wish people used more? | 16:08 |
SpamapS | I'm not likely to setup AFS any time soon :) | 16:08 |
SpamapS | But I do want to move my logs off my all in one zuul and into object storage at some point. | 16:08 |
pabelanger | SpamapS: nothing specific, mostly just want to get some more coverage. Something like prepare-workspace should work for everybody | 16:09 |
jeblair | SpamapS: zuul-jobs certainly does not require afs | 16:12 |
jeblair | SpamapS: all of the roles *and jobs* in zuul-jobs are meant to be reusable by anyone, so please do :) | 16:13 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Fix branch ordering on dynamic reconfiguration https://review.openstack.org/514744 | 16:27 |
*** sambetts is now known as sambetts|afk | 17:00 | |
jeblair | clarkb: ^ can you review that? | 17:04 |
jeblair | tobiash, SpamapS: ^ you also reviewed the earlier related change | 17:04 |
* tobiash looking | 17:07 | |
clarkb | now that I am looking at this change and the original one again I wonder how safe editing a list is when iterating it | 17:11 |
jeblair | clarkb: sorted returns a copy | 17:12 |
clarkb | jeblair: right but you assign that copy to branches then iterate over branches and modify branches in that iteration | 17:13 |
jeblair | clarkb: where do i modify branches in the iteration? | 17:13 |
clarkb | jeblair: it looks like prepending (like in your changes) is safe | 17:13 |
clarkb | oh thats an if | 17:14 |
clarkb | nevermind | 17:14 |
clarkb | I need more caffeine | 17:14 |
jeblair | whew | 17:14 |
clarkb | should've just trusted my previous review more :) | 17:14 |
SpamapS | pabelanger: yeah, prepare-workspace is working fine | 17:18 |
SpamapS | as are the SSH key pieces | 17:18 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Fix branch ordering on dynamic reconfiguration https://review.openstack.org/514744 | 17:27 |
jeblair | i sent this email about checking out tags: http://lists.openstack.org/pipermail/openstack-infra/2017-October/005631.html | 17:28 |
openstackgerrit | Merged openstack-infra/zuul-jobs master: Follow redirects when triggering readthedocs build https://review.openstack.org/514570 | 18:02 |
Shrews | jeblair: fyi, your suggestion to just copy the master branch jobs to the feature branch worked without any hiccups in https://review.openstack.org/513766 | 18:03 |
Shrews | pabelanger: clarkb: if you both could re-review that (513766) when you get a moment, we'll have the feature/zuulv3 properly testing again | 18:05 |
jeblair | Shrews: cool, don't look too hard :) | 18:08 |
SpamapS | so, I think my github problem is in fact just that my GHE is so close to my zuul. | 18:09 |
SpamapS | testing with a delay whenever receiving label updated events now | 18:09 |
*** electrofelix has quit IRC | 18:23 | |
*** __zeus__ has joined #zuul | 18:31 | |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: DNM: test bash script https://review.openstack.org/514788 | 18:35 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: DNM: test bash script https://review.openstack.org/514788 | 18:40 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: DNM: test bash script https://review.openstack.org/514788 | 18:45 |
pabelanger | Shrews: +3 | 18:46 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: DNM: test bash script https://review.openstack.org/514788 | 18:48 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Migrate legacy jobs for feature/zuulv3 branch https://review.openstack.org/513766 | 18:56 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: DNM: test bash script https://review.openstack.org/514788 | 19:02 |
dmsimard | I have a question that confuses my brain. Is it a bad assumption that Zuul pipeline precedence does not take into account nodepool at all ? | 19:14 |
dmsimard | My problem is the following: I have pipelines A B C (served by nodepool region #1) and X Y Z (served by nodepool region #2), if I set pipeline Z on a high precedence, will it prevent jobs from A B and C from being enqueued even though there could be available capacity ? | 19:15 |
mrhillsman | if they are being served by two separate regions why would it matter? | 19:16 |
mrhillsman | region1 couldn't care less about region2 jobs and vice versa no? | 19:17 |
mrhillsman | i am a novice so pardon my response if too elementary | 19:18 |
dmsimard | mrhillsman: zuul pipeline precedence makes it so that if you have a pipeline set to 'high', no job from a 'normal' precedence pipeline starts until all the jobs in 'high' have been served | 19:18 |
dmsimard | mrhillsman: this is why there is a backlog of jobs in the 'post' pipeline in openstack sometimes, it's because it has a lower priority than, for example, 'gate' | 19:19 |
Shrews | dmsimard: priority changes which requests are processed first by nodepool. if they are separate nodepool processes, one won't affect the other because region#1 will decline any requests that can only be served by region#2 | 19:20 |
Shrews | and vice versa | 19:20 |
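(For reference, a hedged sketch of where precedence lives in the Zuul config; the pipeline name and trigger are hypothetical, and precedence only orders node requests, it does not tie a pipeline to a particular cloud:)

```yaml
# Hypothetical pipeline; precedence (low/normal/high) affects the order in
# which Zuul's node requests are serviced, not which pool can serve them.
- pipeline:
    name: periodic-important
    manager: independent
    precedence: high
    trigger:
      timer:
        - time: '0 2 * * *'
```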
dmsimard | Shrews: they're not separate nodepool processes, it's one zuul and one nodepool (v2) | 19:20 |
Shrews | dmsimard: separate pool definitions? | 19:20 |
dmsimard | Shrews: yeah, like OVH and RAX are two different clouds | 19:21 |
Shrews | i'd have to assume "yes". same concept. one pool will decline requests only the other can server | 19:21 |
Shrews | serve | 19:21 |
dmsimard | Shrews: basically my pipeline A B C is served out of OVH, my pipeline X Y Z is served out of RAX.. if I have a 'high' precedence on Z, would it prevent nodepool from serving OVH nodes to A B and C | 19:22 |
dmsimard | Shrews: ok, I guess we can try it and see what happens.. | 19:22 |
Shrews | dmsimard: no. each pool handles requests independently (one thread per pool) | 19:22 |
dmsimard | Shrews: that's true for v2 as well ? | 19:22 |
Shrews | dmsimard: oh, no. v2 is totally different and soooo 2016 | 19:23 |
Shrews | :) | 19:23 |
dmsimard | Sorry I'm not in october 2017 yet | 19:23 |
dmsimard | We'll be in october sometime late next month probably :) | 19:23 |
Shrews | dmsimard: apologies then. thought we were discussing v3. ignore what i said :) | 19:23 |
*** rcarrillocruz has quit IRC | 20:08 | |
*** rcarrill1 has joined #zuul | 20:08 | |
*** __zeus__ has quit IRC | 20:26 | |
SpamapS | hrm.. still not getting a success status when gate passes. | 20:32 |
SpamapS | oh n/m .. weird | 20:32 |
SpamapS | must have been a permissions issue | 20:32 |
pabelanger | dmsimard: are they the same labels (for images) in A B C and X Y Z? | 20:53 |
*** hasharAway has quit IRC | 20:53 | |
dmsimard | pabelanger: in this case, no | 20:53 |
dmsimard | pabelanger: in RDO context, it's the RDO and the TripleO tenants on RDO cloud which are two different "clouds" | 20:54 |
pabelanger | dmsimard: okay, so you should be fine | 20:54 |
dmsimard | they might want to set the periodic jobs to have a 'high' priority, I just want to make sure that doesn't screw up the RDO gate. | 20:54 |
pabelanger | If the same label is across your clouds, then it might be possible for Z to starve resources in other pipelines | 20:55 |
dmsimard | pabelanger: they happen to use a different label/image so we should be fine then | 20:55 |
pabelanger | dmsimard: this is how tripleo-test-cloud-rh1 works today, if gate pipeline is full of jobs, nodepool will still process tripleo-centos-7 images properly | 20:57 |
dmsimard | pabelanger: great | 20:57 |
*** fdegir has joined #zuul | 21:12 | |
dmsimard | the madness that is merging things in openshift-ansible in a world without gerrit and zuul https://openshift-ansible-sq-status-ci.svc.ci.openshift.org/#/info?prDisplay=4256&historyDisplay=4256 | 21:44 |
dmsimard | Oops, that URL was filtered to my PR, this one isn't: https://openshift-ansible-sq-status-ci.svc.ci.openshift.org | 21:46 |
pabelanger | dmsimard: that looks a lot like kubernetes bot, I wonder if they pulled that downstream to use | 21:49 |
dmsimard | pabelanger: that's what they use for origin and openshift-ansible so maybe | 21:50 |
pabelanger | yah, seems to be the same | 21:50 |
dmsimard | The one for origin is https://origin-sq-status-ci.svc.ci.openshift.org/ | 21:50 |
dmsimard | need to introduce them to zuul and gerrit :) | 21:51 |
pabelanger | Was just in sig-testing meeting for kubernetes today, they are officially moving off jenkins into prow | 21:51 |
dmsimard | pabelanger: off jenkins to what ? | 21:51 |
pabelanger | prow | 21:51 |
dmsimard | wth is prow | 21:51 |
pabelanger | they built the tool themselves | 21:51 |
* dmsimard google | 21:51 | |
dmsimard | https://github.com/kubernetes/test-infra/tree/master/prow | 21:52 |
pabelanger | yup | 21:52 |
dmsimard | eh https://prow.k8s.io/# | 21:52 |
dmsimard | pabelanger: cmd/jenkins-operator is the controller that manages jobs running in Jenkins. | 21:54 |
dmsimard | pabelanger: are they at the zuul v2 stage where they're using jenkins as a dumb runner ? | 21:54 |
pabelanger | yah, they are at the point where they can remove it now | 21:55 |
pabelanger | and implement their own task execution | 21:55 |
dmsimard | okay | 21:56 |
pabelanger | The interesting piece of info I learned today is that google wants to decrease their commitment to the sig-testing group, both humans and hardware. So there is an effort to have the CNCF be more involved in running / managing testing. Basically start moving things into AWS | 21:56 |
jeblair | i wrote an email about some stable branch issues: http://lists.openstack.org/pipermail/openstack-infra/2017-October/005634.html | 22:26 |
SpamapS | I just discovered an amazing paradigm for testing stuff | 23:24 |
SpamapS | I may need to encode this into a tool | 23:24 |
SpamapS | I created an entire fake zuul yaml file | 23:24 |
SpamapS | with two projects | 23:24 |
SpamapS | and now I can run ANSIBLE_ROLES_PATH=thoserepos/roles ansible-playbook -i localhost, -e @thatfile playbooks/foo.yaml | 23:25 |
SpamapS | and it like, works | 23:25 |
SpamapS | I can test my zuul playbooks before I push them up | 23:25 |
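(A hypothetical sketch of such a vars file; the exact structure of the zuul variable depends on the Zuul version, and the project name and path are made up. Combined with ANSIBLE_ROLES_PATH pointing at locally cloned role repos, passing it with -e @thatfile lets playbooks see a zuul variable roughly like the one a real executor would provide:)

```yaml
# Made-up project name and src_dir; adjust to whatever your playbooks expect.
zuul:
  branch: master
  pipeline: check
  project:
    name: org/my-repo
    src_dir: src/git.example.com/org/my-repo
```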
pabelanger | yah, I've been doing that locally lately. Works pretty well | 23:28 |
SpamapS | There are just times where I need to work out the kinks before waiting 5 - 20 minutes to find out if it works :) | 23:29 |
pabelanger | yah, also working on https://review.openstack.org/512715/ for a shared linter / syntax-check job across all our projects. Plan on iterating on that, maybe drop tox as a dependency. But uses ANSIBLE_ROLES_PATH | 23:31 |
pabelanger | I should see how to do that in 2.4, since that apparently changed | 23:32 |
*** weshay|ruck is now known as weshay|PTO | 23:38 | |
SpamapS | Hrm, so when a Depends-On'd PR is merged, the dependent PRs aren't automatically sent into gate for some reason. | 23:52 |
jlk | this is for github? | 23:54 |
SpamapS | jlk: yeah | 23:54 |
jlk | It should have an idea of changes that are dependent | 23:54 |
jlk | there's a spot in the code that mucks with that. | 23:55 |
jlk | crap, I have to run. | 23:55 |
SpamapS | do they have to share queues? | 23:56 |
SpamapS | 2017-10-24 16:52:16,521 DEBUG zuul.Pipeline.GoDaddy.gate: Change <Change 0x7f64fc181278 3,af5ede37fa61a409376f5ae157e6c6b9a4964913> in project cloudplatform/godaddy-zuul-jobs does not share a change queue with <Change 0x7f64fc1e0278 18,3d019b588b29f1d15d1c08efde10d70af24f6e9c> in project cloudplatform/k8s-addons | 23:56 |
jlk | oh hrm. | 23:56 |
jlk | maybe? | 23:56 |
SpamapS | 2017-10-24 16:52:16,521 DEBUG zuul.Pipeline.GoDaddy.gate: Failed to enqueue changes ahead of <Change 0x7f64fc1e0278 18,3d019b588b29f1d15d1c08efde10d70af24f6e9c> | 23:56 |
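(If the missing shared queue is indeed the cause, a hedged sketch of the pipeline-level queue attribute of that era: both projects name the same queue in their gate entry so dependent changes can be enqueued together. The queue name and job list here are placeholders, not the actual project configuration:)

```yaml
# Hypothetical: the shared queue name is what lets Zuul enqueue changes from
# both projects in one change queue for dependent gating.
- project:
    name: cloudplatform/godaddy-zuul-jobs
    gate:
      queue: cloudplatform
      jobs:
        - noop

- project:
    name: cloudplatform/k8s-addons
    gate:
      queue: cloudplatform
      jobs:
        - noop
```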