Thursday, 2018-08-09

*** jesusaur has quit IRC00:42
*** eumel8 has quit IRC01:24
*** rbergeron has quit IRC02:48
*** jesusaur has joined #zuul03:38
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: executor: enable inventory to be updated with zuul_return  https://review.openstack.org/59009204:12
*** jesusaur has quit IRC04:46
*** jesusaur has joined #zuul04:48
*** eumel8 has joined #zuul05:36
*** AJaeger has quit IRC06:16
ianwmordred / corvus : i don't know what i'm missing here ... http://logs.openstack.org/35/589335/12/check/openstack-infra-base-integration-fedora-latest/9f1b6c6/ara-report/06:21
ianw import_playbook: openafs-client.yaml06:22
ianw  when: ansible_os_family == 'Debian' or ansible_distribution == 'CentOS'06:22
ianwthat playbook has two included roles -- install kerberos and install openafs -> https://review.openstack.org/#/c/589335/12/tests/openafs-client.yaml06:23
ianwit seems to skip everything for the kerberos-client role.  but then it tries to run the openafs-client role?06:25
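For context on the behaviour ianw is seeing: `when:` on an `import_playbook` is not evaluated once at import time; Ansible processes the import statically and attaches the condition to every task the imported playbook contains, re-evaluating it per task and per host. A minimal sketch of the pattern (file contents approximated from the review linked above, not copied from it):

```yaml
# site.yaml - the condition below is not a gate on the import itself;
# it is inherited by every task inside openafs-client.yaml and
# re-evaluated each time one of those tasks runs.
- import_playbook: openafs-client.yaml
  when: ansible_os_family == 'Debian' or ansible_distribution == 'CentOS'

# openafs-client.yaml (approximate shape) - both roles inherit the
# inherited condition, so one role's tasks showing as skipped while
# another role still executes points at a per-task condition result,
# not at the import being half-applied.
- hosts: all
  roles:
    - kerberos-client
    - openafs-client
```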
*** pcaruana has joined #zuul06:38
*** gtema has quit IRC07:16
*** darkwisebear has joined #zuul07:35
*** jpena|off is now known as jpena07:50
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Support job pause  https://review.openstack.org/58538908:07
*** gtema has joined #zuul08:16
*** mhu has joined #zuul08:44
*** electrofelix has joined #zuul09:11
*** darkwisebear has quit IRC09:39
openstackgerritMarkus Hosch proposed openstack-infra/nodepool master: Add list of metrics provided to statsd  https://review.openstack.org/59023310:31
*** jpena is now known as jpena|lunch11:01
*** snapiri has quit IRC11:25
*** snapiri has joined #zuul11:25
*** jpena|lunch is now known as jpena11:58
pabelangermorning!12:10
pabelanger+3 on 589563, openstacksdk bump12:10
pabelangerShrews: tobiash: corvus: clarkb: It seems softwarefactory is having an issue with an exception being raised in the launch-retries loop when trying to delete a node: https://review.openstack.org/589854/ I haven't reviewed the change yet, but wanted to see if you'd add it to your review pipeline. cc mhu12:15
mhupabelanger, thx - I'm not sure simply logging the failure is right, so advice on this is more than welcome12:16
tristanCperhaps a cleanup-retries option to safely retry?12:21
tristanCor a request-retries in zuul to restart the request loop instead of returning node_error12:22
openstackgerritMerged openstack-infra/nodepool master: Bump minimum openstacksdk version to 0.17.2  https://review.openstack.org/58956312:35
*** darkwisebear has joined #zuul12:36
Shrewspabelanger: the thing to do there might be to just create a new node to reference the instance and move on rather than wait for the delete. tobiash should weigh in on that idea though12:38
pabelangermhu: ^12:44
*** ianychoi has quit IRC12:49
*** darkwisebear has quit IRC13:00
mordredpabelanger, mhu: is this a different issue from the task manager issue that was fixed in 0.17.2 ?13:41
mhumordred, I think so. The issue would occur every time the cleanupNode call raises an exception13:43
mordredah - gotcha13:43
pabelangermordred: mhu: yes, this is related to single cloud use case and wanting a node not to return NODE_FAILURE.13:47
pabelangerwhich happens when we raise the delete exception13:47
mordredsingle cloud vs. multiple cloud are a little different aren't they?13:48
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: executor: enable inventory to be updated with zuul_return  https://review.openstack.org/59009213:49
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: executor: add support for generic build resource  https://review.openstack.org/57066813:49
pabelangermordred: yah, most of the feedback (and time) at the moment is trying to keep cloud-related failures away from users running jobs.  Even if this means a job is enqueued for hours while there is a cloud outage13:50
pabelangereach time zuul reports a NODE_FAILURE, users seem to get less happy :(13:51
tristanCalso, when there is a node_failure, end_time is null, could we revisit https://review.openstack.org/535549 please?13:52
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool master: Implement an OpenShift resource provider  https://review.openstack.org/57066713:53
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool master: Implement an OpenShift Pod provider  https://review.openstack.org/59033513:53
Shrewsafter more thinking, i'm fairly confident shifting the instance deletion work to the delete thread is the proper thing to do there13:59
Shrewsso, create new DELETING node with the instance id, store it, move on14:00
Shrewsalso keeps the delete work in one place, as it was intended to be14:04
pabelangerShrews: ++14:06
pabelangermhu: do you want to refresh your patch above based on Shrews feedback?14:07
pabelangernot sure if we also have a way to add a test too14:07
mhupabelanger, Shrews ok so just changing the node status to DELETING is enough?14:08
Shrewsthere should be a way to test that. tobiash already has one iirc.14:08
tristanCShrews: isn't that going to bubble up into the quota calculation?14:09
Shrewsmhu: i think there might be other properties you'll need to set... provider maybe14:09
pabelangertristanC: I thought we re-calculated quota properly now with tobiash's patch14:09
pabelangerbetween retries14:09
Shrewsthat's why i'd like tobiash's input on it14:10
pabelanger+114:10
Shrewsre: quota14:10
Shrewsso you might want to wait to see what he says14:10
*** darkwisebear has joined #zuul14:11
Shrewsmhu: also, you're not *changing* the node status. you'd need to actually create a *new* node object14:11
Shrewssomething like: tmp_node = zk.Node(); tmp_node.state = zk.DELETING; tmp_node.provider = self.node.provider; yadda yadda yadda14:14
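A rough sketch of the hand-off Shrews describes, with stand-in stubs for the zk objects (the real nodepool zk API differs in detail; `Node`, `DELETING`, and the store here are illustrative): instead of deleting inline and risking an exception in the launch path, record a fresh node in DELETING state pointing at the instance and let the background delete worker reap it.

```python
# Illustrative only: Node and the store list are stand-ins for
# nodepool's zk layer (zk.Node / zk.storeNode).
DELETING = "deleting"

class Node:
    def __init__(self):
        self.state = None
        self.provider = None
        self.external_id = None  # cloud instance id

def hand_off_delete(store, failed_instance_id, provider):
    """Create a new DELETING node referencing the instance and store
    it, so the delete thread cleans it up later; the launcher moves
    on instead of waiting on (or raising from) the delete call."""
    tmp_node = Node()
    tmp_node.state = DELETING
    tmp_node.provider = provider
    tmp_node.external_id = failed_instance_id
    store.append(tmp_node)  # stand-in for zk.storeNode(tmp_node)
    return tmp_node

store = []
n = hand_off_delete(store, "instance-1234", "my-provider")
```

This keeps all actual instance deletion in one code path (the delete worker), which is the point Shrews makes above about keeping "the delete work in one place".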
mhugotcha, thx14:17
*** samccann has joined #zuul15:08
clarkbShrews: pabelanger: the more these issues pop up the more I think we may need to consider decoupling the node requests from specific boots. Old nodepool basically processed requests in a true FIFO: requests A and B come in, we start booting nodes, the first one up goes to A and the next up goes to B, with no direct mapping between the two until that point15:21
clarkbthat's a fairly big change to nodepool though15:22
corvusclarkb: what other issue would be solved by that?15:37
clarkbcorvus: long delays before jobs start if they go through multiple retries in cloud A before heading to cloud B15:38
clarkbcorvus: users often ask why their jobs haven't started yet and ^ seems to be a common cause.15:38
clarkbWe could have it retry globally rather than individual clouds if we decoupling the request handling from specific clouds15:38
clarkb*if we decoupled15:39
corvusclarkb: the old nodepool didn't *have* requests, it just tried to guess how many nodes the system needed as a whole and threw them at jenkins and hoped for the best.15:40
clarkbya, they were inferred requests based on queue lengths15:40
corvusjobs would run in semi-random order based on how jenkins assigned the nodes15:41
corvusnow we *actually* have a queue and have some control over the ordering of node delivery, modulo the performance of clouds as you point out15:41
corvusthe request system lets us do a lot of things we couldn't before -- like prioritize node requests by pipeline (we service high priority pipelines first), and true multi-node support (with different types of nodes)15:42
corvusi don't think we should go back15:42
clarkbcorvus: I'm not suggesting going back. But instead putting a proxy in between cloud providers and zuul requests15:43
Shrewsi have this idea in the back of my mind for nodepoolv4 of having a scheduler process to do the request assignment (supporting round robin, provider priority, first available, etc). but that's far down the road15:43
clarkbsomething that can paper shuffle a bit more intelligently15:43
clarkbcorvus: Zuul says "I need a node of type foo", nodepool asks some cloud to boot another type foo, then whenever any type foo comes back it is given to that first zuul request15:45
pabelangerShrews: it is good you are thinking about it, one topic that keeps coming up downstream is the need to better control which provider runs a job.  E.g. time-of-day usage, due to provider costs15:45
clarkbin the current system that request can only be served directly, which means we may have booted 5 other type foos in that time span15:45
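The decoupling clarkb sketches could look something like this (purely illustrative, not nodepool code; all names are made up): requests for a node type go into a FIFO, and whichever boot finishes first is handed to the oldest waiting request of that type, regardless of which request originally triggered the boot.

```python
from collections import defaultdict, deque

class RequestBroker:
    """Illustrative broker decoupling node requests from specific boots."""
    def __init__(self):
        self.waiting = defaultdict(deque)  # node type -> FIFO of request ids
        self.ready = defaultdict(deque)    # node type -> booted node ids

    def request(self, req_id, node_type):
        # Satisfy immediately from the ready pool if possible,
        # otherwise queue the request.
        if self.ready[node_type]:
            return self.ready[node_type].popleft()
        self.waiting[node_type].append(req_id)
        return None

    def node_ready(self, node_id, node_type):
        # Any successful boot goes to the oldest waiting request of
        # that type; surplus nodes are pooled for future requests.
        if self.waiting[node_type]:
            req_id = self.waiting[node_type].popleft()
            return (req_id, node_id)
        self.ready[node_type].append(node_id)
        return None

b = RequestBroker()
b.request("A", "foo")
b.request("B", "foo")
# first node up goes to A, next one to B
```

Under this model a slow or retried boot delays some request, but never pins the delay to one particular request, which addresses the "multiple retries in cloud A before heading to cloud B" case discussed above.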
corvusShrews: yeah, that's an option -- or we can also look at having more cooperative launchers15:45
corvusShrews: if you add in a scheduler, we'd need to add in *multiple* schedulers.  nodepool is currently our most distributed system.  zuul is not a model to follow here -- it's the other way around :)15:46
Shrewscorvus: absolutely15:46
clarkbcorvus: Shrews ya  Iwouldn't add a spof zuul, each launcher could just run the request handler thread and be responsible for those it pulls off of zk15:46
clarkbthe drawback is now your failure domains are in multiple axis15:47
corvusShrews, clarkb: but one way or another, i think there is room for some more decoupling here.15:47
clarkb*spof like zuul. ugh brain not typing good today15:48
corvusat any rate, i think Shrews's idea for handing off the delete action is a good fix in the current system.15:48
pabelangerI think so far, at least in rdo, the new launcher has worked well. It's just that in our case we only have 1 provider, which has a fair bit of outages.  If we had another provider, I don't think the users would complain as much15:49
clarkbpabelanger: ya and end of day if you only have one provider and it is down then no jobs are going to run15:49
clarkb(no matter how smart nodepool is trying to be)15:49
pabelangerexactly15:50
pabelangerI've been trying to express that to mgmt, while making nodepool better for single cloud is great, we really should be working to fix the cloud or add more clouds.15:50
corvus++fix the cloud makes things better for everyone.  customers included.  :)15:50
pabelangerright, just very hard when you don't have control over said cloud.15:51
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: DMN - testing base-test  https://review.openstack.org/59040715:53
*** gtema has quit IRC16:02
*** jpena is now known as jpena|off16:07
openstackgerritMarkus Hosch proposed openstack-infra/nodepool master: Add metric for image build result  https://review.openstack.org/59041216:16
tristanCclarkb: it's not entirely down, it's just that sometimes, for a short period of time, node creation and deletion fail, leading to unnecessary node_failure errors16:31
* tobiash is reading backlog16:36
*** darkwisebear has quit IRC16:37
clarkbtristanC: if you can't create or delete nodes wouldn't you call that down?16:37
tristanCthe running job may complete correctly16:38
clarkbya the existing instances don't have an outage but the cloud does16:39
corvussurprisingly, this seems to be a disagreement about what "it" is referring to.  :)  i guess what is meant is that the control plane may not be functioning, but the hypervisors are.16:39
corvus(or, rather, if "it" is "the cloud" then a disagreement over what "the cloud" refers to :)16:41
* corvus shakes fist at cloud16:41
clarkbit is an important distinction, it just happens that nodepool's primary duties are creating and deleting instances so that is the outage it cares about16:41
Shrewsit also depends on what your definition of "is" is16:42
* Shrews watches the 90's reference go over heads16:42
tristanCclarkb: and it seems like this is happening during short periods of time; it just needs a single deletion exception to result in a build failure16:43
clarkbtristanC: but only if the build failed first right?16:43
clarkb(it is a situation we should handle, mostly just agreeing that outages like that being fixed make everyone happy including nodepool)16:44
tristanCyes, deletion error after build failure16:44
pabelangerwhen i last looked (not today), nodepool failed to launch a node (due to a FIP timeout when the server first came online), then we also failed to delete.  Both times it looks like the api_timeout in clouds.yaml (60 seconds) was hit16:45
pabelangerI suspect something in the cloud caused the load to rise and it stopped responding fast enough to the requests16:45
mordredpabelanger: in the past when we've had clouds hit api timeouts (/me looks at HP) it was mostly related to db indexing issues on the cloud in question16:46
mordredfwiw16:46
mordredpabelanger: but now I'm just backseat driving :)16:46
pabelangeryah, I think there are a few issues TBH. Wouldn't be surprised if the database was involved; I actually wonder if they have it backed with ceph.16:48
tobiashpabelanger, Shrews: ++ for creating new node and offloading deletion to the delete worker16:49
pabelangermhu: ^16:49
pabelangertobiash: thanks for confirming!16:49
tobiashalso the quota calculation should be fine as the aborted node is deleted shortly after16:49
tristanCclarkb: agreed there is an issue with the cloud, but it may be better if zuul/nodepool would retry a bit more before resulting in a node_failure16:50
tobiashwhat won't work is to reuse the node and just set it to deleting, as the node needs to be in the aborted state in case of a quota error16:50
tobiashalso, quota calculation should be fine: any node is counted towards the estimated quota, the aborted nodes are deleted quickly, and a node in deleting state must be counted into the quota16:51
openstackgerritMarkus Hosch proposed openstack-infra/nodepool master: Add metric for image build result  https://review.openstack.org/59041217:02
*** darkwisebear has joined #zuul17:03
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP Map file comment line numbers  https://review.openstack.org/59042917:03
corvusmordred, tobiash, jhesketh: ^ okay, that's a little wonky, but now that i've got it sketched out, i think i can tidy things up a bit17:04
tobiashcorvus: what do you think about maybe doing this directly on the executor after ansible finished instead of doing another merger call?17:08
corvustobiash: we can't guarantee the state of the git repos then (the job could delete them)17:09
tobiashcorvus: oh that's right17:09
corvustobiash: we could say that's a pathological case and just decide to skip line mapping if that happens, but we would still need to handle things like checking out the right branch/commit, etc, since it's much more reasonable for a job to have checked out different branches17:10
tobiashcorvus: just thought we can save that but you convinced me that that's a bad idea :)17:12
*** darkwisebear has quit IRC17:13
corvustobiash: heh, i'm not 100% convinced :)17:13
tobiashcorvus: related to that, do we 'git fetch' on every merger call even if we have a repo state and have already every ref needed?17:16
tobiashwhat would be cool is if we could avoid doing yet another git fetch just because of line mapping17:17
corvusyes we do; we rely on git being efficient (if it has everything, it will only need to exchange the ref list, but that can still be slow on repos with lots of refs)17:19
openstackgerritMatthieu Huin proposed openstack-infra/nodepool master: Do not abort node launch if failed node cannot be deleted  https://review.openstack.org/58985417:19
tobiashah good17:20
tobiashfrom which kernel version did you upgrade your executors?17:21
corvuspabelanger, clarkb: ^ maybe you know?17:21
tobiashI'm wondering if I should switch the openshift nodes from centos atomic (centos 3.10) to fedora atomic (4.17)17:22
clarkbit was ubuntu xenial hwe -1 to ubuntu xenial hwe current17:23
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP Map file comment line numbers (alternate)  https://review.openstack.org/59044217:24
corvustobiash: ^ quick sketch of your suggestion (the merger and filecomment file changes would be about the same, i haven't copied them over yet)17:25
pabelangercorvus: tobiash: clarkb: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-07-24.log.html#t2018-07-24T16:04:16 includes version we upgraded from and to17:25
tobiashpabelanger: thanks, I searched a few days after that17:26
pabelangernp!17:27
pabelangertobiash: what is your current kernel version? will be interested to see what boost you get if you update17:31
tobiashpabelanger: my kernel version is 3.10 (the one from current centos atomic)17:32
tobiashpabelanger: but quantifying the boost won't be that easy as the hosts are shared by many different services17:32
pabelangerunderstood17:33
clarkbin our case I expect that many of the performance improvements around PTI (fallout from meltdown) are what have aided us17:34
tobiashwe currently use 16 core, 64gb vms as openshift nodes which run 8 executors, 8 mergers, 3 elk stacks, several cassandra instances,...17:34
pabelangerhttp://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=1&from=now-30d&to=now is pretty convincing when we rebooted with 4.15 kernels17:34
clarkbI recall reading that newer kernels are basically back to pre PTI performance levels17:34
pabelangeralmost think we could stop ze11.o.o for now and see how 10 executors do again17:34
tobiashclarkb: ah so maybe I won't get such a speedup17:35
tobiashpabelanger: yes, I saw that and it was really impressive17:36
*** gouthamr is now known as gouthamr_away17:37
tobiashpabelanger: what happened around 2018-06-02 http://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=1&from=now-6M&to=now&panelId=25&fullscreen ?17:38
tobiashlooks like after that the load has increased a lot and now it's back to normal17:38
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Scope config line comments to change  https://review.openstack.org/59044617:38
pabelangertobiash: not sure, when did we upgrade to ansible 2.5?  Also, could be the start of the release countdown for openstack17:39
pabelangercan look more shortly17:39
tobiashah, right, that could be ansible 2.517:40
tobiashso ansible 2.5 made it worse and the kernel upgrade fixed it...17:40
tobiashinteresting17:40
pabelangeryah, I haven't really profiled 2.5 myself, it was pure chance that I saw ze11.o.o doing more builds and noticed it was on a different kernel17:41
tobiashit's also amazing to see how perfectly evenly the builds are distributed among the executors17:43
corvustobiash: this is the magic: http://git.zuul-ci.org/cgit/zuul/tree/zuul/executor/server.py#n180817:46
pabelangerha, ze02.o.o is up to 79 jobs atm. next closest is 57 :)17:49
tobiashthat's cool17:50
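The executor "magic" linked above is a load governor; a heavily simplified sketch of the idea (the threshold, function name, and parameters here are made up, not the real server.py): each executor watches its own load average and stops accepting new jobs while it is too busy, which naturally levels jobs across executors without any central scheduler.

```python
import os

def should_accept_jobs(max_load_per_cpu=2.5, cpu_count=None):
    """Toy version of an executor load governor: accept more work
    only while the 5-minute load average stays under a per-CPU cap.
    The real executor also considers RAM and starting-build counts."""
    if cpu_count is None:
        cpu_count = os.cpu_count() or 1
    load1, load5, load15 = os.getloadavg()
    return load5 < max_load_per_cpu * cpu_count
```

A busy executor that returns False here simply deregisters from the job queue until its load drops, so idle executors pick up the slack, matching the even distribution observed above.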
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Scope config line comments to change  https://review.openstack.org/59044617:52
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Scope config line comments to change  https://review.openstack.org/59044617:59
pabelangertristanC: left comment on https://review.openstack.org/590092/ mostly curious why we can't use add_host vs zuul_return18:03
pabelangertobiash: eager to try: https://review.openstack.org/585389/ for zuul_return pause18:04
pabelangerif anybody else would like to review :) ^18:04
tobiashworks like a charm18:04
pabelangerNice!18:05
tobiashjust the second-to-latest version of that change had a bug where a paused job without children never unpaused18:06
tobiashbut that is fixed in the latest ps18:06
pabelanger++18:08
pabelangerwill help rdo for sure, we upload a lot of artifacts to our logs.r.o server, and have some issues keeping up with the crontab that deletes stale artifacts18:08
tobiashpabelanger: you might be interested in swift logs upload, cleanup there is really neat18:11
*** elyezer has quit IRC18:14
*** panda|ruck is now known as panda|ruck|off18:22
*** elyezer has joined #zuul18:27
pabelangertobiash: yah, we are hoping to start using it too, after zuul.o.o :)18:29
*** electrofelix has quit IRC18:31
*** samccann has quit IRC19:14
*** pcaruana has quit IRC19:33
openstackgerritMerged openstack-infra/zuul master: Scope config line comments to change  https://review.openstack.org/59044619:37
*** elyezer has quit IRC20:01
corvusclarkb: can you review https://review.openstack.org/581754 pls?20:06
corvustobiash: jhesketh has 2 questions on https://review.openstack.org/588201  i'm hoping you can respond and he'll +3 when he wakes up :)20:12
tobiashlooking20:12
*** elyezer has joined #zuul20:14
tobiashresponded, hope those are good answers20:17
corvustobiash: makes sense to me20:17
clarkbcorvus: looking20:17
tobiash:)20:17
openstackgerritMerged openstack-infra/zuul master: Add winrm certificate handling  https://review.openstack.org/53571720:28
openstackgerritMerged openstack-infra/zuul master: Support job pause  https://review.openstack.org/58538920:29
pabelangeryay20:29
tobiashclarkb: responded on 58175420:30
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Add allowed-triggers and allowed-reporters tenant settings  https://review.openstack.org/55408220:31
openstackgerritLogan V proposed openstack-infra/zuul master: encrypt_secret: Allow file scheme for public key  https://review.openstack.org/58142920:31
corvustobiash, clarkb: i expect a nice long deprecation period for sighup.  but i do think we should get rid of it eventually.20:32
openstackgerritMerged openstack-infra/zuul-jobs master: Add a yarn role  https://review.openstack.org/58655020:34
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Add command socket handler for full reconfiguration  https://review.openstack.org/58175420:36
corvustobiash: i think i'm about convinced that https://review.openstack.org/590442 is the better way.  you want to talk me out of it?20:36
corvusclarkb, mordred: do you have an opinion on 590429 vs 590442 ?20:36
tobiashcorvus: actually you talked me out of it ;)20:37
tobiashcorvus: but I see that's a valid compromise concerning system load20:37
corvustobiash: i know, at the same time i talked me into it.  :)  i think perfect may be the enemy of good here.20:37
tobiashcorvus: I think it's good enough20:37
tobiashtbh I haven't seen any project yet that messes with the repo on localhost20:38
tobiashso that's imho a limitation I can perfectly live with20:38
openstackgerritMerged openstack-infra/zuul master: Add request reference when hitting a node failure  https://review.openstack.org/58985620:43
tobiashclarkb: maybe you could add 580295 to your review queue?20:44
tobiashit's an easy one20:44
mordredcorvus: I agree, I think messing with the localhost repo is unlikely for the sorts of things that are going to want line comments20:47
clarkbtobiash: sure, I'm juggling paperwork things and code reviews (it is benefits election time of year for me, something you probably don't do in germany)20:47
clarkbtobiash: thinking about that one, can we just not log that at all? (I'm not sure how useful it is when running as a github app)20:52
openstackgerritMerged openstack-infra/zuul master: Allow get_mime: False on localhost  https://review.openstack.org/54947520:52
*** ssbarnea has quit IRC20:52
*** gouthamr_away is now known as gouthamr20:53
tobiashclarkb: when running as a github app with rate limiting enabled (like it is enforced against github.com) it is really useful logging20:54
tobiashclarkb: but not in our case with a ghe instance that doesn't do rate limiting20:54
clarkbtobiash: even when run as an app? I thought when you run as an app its a non issue20:54
tobiashclarkb: in that case you have a separate rate limit for every installation, but you still have a rate limit20:54
clarkbah20:55
clarkbstill applies then20:55
tobiashclarkb: so when running as an app it's less of an issue, but not a non-issue20:55
clarkbgot it20:55
openstackgerritMerged openstack-infra/zuul master: Use copy instead of symlink for multi-tenant dashboard  https://review.openstack.org/58755421:09
openstackgerritMerged openstack-infra/zuul master: Make GitHub rate limit logging configurable  https://review.openstack.org/58029521:13
openstackgerritIan Wienand proposed openstack-infra/zuul-jobs master: [WIP] kerberos & afs client roles  https://review.openstack.org/58933421:19
*** gouthamr is now known as gouthamr|brb21:20
*** gouthamr|brb is now known as gouthamr21:20
*** gouthamr is now known as gouthamr|brb21:21
openstackgerritMerged openstack-infra/zuul master: Add command socket handler for full reconfiguration  https://review.openstack.org/58175421:22
mordredcorvus, tristanC: required_projects is a dict21:24
mordredso I think it's just missing a .values() in the for loop21:24
mordredotherwise we're iterating over the keys, which are strings21:25
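The bug mordred describes is the classic dict-iteration pitfall: iterating a dict yields its keys (here, project name strings), so the loop needs `.values()` to get the project objects. A minimal reproduction with hypothetical names (the real required_projects entries differ):

```python
# Hypothetical shape of required_projects: name -> project object.
class Project:
    def __init__(self, name):
        self.name = name

required_projects = {"zuul": Project("zuul"), "nodepool": Project("nodepool")}

# Buggy: iterating the dict yields key strings, so p.name would
# raise AttributeError:
#   for p in required_projects:
#       use(p.name)

# Fixed: iterate the values to get the Project objects.
names = sorted(p.name for p in required_projects.values())
```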
mordrednow - the real question is - why didn't the tests trigger that21:25
corvusmordred: agreed; i think we need a test (or an expanded test case)21:29
openstackgerritIan Wienand proposed openstack-infra/zuul-jobs master: [WIP] kerberos & afs client roles  https://review.openstack.org/58933421:33
corvusi need to see a github payload when a branch is created21:36
corvusshould i just look for 40 '0' characters in the zuul-debug log?21:36
mordredcorvus: you could tail the logs and go create a branch in a gtest repo?21:37
mordredI guess we probably don't log that - we could create a branch in a gtest repo and then go look at the webhook payload debug page?21:38
corvusmordred: i think we do log all the payload21:38
mordredcorvus: neat21:38
corvusand it looks like we watch gtest-org/ansible so i'll go do that21:38
* mordred is helping21:38
corvushow do i make a new branch?21:40
mordredyou click the branch button I think?21:40
*** gouthamr|brb is now known as gouthamr21:40
corvusthat just lets me switch branches in the ui21:41
mordredcorvus: hrm. maybe you just push a new branch?21:41
corvusis that how people do that in github?21:42
mordredcorvus: yeah - https://github.com/gtest-org/ansible/settings/branches only has protection settings21:42
mordredcorvus: I think so?21:42
corvus(i'm asking because the goal here isn't for me to create a branch, as such, but for me to create a branch in the way that normally happens in github so i can verify the test behavior :)21:42
mordredcorvus: oh!21:42
mordredcorvus: in the branch dropdown ...21:42
mordred"find or create branch"21:43
mordredcorvus: so type the new name and it'll show you a create button21:43
corvusmordred: that is intuitive21:43
mordredcorvus: very much so21:43
mordredcorvus: it might be worthwhile checking both flows21:44
corvusmordred: agreed21:44
mordredcorvus: I know when I'm sending in ansible PRs, I create branches in my ansible fork by pushing21:44
clarkbmordred: ya I think thats common for the PR use case21:48
corvusthey behave similarly and everything checks out.  thanks :)21:50
mordredyay!21:53
*** tobasco is now known as tobias-urdin22:14
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: cache branches in connections/sources  https://review.openstack.org/58997522:21
*** elyezer has quit IRC22:54
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Cache branches in connections/sources  https://review.openstack.org/58997523:05
*** elyezer has joined #zuul23:07
*** ssbarnea has joined #zuul23:54

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!