Wednesday, 2017-10-18

fungiheads up for those not following #openstack-infra, suspecting we merged a new memory leak in the past few days. zuulv3.o.o only made it ~10 hours from the last restart before exhausting ram00:24
*** xinliang has quit IRC00:37
jlkdamn :(00:39
*** xinliang has joined #zuul00:50
*** xinliang has quit IRC00:50
*** xinliang has joined #zuul00:50
jeblairi've restarted openstack's zuul with a manual revert of 511355 which seems like an obvious first candidate01:07
*** tobiash has quit IRC01:15
mnasergrowth rate seems much better http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=63979&rra_id=all01:22
*** tobiash has joined #zuul01:24
openstackgerritTom Barron proposed openstack-infra/zuul-jobs master: Collect output from coverage job  https://review.openstack.org/51291001:57
SpamapSjeblair: weakref has failed me in such things before. Turns out you still have to find all the references :-/02:36
dmsimardjeblair: I wonder if it is worth considering moving from 'git push' to a synchronize (which also uses rsync) here: https://github.com/openstack-infra/zuul-jobs/blob/ecd0d55ce175caf8bc66404c65d2b35c39cf5e49/roles/mirror-workspace-git-repos/tasks/main.yaml#L703:19
dmsimardjeblair: It would allow us to drop the requirement on delegating to localhost and thus making the role testable/runnable outside of trusted context03:20
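
For context, the kind of task dmsimard is proposing might look roughly like the sketch below. This is illustrative only, not the actual role contents; the variable names (zuul.executor.work_root, zuul.projects, item.src_dir) are assumptions.

# Hypothetical synchronize-based alternative to the delegated 'git push' task.
# Variable names are assumptions for illustration, not the real role.
- name: Synchronize prepared git repos to the node with rsync
  synchronize:
    # synchronize runs rsync from the control node (here, the executor) to
    # the remote by default, so no delegate_to: localhost would be needed.
    src: "{{ zuul.executor.work_root }}/{{ item.src_dir }}/"
    dest: "{{ ansible_user_dir }}/{{ item.src_dir }}/"
    delete: true
  with_items: "{{ zuul.projects }}"

The rest of the thread explains why git push was kept: rsync copies pack files byte-for-byte, so two repos with the same refs but different packing can transfer far more data than an incremental git push.
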
*** openstackgerrit has quit IRC03:22
tobiashdmsimard: that's an optimization to push changes incrementally to the nodes and make use of the cache03:33
dmsimardtobiash: I don't see what the difference in outcome is between using git push vs rsync in this context03:33
dmsimardbut I'm testing a patch :)03:33
tobiashSynchronize could defeat that as the git data structure on disk can be very different even with the same content (pack files)03:34
tobiashI think it's safer and cheaper to rely on git for that incremental update03:36
dmsimardtobiash: perhaps we can consider that, it seems that in practice, rsync is even sometimes faster than git: https://serverfault.com/questions/344546/which-is-better-for-website-backup-rsync-or-git-push03:38
dmsimardtobiash: I'll put up a patch and we can discuss it there03:40
tobiashOk03:40
clarkbdmsimard: we already had a big discussion about it fwiw03:41
clarkbgit doesn't work that way03:41
dmsimardclarkb: why not03:41
dmsimardclarkb: I've tested it locally to make sure I was making sense before proposing that you know :(03:41
clarkbdmsimard: because if I have a nova repo and you have a nova repo nothing requires their files to be the same03:42
clarkbso an rsync can end up doing a complete copy of the repo03:42
dmsimardi.e., on one machine "git clone http://git.o.o/foo/bar", on another machine with a fetched + checked-out review, rsync that and it just works -- no need to check out anything etc03:42
dmsimardclarkb: is that really going to happen in practice ? the delta between the git repos and the checked out ref should be minimal03:42
clarkbdmsimard: yes it happens in practice03:43
clarkbparticularly since things get repacked at different times so the packfiles will be different03:43
dmsimardblehhhhhhhhh03:43
clarkbso if you want to make use of the cache you have to let git figure out what data needs to be moved03:44
clarkbbasically you can only use rsync reliably if you want to make an entire copy of the repo, not to make incremental updates03:44
dmsimardI suppose the node couldn't do a git pull from the merger ? Rather than the executor pushing it out ?03:44
dmsimardI'm really just trying to make that role not rely on localhost (executor) tasks so that it can be untrusted03:45
tobiashZuul3 has a push architecture for reasons03:45
tobiashSuch that the nodes don't need a back channel to the control plane03:46
clarkbya, v2 did the pull but in many setups that is problematic because of access to things03:46
dmsimardI guess03:48
dmsimardfine then, I'll find another way03:48
dmsimardಠ_ಠ03:48
tobiashMaybe there is a git module which could do this03:48
dmsimardtobiash: what do you mean ?03:49
tobiash(which could be allowed for untrusted projects)03:49
dmsimardtobiash: there is an ansible git module, but that doesn't change the fact that we'd run it from the executor03:49
dmsimardand running things on the executor in untrusted context is a no-no03:49
tobiashdmsimard: it's not that strict03:49
tobiashThere are a few modules where zuul hooks into and checks paths03:50
tobiashThese are then (partly) allowed in untrusted context03:50
dmsimardtobiash: is review.openstack.org up for you ?03:50
tobiashNo03:51
tobiashUnreachable03:51
dmsimardoh, saw the backlog in #openstack-infra03:51
*** xinliang has quit IRC06:17
*** xinliang has joined #zuul06:26
*** hashar has joined #zuul06:48
*** _ari_ has quit IRC08:06
*** weshay|ruck has quit IRC08:07
*** _ari_ has joined #zuul08:09
*** weshay has joined #zuul08:09
*** mnaser has quit IRC08:30
*** mnaser has joined #zuul08:37
*** electrofelix has joined #zuul08:45
*** hashar is now known as hasharAway09:53
kklimondais there a requirement that nodepool-launcher be connected to the same internal network as the nodes it's spawning? interface_ip of the VM is used for keyscan, but that resolves to a private IP (and not the FIP) in my deployment https://github.com/openstack-infra/nodepool/blob/feature/zuulv3/nodepool/driver/openstack/handler.py#L20010:46
kklimondaalso, https://github.com/openstack-infra/nodepool/blob/feature/zuulv3/nodepool/driver/openstack/handler.py#L175 talks about Public IPs, but checks interface_ip10:47
tobiashkklimonda: that sounds like a bug in clouds.yaml or shade11:12
tobiashthe interface_ip should be the ip nodepool can use for connecting11:13
*** openstackgerrit has joined #zuul11:25
openstackgerritTobias Henkel proposed openstack-infra/zuul feature/zuulv3: WIP: Add script for deterministic key generation  https://review.openstack.org/51300311:25
tobiashjeblair: this is my poc script which generates the keys for every project in the main.yaml (only supports simple main.yamls so far) ^11:26
tobiashmaybe it makes sense also to introduce a driver concept for this and let the admin decide if he wants to generate a secret per project or let them be derived from a master key11:27
tobiashwith the master key we also could do on-the-fly key generation (with caching) and avoid writing them to disk11:28
kklimondatobiash: thanks, seems to be my misinterpretation of the "default_interface" in clouds.yaml11:30
tobiashkklimonda: you have default_interface in clouds.yaml?11:30
kklimondatobiash: yes, I've been struggling with shade and tried a couple of different settings to get it to assign FIPs and that was part of the config11:32
kklimondatobiash: apparently it wasn't needed, so I just removed it11:33
kklimondashade is still assigning FIP from the wrong pool - I can't figure out how to make it work, but at least it's assigning something11:33
kklimonda(before, I was spawning instances in a 1.1.1.0/24 network and shade decided it was public, so no FIP was needed)11:34
tobiashkklimonda: fip pools is probably a mordred question11:35
kklimondayeah, already pinged him on #openstack-dev but I think he's in US TZ?11:35
tobiashYes, he's in us tz11:36
Shrewskklimonda: i'm not really around just yet (gotta go deal with breakfast), but maybe some of the things here will help: http://git.openstack.org/cgit/openstack/os-client-config/tree/doc/source/user/network-config.rst11:51
mordredkklimonda, Shrews: I see a couple of issues (also, I'm in europe this week)12:03
mordredkklimonda, Shrews, tobiash: the first is that we don't have any flags for overriding whether a network is a FIP network or not ... BUT ... more importantly, I think we need to swap the order of how we're finding the correct network from which to get a FIP to using the _by_router method first12:05
mordredand we need to update that to be able to look for a router that is attached to the network that the port on the server is attached to12:06
tobiashlooks like it returns the external network of the last router found?12:06
tobiashhttps://github.com/openstack-infra/shade/blob/master/shade/openstackcloud.py#L541212:06
mordredtobiash: yah - it's a bit too simplistic - it doesn't take into account whether or not the router in question is attached to the same network as the server12:07
tobiashkklimonda: you maybe also just have to add the external network you want to use to the clouds.yaml with routes_externally: true (see Shrews link)12:08
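
A minimal clouds.yaml sketch of what tobiash is suggesting; the network names are placeholders, and whether this filtering is actually honored for floating IP pool selection is exactly what gets questioned below.

clouds:
  mycloud:                    # placeholder cloud name
    networks:
      - name: private-net     # network the instances boot on (placeholder)
        default_interface: true
        routes_externally: false
      - name: fip-net-wanted  # the floating IP pool shade should use (placeholder)
        routes_externally: true
      - name: fip-net-other   # the second router:external network (placeholder)
        routes_externally: false
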
tobiashmordred: so I'm lucky, also just had the case that I have two external networks and shade used the correct one by accident :)12:08
mordredtobiash: I think the issue is actually https://github.com/openstack-infra/shade/blob/master/shade/openstackcloud.py#L5462-L546412:09
mordredtobiash: we collect all router:external networks and then just use the first one if there is more than one ... that's not a great approach12:09
mordred(of course, it would be SUPER great if we didn't have to do any of this logic :) )12:09
mordredtobiash: but I think if we swap the order of those AND update the router method to be able to be queried with an internal network id - "find floating network for internal network by router" - I think it should work more better12:11
mordredbut I think we also need to add the ability to turn floating_network on and off in clouds.yaml like the other qualities12:11
tobiashthere's probably a hack which already works if you specify all networks in clouds.yaml and use routes_externally=false for every net except the fip net you want to use12:12
mordredtobiash: yah - I thought so too originally - but we don't seem to filter external_ipv4_floating_networks at all like we do for the others12:13
tobiashwow, then I'm probably super lucky :)12:14
mordredyah. I think so :)12:14
kklimondayes, routes_externally isn't taken into account - I have a configuration like that http://paste.openstack.org/show/623958/ but as mordred just said, shade picks the first FIP pool it's been given12:15
tobiashkklimonda: can you switch the gateway on the router to the other fip network as a workaround?12:16
mordredkklimonda: yah - sorry about that ... I think you and tobiash are the first two people I'm aware of who have more than one router:external network - and tobiash seems to have been lucky so far - so you're the first person to report the issue. lucky you!12:16
pabelangermorning!12:17
kklimondamordred: this could also be due to opencontrail not behaving 100% as expected12:17
mordredkklimonda: I don't think it'll be super hard to fix - but I might not be able to fully finish fixing it until tomorrow (at a conference, still need to write some slides)12:17
kklimondamordred: no problem, short term I can just patch things myself, and wait for the proper solution :)12:17
mordredkklimonda: well - so far I think what you have described is just a scenario not taken into account. opencontrail could also be misbehaving - but I think there is a definite error12:17
tobiashin my case I'll try to avoid floating ips in my new deployment12:18
kklimondayes, I still think this is something that should be handled in a more robust way, just saying that opencontrail may be the reason I'm the lucky guy to hit this :D12:18
pabelangerso, with zuulv3.o.o and hung web streaming links, I think the issue is we start the stream when the job launches, but if zuul-merger is slow, or times out, it results in a web stream that doesn't work until merging is finished. So, maybe we only set up the web stream after we are done merging and actually running ansible12:20
pabelangeror we add queued / preparing / blue (web stream url starts) / red or green or failure12:20
pabelangerto the zuulv3.o.o status page for jobs12:21
tobiashpabelanger: does the scheduler get intermediate results from the executor?12:21
tobiashpabelanger: is this possible with gearman?12:21
*** dkranz has joined #zuul12:22
mordredtobiash: always avoid floating ips if possible - real ips are so much better ... :)12:23
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Migrate legacy jobs  https://review.openstack.org/51263712:23
pabelangertobiash: in the case last night, no, because we couldn't merge the changes (gerrit went offline)12:23
tobiashmordred: I'm planning to use only internal ips and placing zuul and nodepool into the same private network12:23
mordred++12:23
tobiashpabelanger: hrm, the executor sends the work data just before calling runPlaybooks12:27
tobiashpabelanger: http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/executor/server.py?h=feature/zuulv3#n71112:27
pabelangertobiash: is that when our webui changes links to stream.html?12:31
*** jhesketh_ has joined #zuul12:38
*** jhesketh has quit IRC12:43
tobiashpabelanger: I think so, the executor tells the scheduler the build url12:54
tobiashpabelanger: and as soon as the build url is set the scheduler treats the job as running in the status.json12:54
tobiashpabelanger: but I didn't find the point yet where the executor starts serving the stream for the build12:55
openstackgerritDavid Moreau Simard proposed openstack-infra/zuul-jobs master: Add zuul.{pipeline,nodepool.provider,executor.hostname} to job header  https://review.openstack.org/50943612:58
tobiashpabelanger: maybe we just have to touch the job-output.txt before announcing the url12:59
dmsimardjeblair, mordred: ^ squashed https://review.openstack.org/#/c/511821/ into https://review.openstack.org/509436/ as requested, also adjusted the format to make it more parseable.12:59
tobiashpabelanger: http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/lib/log_streamer.py?h=feature/zuulv3#n9713:00
mordreddmsimard: I think we want to swap the depends-on13:02
mordreddmsimard: so that we can land the test - then when we land the debug it has a test to make sure it doesn't break13:02
mordredbut it looks great otherwise13:02
dmsimardoh, sure13:02
mordreddmsimard: I abandoned the other thing13:02
openstackgerritDavid Moreau Simard proposed openstack-infra/zuul-jobs master: Add zuul.{pipeline,nodepool.provider,executor.hostname} to job header  https://review.openstack.org/50943613:03
Shrewscan someone enlighten me as to why https://review.openstack.org/512637 is running the tox-py35 job when i've explicitly asked it not to with my branch matcher?13:06
pabelangerShrews: http://logs.openstack.org/37/512637/14/check/tox-py35/acba5e6/zuul-info/inventory.yaml13:13
pabelangersays a variant is getting applied13:13
pabelangerbut not sure why13:15
Shrewspabelanger: my .zuul.yaml is defining the variant, but it seems to be ignoring it13:15
pabelangeris the openstack-python35-jobs-no-constraints template being used?13:15
Shrewsyes13:15
Shrewsthat's where tox-py35 is coming from13:15
pabelangeryah, it seems like your variant of tox-py35 didn't apply13:17
pabelangerI would expect to see it in inventory13:17
pabelangerHmm13:21
pabelangerit is possible that your .zuul.yaml is matching the branch matcher13:21
pabelangerthen falls back to openstack-python35-jobs-no-constraints ?13:21
pabelangerno, that doesn't seem likely13:22
pabelangerI am not sure, sounds like a jeblair question13:22
pabelangernow I am curious what the answer is13:22
Shrewsi would expect templates to be processed first, then in-repo config13:22
Shrewsi wonder if it's a bug with branch matcher logic when the branch references itself? like, i wonder if setting it non-voting would work?13:23
pabelangeryah, I would expect non-voting to work13:24
pabelangerI haven't done much testing / work with branch logic yet13:25
mordredtobiash: question on https://review.openstack.org/#/c/51185813:42
mordredtobiash: tl;dr on both changes - should the default be to do upload-artifacts-to-logs-always and publish-artifacts-always with flags to make logs-only-on-change and artifacts-only-on-not-change? or the opposite13:44
mordredtobiash: now that you mention it - I can argue for either direction in my head :)13:44
mordredShrews, pabelanger: I agree - I would expect that to not run13:47
tobiashmordred: hm both ways sound reasonable at first thought13:58
tobiashmordred: but as a user I would expect that if I put something in artifacts (I think of that as the api) it should be uploaded unless I force-skip it14:00
tobiashmordred: skipping would then be an optimization to save space in certain conditions (although I think in this case the job should not put the artifacts there in the first place)14:02
tobiashmordred: this artifacts folder could be thought of like the git staging area14:03
mordredwell - yes - I agree that the job shouldn't put it there - I think the goal isn't specifically to skip uploading as much as to have a split-upload system - so artifacts are uploaded to logs for changes and to the artifacts server for post/release type jobs14:03
mordredso the 'I want these uploaded' is handled by putting them into artifacts/ - but where they are uploaded to is *hopefully* an admin decision14:03
mordred(if we do this correctly)14:03
tobiashah, now I get your point14:04
mordredit's possible that a deployer might not have a logs/tarballs split like we do - in which case the distinction is extra complexity ...14:04
tobiashmordred: what do you think about having this decision earlier (in the base job?) which set_facts the destination?14:06
mordredtobiash: we'd have to persist those facts with a zuul_return ... but we could do that - what does that do for us?14:07
tobiashmordred: aka having a logs/artifacts dir and a publish/artifacts dir14:07
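
A rough sketch of the idea tobiash is floating here, assuming the decision is made in a trusted base-job playbook and persisted with zuul_return; zuul_return and zuul.pipeline are real, but the artifact_destination key and the pipeline list are invented for illustration.

# Hypothetical base-job post task: record where this build's artifacts/
# directory should be uploaded.  'artifact_destination' is made up for
# this sketch.
- name: Record the artifact destination for this build
  zuul_return:
    data:
      artifact_destination: "{{ 'publish' if zuul.pipeline in ['post', 'release', 'tag'] else 'logs' }}"
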
mordredinteresting14:09
mordredtobiash: so - one of the things I was hoping to solve for on our side of things is not letting proposed patches overwrite published artifacts14:10
mordredtobiash: but instead to only upload those after a patch has merged (so we don't accidentally publish a nova-master.tar.gz that contains content that never actually landed)14:10
mordredthat might be more of a concern for us than for other people though ...14:11
tobiashmordred: and you don't want to have a release job for this which adds the publish artifact on top?14:12
tobiashmordred: just thinking if it makes sense to just upload all artifacts to logs and do the publish decision based on a child job which has a publish post playbook14:13
mordredtobiash: ah - what I want to have actually is a release job that has a dependency - so that we can have a single job that builds artifacts regardless of whether it's check, gate, post or release ...14:14
*** sambetts|afk is now known as sambetts14:14
mordredtobiash: and then a 'release-openstack-tarball' job could run in release pipeline and download the artifact from tarballs and upload it to pypi14:14
tobiashmordred: sounds like a good idea to me14:15
Shrewsmordred: good, so it's more than just one of us that's confused14:15
mordredtobiash: the amount of copy/paste at the moment between build and publish jobs got on my nerves :)14:15
*** yolanda has joined #zuul14:16
tobiashmordred: you could do this either with job dependency and download-upload dance or you could solve that with job inheritance and the release job just inherits from the package job14:17
dmsimardWow, incoming in 2.5: https://user-images.githubusercontent.com/26403/31723399-59674f14-b3e4-11e7-9e25-0080ce5378a4.png14:17
tobiashmordred: where the download-upload dance would be more flexible and generic probably14:18
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Migrate legacy jobs  https://review.openstack.org/51263714:18
tobiashdmsimard: what's rescued?14:18
dmsimardtobiash: block/rescue/always14:18
tobiashah, exception handling :)14:19
mordredtobiash: yah - I kind of like flexible/generic download/upload - the dance could ALSO be implemented potentially as a promotion too14:19
odyssey4metobiash yeah, that's been there since 2.0, but pre 2.5 it reports failed tasks as failed, without reporting them as rescued too14:19
odyssey4meit's hella confusing14:19
mordredtobiash: like if we started using swift, we could have the job just tell swift to copy the object to the 'production' location - or we could have the job add a tag to a docker image in a registry or something14:19
* mordred waves hands14:20
dmsimardodyssey4me: yeah ara struggles with that too, there's no easy way to tell that a failure was rescued14:22
dmsimardodyssey4me: so it leads to false positives in playbook status result14:23
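
For reference, a minimal block/rescue/always playbook (generic, not taken from the screenshot); the 2.5 change being discussed is that the play recap gains a separate "rescued" count instead of reporting such tasks only as failed.

- hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - name: A task that fails
          command: /bin/false
      rescue:
        - name: Runs only because the block failed
          debug:
            msg: "recovered from the failure"
      always:
        - name: Runs regardless of the outcome
          debug:
            msg: "cleanup"
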
tobiashmordred: that sounds cool14:47
jeblairShrews, pabelanger: yeah, i think if we don't want a job to run on certain branches, we need to avoid adding that job with no branch matcher in project-config, because that's going to add a variant that matches every branch.  the best thing to do here may be to drop the python3-jobs template, and just add tox-py35 in-repo to feature/zuulv3.14:52
dmsimardAnsible 2.4.1 pushed back next week, they're spinning a new rc14:52
openstackgerritMerged openstack-infra/zuul-jobs master: Collect output from coverage job  https://review.openstack.org/51291014:52
jeblairShrews, pabelanger: we can also drop the template and add tox-py35 in project-config.  but either way, i think we need to drop the template.14:53
jeblairShrews, pabelanger: once a project-pipeline has at least one job variant that matches that combination, the job is going to run.  further variants with matchers can only *alter* how it runs.14:54
jeblairShrews, pabelanger: so if we never want a job to run on master, we need to never have a variant without a branch matcher (implied or explicit)14:55
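
An illustrative in-repo .zuul.yaml along the lines jeblair describes; the exact contents are an assumption, not the real change.

# Hypothetical .zuul.yaml on nodepool's feature/zuulv3 branch.  In-repo
# config in an untrusted project gets an implied branch matcher for the
# branch it lives on, so this variant only applies to feature/zuulv3; the
# explicit 'branches' line just makes that visible.
- project:
    name: openstack-infra/nodepool
    check:
      jobs:
        - tox-py35:
            branches: feature/zuulv3
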
pabelangerokay cool! Good information to know15:03
*** maxamillion has quit IRC15:44
*** maxamillion has joined #zuul16:15
Shrewsjeblair: thanks for explaining. submitted https://review.openstack.org/513092 to remove that template. <--- pabelanger16:31
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Fix branch ordering when loading in-repo config  https://review.openstack.org/51309416:34
pabelangerso, just thinking, is there any reason not to have projects.yaml live in openstack-zuul-jobs? Then we could have proposed a depends-on for that to confirm, right? But since it is a trusted project, we need to land it first?16:34
pabelangeror is my brain not working16:34
jeblairpabelanger: what's "projects.yaml" ?16:35
jeblairpabelanger: are you suggesting moving project definitions from project-config to ozj?16:36
jeblairpabelanger: project definitions for projects other than the current project have to be in a config-project16:37
jeblairShrews: if you have a sec to look into "open finger connections from zuul-web can keep executor subprocesses around even after stopping zuul-executor" that would be helpful16:38
pabelangerjeblair: okay, ya, that makes sense. only a config-project can hold that16:40
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Revert "Use weakref for change cache"  https://review.openstack.org/51309716:40
pabelangerchalk that up to brain not working16:40
*** bhavik1 has joined #zuul17:18
*** electrofelix has quit IRC17:41
*** sambetts is now known as sambetts|afk17:46
*** bhavik1 has quit IRC17:47
Shrewsjeblair: while commenting on that "open finger connections" issue, i think i discovered a very valid reason those remain.18:07
Shrewsjeblair: tl;dr The forked process is doing the streaming. Streaming doesn't end until the log is removed. Without the executor running to remove it, they linger.18:08
Shrewsthat's the current working theory, anyway18:09
jeblairShrews: hrm, the executor should have stopped the job and removed the dir by that point18:11
jeblairthese are, afaik, clean shutdowns... though maybe something went wrong with these specific directories18:11
Shrewsjeblair: can we validate that? if so, that blows that theory out18:12
jeblairShrews: we may be able to... we should log those exceptions.  but i think we'll need to wait for this to happen again and then lsof the process to find its uuid, etc.  we can also inspect the state of the filesystem then.18:16
jeblairat least we have a direction to look now18:16
Shrewsjeblair: agreed. i'll continue looking at the code and pondering scenarios, but having something concrete to inspect would be helpful18:17
clarkbwe were leaking build dirs on the executors on shutdown18:17
clarkbI think the older ones we attributed to the keep setting, but there were ones from that day too18:18
Shrewsoh, yeah, keep definitely affects streaming18:18
*** weshay is now known as weshay|ruck|brb18:30
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Migrate legacy jobs  https://review.openstack.org/51263719:05
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Update jobs for features/zuulv3 branch  https://review.openstack.org/51264219:07
*** weshay|ruck|brb is now known as weshay|ruck19:18
*** yolanda has quit IRC19:36
*** jkilpatr has quit IRC20:10
*** openstackgerrit has quit IRC20:17
*** hasharAway has quit IRC20:18
*** openstackgerrit has joined #zuul20:19
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Fix branch ordering when loading in-repo config  https://review.openstack.org/51309420:19
*** jkilpatr has joined #zuul20:26
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Revert "Use weakref for change cache"  https://review.openstack.org/51309720:26
*** dkranz has quit IRC20:30
*** dkranz has joined #zuul20:34
*** dkranz has quit IRC20:52
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Add management event queue length to status json  https://review.openstack.org/51318221:00
*** yolanda has joined #zuul21:20
*** openstackgerrit has quit IRC21:48
*** bstinson has quit IRC22:12
*** openstackgerrit has joined #zuul22:13
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Merge tenant reconfiguration events  https://review.openstack.org/51319522:13
*** bstinson has joined #zuul22:19
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Support upper-constraints in tox-siblings  https://review.openstack.org/51319922:35
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Merge tenant reconfiguration events  https://review.openstack.org/51319522:41
openstackgerritJames E. Blair proposed openstack-infra/zuul-sphinx master: Fix error in job parser  https://review.openstack.org/51320122:53
openstackgerritJames E. Blair proposed openstack-infra/zuul-sphinx master: Fix error in job parser  https://review.openstack.org/51320122:56
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Disable tox-siblings  https://review.openstack.org/51320523:10
openstackgerritIan Wienand proposed openstack-infra/zuul feature/zuulv3: Add _projects to convert project list to dictionary  https://review.openstack.org/51286823:17
openstackgerritMerged openstack-infra/zuul-sphinx master: Fix error in job parser  https://review.openstack.org/51320123:19
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Give layout objects a unique ID  https://review.openstack.org/51320723:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!