Thursday, 2017-11-16

*** yolanda has quit IRC  00:27
*** most of the channel quit at 01:22 and rejoined at 01:28 (apparent netsplit); barjavel.freenode.net re-opped ChanServ on rejoin  01:22
*** patriciadomin_ has joined #zuul  01:38
*** patriciadomin has quit IRC  01:41
*** patriciadomin_ has quit IRC  01:55
*** patriciadomin has joined #zuul  01:57
*** nguyentrihai has quit IRC  02:04
*** dmellado has quit IRC  05:46
*** dmellado has joined #zuul  05:55
*** yolanda has joined #zuul  06:47
*** rbergeron has quit IRC  07:04
*** xinliang has quit IRC  07:15
*** jianghuaw_ has quit IRC  07:17
*** jianghuaw_ has joined #zuul  07:17
*** xinliang has joined #zuul  07:28
*** threestrands has quit IRC  07:30
*** dmellado has quit IRC  07:57
*** dmellado has joined #zuul  07:58
*** rbergeron has joined #zuul  08:25
*** rbergeron has quit IRC  09:14
*** rbergeron has joined #zuul  09:14
*** smyers has quit IRC  10:04
*** smyers has joined #zuul  10:15
*** nguyentrihai has joined #zuul  10:25
<openstackgerrit> Rui Chen proposed openstack-infra/nodepool feature/zuulv3: Apply floating ip for node according to configuration  https://review.openstack.org/518875  11:52
<openstackgerrit> Rui Chen proposed openstack-infra/nodepool feature/zuulv3: Fix nodepool cmd TypeError when no arguemnts  https://review.openstack.org/519582  11:53
<openstackgerrit> Andrea Frittoli proposed openstack-infra/zuul-jobs master: Add compress capabilities to stage artifacts  https://review.openstack.org/509234  12:02
<openstackgerrit> Andrea Frittoli proposed openstack-infra/zuul-jobs master: Add a generic process-test-results role  https://review.openstack.org/509459  12:02
*** toabctl has quit IRC  12:11
<Shrews> mordred: I found one more instance of override-branch in that file.  12:25
*** tobiash has quit IRC  12:58
<rcarrillocruz> so, question: I see zuul-executor defaults to 'zuul' as the connecting user, and that can be changed with default-username  13:17
<rcarrillocruz> does this mean we don't allow remote_user in playbooks, or will it just be ignored?  13:18
<dmsimard> rcarrillocruz: You can set inventory-wide vars inside the job parameters, but I don't think you can set host-level vars (yet). The way I see it, this is something you would define inside a nodeset, not through the executor config  13:20
<dmsimard> rcarrillocruz: You can try defining "ansible_user: foo" in a job's vars and see if that works  13:20
<dmsimard> It'd be a good improvement to be able to define hostvars inside a nodeset IMO  13:21
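
For illustration, a minimal sketch of the experiment dmsimard suggests - passing ansible_user through a job's vars; the job name and user value are made up and untested:

    - job:
        name: example-remote-user-job
        vars:
          # would become an inventory-wide variable for the job
          ansible_user: stack
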
<rcarrillocruz> some of those will be plumbed through from nodepool, I assume  13:21
<rcarrillocruz> there are changes in nodepool for having a username and port per node; I assume zuul will consume those if defined, and in the end they become ansible_user / ansible_port  13:21
<rcarrillocruz> otoh, I haven't seen anything for ansible_connection. tobiash was working on windows support, and we must somehow pass that up, as windows is mostly a winrm game  13:22
<odyssey4me> I've picked up what one might consider a bug in nodepool - not a serious one, though. The min-ready check will launch builds if it's not met, even if builds are already in progress to fulfill the min-ready count.  13:24
<odyssey4me> It's not much of an issue because the nodes will get consumed at some point, so it rectifies back to the min-ready count - and perhaps this is working as designed, to more quickly fulfill tests if they're coming in thick and fast.  13:25
<dmsimard> rcarrillocruz: ah, yeah.. it's probably more of a nodepool thing, you're right. Sometimes the lines blur a bit between zuul and nodepool :)  13:25
<dmsimard> odyssey4me: nodepool v3?  13:26
<odyssey4me> dmsimard: yep  13:26
<dmsimard> Haven't had the chance to play with it much yet. In v2 the behaviour seems fine; maybe they've changed the logic  13:26
<odyssey4me> it's only something you'd see on a quiet system, because you can deliberately hold nodes and manage the usage/depletion of the pool  13:27
<odyssey4me> the side effect of the behaviour is that if you have 5 launchers, then when the min-ready quota is not met you'll have 5 launchers kick off builds to meet it, resulting in a bunch of extra nodes  13:28
<dmsimard> ohhhh, I see  13:29
<dmsimard> I guess we haven't scaled beyond one launcher yet :p  13:30
<dmsimard> We're getting rid of jenkins soon with zuul v3.. didn't want to needlessly migrate to zuul-launcher knowing v3 was coming  13:30
<dmsimard> we're actually reaching the limits of a single jenkins master, seeing a couple of issues  13:30
<odyssey4me> yeah, I'm building a multi-region setup so that if one region fails the others can still do the work needed  13:35
<dmsimard> sounds fun  13:41
<Shrews> odyssey4me: yeah, the min-ready mechanism is not an exact thing. We try to guarantee "at least" min-ready nodes, but we may actually end up building more - especially noticeable the more launchers are in use.  13:51
<dmsimard> Shrews: could that be handled through zookeeper? Like, if a launcher claims to be launching a node, the other launchers take it into account?  13:51
<Shrews> dmsimard: possibly. We'd have to examine the request queue along with the current node count (we don't consider the request queue right now). It's still not going to be an exact thing.  13:52
<odyssey4me> dmsimard: given that this is desirable behaviour in a busy environment, and perhaps less desirable in others, it might be nice to have two algorithms... something like optimistic and conservative  13:52
<Shrews> improvements to it are welcome  13:52
<odyssey4me> conservative takes the queue into account, and optimistic doesn't :)  13:53
<Shrews> it's difficult to capture an exact state of the system without locking the entire system  13:53
<odyssey4me> yeah, that's true  13:55
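
For context, min-ready is set per label in the nodepool v3 configuration; a rough sketch of the relevant pieces follows, with label, cloud, and image names made up, and no guarantee it matches the exact schema of the feature/zuulv3 branch at the time:

    labels:
      - name: ubuntu-xenial
        min-ready: 2

    providers:
      - name: example-cloud
        cloud: example
        diskimages:
          - name: ubuntu-xenial
        pools:
          - name: main
            max-servers: 10
            labels:
              - name: ubuntu-xenial
                diskimage: ubuntu-xenial
                min-ram: 8000
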
<odyssey4me> hmm, it looks like the image list cleans itself, which is neat - but for rax it doesn't clean up the cloudfiles uploads or the files on disk, just the saved image list  14:16
<SpamapS> Shrews: all one needs is a stat cache that is locked for incr/decr/calc  14:29
<SpamapS> But I'm kind of with you.. patches accepted, but for the most part it's not that terrible if one ends up with a few extra nodes on a busy system.  14:29
<Shrews> keeping a stat cache in sync with reality could be... interesting  14:31
<mordred> odyssey4me: oh - thanks for the reminder on that ...  14:32
<mordred> odyssey4me: I verified with the glance team in Sydney that glance imports from swift copy the data, so the swift objects are not needed once the import is complete  14:32
<SpamapS> Shrews: not too many entry points to incr/decr ready/building, and you can always cross-tab check it for bugs occasionally (requiring a broader lock)  14:33
<mordred> odyssey4me: I think there are two missing things ... one is cleaning up objects when an image is deleted, if the objects were created for the user by create_image ...  14:33
<mordred> odyssey4me: the second is that I *actually* think we should delete the objects once create_image has succeeded, since once the image is imported they are no longer needed by anything  14:34
<mordred> odyssey4me: one is purely a shade patch; the other is a shade patch to add a shade API call, plus a nodepool patch using it  14:35
<odyssey4me> mordred: where's an appropriate place to register a bug for that, as a reminder?  14:37
<Shrews> SpamapS: there are actually more entry points than that (e.g., sub-statuses for assigning/unassigning ready nodes), but I don't think it's worth the effort anyway  14:38
<mordred> odyssey4me: no need - I'm writing them right now  14:48
<odyssey4me> mordred: awesome, thanks :)  14:55
<openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack-infra/nodepool feature/zuulv3: [docs] Correct default image name  https://review.openstack.org/520628  15:50
*** openstackgerrit has quit IRC  16:03
<rcarrillocruz> hey folks, how does pushing logs work? Does the executor pull the logs from the nodepool nodes locally and then push them to the log server? Or is it maybe an indirect rsync, where the executor drives the copy from the nodepool node to the log server?  16:46
*** weshay is now known as weshay_bbiab  16:46
*** tobiash has joined #zuul  16:48
<Shrews> mordred: with the native devstack job, what's the proper way to run the post_test_hook? I'm not finding any roles to do that  16:50
<mordred> Shrews: I just put the content of the post_test_hook into the run playbook for shade's devstack jobs  16:52
<mordred> Shrews: post_test_hook itself is a devstack-gate thing  16:52
<Shrews> mordred: i'm not finding that  16:53
<mordred> Shrews: so - basically - after the run-devstack role, just kind of do whatever  16:53
<mordred> Shrews: playbooks/devstack/pre.yaml and playbooks/devstack/run.yaml  16:53
<mordred> Shrews: for shade, I had it run devstack in pre-run, since we're not testing devstack with shade - we just need a devstack so that we can run shade's tests  16:53
<mordred> Shrews: I'd imagine the same pattern could hold for nodepool  16:54
<mordred> Shrews: the existing nodepool devstack-gate job installs nodepool via a devstack plugin, though - so if we keep that model, you'd want to run the run-devstack role in the run playbook, since a patch to nodepool could cause the install to stop working  16:55
<Shrews> mordred: oh, our plugin for shade doesn't really do much other than install shade. nodepool's does a bit more.  16:55
<mordred> Shrews: BUT - we could also change approaches and just use ansible to install nodepool after devstack is done, rather than as a devstack plugin  16:55
<mordred> Shrews: in fact, we could even consider using openstack/ansible-role-nodepool to do the nodepool install  16:57
<mordred> it has support for installing from git already - if we did that, we could add the nodepool devstack job to openstack/ansible-role-nodepool too and we'd have good validation of both things  16:58
<mordred> AND - we could also consider making jobs for non-devstack installs - like an OSA-nodepool job - to get a little more coverage of different types of clouds  16:59
<mordred> anyway - just thoughts  16:59
* Shrews wants to just start simple here for now  16:59
<Shrews> i think the thing to do is just call check_devstack_plugin.sh using 'command'  17:00
<mordred> nod. then I think just doing the run-devstack role in a run playbook, followed by a playbook/role to run the post_test_hook, is your best bet  17:00
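
A rough sketch of what such a run playbook could look like for nodepool; the playbook path and the location of check_devstack_plugin.sh are guesses, not the actual change:

    # playbooks/nodepool-functional/run.yaml (illustrative path)
    - hosts: all
      roles:
        # devstack's native role that runs stack.sh
        - run-devstack
      tasks:
        - name: Run the old post_test_hook equivalent via 'command'
          command: ./tools/check_devstack_plugin.sh
          args:
            # assumed checkout location of nodepool on the test node
            chdir: "{{ ansible_user_dir }}/src/git.openstack.org/openstack-infra/nodepool"
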
<odyssey4me> windmill does a single node with zookeeper and all that  17:00
<mordred> yah  17:00
<odyssey4me> it's built for testing, so it could work well  17:01
<mordred> odyssey4me: yah - I think that'll be a good followup once Shrews has the simple conversion working  17:01
<odyssey4me> for production, ansible-role-zookeeper lacks cluster support, so we're using a different role to cover the zookeeper setup  17:01
<mordred> odyssey4me: is that our/windmill's ansible-role-zookeeper? should we maybe shift to the role you're using?  17:02
<mordred> odyssey4me, Shrews: btw ...  17:02
<mordred> remote:   https://review.openstack.org/520652 Cleanup objects that we create on behalf of images  17:02
<mordred> remote:   https://review.openstack.org/520653 Add method to cleanup autocreated image objects  17:02
<odyssey4me> yeah, after doing some inspection, https://github.com/AnsibleShipyard/ansible-zookeeper meets our needs at this stage - it's pretty straightforward to set up  17:03
<odyssey4me> I had to set two vars (https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-setup-nodepool-yml-L105-L106) and pre-install a JRE (https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-setup-nodepool-yml-L128)  17:03
<odyssey4me> oh, and make sure the nodes can resolve each other: https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-setup-nodepool-yml-L189-L210  17:04
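
A minimal sketch of the prep steps described above, run before applying the zookeeper role; the two role variables live in the linked gist so they are not shown, and the group name, package name, and inventory variable are assumptions:

    - hosts: zookeeper
      become: true
      tasks:
        - name: Pre-install a JRE for zookeeper
          package:
            name: default-jre-headless   # Ubuntu package name; varies by distro
            state: present

        - name: Make sure the zookeeper nodes can resolve each other
          lineinfile:
            path: /etc/hosts
            line: "{{ hostvars[item]['ansible_host'] }} {{ item }}"
          with_items: "{{ groups['zookeeper'] }}"
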
*** openstackgerrit has joined #zuul  17:05
<openstackgerrit> Monty Taylor proposed openstack-infra/nodepool feature/zuulv3: Run image object autocleanup after uploading images  https://review.openstack.org/520657  17:05
<mordred> and ^^ that consumes it  17:05
<mordred> odyssey4me: the resolve-each-other step should be skippable if DNS works, yeah?  17:06
<odyssey4me> yep  17:06
<mordred> odyssey4me: also - your paste reminds me I need to follow up on seeing if qemu can make working vhd images now ..  17:06
<odyssey4me> although you might still have to correct the local /etc/hosts file if something added a private IP as the first entry for the host name  17:06
<odyssey4me> mordred: I tried without vhd-util from the infra ppa and it failed  17:07
<mordred> I think I made a patch to qemu to support it - but never finished validating that it all worked. sigh  17:07
<odyssey4me> I then tried the distro package, but it still has no 'convert' action  17:07
<mordred> odyssey4me: yah - it definitely needs a patched/newer qemu ... but I honestly don't remember if I even got as far as submitting the patch ...  17:08
<odyssey4me> that's why the long treatise of a comment explaining the task - otherwise others will waste their time too  17:08
<mordred> ++  17:08
<odyssey4me> I have another gist somewhere giving the procedure for compiling that toolset too. IIRC it needs a physical host. :/  17:08
<odyssey4me> so I was very happy to find the infra ppa :)  17:09
<odyssey4me> tyvm :)  17:09
<mordred> you're welcome! I'm sad it's needed, but ... yeah - it's a real mess  17:09
<clarkb> at this point even aws is giving up on it, so ya  17:11
<mordred> odyssey4me: in case you get bored ... https://github.com/emonty/do-not-use-patched-qemu  17:11
<odyssey4me> hahaha  17:11
<mordred> odyssey4me: that repo has a debian package patch that should theoretically add support to qemu, based on conversations with BobBall about what xenserver is looking for in the image metadata header  17:12
<odyssey4me> starred, so that when I find myself struggling to sleep I can give it a whirl  17:12
<mordred> (qemu can make vhd images, but xenserver expects creator_app to be 'tap')  17:12
<odyssey4me> srsly - that's it?  17:13
<mordred> odyssey4me: yah - also something about blanking out a batmap flag  17:13
<mordred> https://github.com/emonty/do-not-use-patched-qemu/blob/master/debian/patches/xenserver-support.patch is the patch itself  17:14
<odyssey4me> ja, reading it now  17:14
<mordred> it's ... it's really sad  17:14
<odyssey4me> not sure which is worse - having to patch vhd-util or having to patch qemu  17:15
<odyssey4me> I'm thinking the latter, actually.  17:15
<mordred> yah. well - the vhd-util patch is unacceptably bad and won't ever get upstreamed  17:15
<mordred> if we can verify the qemu patch works, I'm pretty sure we can get it landed upstream  17:15
<mordred> I just keep getting distracted from that task  17:16
<mordred> so maybe, just maybe, we could get lucky enough to have the next ubuntu LTS ship a qemu that can make vhd images  17:16
<odyssey4me> it's probably already too late for that unless you push for it between now and the release  17:18
<openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: [docs] Correct default image name  https://review.openstack.org/520628  17:28
<openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: DNM: Convert from legacy to native devstack job  https://review.openstack.org/520664  17:39
*** nguyentrihai has quit IRC  17:43
<Shrews> heh. i expected ^^^ not to work, but i at least expected some sort of error from zuul  18:11
* Shrews makes tea to ponder this  18:12
<Shrews> oh, i guess no playbooks, no run.  18:14
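
In other words, with no run playbook defined the job simply has nothing to execute; a minimal sketch of the missing piece, with job and playbook names that are illustrative rather than taken from the actual review:

    - job:
        name: nodepool-functional-devstack
        parent: devstack
        run: playbooks/nodepool-functional/run.yaml
        post-run: playbooks/nodepool-functional/post.yaml
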
<openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: WIP: Convert from legacy to native devstack job  https://review.openstack.org/520664  18:20
* jlk waves  19:06
<jlk> sorry I've not been around lately - I've had to go do some other tasks for a bit. But I think I have some time to throw at Zuul again. Cranking up gertty.  19:07
<jlk> One thing I'm very curious about is whether there is appetite yet to think/talk/prototype Nodepool drivers, specifically a k8s driver.  19:08
<jlk> I know SpamapS would be very interested in such a thing. And I might be able to throw time into it  19:08
<openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: WIP: Convert from legacy to native devstack job  https://review.openstack.org/520664  19:18
* jlk hopes he doesn't step on toes by giving out some +3s  19:19
<jlk> jeblair: mordred: is there anything I should be aware of before doing (more) +3s?  19:20
<SpamapS> jlk: yes, I want a k8s driver like yesterday ;)  19:20
<SpamapS> I mean.. honestly.. I have the cloud capacity to run 20x 1GB VMs, no problem  19:21
<SpamapS> but it feels a bit silly to use a 1GB VM for something that takes about 64MB of RAM (running a ruby markdown linter.. or a yaml syntax checker)  19:21
<jlk> yeah, you at least have an OpenStack to play with. I may not.  19:22
<jlk> What does Zuul do when there is a set of jobs running for a given change:patchset and a new patchset for the change comes in? Does it cancel the running jobs? I know it won't run a NEW change:jobset if, when it starts, it finds that the patchset isn't the newest one  19:24
<dmsimard> rcarrillocruz: the logs are pulled from the nodepool VM by the executor and then pushed to the log server  19:25
<dmsimard> we were discussing that just yesterday, in fact  19:25
<SpamapS> jlk: it aborts, yes  19:25
<dmsimard> rcarrillocruz: http://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2017-11-15.log.html#t2017-11-15T21:06:33  19:25
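
A rough illustration of the pull-then-push flow dmsimard describes, written as a post playbook; the paths, the role-free task layout, and the log server address are assumptions rather than the actual zuul-jobs roles:

    # First play: the executor pulls logs from the test node to its own disk.
    - hosts: all
      tasks:
        - name: Pull logs from the test node back onto the executor
          synchronize:
            mode: pull
            src: "{{ ansible_user_dir }}/workspace/logs/"
            dest: "{{ zuul.executor.log_root }}/"

    # Second play: the executor pushes the collected logs to the log server.
    - hosts: localhost
      tasks:
        - name: Push the collected logs from the executor to the log server
          command: >
            rsync -av {{ zuul.executor.log_root }}/
            logs.example.org:/srv/static/logs/{{ zuul.build }}/
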
<SpamapS> jlk: well, cancels them, yes  19:25
<jlk> okay  19:25
<SpamapS> jlk: it's kind of a nice feature actually. :)  19:26
<jlk> indeed.  19:26
<jlk> keeps the queue small  19:26
<SpamapS> jlk: if you don't have an openstack, you can play by doing trusted jobs on the executor  19:27
<pabelanger> Shrews: Hmm, do you mind helping me see why an autohold didn't work?  19:27
<SpamapS> no nodes required ;)  19:27
<pabelanger> the command was: sudo zuul autohold --tenant openstack --project openstack/requirements --job build-wheel-mirror-centos-7 --reason pabelanger  19:27
<pabelanger> and I'm trying to hold a job failure in the periodic pipeline  19:27
<pabelanger> | openstack | git.openstack.org/openstack/requirements | build-wheel-mirror-centos-7 |   1   | pabelanger |  19:27
<pabelanger> that is what I see in autohold-list, but nothing in nodepool  19:27
<Shrews> pabelanger: has that job run and failed since you set it?  19:30
<pabelanger> Shrews: yup  19:30
<pabelanger> let me get the job  19:30
<pabelanger> http://logs.openstack.org/78/520178/5/periodic/build-wheel-mirror-centos-7/7b44080  19:31
<openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: web: add /tenants route  https://review.openstack.org/503268  19:31
<Shrews> pabelanger: i have no idea. Maybe it has something to do with being periodic? I don't see it being de-registered in the zuul logs.  19:38
<Shrews> pabelanger: but the ara logs show this error:    tar (child): bzip2: Cannot exec: No such file or directory  19:38
*** toabctl has joined #zuul  19:40
<Shrews> pabelanger: i don't have enough knowledge of zuul's inner workings to know whether there is a different code path for periodic jobs  19:41
<pabelanger> Shrews: yah, there is a build failure, and I wanted to get onto the node and see why  19:41
<pabelanger> kk  19:41
<pabelanger> no rush, I have a patch up to collect logs  19:41
<pabelanger> figured I'd ask  19:41
<Shrews> first time i've seen autohold not working  19:42
<Shrews> pabelanger: let's ask jeblair when he returns (i'll be out next week, so I won't be able to)  19:43
<Shrews> why does our bindep role default to the 'test' profile and not the default profile?  19:44
<Shrews> that seems counterintuitive  19:45
<Shrews> i'd think that the default would be, ya know, the default  19:46
<pabelanger> I think the idea was to have 'bindep test' pull in the things needed for testing  19:52
<pabelanger> and 'bindep production' for deployment  19:53
<pabelanger> but it hasn't gone that way yet  19:53
<openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: web: add /{tenant}/status route  https://review.openstack.org/503269  20:01
<openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Switch to threading model of socketserver  https://review.openstack.org/517437  20:15
<openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Remove zuul-migrate job  https://review.openstack.org/516028  20:15
<openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Improve error handling in webapp /keys  https://review.openstack.org/517053  20:16
<SpamapS> oh nice, web improvements. :-D  20:27
<openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: More documentation for enqueue-ref  https://review.openstack.org/518662  20:29
<openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Make enqueue-ref <new|old>rev optional  https://review.openstack.org/518663  20:29
<openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Override HOME environment variable in bubblewrap  https://review.openstack.org/519654  20:29
<tobiash> jlk: I left a comment on your comment at https://review.openstack.org/#/c/517078/  20:45
<mordred> woot! web things landing  20:51
<mordred> pabelanger: I think we need to update our puppet to add the new js lib the dashboard uses  20:51
* mordred apologizes for not being super around today - day of calls ...  20:51
*** _ari_ has quit IRC  20:55
<rcarrillocruz> Cool, thx dmsimard  21:01
*** _ari_ has joined #zuul  21:06
*** docaedo has joined #zuul  21:14
*** weshay_bbiab is now known as weshay  21:17
*** docaedo has left #zuul  21:28
<pabelanger> mordred: kk, I can look into that  21:33
<jlk> tobiash: oh weird, gertty is showing them in a weird order  23:29
<jlk> it doesn't list one as a parent of the other  23:29
