Monday, 2017-08-28

*** dkranz has quit IRC01:59
*** pbelamge has joined #zuul05:14
*** pbelamge has quit IRC05:18
*** hashar has joined #zuul05:56
*** dmsimard has quit IRC06:50
*** pbelamge has joined #zuul07:00
*** dmsimard has joined #zuul07:19
*** bhavik1 has joined #zuul09:08
*** bhavik1 has quit IRC09:40
*** SotK has quit IRC10:22
*** SotK has joined #zuul10:22
*** jkilpatr has quit IRC10:36
*** hashar is now known as hasharLunch10:38
*** jamielennox has quit IRC10:51
*** jamielennox has joined #zuul10:52
*** jkilpatr has joined #zuul10:58
*** hasharLunch is now known as hashar12:40
leifmadsenjeblair: mordred: did a patch get pushed to gerrit yet? if so, I'd like to mention the link on the notes from last week13:03
*** dkranz has joined #zuul13:27
mordredleifmadsen: yah - https://review.openstack.org/#/c/498050/13:44
leifmadsenhawt13:44
leifmadsenI'll note it!13:44
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Add unittest for secrets data  https://review.openstack.org/48491113:56
Shrewshrm, we should get tristanC to look at 498050, too13:59
leifmadsenall teh lookz!14:02
leifmadsenShrews: made really good progress on Friday with mordred and jeblair. We have a bunch of notes while stepping through if you're interested. Made it I think about half way. Got through to the point that nodepool can instantiate VMs in rdocloud14:02
Shrewsleifmadsen: great. was out of town friday, so i'd be interested in seeing those notes14:03
leifmadsenlinking14:04
leifmadsenhttps://etherpad.openstack.org/p/zuulv3-quickstart14:04
leifmadsenstill more to do, but we started from a fresh CentOS 7.3 VM, and then started building things out14:04
leifmadsenthe goal is to build a hello world, triggered from a GitHub PR and resulting in an Ansible run that does something dumb like echo "Hello world" :)14:05
leifmadsenper our conversation a couple weeks ago14:05
*** _ari_ is now known as _ari_|conf14:05
leifmadsenmordred: I had been thinking more over the weekend about why I like doing this, "start from scratch" type documentation, and realized that it's the same thing we did for the Asterisk book. Documenting things from pure scratch typically results in bugs found and fixed. We found (and fixed) a ton while writing the Asterisk book.14:06
leifmadsenFound at least one here too :)14:06
mordredleifmadsen: yah - I think it was really useful14:07
leifmadsenfor sure, I learned a ton, and you learned a ton :D14:08
leifmadsenamazing what happens when you have to go back and redeploy your own software from scratch lol14:09
leifmadsenI'm really looking forward to getting this all done so that I can run with it and build out some larger scenarios14:09
leifmadsenget this out of the way, then start taking all the ideas jeblair had, and we could probably end up writing a small Zuul book14:10
mordredleifmadsen: we should write a large Zuul book ... because we're going to need chapters on the idea of gating in the first place, and how to philosophically approach development in a world where you have the ability to do multi-repo gating14:49
mordredleifmadsen: it'll be a fun book to write14:49
mordredjeblair: I've done the refactor on the nodepool patch14:59
mordredjeblair: one remaining thing is that I kind of think we should error in some way if you have a config that references a cloud-image that doesn't exist in your config - but we don't have much in the way of erroring on invalid config14:59
mordredjeblair: I'm just putting in a code TODO for now15:00
mordredor, actually, we do raise in one place - I'm going to try that and see how it feels15:00
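[editor's note: a minimal sketch of the kind of "error on a dangling cloud-image reference" validation mordred describes above — the function and argument names are hypothetical, not nodepool's actual config objects:]

```python
# Hypothetical sketch: raise early when a label in the provider config
# references a cloud-image name that was never defined.  Not nodepool's
# real code, just an illustration of the validation being discussed.
def validate_cloud_images(labels, cloud_images):
    """labels: list of dicts each with a 'cloud-image' key;
    cloud_images: dict keyed by known cloud-image names."""
    for label in labels:
        image = label.get("cloud-image")
        if image is not None and image not in cloud_images:
            raise ValueError(
                "label %r references unknown cloud-image %r"
                % (label.get("name"), image))
```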
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: WIP: Fix broken use of pre-existing cloud images  https://review.openstack.org/49805015:03
mordredjeblair, leifmadsen, Shrews: ^^ I think that's nicer now - I moved the logic into the ProviderCloudImage class and had the config reading put the ProviderCloudImage into the ProviderLabel instead of just the cloud-image name15:05
mordredShrews: ^^ that's the bug we found in nodepool working through setting up a minimal nodepool with leifmadsen - it still needs a unit test which I haven't gotten to. makes me think we should also add a pre-existing image entry to the devstack job, maybe referencing that cirros image that devstack uploads or something15:06
Shrewsmordred: i seem to recall tristanC intentionally used labelReady() instead of imageReady() for reasons. i'll let tristan comment on that15:06
mordredShrews: yah - I re-updated the patch and it's back to labelReady15:06
Shrewsoh15:07
mordrednow that ProviderCloudImage is in place, the awkward dance of passing stuff around isn't needed anymore15:07
Shrewsmordred: ??? https://review.openstack.org/#/c/498050/2/nodepool/driver/__init__.py15:07
mordredand I agree - the thing the launcher is asking about is "do we have images ready for this label"15:07
mordredgah15:07
mordredone sec15:07
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: WIP: Fix broken use of pre-existing cloud images  https://review.openstack.org/49805015:11
mordredShrews: thanks - I had updated it in the method and forgot to fix the call sites :)15:11
Shrewswow. that seems to break the world15:19
mordredyeah?15:20
mordredpiddle15:20
jeblairthe general approach makes sense to me though15:20
Shrewsis "python -m venv env" the same as "virtualenv env" ? The first is new to me15:22
mordredShrews: python -m venv doesn't seem to work for me - where's that?15:22
Shrewserr, python3 -m venv15:23
Shrewsit's in the etherpad leifmadsen linked15:23
Shrewsline 22715:24
mordredinteresting - I think it may have something to do with ensurepip - it breaks in a different way on my laptop15:25
Shrewsmordred: i guess it's new to you, too  :)15:25
clarkbpython3 has a built in virtualenv tool. I don't know that anyone actually uses it15:26
mordredShrews, clarkb: yes - I have just verified that on centos7 if you install epel then install python34-pip and then run python3 -m venv env15:28
mordredthat will create a virtualenv called env15:28
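[editor's note: `python3 -m venv` drives the stdlib `venv` module, which answers Shrews's question — it is the built-in counterpart to `virtualenv`. A small sketch of the programmatic equivalent; the ensurepip comment reflects the distro-packaging breakage discussed above:]

```python
import os
import tempfile
import venv

# 'python3 -m venv env' is the stdlib equivalent of 'virtualenv env'.
# Programmatically the same thing is venv.create().  with_pip=True runs
# ensurepip, which is the piece distros often split into a separate
# package (hence the different failure modes seen above); we skip it
# here so the sketch works without network or extra packages.
target = os.path.join(tempfile.mkdtemp(), "env")
venv.create(target, with_pip=False)
print(os.path.exists(os.path.join(target, "bin", "python")))  # → True on POSIX
```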
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: WIP: Fix broken use of pre-existing cloud images  https://review.openstack.org/49805015:29
leifmadsenI'm the guy that throws a grenade in the room and lets all the engineers develop a system to put the pin back in15:35
Shrewsjeblair: care to +A https://review.openstack.org/497480 so nodepool tests are stable again for us?15:36
jeblairShrews: :( done15:37
Shrewsjeblair: it's a head scratcher, for sure15:38
openstackgerritMerged openstack-infra/nodepool feature/zuulv3: Revert "Allow launcher to stop quicker when asked"  https://review.openstack.org/49748015:42
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Run post playbooks on job timeout  https://review.openstack.org/49827015:57
jeblairhah!  i'm pretty sure the current job timeout is in seconds, but the docs say minutes.  which should we use?16:02
jeblair(or should we stick with integer seconds but also have the configloader handle string values such as "90m" and "2h"  ?)16:04
pabelanger90m / 2h might be more user friendly16:06
pabelangerbut, no preference here16:06
jeblairokay, i'm going to set another autohold so i can iterate on devstack-legacy in place16:08
mordredjeblair: cool16:14
mordredjeblair: I also agree - seconds - but if we can do friendly names with suffixes like 90m / 2h that would be nice16:15
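[editor's note: the suffix idea jeblair floats — integer seconds, plus configloader support for strings like "90m" and "2h" — could look something like this. A sketch only, not zuul's actual configloader code:]

```python
import re

# Sketch: plain integers are taken as seconds; string values may carry
# an s/m/h suffix and are scaled accordingly.  Illustrative only.
_SUFFIXES = {"s": 1, "m": 60, "h": 3600}

def parse_timeout(value):
    if isinstance(value, int):
        return value
    match = re.fullmatch(r"(\d+)([smh])", value)
    if not match:
        raise ValueError("invalid timeout: %r" % value)
    return int(match.group(1)) * _SUFFIXES[match.group(2)]
```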
*** pbelamge has quit IRC16:16
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Fix job timeout docs  https://review.openstack.org/49851316:20
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Fail early if people attempt to add zuul vars or secrets  https://review.openstack.org/48400016:22
jlko/16:41
openstackgerritMerged openstack-infra/zuul-jobs master: Add role for adding ssh key to remote nodes  https://review.openstack.org/49810916:45
jeblairmordred: not sure if you saw jlk's post-approval comment on https://review.openstack.org/49810916:46
pabelangerjeblair: just mocking up afs publisher here locally, if we kinit / aklog before our rsync, we should be able to pull directly from node to /afs/.openstack.org/docs. Or, would you like to first stage into jobsdir, then kinit / aklog, rsync to /afs/.openstack.org/docs16:47
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Run post playbooks on job timeout  https://review.openstack.org/49827016:47
jeblairpabelanger: if you can swing it in one pass, i think that's fine.  i assume we had it in two because of some complication in matching the jenkins behavior.16:50
pabelangerk, I'll attempt that first16:51
dmsimardjlk: re: block dashes17:01
dmsimardjlk: I guess you're right, it's probably a syntax I picked up from openshift-ansible :)17:02
jlkit's somewhat sad it works both ways.17:02
jlkhonestly I don't care much, so long as we all agree on a way of doing it17:02
dmsimardjlk: well, it's not so much ansible as it is yaml17:02
dmsimardyaml doesn't care about the order of the keys17:03
dmsimardbut ++ on using the same syntax consistently everywhere17:03
dmsimardI don't like seeing the yaml syntax mixed with the ansible-style yaml syntax (with ='s)17:04
jlksame17:04
dmsimardansible-lint allows us to write custom rules iirc, maybe we can consider doing that later17:04
jlkmordred: jeblair: I'm poking at a follow up role to remove the added key bit that just got merged to zuul-jobs.17:04
jeblairjlk: thanks17:05
jlkjeblair: mordred: I wonder if add-build-sshkey and add-sshkey will trample on each other.17:07
dmsimardmordred: did you need to land anything in zuul *before* I ship the ara dot release ?17:07
jeblairdmsimard: nah we should be fine17:07
jlkit LOOKS like they might touch the same file (~/.ssh/id_rsa of whatever user ansible is connecting as)17:08
dmsimardok, going through a last sanity check and more than likely tagging ara for release17:08
jeblairjlk: i think so.  that's probably okay by virtue of their being used by different jobs, though really we should alter the proposal jobs to use a non-default key.  but maybe we can do that post-cutover where there are fewer moving parts.17:09
jlkokay17:09
jeblairjlk: (by different jobs, i mean that the proposal jobs are not multinode, and so won't be performing any inter-node ssh ops)17:09
openstackgerritJesse Keating proposed openstack-infra/zuul-jobs master: Add a role to remove an ssh private key  https://review.openstack.org/49853017:11
*** bhavik1 has joined #zuul17:39
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Add create / destroy roles for AFS tokens  https://review.openstack.org/49854717:53
*** bhavik1 has quit IRC18:00
dmsimardjeblair, mordred: btw, a question on openstack-infra http://lists.openstack.org/pipermail/openstack-infra/2017-August/005566.html18:47
jeblairdmsimard: well, the plan is that every project in openstack-infra will use zuulv3 if we can finish our outstanding tasks by the ptg18:52
dmsimardjeblair: technically ara is not openstack-infra :)18:52
jeblairdmsimard: every openstack project18:52
jeblairdmsimard: every project using the infrastructure run by openstack-infra18:53
dmsimardjeblair: oh, I see what you mean. I know the cutover is right before the PTG but I meant to ask if we could enable it sooner rather than later so I can start addressing potential changes required between zuul v3 and ara 1.0 progressively rather than in one big chunk. There's already a lot of work done in ara 1.0 and there will be a bunch more by the PTG.18:55
jeblairdmsimard: i think we need to focus exclusively on preparing for the ptg cutover or it won't happen18:55
dmsimardI don't necessarily disagree, but most of my work on ARA is on my personal time, not work time. Is enabling v3 for ara an arduous process ?18:57
dmsimardThis won't prevent me from continuing to contribute to the migration efforts, but I can wait I guess18:59
jeblairdmsimard: thanks, i appreciate that18:59
Shrewsmordred: curious about https://review.openstack.org/497968 ...  zookeeper is required, but it's not required on the same host as nodepool.19:03
jeblairShrews: that's a good point; i think we were very focused on the all-in-one scenario.  maybe we can add another tag for that?19:06
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Set base environment as python3  https://review.openstack.org/49625119:06
Shrewsjeblair: tag?19:08
Shrewsi assume that's a bindep thing. i haven't taken the time to learn that. perhaps i should  :)19:09
jeblairShrews: bindep [bracket thingies] are arbitrary19:09
jeblairShrews: like "[test]"19:09
jeblairso maybe "zookeeperd [allinone]" or something?19:09
Shrewsah, i see. yes. i think that would be better19:10
pabelangerjeblair: I've been reading up on rsync and filters; by default ansible will pass the -F flag ( --filter='dir-merge /.rsync-filter'). This then allows us to potentially use the .rsync-filter file in our directories to exclude things. As an example, I am testing http://paste.openstack.org/show/619673/ locally and it does appear to be working how I _think_ it should be.  When you did the original afs19:22
pabelangerdocs, did you consider using -F for root-markers?19:22
dmsimardmordred, jeblair: ara 0.14.1 is tagged, it will be released when the gate is in a bit of a better shape19:28
jeblairpabelanger: i don't think i understand the connection you're making19:33
jeblairpabelanger: how is -F related to root-markers?19:33
pabelangerjeblair: sure, looking at the rsync command we are using today in zuulv2.5, I see we are setting up a filter, based on how the root-marker was found: http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/library/zuul_afs.py#n72 We dynamically create the excludes file, then pass it to rsync.19:35
pabelangerI am curious if we still need to do that, and if so, could using -F (.rsync-filter) be an option moving forward in zuulv319:36
jeblairpabelanger: do you have a specific proposal in mind?19:36
pabelangerjeblair: not really, I'm just looking to see if we can leverage the existing functionality today with the synchronize module; otherwise, I am wondering if we might just import zuul_afs into project-config / zuul-jobs and continue using that for the sync logic.19:39
pabelangerI mean, the building of the root-marker logic, kinit / aklog is handled19:40
openstackgerritMerged openstack-infra/nodepool master: Clarify diskimage names in docs  https://review.openstack.org/49078119:41
jeblairpabelanger: something has to build the root-marker list, right?  after that, whether the exclude list is incorporated implicitly (-F) or explicitly (--filter=) doesn't seem consequential to me -- except one of them opens up an avenue for jobs to manipulate the result by laying down their own rsync filter files.19:42
jeblairpabelanger: at any rate, building the exclude list is the complicated part.  you could probably do it with a find command.  or you could use a module that uses the existing python code19:43
jeblairpabelanger: after that, i don't think using 'synchronize' vs 'command: rsync' buys us much, so i wouldn't expend too much energy trying to use it if it doesn't fit19:44
mordredyah- my thought was just putting zuul_afs module into zuul-jobs19:45
pabelangerokay, so are we open to the idea of having a job manipulate the result by adding their own rsync filter files?19:45
jeblairpabelanger: i think it's a bad idea19:46
pabelangerokay19:46
jeblairi think docs publishing is hard enough to understand as it is19:46
dmsimardjlk: this inventory and network overlay stuff is eating away at my patience :)19:49
jlkhaha :(19:49
pabelangerokay, so sounds like we could move zuul_afs into zuul-jobs, extract out bits for kinit / aklog, and leave the rest of root-marker / rsync logic alone for now19:49
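[editor's note: the "complicated part" jeblair mentions — building the exclude list from root-markers — could be sketched roughly like this. Loosely modeled on the zuul_afs.py behavior referenced above, with illustrative names; not a drop-in replacement:]

```python
import os

# Sketch: walk the source tree and collect every directory that carries
# a root-marker file.  A publisher would then build rsync excludes so
# that marked subtrees other than the one being synced are skipped.
def find_root_markers(source, marker=".root-marker"):
    markers = []
    for dirpath, dirnames, filenames in os.walk(source):
        if marker in filenames:
            markers.append(os.path.relpath(dirpath, source))
    return sorted(markers)
```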
dmsimardjlk: found two new issues, fixed one.. working on the other19:50
dmsimardjlk: both fixed, https://review.openstack.org/#/c/498207/ had a stray host_counter left and https://review.openstack.org/#/c/435933/ I forgot to add [0] when retrieving the hostvars19:55
dmsimardhopefully this one is the good one19:55
mordredjlk, jeblair: from scrollback, I agree re: add-sshkey and add-build-sshkey - add-sshkey will definitely step on the stuff multinode does - but as jeblair said I think that's fine - it's certainly worth clarifying things though19:57
dmsimardclarkb, jeblair: still looking for a last +2 on 496355, 496356 (I actually hit a 2.2 specific issue in a review) and 49693519:57
dmsimardafk for a bit19:59
*** jkilpatr has quit IRC20:00
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: WIP: Fix broken use of pre-existing cloud images  https://review.openstack.org/49805020:00
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Create upload-afs role  https://review.openstack.org/49858820:19
*** dkranz has quit IRC20:29
jeblairreloading zuulv3 to pick up new tenant config21:03
openstackgerritMonty Taylor proposed openstack-infra/nodepool feature/zuulv3: WIP: Fix broken use of pre-existing cloud images  https://review.openstack.org/49805021:04
jeblairjlk: can you help make sense of this error?  http://paste.openstack.org/show/619694/21:04
jeblairjlk, mordred: also, based on logs, it looks like we're getting github events for 3 ansible forks that aren't familiar to me21:05
jeblairthey arrive together: http://paste.openstack.org/show/619696/21:06
*** jkilpatr has joined #zuul21:06
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Create upload-afs role  https://review.openstack.org/49858821:09
mordredjeblair: looking21:09
jeblairmordred: it doesn't seem to be urgent, so no worries if we just want to note that for later21:10
mordredjeblair: fascinating - yah, let's look at that later, I bet there is something in the fork data model we didn't know about21:11
mordredjeblair, pabelanger: ^^ oh neat! so with v3+bubblewrap we can just kinit and not have to wrap each call in k5start?21:30
jeblairmordred: yep (of course we can still use k5start if it's convenient)21:31
openstackgerritMerged openstack-infra/zuul-jobs master: Add create / destroy roles for AFS tokens  https://review.openstack.org/49854721:33
mordredjeblair: woot21:35
mordredpabelanger: +2 on the publish role - with comments. (as in, I'd rather forward progress continue, but the docs are confusing if anybody gets a moment)21:36
pabelangersure, I can update it shortly21:37
mordredpabelanger: also, if you get a sec, 498111 and its two parents, 498559 and 497618 are all good to go21:43
pabelangersure, looking now21:44
pabelanger559 still in check, so only did +2 to wait on results. Everything +321:48
mordred\o/21:49
dmsimardZuul meeting in 10 minutes in #openstack-meeting-alt21:51
dmsimardmordred, jeblair, SpamapS, jlk, pabelanger, fungi, leifmadsen, clarkb ^21:51
fungiyay!21:52
dmsimardEtherpads of interest: https://etherpad.openstack.org/p/zuulv3-pre-ptg && https://etherpad.openstack.org/p/AIFz4wRKQm21:54
*** hashar has quit IRC22:10
clarkbdmsimard: I actually see why it works now, cryptography publishes a generic linux wheel now22:18
clarkbdmsimard: dvr grenade multinode job doesn't look so happy22:40
dmsimardclarkb: yeah I commented on it22:40
clarkbI'm trying to find where (if anywhere) we've logged the steps taken to set up the ovs tunnel22:40
dmsimardclarkb: the initial tempest run came back green22:40
clarkbara errors when I try to see the things22:41
dmsimardclarkb: actually it fails on the parent commit22:41
dmsimardclarkb: the inventory rework22:41
dmsimardclarkb: https://review.openstack.org/#/c/498207/22:41
dmsimardtempest is green on what I assume is the original deployment http://logs.openstack.org/07/498207/6/check/gate-grenade-dsvm-neutron-dvr-multinode-ubuntu-xenial-nv/7ad728b/logs/old/testr_results.html.gz22:41
dmsimardbut then fails ... pinging cinder ? http://logs.openstack.org/07/498207/6/check/gate-grenade-dsvm-neutron-dvr-multinode-ubuntu-xenial-nv/7ad728b/logs/grenade.sh.txt.gz#_2017-08-28_20_58_47_95722:42
* dmsimard not super familiar with grenade or devstack jobs22:42
mordreddmsimard: I think mapping v3 git repo states into what's needed for grenade may still need some braining22:43
mordredoh - but that's a v2 run22:43
clarkbdmsimard: http://logs.openstack.org/33/435933/46/check/gate-grenade-dsvm-neutron-dvr-multinode-ubuntu-xenial-nv/18ea9dc/logs/old/testr_results.html.gz its failing22:43
clarkbthe link you have is an old run22:43
clarkber its failing the first time around on failing to ssh so something is broken with the overlay I think22:43
mordredclarkb: those are two different jobs22:44
dmsimardclarkb: it's not worth looking at overlay patch until the parent inventory patch is passing22:44
clarkboh got it22:44
dmsimardclarkb: which is the job results I was referring to came from22:44
dmsimardinventory patch: https://review.openstack.org/#/c/498207/22:45
clarkbcall me insane but I really don't like that patch...22:45
clarkbit is way more difficult to read22:46
clarkbalso treating those bash files as ini is not really correct... though at this point unlikely to cause problems22:46
dmsimardclarkb: it's a way to get key=value vars from the file22:47
clarkbI think keeping that in bash will be significantly smaller since you can just source the files22:48
clarkb(and be much easier to read)22:48
clarkband it will parse correctly22:48
dmsimardclarkb: there are things that ansible makes easier, like differentiating v4 vs v622:49
dmsimardclarkb: ultimately I use the "v3-style" hostvars in the network overlay patch to use the right ip addresses at the right place22:50
dmsimardwriting it in bash kind of sucks because the var format sucks, it's like nodepool='{"key": "value", "anotherkey": "anothervalue"}'22:50
dmsimarddoesn't mean I can't do it, but it's not going to be particularly clean22:51
clarkbya but you can actually safely parse those files (which your ansible is not doing, it's working by way of hacks)22:51
dmsimardIt's not too hacky, the format is static and it's using a "properties" style of ini lookup22:52
*** olaph1 is now known as olaph22:52
dmsimardunless we start changing the /etc/nodepool/provider format it should not break22:52
clarkbso you'd have localhost connection=local nodepool='{ "az": "$AZ", "ip": "$IP" }'22:52
clarkband you'd end up with like 10 lines of bash22:52
SpamapSdamn.. meatspace got me twice in that hour :-P22:52
pabelangerSpamapS: meatspace, I need to use that more22:53
fungithat and "wetware"22:53
dmsimardclarkb: I don't have a strong opinion, I just write less and less bash now that Ansible is in my life.. I can be swayed one way or the other22:54
fungiclarkb: yeah, the manylinux1 wheel on pypi for cryptography is the one i was referring to anyway, not the ones from our wheel builders22:54
fungispeaking of which, i still wish i had time to figure out how to get our wheel builders to only build wheels when there aren't any already in the pypi mirror22:55
pabelangermordred: re: wheel mirrors, are you thinking we'd still rsync data back via executor bwrap environment?22:55
mordredpabelanger: nope22:55
mordredpabelanger: for wheel mirrors I believe we need to keep the wheel mirror build hosts around for now - because one of the important things they do is keep a local cache of wheels built22:56
clarkbdmsimard: I'm writing up what it would look like keeping bash processing of bash files. I think it may help convince it is nice and simple :)22:56
mordredpabelanger: so I think for the wheel mirror jobs we do the add-to-inventory trick from tarballs.o.o until we can handle them with static nodes22:57
pabelangermordred: Right, but we might be able to use afs for that part too? Or would the network performance on that be too bad22:57
mordredpabelanger: the structure is a bit different unfortunately22:57
dmsimardclarkb: sure, one challenge I potentially see with bash is emulating the "with_together" from Ansible but ultimately we don't really care about the public ipv4's of the subnodes so it's probably not that big of a deal to just not set them22:57
fungipabelanger: afs on the executor (so there's some cache maintained) or on the individual nodes?22:58
mordredpabelanger: so we'd have to engineer something clever - which we probably should do - but for now I think keeping those as makeshift static hosts will get us over the hump22:58
clarkbdmsimard: yes I am going to completely ignore public and interface ip as they are unnecessary here22:58
fungithe latter seems like it would not work out so well (likely worse than rebuilding them there)22:58
mordredfungi: yah22:58
pabelangermordred: okay, static node works22:58
mordredpabelanger: luckily we have these nice roles made already to do that :)22:58
clarkbalso ipv622:59
pabelangerfungi: haven't tested it, but jeblair added a trusted bind mount to bubblewrap that should allow us to kinit / aklog to the /afs/.openstack.org volumes22:59
dmsimardclarkb: like, even stuff like az, cloud and stuff. We probably don't care about it. I just went with parity over what v3 hostvars provided22:59
pabelangermordred: indeed22:59
fungionce we can grow an extensible artifact stashing and sharing solution between job runs, something like a reused wheel build cache is probably a viable solution22:59
mordredfungi, pabelanger: yah - that's what's gonna let us do docs publication and the like22:59
dmsimardclarkb: maybe I'm making this complicated on myself by overdoing it :)22:59
clarkbdmsimard: yes I think so :)22:59
clarkbdmsimard: if we trim it down to only the necessary info the change becomes tiny (and much easier to understand/debug)23:00
jeblairdmsimard, clarkb: remember that this inventory shim is temporary; it won't be used in the forward-looking v3 jobs.23:00
clarkbjeblair: ya exactly23:00
dmsimardjeblair: aye23:00
fungihrm... thinking about that... maybe afs through the executors (with a cache) _is_ an eventual solution to sharing intermediate build artifacts between jobs23:00
mordredpabelanger: I'll push up some jobs real quick for the wheel mirror building23:00
dmsimardclarkb: with that in mind, let me put a patch up with bash instead23:00
dmsimardclarkb: unless you want to send a patch, that's cool too23:01
pabelangermordred: wfm23:01
clarkbdmsimard: I've got it half written if you want to wait a moment23:02
dmsimardack23:02
mordredpabelanger: I think once your "publish XXX to AFS from artifacts" stuff is good to go, then I think that actually takes care of _all_ of the jobs that publish to afs and we can take care of them in migration23:02
mordredpabelanger: so that one is one of the ones that is a big bang-for-the-buck :)23:03
pabelangermordred: ya, I think we can start testing something tomorrow morning. The part that still needs work is some of the .root-marker setup. But for testing, I'm going to keep it super simple.23:03
mordredpabelanger: luckily zuul publishes its docs via afs - so we've got a good test case :)23:04
pabelangerexactly23:04
clarkbdmsimard: http://paste.openstack.org/show/619713/ how does that look?23:04
clarkbdmsimard: I am somewhat assuming the format there is json?23:05
dmsimardclarkb: yeah, it's a var (nodepool) that is a dict (json)23:05
* dmsimard looks23:05
clarkbdmsimard: I'll let you push it if you like it and feel free to make hints as I was somewhat inferring the json stuff23:05
clarkbdmsimard: oh except that isn't backward compatible23:06
clarkbyou have to keep the counter stuff to be backward compat23:06
* clarkb edits23:06
dmsimardclarkb: yeah I had replaced the counter by the index of the host in the group23:06
dmsimardclarkb: I guess we can keep the counter to change even less things :)23:07
clarkbhttp://paste.openstack.org/show/619714/23:07
clarkbya I forgot we had to keep counter to be forward and backward compatible23:07
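[editor's note: the variable format dmsimard describes above — a shell-style key=value file where the nodepool value is itself a JSON blob like nodepool='{"key": "value"}' — can be parsed without sourcing it. A hypothetical sketch; the function and key names are illustrative, not the actual shim from the pastes:]

```python
import json
import shlex

# Sketch: read a shell-style key=value file (the /etc/nodepool/provider
# format discussed above).  shlex handles the quoting, so the quoted
# JSON value of the 'nodepool' key survives intact and can be decoded.
def parse_provider_file(text):
    result = {}
    for token in shlex.split(text, comments=True):
        if "=" in token:
            key, _, value = token.partition("=")
            result[key] = value
    if "nodepool" in result:
        result["nodepool"] = json.loads(result["nodepool"])
    return result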
clarkbdmsimard: should I go ahead and push that up?23:14
dmsimardclarkb: I'm sending it now23:15
clarkbok23:15
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Include the prepared projects list in zuul variables  https://review.openstack.org/49861823:25
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Include the prepared projects list in zuul variables  https://review.openstack.org/49861823:31
dmsimardclarkb: because I wanted to confirm that I wasn't crazy -- the wheel building stuff is relatively new indeed https://github.com/pyca/cryptography/commit/cc78c30fd95bb57fe75330ff7c702eb1f2ac469523:38
dmsimardsome amount of follow up patches to that23:39
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Create upload-afs role  https://review.openstack.org/49858823:39
mordredjeblair: dependencies can be added to a job variant in a project-pipeline definition right?23:42
jeblairmordred: should be23:42
mordredjeblair: k, cool. I just wrote a job that wants to depend on a different job - but which job it depends on could vary by project, so putting the dependon the job itself seemed wrong23:43
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Create zuul_executor_dest variable for fetch-sphinx-output  https://review.openstack.org/49862423:44
jeblairmordred: ya, i think of that as the primary use case actually23:48
mordredjeblair: in your prepared projects list patch ... it says "the projects of any items this item depends on, as well as the projects that appear in job.required-projects" ... are those different sets? - like, does adding a depends-on automatically get that project added even if it's not in required-projects?23:49
jeblairmordred: yes23:51
jeblairmaybe that's worth reconsidering -- after all, why include that project if the job doesn't use it?  :)23:52
jeblairhttps://review.openstack.org/498270 and https://review.openstack.org/498513 are also ready to go23:53
jeblairmight be nice to get the timeout fix in the same restart23:54
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Add trigger-readthedocs job  https://review.openstack.org/49862623:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!