Monday, 2017-11-13

*** jkilpatr has quit IRC  [00:16]
<openstackgerrit> Rui Chen proposed openstack-infra/nodepool feature/zuulv3: Apply floating ip for node according to configuration  https://review.openstack.org/518875  [03:53]
<openstackgerrit> Ian Wienand proposed openstack-infra/zuul feature/zuulv3: Add client unit testing  https://review.openstack.org/519222  [04:14]
<openstackgerrit> Ian Wienand proposed openstack-infra/zuul feature/zuulv3: More documentation for enqueue-ref  https://review.openstack.org/518662  [04:19]
<openstackgerrit> Ian Wienand proposed openstack-infra/zuul feature/zuulv3: Make enqueue-ref <new|old>rev optional  https://review.openstack.org/518663  [04:19]
<openstackgerrit> Ian Wienand proposed openstack-infra/zuul feature/zuulv3: Add client unit testing  https://review.openstack.org/519222  [04:19]
*** bhavik1 has joined #zuul  [04:52]
*** lennyb has quit IRC  [05:51]
*** lennyb has joined #zuul  [05:56]
*** smyers has quit IRC  [05:59]
*** robled has quit IRC  [06:00]
*** lennyb has quit IRC  [06:03]
*** lennyb has joined #zuul  [06:05]
*** bhavik1 has quit IRC  [06:23]
*** smyers has joined #zuul  [06:28]
*** robled has joined #zuul  [07:33]
*** hashar has joined #zuul  [08:05]
*** smyers has quit IRC  [10:03]
*** smyers has joined #zuul  [10:15]
*** jkilpatr has joined #zuul  [11:55]
*** rcarrillocruz has quit IRC  [11:57]
*** rcarrillocruz has joined #zuul  [12:13]
<tobiash> does anything speak against adding the commit SHAs of the changes to be verified into the zuul.items[] data structure in ansible?  [13:37]
<tobiash> my use case is commit message verification  [13:37]
<tobiash> currently only the change URL and refs are there, and I don't like adding a secret just to ask gerrit/github for the SHAs of the changes  [13:38]
<tobiash> having a list of SHAs of all commits of the change or pull request would make that possible entirely using git on the node, without any interaction with gerrit/github  [13:39]
*** hashar is now known as hasharAway  [14:07]
* Shrews waves to the post-Sydney crowd and hopes they have come back with fantastic ideas  [14:25]
<Shrews> I have enjoyed the quiet  [14:25]
<pabelanger> o/  [16:04]
<pabelanger> body still recovering from jetlag  [16:04]
<rbergeron> no kidding  [16:06]
* mordred was fine on day one of being back - except that day one of being back wound up not ending until 5am the next day - so day two of being back was O HAI JETLAG  [16:40]
<mordred> tobiash: I can't think of any specific reason not to do that - can you not do the same thing with just the git repo on disk?  [16:42]
<mordred> tobiash: I guess you don't really know what origin/master is anymore once you're in the git repo zuul has prepared, though - so you can't just look at the non-landed commit messages ...  [16:42]
<tobiash> mordred: exactly, that's my problem  [16:43]
<rcarrillocruz> i badly envy people who can sleep on planes  [16:44]
<mordred> nod. yah - it seems like having them in the items dict wouldn't be terrible ... that is, if we actually know them - I know we do for gerrit - do we have a good way to get the list of SHAs for github PRs?  [16:44]
<rcarrillocruz> back on friday, i spent two and a half days completely catatonic  [16:44]
<rcarrillocruz> re: summit jetlag  [16:44]
<mordred> rcarrillocruz: I can sleep on planes, but flying back from australia I didn't sleep much due to time of day and stuff  [16:44]
<tobiash> mordred: oldrev, newrev would probably be sufficient  [16:48]
<tobiash> have to check if change objects are populated with that  [16:48]
<mordred> tobiash: yah - oldrev/newrev is likely all you need - you should be able to walk the commits with git given those  [16:49]
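What mordred describes here is just a rev-list walk between the two revs. A minimal sketch, assuming oldrev/newrev were exposed to the job (the field names and the example policy check are hypothetical) and that it runs inside the git repo Zuul prepared on the node, so no gerrit/github API access is needed:

    #!/usr/bin/env python3
    # Sketch of a commit-message check given oldrev/newrev; illustrative only.
    import subprocess
    import sys

    def commit_messages(oldrev, newrev, repo='.'):
        """Yield the full message of every commit between oldrev and newrev."""
        shas = subprocess.check_output(
            ['git', 'rev-list', f'{oldrev}..{newrev}'],
            cwd=repo, text=True).split()
        for sha in shas:
            yield subprocess.check_output(
                ['git', 'log', '-1', '--format=%B', sha],
                cwd=repo, text=True)

    def check(message):
        # Hypothetical example policy: subject line at most 72 characters.
        return len(message.splitlines()[0]) <= 72

    if __name__ == '__main__':
        oldrev, newrev = sys.argv[1:3]
        bad = [m for m in commit_messages(oldrev, newrev) if not check(m)]
        sys.exit(1 if bad else 0)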
<fdegir> we are starting a Zuul v3 prototype for OPNFV  [17:12]
<fdegir> and the first question I got from the Linux Foundation was resource requirements  [17:12]
<fdegir> like cpu/ram etc. for the machine  [17:12]
<fdegir> it will be a simple prototype, just doing hello world and sleep to imitate our long-running jobs  [17:13]
*** pabelanger is now known as keerthi  [17:13]
*** keerthi is now known as pabelanger  [17:13]
<fdegir> does anyone have any advice about requirements?  [17:13]
<clarkb> fdegir: http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=558 that is one of 10 zuul executor instances  [17:15]
<clarkb> http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=117 one of eight merger instances  [17:15]
<clarkb> http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=557 the single scheduler instance  [17:16]
<tobiash> fdegir: for just hello world jobs I think a quad-core vm with 8GB RAM would be sufficient  [17:16]
<pabelanger> Yah, so far 8GB per instance, per process seems to work well. Scheduler might be larger, depending on the number of projects  [17:16]
<clarkb> this is what we are running in production for ~1k jobs per hour and it seems to hold up  [17:16]
<clarkb> but ya, hello world won't need much  [17:16]
<fdegir> ok  [17:16]
<pabelanger> we have 15GB instances for the all-in-one hands-on workshop, seemed to work well  [17:17]
<fdegir> the idea is to see if it stays up for a week without load  [17:17]
<fdegir> and gradually increase the load  [17:17]
<pabelanger> fdegir: yah, it would be interesting to help with the issue the Linux Foundation previously had  [17:18]
<fdegir> pabelanger: yep - i told them about our conversation last week  [17:18]
<fdegir> there wasn't resistance this time, so things look positive  [17:19]
<fdegir> we should get it running quickly and report back, and hopefully contribute fixes if we identify any  [17:19]
<clarkb> it should be noted that our v3 install is roughly the same size as our v2 install  [17:21]
<fdegir> thanks for the responses  [17:21]
<fdegir> i have some numbers to pass to them now  [17:21]
<clarkb> the biggest thing other v2 installs will be lacking is the launcher -> executors piece, because I expect most were running jenkins  [17:22]
<fdegir> i think the previous attempt was with v2.5  [17:22]
<clarkb> but you can probably start by replacing jenkins with a zuul executor  [17:22]
<fdegir> zuul will run in parallel to jenkins initially  [17:23]
<fdegir> and then onboard a few projects  [17:23]
<fdegir> so we have real users before rolling it out for the rest of the projects  [17:23]
<mordred> fdegir: fwiw, I shared https://etherpad.openstack.org/p/zuulv3-jenkins-integration with odyssey4me a few days ago as well - he may have use for such a thing where he is  [17:24]
<mordred> fdegir: so maybe between the group of us we can find enough interest to cause code to be written  [17:24]
<clarkb> fdegir: ya that's fine, just trying to describe how one might size things, and I think a mostly 1:1 of existing installs is a decent place to start  [17:24]
<mordred> (I'll eventually hit the point where I rage-code it, but my stack is deep enough at the moment it'll be a little while before I do)  [17:25]
<fdegir> mordred: i personally want to go full zuul  [17:25]
<mordred> clarkb: +  [17:25]
<mordred> clarkb: ++  [17:25]
<mordred> fdegir: me too :)  [17:25]
<fdegir> but  [17:25]
<fdegir> the plugin stuff is the biggest concern i have  [17:25]
<mordred> fdegir: but if we have those things it might make a phased transition easier to deal with from a resource perspective  [17:25]
<mordred> yah, for sure - the plugin usage is the thing that will take the most thinking/consideration of how to migrate - since each one will be a case-by-case concern  [17:26]
<fdegir> yep  [17:26]
<fdegir> I tried hard to avoid installing new plugins, but one can only object a limited number of times  [17:26]
<fdegir> I think we need to have some kind of strategy  [17:26]
<fdegir> and might have some kind of hybrid setup initially for the projects that do not want to get rid of jenkins  [17:27]
<fdegir> also, I will have some thoughts written up about this "real" e2e integration stuff  [17:28]
<fdegir> it will be at a bit of a conceptual level, which would be good to get your thoughts on  [17:28]
<odyssey4me> fdegir mordred the path we're taking for now is to implement a watcher for nodepool nodes going into a 'ready' state, which will kick off a thing to register that node with jenkins... then jenkins will mark each node it uses as 'testing' when it uses it and 'used' when done. I've done a little prep work to validate that in https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-nodepool-watcher-py but it's incomplete right now. It was simply a PoC to get to a place where I think this is a plausible option.  [17:34]
<odyssey4me> it's good for us because adding plugins to jenkins is a little more complicated than using this method  [17:35]
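The watcher idea in the gist boils down to a kazoo ChildrenWatch on the nodes znode. A rough sketch, assuming nodepool's ZooKeeper layout at the time (/nodepool/nodes/<id> znodes holding JSON with a 'state' field); the ZooKeeper endpoint and register_with_jenkins() are placeholders, and note Shrews' caveat further down about bypassing node requests and locking:

    # Sketch of a nodepool 'ready node' watcher; paths and fields are assumptions.
    import json
    import time

    from kazoo.client import KazooClient
    from kazoo.exceptions import NoNodeError

    def register_with_jenkins(node_id, node):
        # Placeholder: a real watcher would call the Jenkins API here, and
        # track which nodes it already registered (and take the node lock).
        print('would register %s (%s) with jenkins'
              % (node_id, node.get('interface_ip')))

    zk = KazooClient(hosts='zk.example.com:2181')  # assumed ZK endpoint
    zk.start()

    @zk.ChildrenWatch('/nodepool/nodes')
    def on_nodes_changed(children):
        for node_id in children:
            try:
                data, _ = zk.get('/nodepool/nodes/%s' % node_id)
            except NoNodeError:
                continue  # node deleted between list and read
            node = json.loads(data.decode('utf8'))
            if node.get('state') == 'ready':
                register_with_jenkins(node_id, node)

    # ChildrenWatch fires on a background thread; keep the process alive.
    while True:
        time.sleep(60)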
<fdegir> one thing I must mention is that all the nodes we use are static  [17:36]
<fdegir> so we need this static node driver/provider  [17:37]
<odyssey4me> hmm, how does that work?  [17:38]
<odyssey4me> nodepool specifically implements single-use nodes  [17:38]
<mordred> odyssey4me: we have a static node provider up for review  [17:38]
<odyssey4me> ORLY? what's the use-case for that?  [17:38]
<mordred> https://review.openstack.org/#/c/468624/  [17:38]
<mordred> odyssey4me: well, fdegir is the main one :)  [17:39]
<SpamapS> fdegir: I'm running a low-traffic Zuul (10 - 20 jobs per day, 20 or so repos) on a single 8GB VM. I expect to need to move the executor onto its own VM when we get to a place where we have more than 2 - 3 concurrent jobs  [17:39]
<mordred> odyssey4me: but it's for if you have gear that's maybe special - like "I want to run tests on the machine that's connected to the 3par san"  [17:39]
<fdegir> odyssey4me: we heavily use baremetal PODs in OPNFV that consist of 6 nodes: 1 jumphost, 3 controllers, 2 computes  [17:40]
<fdegir> odyssey4me: the jumphosts of these PODs are connected to jenkins, which schedules jobs on them  [17:40]
<odyssey4me> interesting, that could also then provide the ability to do tests in other ways  [17:41]
<mordred> odyssey4me: or something similar to that - so the static driver would take the request from zuul for that node and then would 'check it out' (take out the lock on it) and hand it to zuul  [17:41]
<odyssey4me> ok, and this is differentiated by the label  [17:41]
<fdegir> yes  [17:41]
<SpamapS> And I currently have a set of labels that is basically 20 "big" vms (8GB 4cpu), and 8 "tiny" (1GB 1cpu).  [17:42]
<fdegir> if I continue speculating, we want to put all the baremetal nodes we have into a common pool  [17:42]
<SpamapS> Most of my jobs are 2-node.  [17:42]
<fdegir> something like 250 nodes  [17:42]
<SpamapS> well.. actually no, I have two classes. static analysis and easy stuff are on the tinies. An end-to-end deploy of kubernetes goes on 2 big vms.  [17:42]
<fdegir> and depending on what type of testing we need to do, that number of nodes needs to be checked out/reserved, with zuul running things on the jumphost of that set of nodes  [17:42]
<tobiash> odyssey4me: on-target testing for embedded software is also a use case for static nodes  [17:43]
<mordred> tobiash: ++  [17:44]
<clarkb> tobiash: fwiw we've done that sort of testing without static nodes using a resource broker in the job (but no reason nodepool can't do that itself)  [17:44]
<odyssey4me> fdegir oh yes, that sounds good - I suppose that could be done now with ironic though... but perhaps in this case there are things that make using ironic complicated  [17:44]
<fdegir> I think one idea was to have a bifrost provider  [17:45]
<fdegir> mordred: ^  [17:45]
<mordred> odyssey4me: yah - we're also talking about / pondering an ironic/bifrost driver for nodepool  [17:45]
<tobiash> clarkb: sure, that would also work, but nodepool is one less indirection and you don't need to spawn a machine just as a proxy to the test hardware  [17:46]
<odyssey4me> but does it need one? surely you set the right things and it just works (tm)  [17:46]
<mordred> odyssey4me: well - for folks who don't have a nova in front of their ironic  [17:46]
<odyssey4me> ah  [17:46]
<clarkb> odyssey4me: ya, it does already work as-is with ironic + nova  [17:47]
<odyssey4me> how would one register these static nodes? just the node list in https://review.openstack.org/#/c/468624/29/doc/source/configuration.rst ?  [17:47]
<mordred> odyssey4me: yup  [17:49]
<mordred> odyssey4me: it's worth noting we also need to add a $mechanism to allow restricting where certain nodepool labels can be used - adding a label for "the giant machine attached to the SAN" is maybe not a thing you then want a random user to be able to use from a job in their .zuul.yaml file  [17:51]
<mordred> maybe it is - but maybe it isn't :)  [17:51]
<odyssey4me> it may make sense to use pools there  [17:52]
<mordred> yah, for sure  [17:56]
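For the archives: registering static nodes with the driver under review is plain nodepool.yaml configuration. A hypothetical excerpt in the shape of review 468624 at the time; the field names were still subject to change before merge:

    # Hypothetical nodepool.yaml excerpt for the static driver under review.
    labels:
      - name: opnfv-pod1-jumphost

    providers:
      - name: static-pods
        driver: static
        pools:
          - name: main
            nodes:
              - name: pod1-jump.example.com   # the jumphost zuul would ssh to
                labels: opnfv-pod1-jumphost
                username: zuul
                ssh-port: 22
                host-key: "ssh-rsa AAAA...example"
                max-parallel-jobs: 1          # hand out the POD exclusively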
<odyssey4me> alright - thanks for the heads-up, that's an interesting set of developments  [17:59]
<odyssey4me> I'm out for the night - cheers all!  [17:59]
<mordred> odyssey4me: woot! have a good night  [18:00]
<Shrews> odyssey4me: that watcher script is interesting, but there needs to be *something* to make a request for a node, otherwise you may not have a ready node, unless you set min-ready > 0, and then your concurrency is limited to that min-ready number.  [18:10]
<Shrews> odyssey4me: the existence of a node request is the thing that causes a node to be created (if necessary), then assigned. Also, you're bypassing the nodepool safety mechanisms that keep a READY node from being assigned more than once.  [18:12]
<Shrews> unless you're doing locking as well, which it doesn't appear that you are  [18:12]
<Shrews> odyssey4me: basically, doing anything other than the request-response protocol that nodepool expects is at your own risk  :)  [18:14]
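The request-response protocol Shrews refers to is ZooKeeper-based: a consumer creates a request znode listing the labels it wants and waits for nodepool to fulfill it. A rough sketch of the submitting side; the path layout and JSON fields are assumptions based on nodepool's ZK schema at the time and may not be exact:

    # Sketch of submitting a proper nodepool node request instead of
    # grabbing READY nodes directly; schema details are assumptions.
    import json
    import time

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk.example.com:2181')  # assumed ZK endpoint
    zk.start()

    request = {
        'node_types': ['ubuntu-xenial'],  # one entry per node wanted
        'requestor': 'jenkins-bridge',
        'state': 'requested',
        'declined_by': [],
    }
    # The '100-' priority prefix plus a sequence node mimics how zuul
    # submits requests; nodepool services lower-numbered requests first.
    path = zk.create('/nodepool/requests/100-',
                     json.dumps(request).encode('utf8'),
                     sequence=True, ephemeral=True, makepath=True)

    while True:
        data, _ = zk.get(path)
        req = json.loads(data.decode('utf8'))
        if req['state'] == 'fulfilled':
            # req['nodes'] lists node IDs; lock each node before using it.
            print('assigned nodes:', req['nodes'])
            break
        time.sleep(1)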
<rcarrillocruz> hey folks, rebased tobiash's change https://review.openstack.org/#/c/500808/  [18:56]
<rcarrillocruz> can someone else review/approve?  [18:57]
<mordred> rcarrillocruz: done. https://review.openstack.org/#/c/453968 needs a +A to make the stack happy  [19:01]
<rcarrillocruz> +A  [19:02]
<rcarrillocruz> thx  [19:02]
*** weshay is now known as weshay_bbiab  [19:03]
<Shrews> rcarrillocruz: can you remove the +A for a bit?  [19:04]
<Shrews> rcarrillocruz: i have a concern about that before we put it through  [19:04]
<Shrews> namely, i don't see that we're defaulting the username, which could break our existing setup w/o a config change first. but i haven't reviewed it all yet  [19:05]
<Shrews> i think we should also probably have jeblair weigh in on it, too, since he was on the fence about it  [19:08]
<mordred> Shrews: I agree with your review comment  [19:11]
<Shrews> rcarrillocruz: mordred: sorry, but i have to go -1 on that, unless someone wants to take on the responsibility of changing our config files and notifying any others that may be using it  [19:11]
<mordred> Shrews: I think it's a good suggested default value - it shouldn't change anything for the common case, and it won't make it any harder for the folks who do need to set the value to something else  [19:13]
<clarkb> Shrews: ya, you should leave that comment on the change; looks like line 271 of config.py is where that would be addressed?  [19:13]
<Shrews> clarkb: ++  [19:13]
<Shrews> clarkb: done  [19:14]
<rcarrillocruz> happy to push a patch  [19:15]
<pabelanger> Shrews: ze03.o.o appears to no longer be listening on port 79. Is there anything I should do before I restart the executor?  [19:16]
<Shrews> clarkb: hmm, it could also be problematic reading old ZK values where no username exists, so we need to change zk.py too, i think  [19:16]
<clarkb> oh right, because that data would be passed through the db  [19:17]
<clarkb> Shrews: as far as summit feedback, the big thing I took away from it is people want an easy method for local execution of playbooks/roles/jobs  [19:17]
<clarkb> also we should address the timeout behavior where each playbook gets a complete timeout of its own  [19:18]
<Shrews> pabelanger: *sigh*  no  [19:18]
<Shrews> pabelanger: i have a change up to try to help: https://review.openstack.org/517437  [19:18]
<pabelanger> Shrews: kk, I'll restart in a few minutes  [19:19]
<Shrews> pabelanger: i'm basically out of ideas at this point  [19:19]
<Shrews> very frustrating not having *anything* in the logs  [19:19]
<pabelanger> +2, so others can view. Or if you want to +3 yourself  [19:22]
<openstackgerrit> Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information  https://review.openstack.org/453968  [19:23]
<Shrews> pabelanger: maybe let jeblair have a look?  [19:23]
<pabelanger> yah  [19:23]
<Shrews> rcarrillocruz: you'll need to see the late review comment i left. also, update the docs for the default value  [19:25]
<rcarrillocruz> yeah, just noticed  [19:25]
<rcarrillocruz> repushing  [19:25]
* Shrews steps away to make some tea. biab  [19:25]
<openstackgerrit> Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information  https://review.openstack.org/453968  [19:26]
*** weshay_bbiab is now known as weshay  [19:51]
<rcarrillocruz> Shrews, mordred: https://review.openstack.org/#/c/453968/ passed the src func tests; I expect the other should pass too  [20:01]
* rcarrillocruz goes for dinner  [20:01]
*** tobiash has quit IRC  [20:06]
*** tobiash has joined #zuul  [20:07]
<fungi> i forget, were we still planning to have a zuul meeting again today? want to make sure i'm around for it if so  [20:27]
<fungi> oh, i bet it's in the last meeting's minutes  [20:28]
<fungi> yeah, we agreed to skip the meeting for last week but not this week, from the look of things? http://eavesdrop.openstack.org/meetings/zuul/2017/zuul.2017-10-30-22.03.log.html#l-39  [20:31]
<fungi> so meeting it is, i guess  [20:31]
<Shrews> fungi: that's what i remember  [20:32]
<fungi> perfect, thanks Shrews!  [20:32]
<Shrews> fungi: i'm just expecting a recap of summit happenings for those of us that stayed home  [20:32]
<fungi> i'm expecting a recap of all the sessions i already forgot i attended ;)  [20:33]
<pabelanger> I think we discussed this before, but cannot remember if we fixed it  [20:46]
<pabelanger> http://logs.openstack.org/a5/a5892e231bdf80f6b36348b8a1dee0ae09d4fb15/periodic/build-wheel-mirror/8e3c0b1/zuul-info/host-info.wheel-mirror-ubuntu-trusty-python2.yaml  [20:46]
<pabelanger> ^ is labeled ubuntu-trusty in the hostname, but is in fact ubuntu-xenial (you can see it in ansible_lsb)  [20:46]
<pabelanger> I want to say we had an issue in nodepool  [20:48]
<pabelanger> but can't find that discussion  [20:49]
<Shrews> fungi: fwiw, i'm fine skipping this meeting too (since it occurs when i want to eat). no one has volunteered to run it today  [20:51]
<openstackgerrit> Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information  https://review.openstack.org/453968  [20:52]
<Shrews> pabelanger: there was something similar long ago when nodepool returned the nodeset in random order, but that was fixed long ago  [20:54]
<pabelanger> Shrews: k, happen to know the last time we restarted the launchers?  [20:56]
<Shrews> pabelanger: i did it last week or the one before, iirc  [20:56]
<pabelanger> k  [20:56]
<Shrews> pabelanger: no, not last week. before summit, definitely  [20:57]
<pabelanger> https://review.openstack.org/507182/  [20:57]
<pabelanger> looks like our fix  [20:57]
<pabelanger> yah, so we should be running that  [20:57]
<Shrews> yeah, that was the fix  [20:57]
<pabelanger> k, for some reason we're still not getting the correct nodes on our build-wheel-mirror jobs  [20:58]
<pabelanger> I've opted to split them into a single OS until the jobs are working correctly: https://review.openstack.org/519469/  [20:59]
<Shrews> possibly a new issue  [20:59]
<pabelanger> maybe  [20:59]
<Shrews> rcarrillocruz: mordred: I -1'd https://review.openstack.org/500800 because it suffers from a similar backward compat issue  [21:04]
<Shrews> we need to keep upgrade paths in mind on these changes  [21:05]
<rcarrillocruz> so, alias ssh_port to connection_port?  [21:05]
<rcarrillocruz> in the constructor  [21:06]
<Shrews> rcarrillocruz: something like: d.get('connection_port', d.get('ssh_port', 22)) maybe in zk.py  [21:06]
<Shrews> rcarrillocruz: the point being that we don't want to force a reset of all zookeeper data. we have pre-existing data using the old values that we need to support.  [21:11]
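Shrews' suggestion is a read-side fallback, so records written before the rename still deserialize. A toy illustration of the pattern (the real change belongs in nodepool's zk.py):

    # Backward-compatible read of a renamed field; illustrative only.
    class Node:
        def __init__(self):
            self.connection_port = 22

        @staticmethod
        def from_dict(d):
            node = Node()
            # Prefer the new key, fall back to legacy 'ssh_port', default 22.
            node.connection_port = d.get('connection_port',
                                         d.get('ssh_port', 22))
            return node

    # Records written by an old launcher keep working alongside new ones.
    assert Node.from_dict({'ssh_port': 2222}).connection_port == 2222
    assert Node.from_dict({'connection_port': 2022}).connection_port == 2022
    assert Node.from_dict({}).connection_port == 22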
<Shrews> Is the zuul meeting in t-minus 48min? I think the time change has thrown it off  [21:12]
<Shrews> ah, yep. nm.  [21:13]
<openstackgerrit> Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Rename ssh_port to connection_port  https://review.openstack.org/500800  [21:27]
<rcarrillocruz> Shrews: like that ^ ?  [21:27]
<Shrews> rcarrillocruz: almost. you can't reference self.ssh_port  [21:36]
<rcarrillocruz> ah yeah, i said earlier i'd alias it but in the end didn't in the constructor  [21:37]
<rcarrillocruz> will remove  [21:37]
<rcarrillocruz> also, got failures on merge on the previous change i +A'd  [21:37]
<rcarrillocruz> looking  [21:37]
*** hasharAway has quit IRC  [21:43]
*** kklimonda_ has joined #zuul  [21:46]
*** zaro_ has joined #zuul  [21:46]
*** patrickeast_ has joined #zuul  [21:46]
*** patrickeast has quit IRC  [21:51]
*** zaro has quit IRC  [21:51]
*** fungi has quit IRC  [21:51]
*** jtanner has quit IRC  [21:51]
*** kklimonda has quit IRC  [21:51]
*** patrickeast_ is now known as patrickeast  [21:51]
*** zaro_ is now known as zaro  [21:51]
*** kklimonda_ is now known as kklimonda  [21:52]
*** fungi has joined #zuul  [21:54]
<openstackgerrit> Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information  https://review.openstack.org/453968  [21:54]
*** hashar has joined #zuul  [21:55]
<fungi> meeting time, i guess? was jeblair planning to be around to chair, or are we winging it?  [22:00]
<clarkb> oh right, DST  [22:02]
*** hashar has quit IRC  [22:10]
<openstackgerrit> Mohammed Naser proposed openstack-infra/zuul-jobs master: Add role to build Puppet module  https://review.openstack.org/519489  [22:33]
<SpamapS> oh sorry I missed the meeting. Prepping for a trip and missed the calendar ding  [22:39]
<SpamapS> Next week I'll have more time for it anyway. I'd really like to discuss the possibility of paring down the requirements for 3.0. I don't see much in the way of potential interface breakages, and we've now seen another potential user (last week while you were all hanging upside down on the earth) who is struggling to explain feature/zuulv3 to potential users at their org.  [22:40]
<clarkb> SpamapS: I'm not sure I understand how a quicker release addresses that issue though?  [22:41]
<SpamapS> clarkb: People need to know it's a project the authors want them to run.  [22:41]
<SpamapS> Right now it looks like a "maybe some day we'll release this" dev preview or RC or something.  [22:42]
<SpamapS> Takes some explaining.  [22:42]
<SpamapS> Eyebrows remain raised.  [22:42]
<SpamapS> The question remains "what are you going to break if I run this today?"  [22:42]
<clarkb> I guess, though that is probably how I would describe it at this point :)  [22:42]
<SpamapS> Yeah, I'm not saying tag feature/zuulv3 as 3.0 _today_  [22:43]
<SpamapS> but... February _might_ be too late for me, for instance.  [22:43]
<SpamapS> I have a tiny team and we're using the crap out of it right now. But there are larger teams who are interested, and we're evaluating things like CircleCI and Wercker right now. By February, the ship may have sailed.  [22:44]
<clarkb> that seems like it would be independent of any tag though, right? like if we tag a 3.0 without whatever features they want then that won't change anything?  [22:44]
<SpamapS> If you don't tag 3.0, they will not even try it.  [22:45]
<SpamapS> So a gap analysis isn't even going to happen.  [22:45]
<clarkb> even though you are already running it?  [22:45]
<SpamapS> It's the first question I've gotten on the two demos I've done for broader audiences.  [22:45]
<SpamapS> "I can't find where to download 3.0. It's 2.5 in master. What is v3? How can I try it?"  [22:46]
<clarkb> I see, so it's people wanting to run it themselves and being confused; they won't even touch yours because it doesn't have some arbitrary tag  [22:46]
<SpamapS> clarkb: yes, even though I am running it. I have other responsibilities.  [22:46]
<clarkb> would just tagging a beta, like today, help that at all?  [22:47]
<SpamapS> I'm going to challenge your premise. A 3.0 tag is anything but arbitrary.  [22:47]
<clarkb> well, it is if you want to apply it before the features we've agreed to being in 3.0 are present  [22:47]
<SpamapS> Features can go in .1, .2, .3  [22:47]
<clarkb> is I guess what I'm saying. 3.0 is a defined thing, and we aren't there yet  [22:47]
<SpamapS> 3.0 is about the interface.  [22:48]
<SpamapS> Are we going to stand by zuul.yaml, zuul.d, ansible with the zuul.* dict, etc.? Right now, 99% yes, but the 1% needs finishing off, and then 3.0. I don't think the features that are marked as must-haves for 3.0 are must-haves.  [22:48]
<clarkb> so my question is, would a prerelease tag like a beta address the concerns of discoverability?  [22:48]
<clarkb> and fwiw we have made several non-backward-compat changes to job definitions since the openstack deployment went up  [22:49]
<SpamapS> If the prerelease tag carries a "we're not going to break these interfaces" promise.  [22:49]
<SpamapS> Yes I know  [22:49]
<SpamapS> they broke all my jobs  [22:49]
<SpamapS> I stumbled on them while git pulling.  [22:49]
<SpamapS> Have it on my TODO to make an intermediary zuul fork and somehow PR and test that my config loads.  [22:50]
<clarkb> I'm not sure we've operated it long enough to know that we won't make any more of those changes  [22:51]
<mnaser> https://review.openstack.org/#/c/519489/ -- is this failure because of my change, or is there an issue with zuul's tox-py35-on-zuul job?  [22:52]
<clarkb> those were unexpected changes based on feedback from users/reviewers and behavior we were seeing  [22:52]
<SpamapS> There won't be much of a community until such a promise is made.  [22:52]
<SpamapS> Folks like me, who are deeply invested, will stick around through the breakages, because we know where things are headed. But newcomers need a gentler hand.  [22:53]
<clarkb> mnaser: http://logs.openstack.org/89/519489/1/check/tox-py35-on-zuul/8fe9438/job-output.txt.gz#_2017-11-13_22_47_12_234675 that may be the problem?  [22:53]
<clarkb> SpamapS: right, the problem is that now is the time to make those changes, because we are learning and haven't committed yet  [22:54]
<clarkb> SpamapS: once you have committed it's too late  [22:54]
<mnaser> clarkb: i don't think that issue is related to my change? all i did was add an ansible role  [22:54]
<mnaser> but i don't want to blindly recheck in case it's an issue  [22:54]
<clarkb> (so I think there is quite a bit of value in being somewhat conservative with that and using real-world experience to base the decision off of)  [22:54]
<SpamapS> clarkb: it's a trade-off. You can go fast and break all the users, potentially killing off early adoption, or go a little slower, and keep us happy and keep the community growing.  [22:56]
<clarkb> SpamapS: but also potentially kill the community because nothing works properly  [22:56]
<clarkb> mnaser: ya, untrusted-secrets is a zuul unittest fixture job  [22:56]
<SpamapS> Things work fantastically well for me, and they seem like they're going ok for OpenStack.  [22:56]
<SpamapS> Also, release early, release often.  [22:57]
<clarkb> mnaser: so I agree that seems orthogonal to your change. It is probably worth a recheck to see if it is consistent  [22:57]
<SpamapS> 3.0.1 can fix the broken stuff that we haven't found yet. 3.1.0 can add the features we wish we'd finished.  [22:57]
<mnaser> clarkb: cool, will do  [22:57]
<clarkb> SpamapS: I think my biggest concern would be something like the branch changes that we've already had to do  [22:59]
<clarkb> SpamapS: not necessarily something in how you write the jobs, but something subtle in how the jobs are executed  [22:59]
<clarkb> and I'm not sure we've exercised it enough to rule another one of those out in the near future  [23:00]
<SpamapS> Anyway, every day we spend as 'feature/zuulv3' is another day that users invest in Jenkins pipeline, or Prow, or whatever other tool that isn't half as well thought through or suited to the challenge as Zuul v3 is already today. Feeling the clock ticking on me right now. Hopefully 3.0 comes in time for me to convince my employer to keep on Zuuling.  [23:00]
<clarkb> but jeblair likely has a better handle on that  [23:00]
<SpamapS> clarkb: Needing to support legacy bugs is just the price you pay for growing the community. Time to start paying it at some point. The network effect is near 0 right now.  [23:01]
<SpamapS> I can't in good conscience create a swell of support for zuulv3 right now.  [23:02]
<clarkb> bugs like that are quite fundamental though  [23:02]
<SpamapS> I can say "it's nearly there!".. but... the barrier to entry remains at the "willing to run a feature branch of a thing" level.  [23:02]
<clarkb> we would not have been able to support v3 for openstack with those bugs in place, for example  [23:02]
<clarkb> so there is legacy, and there is so broken you can't use it  [23:03]
<SpamapS> clarkb: I don't know how to say this more clearly. I understand and respect that. But that bug doesn't matter _at all_ to me, or any of my users. The interface, as it is today, works great. I'd be happy to release as-is.  [23:03]
<clarkb> SpamapS: how would you handle that in your proposed scenario?  [23:04]
<SpamapS> Meanwhile I need to convince GoDaddy to give me resources to do things like contribute kubernetes support (so we can run jobs on a k8s cluster), and possibly add AWS and/or GCE support to nodepool. Etc. etc. etc.  [23:04]
<clarkb> SpamapS: go on to a 4.0?  [23:04]
<clarkb> have a big switch in config?  [23:05]
<SpamapS> clarkb: no, 3.1 has the new way. 3.x carries the broken way. 4.0 drops it.  [23:05]
<SpamapS> The branch change was indeed a nasty one. Wouldn't want to have to carry that legacy forward.  [23:06]
<SpamapS> And I'm not saying _today, do it now_. I'm saying, stabilize the right-now interface, and release. The features planned are not needed to grow a community. If we grow the community, however, they might just help us create those features.  [23:07]
<openstackgerrit> Mohammed Naser proposed openstack-infra/zuul-jobs master: Add role to build Puppet module  https://review.openstack.org/519489  [23:33]
<mordred> SpamapS: yah - I think we're mostly in agreement on that. I don't think the intent for the tag is to get the entire todo-list done - it's to get the things on it done where we still think there's a decent chance we might need to break something like the branch change again.  [23:41]
<mordred> SpamapS: well, that and the getting started docs - because "hey look, we released it!" followed by "getting started docs are still todo" feels like avoidable pain  [23:42]
*** jtanner has joined #zuul  [23:43]
<SpamapS> I'm committing right now to helping in any way I can with getting started docs. I've gotten started 3 times now. I think I can tell others how. :)  [23:46]
<mordred> SpamapS: yes - I figure you probably know more about getting started than anybody else :)  [23:49]
<leifmadsen> are there patches up already?  [23:59]
