*** jkilpatr has quit IRC | 00:16 | |
openstackgerrit | Rui Chen proposed openstack-infra/nodepool feature/zuulv3: Apply floating ip for node according to configuration https://review.openstack.org/518875 | 03:53 |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul feature/zuulv3: Add client unit testing https://review.openstack.org/519222 | 04:14 |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul feature/zuulv3: More documentation for enqueue-ref https://review.openstack.org/518662 | 04:19 |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul feature/zuulv3: Make enqueue-ref <new|old>rev optional https://review.openstack.org/518663 | 04:19 |
openstackgerrit | Ian Wienand proposed openstack-infra/zuul feature/zuulv3: Add client unit testing https://review.openstack.org/519222 | 04:19 |
*** bhavik1 has joined #zuul | 04:52 | |
*** lennyb has quit IRC | 05:51 | |
*** lennyb has joined #zuul | 05:56 | |
*** smyers has quit IRC | 05:59 | |
*** robled has quit IRC | 06:00 | |
*** lennyb has quit IRC | 06:03 | |
*** lennyb has joined #zuul | 06:05 | |
*** bhavik1 has quit IRC | 06:23 | |
*** smyers has joined #zuul | 06:28 | |
*** robled has joined #zuul | 07:33 | |
*** hashar has joined #zuul | 08:05 | |
*** smyers has quit IRC | 10:03 | |
*** smyers has joined #zuul | 10:15 | |
*** jkilpatr has joined #zuul | 11:55 | |
*** rcarrillocruz has quit IRC | 11:57 | |
*** rcarrillocruz has joined #zuul | 12:13 | |
tobiash | does anything speak against adding the commit sha's of the changes to be verified into the zuul.items[] data structure in ansible? | 13:37 |
tobiash | my use case is a commit message verification | 13:37 |
tobiash | currently only the change url and refs are there and I don't like adding a secret just to ask gerrit/github for the sha's of the changes | 13:38 |
tobiash | having a list of sha's of all commits of the change or pull request would make that possible entirely using git on the node without any interaction with gerrit/github | 13:39 |
*** hashar is now known as hasharAway | 14:07 | |
* Shrews waves to the post-Sydney crowd and hopes they have come back with fantastic ideas | 14:25 | |
Shrews | I have enjoyed the quiet | 14:25 |
pabelanger | o/ | 16:04 |
pabelanger | body still recovering from jetlag | 16:04 |
rbergeron | no kidding | 16:06 |
* mordred was fine on day one of being back - except that day one of being back wound up not ending until 5am the next day - so day two of being back was O HAI JETLAG | 16:40 | |
mordred | tobiash: I can't think of any specific reason not to do that - can you not do the same thing with just the git repo on disk? | 16:42 |
mordred | tobiash: I guess you don't really know what origin/master is anymore once you're in the git repo zuul has prepared though - so you can't just look at the non-landed commit messages ... | 16:42 |
tobiash | mordred: exactly that's my problem | 16:43 |
rcarrillocruz | i envy badly people who can sleep on planes | 16:44 |
mordred | nod. yah - it seems like having them in the items dict wouldn't be terrible ... that is, if we actually know them - I know we do for gerrit - do we have a good way to get the list of shas for github PRs? | 16:44 |
rcarrillocruz | back on friday, i spent 2 days and a half completely catatonic | 16:44 |
rcarrillocruz | re: summit jetlag | 16:44 |
mordred | rcarrillocruz: I can sleep on planes, but flying back from australia I didn't sleep much due to time of day and stuff | 16:44 |
tobiash | mordred: oldrev, newrev would probably be sufficient | 16:48 |
tobiash | have to check if change objects are populated with that | 16:48 |
mordred | tobiash: yah - oldrev/newrev is likely all you need - you should be able to walk the commits with git given those | 16:49 |
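A minimal sketch of the commit-message check discussed above, run entirely on the node with the git repo zuul prepared. It assumes oldrev/newrev are made available to the job (hypothetical arguments here), and the Signed-off-by rule is only an example policy:

```python
# Minimal sketch: check commit messages between oldrev and newrev using only
# git on the node, no gerrit/github API calls. oldrev/newrev are assumed to
# be provided to the job; the Signed-off-by rule is only an example policy.
import subprocess
import sys


def commit_shas(repo, oldrev, newrev):
    out = subprocess.check_output(
        ['git', '-C', repo, 'rev-list', '%s..%s' % (oldrev, newrev)])
    return out.decode('utf-8').split()


def commit_message(repo, sha):
    return subprocess.check_output(
        ['git', '-C', repo, 'show', '-s', '--format=%B', sha]).decode('utf-8')


def main(repo, oldrev, newrev):
    bad = [sha for sha in commit_shas(repo, oldrev, newrev)
           if 'Signed-off-by:' not in commit_message(repo, sha)]
    if bad:
        sys.exit('commits missing Signed-off-by: %s' % ', '.join(bad))


if __name__ == '__main__':
    main(*sys.argv[1:4])
```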
fdegir | we are starting Zuulv3 prototype for OPNFV | 17:12 |
fdegir | and the first question I got from Linux Foundation was resource requirements | 17:12 |
fdegir | like cpu/ram etc for the machine | 17:12 |
fdegir | it will be a simple prototype, just doing hello world and sleep to imitate our long-running jobs | 17:13 |
*** pabelanger is now known as keerthi | 17:13 | |
*** keerthi is now known as pabelanger | 17:13 | |
fdegir | anyone have any advice about requirements? | 17:13 |
clarkb | fdegir: http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=558 that is one of 10 zuul executor instances | 17:15 |
clarkb | http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=117 one of eight merger instances | 17:15 |
clarkb | http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=557 the single scheduler instance | 17:16 |
tobiash | fdegir: for just hello world jobs I think a quad core vm with 8gb ram would be sufficient | 17:16 |
pabelanger | Yah, so far 8GB per instance, per process seems to work well. Scheduler might be larger, depending on the number of projects | 17:16 |
clarkb | this is what we are running in production for ~1k jobs per hour and it seems to hold up | 17:16 |
clarkb | but ya hello world won't need much | 17:16 |
fdegir | ok | 17:16 |
pabelanger | we have 15GB instances for all-in-one hands-on workshop, seemed to work well | 17:17 |
fdegir | the idea is to see if it stays up for a week without load | 17:17 |
fdegir | and gradually increase the load | 17:17 |
pabelanger | fdegir: yah, would be interesting to help on the issue the Linux Foundation previously had | 17:18 |
fdegir | pabelanger: yep - i told them about our conversation last week | 17:18 |
fdegir | there wasn't any resistance this time so things look positive | 17:19 |
fdegir | we should get it running quickly and report back and hopefully contribute fixes if we identify any | 17:19 |
clarkb | should be noted that our v3 install is roughly the same size as our v2 install | 17:21 |
fdegir | thanks for the responses | 17:21 |
fdegir | i have some numbers to pass to them now | 17:21 |
clarkb | the biggest thing other v2 installs will be lacking is the launcher -> executors because I expect most were running jenkins | 17:22 |
fdegir | i think the previous attempt was with v2.5 | 17:22 |
clarkb | but you can probably start by replacing jenkins with a zuul executor | 17:22 |
fdegir | zuul will run in parallel to jenkins initially | 17:23 |
fdegir | and then onboard a few projects | 17:23 |
fdegir | so we have real users before rolling it out to the rest of the projects | 17:23 |
mordred | fdegir: fwiw, I shared https://etherpad.openstack.org/p/zuulv3-jenkins-integration with odyssey4me a few days ago as well - he may have use for such a thing where he is | 17:24 |
mordred | fdegir: so maybe between the group of us we can find enough interest to cause code to be written | 17:24 |
clarkb | fdegir: ya that's fine, just trying to describe how one might size things, and I think a mostly 1:1 of existing installs is a decent place to start | 17:24 |
mordred | (I'll eventually hit the point where I rage-code it, but my stack is deep enough at the moment it'll be a little while before I do) | 17:25 |
fdegir | mordred: i personally want to go full zuul | 17:25 |
mordred | clarkb: + | 17:25 |
mordred | clarkb: ++ | 17:25 |
mordred | fdegir: me too :) | 17:25 |
fdegir | but | 17:25 |
fdegir | the plugin stuff is the biggest concern i have | 17:25 |
mordred | fdegir: but if we have those things it might make a phased transition easier to deal with from a resource perspective | 17:25 |
mordred | yah. for sure - the plugin usage is the thing that will take the most thinking/consideration of how to migrate - since each one will be a case-by-case concern | 17:26 |
fdegir | yep | 17:26 |
fdegir | I tried hard to avoid installing new plugins but one can only object a limited number of times | 17:26 |
fdegir | I think we need to have some kind of strategy | 17:26 |
fdegir | and might have some kind of hybrid setup initially for the projects that do not want to get rid of jenkins | 17:27 |
fdegir | also, I will have some thoughts written about this "real" e2e integration stuff | 17:28 |
fdegir | it will be at a somewhat conceptual level, which would be good to get your thoughts on | 17:28 |
odyssey4me | fdegir mordred the path we're taking for now is to implement a watcher for nodepool nodes going into a 'ready' state, which will kick off a thing to register that node with jenkins... then jenkins will mark each node it uses as 'testing' when it uses it and 'used' when done. I've done a little prep work to validate that in https://gist.github.com/odyssey4me/d1a202d6e340d165513f9cec1d19d5f0#file-nodepool-watcher-py but it's | 17:34 |
odyssey4me | incomplete right now. It was simply a PoC to get to a place where I think this is a plausible option. | 17:34 |
odyssey4me | it's good for us because adding plugins to jenkins is a little more complicated than using this method | 17:35 |
fdegir | one thing I must mention is that all the nodes we use are static | 17:36 |
fdegir | so we need this static node driver/provider | 17:37 |
odyssey4me | hmm, how does that work? | 17:38 |
odyssey4me | nodepool specifically implements single-use nodes | 17:38 |
mordred | odyssey4me: we have a static node provider up for review | 17:38 |
odyssey4me | ORLY? what's the use-case for that? | 17:38 |
mordred | https://review.openstack.org/#/c/468624/ | 17:38 |
mordred | odyssey4me: well, fdegir is the main one :) | 17:39 |
SpamapS | fdegir: I'm running a low traffic Zuul (10 - 20 jobs per day, 20 or so repos) on a single 8GB VM. I expect to need to move the executor onto its own VM when we get to a place where we have more than 2 - 3 concurrent jobs | 17:39 |
mordred | odyssey4me: but it's for if you have gear that's maybe special - like "I want to run tests on the machine that's connected to the 3par san" | 17:39 |
fdegir | odyssey4me: we heavily use baremetal PODs in OPNFV that consist of 6 nodes; 1 jumphost, 3 controllers, 2 computes | 17:40 |
fdegir | odyssey4me: the jumphosts of these PODs are connected to jenkins, which schedules jobs on them | 17:40 |
odyssey4me | interesting, that could also then provide the ability to do tests in other ways | 17:41 |
mordred | odyssey4me: or something similar to that - so the static driver would take the request from zuul for that node and would 'check it out' (take out the lock on it) and hand it to zuul | 17:41 |
odyssey4me | ok, and this is differentiated by the label | 17:41 |
fdegir | yes | 17:41 |
SpamapS | And I currently have a set of labels that is basically 20 "big" vms (8gb 4cpu), and 8 "tiny" (1gb 1cpu). | 17:42 |
fdegir | if I continue speculating, we want to put all the baremetal nodes we have into a common pool | 17:42 |
SpamapS | Most of my jobs are 2-nodes. | 17:42 |
fdegir | something like 250 nodes | 17:42 |
SpamapS | well.. actually no, I have two classes. static analysis and easy stuff are on the tinies. An end-to-end deploy of kubernetes goes on 2 big vms. | 17:42 |
fdegir | and depending on what type of testing we need to do, that number of nodes needs to be checked out/reserved, with zuul running things on the jumphost of that set of nodes | 17:42 |
tobiash | odyssey4me: on target testing for embedded software is also a use case for static nodes | 17:43 |
mordred | tobiash: ++ | 17:44 |
clarkb | tobiash: fwiw we've done that sort of testing without static nodes using a resource broker in the job (but no reason nodepool can't do that itself) | 17:44 |
odyssey4me | fdegir oh yes, that sounds good - I suppose that could be done now with ironic though... but perhaps in this case there are things that make using ironic complicated | 17:44 |
fdegir | I think one idea was to have bifrost provider | 17:45 |
fdegir | mordred: ^ | 17:45 |
mordred | odyssey4me: yah - we're also talking about / pondering ironic/bifrost driver for nodepool | 17:45 |
tobiash | clarkb: sure, that would also work but nodepool is one less indirection and you don't need to spawn a machine just as a proxy to the test hardware | 17:46 |
odyssey4me | but does it need one? surely you set the right things and it just works (tm) | 17:46 |
mordred | odyssey4me: well - for folks who don't have a nova in front of their ironic | 17:46 |
odyssey4me | ah | 17:46 |
clarkb | odyssey4me: ya it does already work as is with ironic + nova | 17:47 |
odyssey4me | how would one register these static nodes? just the node list in https://review.openstack.org/#/c/468624/29/doc/source/configuration.rst ? | 17:47 |
mordred | odyssey4me: yup | 17:49 |
mordred | odyssey4me: it's worth noting we also need to add a $mechanism to allow restricting where certain nodepool labels can be used - a label for "the giant machine attached to the SAN" is maybe not a thing you then want a random user to be able to use in a job in their .zuul.yaml file | 17:51 |
mordred | maybe it is - but maybe it isn't :) | 17:51 |
odyssey4me | it may make sense to use pools there | 17:52 |
mordred | yah, for sure | 17:56 |
odyssey4me | alright - thanks for the heads-up, that's an interesting set of developments | 17:59 |
odyssey4me | I'm out for the night - cheers all! | 17:59 |
mordred | odyssey4me: woot! have a good night | 18:00 |
Shrews | odyssey4me: that watcher script is interesting, but there needs to be *something* to make a request for a node, otherwise you may not have a ready node, unless you set min-ready > 0, and then your concurrency is limited to that min-ready number. | 18:10 |
Shrews | odyssey4me: the existence of a node request is the thing that causes a node to be created (if necessary), then assigned. Also, you're bypassing the nodepool safety mechanisms that keep a READY node from being assigned more than once. | 18:12 |
Shrews | unless you're doing locking as well, which it doesn't appear that you are | 18:12 |
Shrews | odyssey4me: basically, doing anything other than the request-response protocol that nodepool expects is at your own risk :) | 18:14 |
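A conceptual sketch of the request/lock flow Shrews describes, written against ZooKeeper directly with kazoo. This is not nodepool's internal API; the paths and JSON payload layout are illustrative, as is the ZooKeeper endpoint:

```python
# Conceptual sketch (not nodepool's internal API) of the request/lock flow:
# a node request znode is what triggers allocation, and a lock on the node
# znode is what keeps a READY node from being handed out twice.
# Paths and the JSON payload layout are illustrative.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts='zk.example.com:2181')  # assumed ZooKeeper endpoint
zk.start()

# 1. Create a node request; nodepool reacts to the request's existence
#    rather than to a consumer grabbing whatever READY node it can find.
request = {'node_types': ['ubuntu-xenial'], 'state': 'requested'}
req_path = zk.create('/nodepool/requests/request-',
                     json.dumps(request).encode('utf-8'),
                     ephemeral=True, sequence=True, makepath=True)

# 2. A real consumer would watch/poll here until the request is fulfilled
#    and lists the allocated node IDs; then it locks each node before use.
#    Skipping this locking is the safety mechanism a ready-node watcher
#    script would bypass.
data, _ = zk.get(req_path)
for node_id in json.loads(data.decode('utf-8')).get('nodes', []):
    lock = zk.Lock('/nodepool/nodes/%s/lock' % node_id, 'my-consumer')
    lock.acquire(timeout=30)
    try:
        pass  # run the job against the node, then mark it used
    finally:
        lock.release()
```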
rcarrillocruz | hey folks, rebased tobiash change https://review.openstack.org/#/c/500808/ | 18:56 |
rcarrillocruz | can someone else review/approve | 18:57 |
mordred | rcarrillocruz: done. https://review.openstack.org/#/c/453968 needs a +A to make the stack happy | 19:01 |
rcarrillocruz | +A | 19:02 |
rcarrillocruz | thx | 19:02 |
*** weshay is now known as weshay_bbiab | 19:03 | |
Shrews | rcarrillocruz: can you remove the +A for a bit? | 19:04 |
Shrews | rcarrillocruz: i have a concern about that before we put it through | 19:04 |
Shrews | namely, i don't see that we're defaulting the username, which could break our existing setup w/o a config change first. but i haven't reviewed it all yet | 19:05 |
Shrews | i think we should also probably have jeblair weigh in on it, too, since he was on the fence about it | 19:08 |
mordred | Shrews: I agree with your review comment | 19:11 |
Shrews | rcarrillocruz: mordred: sorry, but i have to go -1 on that, unless someone wants to take on the responsibility of changing our config files and notifying any others that may be using it | 19:11 |
mordred | Shrews: I think it's a good suggested default value - shouldn't change anything for the common case, and it won't make it any harder for the folks who do need to set the value to something else | 19:13 |
clarkb | Shrews: ya you should leave that comment on the change; looks like line 271 of config.py is where that would be addressed? | 19:13 |
Shrews | clarkb: ++ | 19:13 |
Shrews | clarkb: done | 19:14 |
rcarrillocruz | happy to push a patch | 19:15 |
pabelanger | Shrews: ze03.o.o appears to no longer be listening on port 79. Is there anything I should do before I restart the executor? | 19:16 |
Shrews | clarkb: hmm, could also be problematic reading old ZK values where no username exists, so need to change zk.py too, i think | 19:16 |
clarkb | oh right, because that data would be passed through the db | 19:17 |
clarkb | Shrews: as far as summit feedback, the big thing I took away from it is people want an easy method for local execution of playbooks/roles/jobs | 19:17 |
clarkb | also we should address the timeout behavior where each playbook gets a complete timeout of its own | 19:18 |
Shrews | pabelanger: *sigh* no | 19:18 |
Shrews | pabelanger: i have a change up to try to help: https://review.openstack.org/517437 | 19:18 |
pabelanger | Shrews: kk, I'll restart in a few minutes | 19:19 |
Shrews | pabelanger: i'm basically out of ideas at this point | 19:19 |
Shrews | very frustrating not having *anything* in the logs | 19:19 |
pabelanger | +2, so others can view. Or if you want to +3 yourself | 19:22 |
openstackgerrit | Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information https://review.openstack.org/453968 | 19:23 |
Shrews | pabelanger: maybe let jeblair have a look? | 19:23 |
pabelanger | yah | 19:23 |
Shrews | rcarrillocruz: you'll need to see the late review comment i left. also, update the docs for the default value | 19:25 |
rcarrillocruz | yeah, just noticed | 19:25 |
rcarrillocruz | repushing | 19:25 |
* Shrews steps away to make some tea. biab | 19:25 | |
openstackgerrit | Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information https://review.openstack.org/453968 | 19:26 |
*** weshay_bbiab is now known as weshay | 19:51 | |
rcarrillocruz | Shrews , mordred : https://review.openstack.org/#/c/453968/ passed src func tests, expect the other should pass too | 20:01 |
* rcarrillocruz goes for dinner | 20:01 | |
*** tobiash has quit IRC | 20:06 | |
*** tobiash has joined #zuul | 20:07 | |
fungi | i forget, were we still planning to have a zuul meeting again today? want to make sure i'm around for it if so | 20:27 |
fungi | oh, i bet it's in the last meeting's minutes | 20:28 |
fungi | yeah, we agreed to skip the meeting for last week but not this week, from the look of things? http://eavesdrop.openstack.org/meetings/zuul/2017/zuul.2017-10-30-22.03.log.html#l-39 | 20:31 |
fungi | so meeting it is, i guess | 20:31 |
Shrews | fungi: that's what i remember | 20:32 |
fungi | perfect, thanks Shrews! | 20:32 |
Shrews | fungi: i'm just expecting a recap of summit happenings for those that stayed home | 20:32 |
fungi | i'm expecting a recap of all the sessions i already forgot i attended ;) | 20:33 |
pabelanger | I think we discussed this before, but cannot remember if we fixed it | 20:46 |
pabelanger | http://logs.openstack.org/a5/a5892e231bdf80f6b36348b8a1dee0ae09d4fb15/periodic/build-wheel-mirror/8e3c0b1/zuul-info/host-info.wheel-mirror-ubuntu-trusty-python2.yaml | 20:46 |
pabelanger | ^ is labeled ubuntu-trusty in hostname, but is in fact ubuntu-xenial (you can see in ansible_lsb) | 20:46 |
pabelanger | I want to say we had an issue in nodepool | 20:48 |
pabelanger | but can't find that discussion | 20:49 |
Shrews | fungi: fwiw, i'm fine skipping this meeting too (since it occurs when i want to eat). no one has volunteered to run it today | 20:51 |
openstackgerrit | Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information https://review.openstack.org/453968 | 20:52 |
Shrews | pabelanger: there was something similar a while back when nodepool returned the nodeset in random order, but that was fixed long ago | 20:54 |
pabelanger | Shrews: k, happen to know the last time we restarted the launchers? | 20:56 |
Shrews | pabelanger: i did it last week or the one before, iirc | 20:56 |
pabelanger | k | 20:56 |
Shrews | pabelanger: no, not last week. before summit definitely | 20:57 |
pabelanger | https://review.openstack.org/507182/ | 20:57 |
pabelanger | looks like our fix | 20:57 |
pabelanger | yah, so we should be running that | 20:57 |
Shrews | yeah, that was the fix | 20:57 |
pabelanger | k, for some reason still not getting the correct nodes on our build-wheel-mirror jobs | 20:58 |
pabelanger | I've opted to split them into a single OS, until jobs are working correctly: https://review.openstack.org/519469/ | 20:59 |
Shrews | possibly a new issue | 20:59 |
pabelanger | maybe | 20:59 |
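A quick sanity-check sketch for the mismatch pabelanger describes above (hostname labeled ubuntu-trusty, facts reporting xenial). It assumes the host-info file is a flat dump of Ansible facts; only ansible_lsb is explicitly mentioned in the log, so the other key names are assumptions:

```python
# Compare the distro label embedded in the hostname with the distro Ansible
# actually reported in the collected host-info YAML. Assumes the file is a
# flat dict of Ansible facts (an assumption; only ansible_lsb is named above).
import sys
import yaml


def check(path):
    facts = yaml.safe_load(open(path))
    hostname = facts.get('ansible_hostname', '')   # e.g. wheel-mirror-ubuntu-trusty-python2
    codename = facts.get('ansible_lsb', {}).get('codename', '')  # e.g. xenial
    if codename and codename not in hostname:
        sys.exit('%s: hostname label does not match reported distro %s'
                 % (hostname, codename))


if __name__ == '__main__':
    check(sys.argv[1])
```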
Shrews | rcarrillocruz: mordred: I -1'd https://review.openstack.org/500800 because it suffers from a similar backward compat issue | 21:04 |
Shrews | we need to keep upgrade paths in mind on these changes | 21:05 |
rcarrillocruz | so, alias ssh_port to connection_port ? | 21:05 |
rcarrillocruz | in constructor | 21:06 |
Shrews | rcarrillocruz: something like: d.get('connection_port', d.get('ssh_port', 22)) maybe in zk.py | 21:06 |
Shrews | rcarrillocruz: the point being that we don't want to force a reset of all zookeeper data. we have pre-existing data using the old values that we need to support. | 21:11 |
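A sketch of the backward-compatible read Shrews suggests for zk.py: prefer the new key but fall back to the old one so pre-existing ZooKeeper data keeps loading without a reset. Class and method names here are illustrative, not nodepool's actual ones:

```python
# Sketch: deserialize with a fallback from the old 'ssh_port' key (default 22)
# to the new 'connection_port' key, so records written before the rename
# still load. Class/method names are illustrative.
class Node(object):
    def __init__(self):
        self.connection_port = 22

    def updateFromDict(self, d):
        # New key wins; older records only carry 'ssh_port'; oldest carry neither.
        self.connection_port = d.get('connection_port', d.get('ssh_port', 22))

    def toDict(self):
        # Always write the new key going forward.
        return {'connection_port': self.connection_port}
```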
Shrews | Is the zuul meeting in t-minus 48min? I think the time change has thrown it off | 21:12 |
Shrews | ah, yep. nm. | 21:13 |
openstackgerrit | Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Rename ssh_port to connection_port https://review.openstack.org/500800 | 21:27 |
rcarrillocruz | Shrews: like that ^ ? | 21:27 |
Shrews | rcarrillocruz: almost. you can't reference self.ssh_port | 21:36 |
rcarrillocruz | ah yeah, i said earlier aliasing but didn't in the end in the constructor | 21:37 |
rcarrillocruz | will remove | 21:37 |
rcarrillocruz | also, got failures on merge on the previous change i +A'd | 21:37 |
rcarrillocruz | looking | 21:37 |
*** hasharAway has quit IRC | 21:43 | |
*** kklimonda_ has joined #zuul | 21:46 | |
*** zaro_ has joined #zuul | 21:46 | |
*** patrickeast_ has joined #zuul | 21:46 | |
*** patrickeast has quit IRC | 21:51 | |
*** zaro has quit IRC | 21:51 | |
*** fungi has quit IRC | 21:51 | |
*** jtanner has quit IRC | 21:51 | |
*** kklimonda has quit IRC | 21:51 | |
*** patrickeast_ is now known as patrickeast | 21:51 | |
*** zaro_ is now known as zaro | 21:51 | |
*** kklimonda_ is now known as kklimonda | 21:52 | |
*** fungi has joined #zuul | 21:54 | |
openstackgerrit | Ricardo Carrillo Cruz proposed openstack-infra/nodepool feature/zuulv3: Add username to build and upload information https://review.openstack.org/453968 | 21:54 |
*** hashar has joined #zuul | 21:55 | |
fungi | meeting time, i guess? was jeblair planning to be around to chair, or are we winging it? | 22:00 |
clarkb | oh right dst | 22:02 |
*** hashar has quit IRC | 22:10 | |
openstackgerrit | Mohammed Naser proposed openstack-infra/zuul-jobs master: Add role to build Puppet module https://review.openstack.org/519489 | 22:33 |
SpamapS | oh sorry I missed the meeting. Prepping for a trip and missed the calendar ding | 22:39 |
SpamapS | Next week I'll have more time for it anyway. I'd really like to discuss the possibility of paring down the requirements for 3.0. I don't see much in the way of potential interface breakages, and we've now seen another potential user (last week while you were all hanging upside down on the earth) who is struggling to explain feature/zuulv3 to potential users at their org. | 22:40 |
clarkb | SpamapS: I'm not sure I understand how a quicker release addresses that issue though? | 22:41 |
SpamapS | clarkb: People need to know it's a project the authors want them to run. | 22:41 |
SpamapS | Right now it looks like a "maybe some day we'll release this" dev preview or RC or something. | 22:42 |
SpamapS | Takes some explaining. | 22:42 |
SpamapS | Eyebrows remain raised. | 22:42 |
SpamapS | The question remains "what are you going to break if I run this today?" | 22:42 |
clarkb | I guess, though that is probably how I would describe it at this point :) | 22:42 |
SpamapS | Yeah, I'm not saying tag feature/zuulv3 as 3.0 _today_ | 22:43 |
SpamapS | but... February _might_ be too late for me, for instance. | 22:43 |
SpamapS | I have a tiny team and we're using the crap out of it right now. But there are larger teams who are interested and we're evaluating things like CircleCI and Wercker right now. By February, the ship may have sailed. | 22:44 |
clarkb | that seems like it would be independent of any tag though right? like if we tag a 3.0 without whatever features they want then that won't change anything? | 22:44 |
SpamapS | If you don't tag 3.0, they will not even try it. | 22:45 |
SpamapS | So a gap analysis isn't even going to happen. | 22:45 |
clarkb | even though you are already running it? | 22:45 |
SpamapS | It's the first question I've gotten on the two demos I've done for broader audiences. | 22:45 |
SpamapS | "I can't find where to download 3.0. It's 2.5 in master. What is v3? How can I try it?" | 22:46 |
clarkb | I see, so it's more that people want to run it themselves and are confused, and less that they won't touch yours because it doesn't have some arbitrary tag | 22:46 |
SpamapS | clarkb: yes, even though I am running it. I have other responsibilities. | 22:46 |
clarkb | would just tagging a beta like today help that at all? | 22:47 |
SpamapS | I'm going to challenge your premise. A 3.0 tag is anything but arbitrary. | 22:47 |
clarkb | well it is if you want to apply it before the features we've agreed should be in 3.0 are present | 22:47 |
SpamapS | Features can go in .1, .2, .3 | 22:47 |
clarkb | is I guess what I'm saying. 3.0 is a defined thing, we aren't there yet | 22:47 |
SpamapS | 3.0 is about the interface. | 22:48 |
SpamapS | Are we going to stand by zuul.yaml, zuul.d, ansible with the zuul.* dict, etc. Right now, 99% yes, but the 1% needs finishing off, and then 3.0. I don't think the features that are marked as must-haves for 3.0, are must-haves. | 22:48 |
clarkb | so my question is would a prerelease tag like a beta address the concerns of discoverability? | 22:48 |
clarkb | and fwiw we have made several non backward compat changes to job definitions since the openstack deployment went up | 22:49 |
SpamapS | If the prerelease tag carries a "we're not going to break these interfaces." promise. | 22:49 |
SpamapS | Yes I know | 22:49 |
SpamapS | they broke all my jobs | 22:49 |
SpamapS | I stumbled on them while git pulling. | 22:49 |
SpamapS | Have it on my TODO to make an intermediary zuul fork and somehow PR and test that my config loads. | 22:50 |
clarkb | I'm not sure we've operated it long enough to know that we won't make anymore of those changes | 22:51 |
mnaser | https://review.openstack.org/#/c/519489/ -- is this failure because of my change or are there issues with the zuul tox-py35-on-zuul job? | 22:52 |
clarkb | those were unexpected changes based on feedback from users/reviewers and behavior we were seeing | 22:52 |
SpamapS | There won't be much of a community until such a promise is made. | 22:52 |
SpamapS | Folks like me, who are deeply invested, will stick around through the breakages, because we know where things are headed. But newcomers need a gentler hand. | 22:53 |
clarkb | mnaser: http://logs.openstack.org/89/519489/1/check/tox-py35-on-zuul/8fe9438/job-output.txt.gz#_2017-11-13_22_47_12_234675 that may be the problem? | 22:53 |
clarkb | SpamapS: right, the problem is that now is the time to make those changes, because we are learning and haven't committed yet | 22:54 |
clarkb | SpamapS: once you have committed its too late | 22:54 |
mnaser | clarkb: i don't think that issue is related to my change? all i did was add an ansible role | 22:54 |
mnaser | but i don't want to blindly recheck in case it's an issue | 22:54 |
clarkb | (so I think there is quite a bit of value in being somewhat conservative with that and using real world experience to base the decision off of) | 22:54 |
SpamapS | clarkb: it's a trade off. You can go fast and break all the users, potentially killing off early adoption, or go a little slower, and keep us happy and keep the community growing. | 22:56 |
clarkb | SpamapS: but also potentially kill the community because nothing works properly | 22:56 |
clarkb | mnaser: ya untrusted-secrets is a zuul unittest fixture job | 22:56 |
SpamapS | Things work fantastically well for me, and they seem like they're going ok for OpenStack. | 22:56 |
SpamapS | Also, release early, release often. | 22:57 |
clarkb | mnaser: so I agree that seems orthogonal to your change. It is probably worth a recheck to see if it is consistent | 22:57 |
SpamapS | 3.0.1 can fix the broken stuff that we haven't found yet. 3.1.0 can add the features we wish we'd finished. | 22:57 |
mnaser | clarkb: cool will do | 22:57 |
clarkb | SpamapS: I think my biggest concern would be something like the branch changes that we've already had to do | 22:59 |
clarkb | SpamapS: not necessarily something in how you write the jobs but something subtle in how the jobs are executed | 22:59 |
clarkb | and I'm not sure we've exercised it enough to rule another one of those out in the near future | 23:00 |
SpamapS | Anyway, every day we spend as 'feature/zuulv3' is another day that users invest in Jenkins pipeline, or Prow, or whatever other tool that isn't half as well thought through or suited to the challenge as Zuulv3 is already today. Feeling the clock ticking on me right now. Hopefully 3.0 comes in time for me to convince my employer to keep on Zuuling. | 23:00 |
clarkb | but jeblair likely has a better handle on that | 23:00 |
SpamapS | clarkb: Needing to support legacy bugs is just the price you pay for growing the community. Time to start paying it at some point. The network effect is near 0 right now. | 23:01 |
SpamapS | I can't in good conscience create a swell of support for zuulv3 right now. | 23:02 |
clarkb | bugs like that are quite fundamental though | 23:02 |
SpamapS | I can say "it's nearly there!".. but... the barrier to entry remains at the "willing to run a feature branch of a thing" level. | 23:02 |
clarkb | we would not have been able to support v3 with openstack with those bugs in place for example | 23:02 |
clarkb | so there is legacy, and there is so broken you can't use it | 23:03 |
SpamapS | clarkb: I don't know how to say this more clearly. I understand and respect that. But that bug doesn't matter _at all_ to me, or any of my users. The interface, as it is today, works great. I'd be happy to release as-is. | 23:03 |
clarkb | SpamapS: how would you handle that in your proposed scenario? | 23:04 |
SpamapS | Meanwhile I need to convince GoDaddy to give me resources to do things like contribute kubernetes support (so we can run jobs on a k8s cluster), and possibly add AWS and/or GCE support to nodepool. Etc. etc. etc. | 23:04 |
clarkb | SpamapS: go on to a 4.0? | 23:04 |
clarkb | have a big switch in config? | 23:05 |
SpamapS | clarkb: no, 3.1 has the new way. 3.x carries the broken way. 4.0 drops it. | 23:05 |
SpamapS | The branch change was indeed a nasty one. Wouldn't want to have to carry that legacy forward. | 23:06 |
SpamapS | And I"m not saying _today do it now_. I'm saying, stabilize the right-now interface, and release. The features planned are not needed to grow a community. If we grow the community, however, they might just help us create those features. | 23:07 |
openstackgerrit | Mohammed Naser proposed openstack-infra/zuul-jobs master: Add role to build Puppet module https://review.openstack.org/519489 | 23:33 |
mordred | SpamapS: yah - I think we're mostly in agreement on that. I don't think the intent for the tag is to get the entire todo-list done- it's to get the things on it done where we still think there's a decent chance we might need to break something like the branch change again. | 23:41 |
mordred | SpamapS: well, that and the getting started docs - because "hey look, we released it!" followed by "getting started docs are still todo" feels like avoidable pain | 23:42 |
*** jtanner has joined #zuul | 23:43 | |
SpamapS | I'm committing right now to helping in any way I can with getting started docs. I've gotten started 3 times now. I think I can tell others how. :) | 23:46 |
mordred | SpamapS: yes - I figure you probably know more about getting started than anybody else :) | 23:49 |
leifmadsen | are there patches up already? | 23:59 |