Tuesday, 2018-02-13

dmsimard https://review.openstack.org/#/c/536319/ 00:00
dmsimard By mhu 00:00
ianwdmsimard: oh, right, i'm talking about the puppet to deploy that though00:01
ianwcurrently, the launchers don't have an apache00:01
dmsimardOh, I suppose there isn't anything for that yet00:01
ianwand the builders are redirecting /dib-image-list etc to the webapp which isn't running on them00:01
*** weshay is now known as weshay_PTO00:04
corvuslet's talk about that in detail at the ptg...00:21
corvusianw: for now, for openstack-infra, let's maybe just set up apache on the launchers?00:22
ianwcorvus: yep, i'll send a change KISS soon ... what i'm really interested in is having those daily build logs exposed for my automated monitoring00:25
corvusianw: yeah, i guess it's going to be a bit weird for a little while -- api on launchers, logs on builders00:27
*** rlandy is now known as rlandy|bbl00:48
*** elyezer has quit IRC00:51
*** elyezer has joined #zuul00:55
*** openstackgerrit has quit IRC01:03
dmsimard Interesting: Go accepts patches from GitHub pull requests now https://news.ycombinator.com/item?id=16363511 01:26
dmsimardThey bridge Github and Gerrit01:26
dmsimardWith a bot that synchronizes between the two.01:26
clarkbdmsimard: there is a plugin for gerrit for that01:27
dmsimardAnd they named it... *drumroll* gerritbot01:27
clarkbits the plugin that nuked all of jenkins on github01:27
dmsimardOh ? I'll admit I'm not familiar with that01:28
ianw corvus: log/webapp setup (modulo any puppet issues i'm sure i have) - https://review.openstack.org/543671 01:29
clarkb https://gerrit.googlesource.com/plugins/github/+/master/README.md 01:30
clarkbI don't know how well supported it is, I just remember jenkins/ getting nuked by the development of it01:30
ianw"the pull-request model adopted by GitHub is often used as “easy shortcut” to the more comprehensive and structured code-review process in Gerrit." ... tell us what you really think gerrit :)01:34
*** elyezer has quit IRC01:51
*** elyezer has joined #zuul01:53
*** elyezer has quit IRC02:00
*** elyezer has joined #zuul02:01
dmsimardianw: hah, an old post from jd_ is even in that doc02:10
dmsimardianw: jd_ should send a *pull* request to tell them about all the hacks he did to get their stuff working properly once they took gnocchi out02:11
dmsimard https://julien.danjou.info/blog/2017/git-pull-request-command-line-tool and https://julien.danjou.info/blog/2017/pastamaker 02:11
clarkb I mean you are trying to smash two incompatible methodologies together 02:19
clarkb that are both moving in different directions at the same time 02:20
clarkb seems like just not doing it is the only real way to avoid those types of problems 02:20
clarkb reading the golang implementation, comments aren't synced, so it's a halfway thing - which can be useful if the big need is more change ingestion 02:43
*** elyezer has quit IRC02:59
*** openstackgerrit has joined #zuul03:01
openstackgerrit Ian Wienand proposed openstack-infra/nodepool master: Add native distro test jobs  https://review.openstack.org/543268 03:01
openstackgerrit Ian Wienand proposed openstack-infra/nodepool master: Revert fixes for legacy boot jobs  https://review.openstack.org/543350 03:01
*** elyezer has joined #zuul03:02
*** harlowja has quit IRC03:04
*** rlandy|bbl is now known as rlandy03:37
*** rlandy has quit IRC03:37
openstackgerrit Ian Wienand proposed openstack-infra/zuul master: Add target=_blank to log links  https://review.openstack.org/543782 03:53
*** threestrands has quit IRC04:50
*** elyezer has quit IRC05:11
*** elyezer has joined #zuul05:14
*** harlowja has joined #zuul05:52
*** sshnaidm|off has quit IRC05:54
*** sshnaidm|off has joined #zuul05:54
*** sshnaidm|off has quit IRC06:06
*** sshnaidm|off has joined #zuul06:21
*** harlowja has quit IRC06:33
*** sshnaidm|off has quit IRC06:33
*** elyezer has quit IRC06:36
*** elyezer has joined #zuul06:46
*** chrnils has joined #zuul07:28
*** AJaeger has quit IRC07:29
*** AJaeger has joined #zuul07:31
*** sshnaidm|off has joined #zuul07:40
openstackgerrit Ian Wienand proposed openstack-infra/zuul master: Add target=_blank to log links  https://review.openstack.org/543782 07:49
openstackgerrit Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands  https://review.openstack.org/539004 07:57
openstackgerrit Merged openstack-infra/zuul master: Make timeout value apply to entire job  https://review.openstack.org/541485 08:03
openstackgerrit Ian Wienand proposed openstack-infra/nodepool master: Add native distro test jobs  https://review.openstack.org/543268 08:06
openstackgerrit Ian Wienand proposed openstack-infra/nodepool master: Revert fixes for legacy boot jobs  https://review.openstack.org/543350 08:06
*** jpena|off is now known as jpena08:06
*** hashar has joined #zuul08:08
openstackgerrit Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands  https://review.openstack.org/539004 08:18
openstackgerrit Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands  https://review.openstack.org/539004 08:21
*** xinliang has quit IRC08:23
*** sshnaidm|off is now known as sshnaidm|ruck08:44
openstackgerrit Matthieu Huin proposed openstack-infra/nodepool master: Clean held nodes automatically after configurable timeout  https://review.openstack.org/536295 09:07
*** electrofelix has joined #zuul10:09
*** elyezer has quit IRC10:11
*** elyezer has joined #zuul10:15
*** bhavik1 has joined #zuul10:33
openstackgerrit Tristan Cacqueray proposed openstack-infra/nodepool master: config: add statsd-server config parameter  https://review.openstack.org/535560 10:51
*** bhavik1 has quit IRC10:53
kklimonda I'd like to split our "project-config", moving most roles to either a separate zuul-jobs or even per-repository config - any pointers on what to look out for when doing that? For example, jobs are not shadowed by default - what about playbooks and roles? If the same role exists in project-config and zuul-jobs, which will be picked? 11:23
*** mwhahaha has quit IRC11:51
*** mwhahaha has joined #zuul11:52
tobiash kklimonda: regarding roles, you pick them by repo: https://github.com/openstack-infra/project-config/blob/master/zuul.d/jobs.yaml#L57 11:54
tobiashplaybooks are directly referenced by the jobs and should be no problem11:55
tobiashif you want to move jobs you may need temporary shadowing11:55
tobiashkklimonda: so moving roles out of that should be pretty simple, first add the role in the new repo, second change/add the role reference in the job definition11:57
*** robcresswell has quit IRC11:58
*** robcresswell has joined #zuul11:59
tobiash kklimonda: related in the docs: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.roles.zuul 11:59
tobiashthis misses an example so far12:00
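tobiash notes the docs lack an example for job.roles. A minimal sketch of what such a job definition might look like, following the attribute he links (job and repo names are made up for illustration):

```yaml
# Hypothetical sketch: a job pulling its roles from another repo via
# the job.roles.zuul attribute discussed above. Names are invented.
- job:
    name: my-base-job
    roles:
      - zuul: openstack-infra/zuul-jobs
    run: playbooks/my-base-job.yaml
```

With a stanza like this, roles defined in the referenced repo become available to the job's playbooks, which is the first step tobiash describes for moving a role out of project-config.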
*** jtanner has quit IRC12:00
*** jtanner has joined #zuul12:01
kklimondathanks12:14
openstackgerrit Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue commands  https://review.openstack.org/539004 12:19
*** jpena is now known as jpena|lunch12:48
rcarrillocruz odyssey4me: put nodepool.yaml support on https://github.com/ansible/ansible/pull/35602 12:54
odyssey4mercarrillocruz awesome, looks good with my eyes :)12:56
*** Wei_Liu has quit IRC13:18
dmsimardrcarrillocruz: that's sexy, but why13:38
*** rlandy has joined #zuul13:38
*** jpena|lunch is now known as jpena13:48
rcarrillocruzdmsimard: non-zuul/just-nodepool CI envs14:17
openstackgerrit Matthieu Huin proposed openstack-infra/nodepool master: Clean held nodes automatically after configurable timeout  https://review.openstack.org/536295 14:22
*** openstackgerrit has quit IRC14:33
*** JasonCL has quit IRC14:44
*** JasonCL has joined #zuul14:46
*** elyezer has quit IRC14:48
dmsimardrcarrillocruz: hmmm... you got me curious -- how do you assign a particular set of nodes to someone ? and then destroy them ?14:48
*** JasonCL has quit IRC14:49
*** elyezer has joined #zuul14:49
*** JasonCL has joined #zuul14:50
*** JasonCL has quit IRC14:50
rcarrillocruzdmsimard: if you mean how to do the lock/unlock dance, you can use zuul zk.py. However, since now more parties are showing interest in accessing nodepool in non-Zuul envs, it's been 'agreed' to decouple zk.py into 'nodepool-lib', post 3.014:50
*** JasonCL has joined #zuul14:50
rcarrillocruz if you ask how we do it today, it's super simple and rough: our nodepool has 1 node per type, so the scheduling is simple :-). Then we run the tests against an ansible group by combining the openstack dynamic inventory with a static inventory 14:51
rcarrillocruzlike14:51
rcarrillocruzon static14:51
rcarrillocruzyou'd have14:51
rcarrillocruz[ios]14:51
rcarrillocruzios-15.blah14:51
rcarrillocruzthe ios-15.blah is itself another group coming from openstack dyn inventory, which contains the instances with that label (i added that patch to nodepool, to get per-label groups)14:52
rcarrillocruzodyssey4me was working on something similar, he posted a gist not a while ago for the lock/unlock dance14:52
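rcarrillocruz's static-inventory trick above - a static group whose member is itself the name of a per-label group produced by the OpenStack dynamic inventory - could look roughly like this (label name taken from his example; in stock Ansible a group of groups is expressed with the `:children` suffix):

```ini
# Static inventory fragment. 'ios-15.blah' is not a host here: it is a
# per-label group generated by the OpenStack dynamic inventory, so the
# static and dynamic inventories compose when passed together.
[ios:children]
ios-15.blah
```

A sketch of the pattern, not his exact file; playbooks then simply target `hosts: ios`.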
dmsimardrcarrillocruz: interesting14:54
dmsimardrcarrillocruz: I think there's definitely a use case for standalone nodepool :)14:54
rcarrillocruzin retrospect, i should have written that inventory earlier14:54
rcarrillocruzit's faster to poke at zk, compared to openstack dyn inventory14:55
rcarrillocruzmoreover, the nodepool inv is intended to have nodepool node logic14:55
rcarrillocruznow that we have static driver (and more coming)14:55
rcarrillocruzdmsimard: to me it's a no brainer, been using it at home for a while to test my patches. Rather than vagrant or doing manual nova boot, i just have a nodepool keeping min-ready 1 for the usual node types14:56
*** openstackgerrit has joined #zuul15:02
openstackgerrit Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue & autohold commands  https://review.openstack.org/539004 15:02
dmsimardrcarrillocruz: yeah, I basically do the same thing15:03
dmsimardrcarrillocruz: except it's a bash alias that does an "openstack server rebuild" with the snapshot15:04
rcarrillocruzheh, i have an alias for 'ssh nodepool && nodepool delete'15:05
*** JasonCL has quit IRC15:07
rcarrillocruz corvus, mordred: reading up the etherpad, just saw your containers item. Just in case it wasn't on your radar, k8s/openshift connection plugins landed recently in ansible https://github.com/ansible/ansible/pull/26668/files 15:07
pabelangerrcarrillocruz: somebody at RHT recently created: https://github.com/mpryc/devnest/ Same idea, reservation system but with jenkins. If you look at code, most could be replaced with nodepool for managing nodes, while reservations handled in their app.15:08
rcarrillocruzguess they can be leveraged at somepoint (linked that in the etherpad for ptg reference)15:08
rcarrillocruzpabelanger: yah,  i did something similar at gozer, lol. It was called 'slavemanager'15:09
rcarrillocruzso, i used ssh15:10
rcarrillocruzby restricting a given user on that machine15:10
rcarrillocruz https://research.kudelskisecurity.com/2013/05/14/restrict-ssh-logins-to-a-single-command/ 15:10
rcarrillocruzi wrote down a bash script, which you could do things like 'hold node for $time', unhold15:10
rcarrillocruzthe hold, would put an at job for deleting the node at $time15:10
rcarrillocruzcrazy hacky stuff15:11
rcarrillocruzelectrofelix: ^ you must remember it15:11
rcarrillocruz:-)15:11
*** JasonCL has joined #zuul15:11
rcarrillocruzpabelanger: we should sit down at PTG and talk about a nodepool beaker driver, which i think is what devnest uses under the covers ?15:12
pabelangerrcarrillocruz: the cylons were right, history is destined to repeat :)15:12
rcarrillocruzlulz15:12
pabelangerrcarrillocruz: Yes!15:12
rcarrillocruzyeap15:12
pabelangerrcarrillocruz: when do you get into PTG?15:13
rcarrillocruzsunday15:13
rcarrillocruzreturning friday15:13
pabelangercool15:13
pabelangerrcarrillocruz: any of that code public?15:14
rcarrillocruzyou mean the slavemanager thingy?15:14
rcarrillocruzthat got lost i guess, it was on gozer's gerrit15:14
rcarrillocruzyolanda: you must remember that slavemanager too :D15:15
rcarrillocruzhmm15:15
rcarrillocruz https://storyboard.openstack.org/#!/story/2001125 15:16
*** JasonCL has quit IRC15:16
rcarrillocruzhoping we can chat about it at PTG15:16
corvus rcarrillocruz: add to https://etherpad.openstack.org/p/infra-rocky-ptg 15:17
rcarrillocruzi think that may be what i need for 'private' launcher/executor15:17
electrofelixrcarrillocruz: reading backscroll15:17
*** JasonCL has joined #zuul15:18
corvustobiash, clarkb: are we agreed on using git remotes for http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-February/000023.html or do we want further discussion?15:18
rcarrillocruzcorvus: adding15:20
tobiashcorvus: from my point of view I think we can agree on that but haven't talked with my collegue about that yet15:22
corvustobiash: okay, no rush.  let me know when you do.15:25
tobiashkk15:30
corvustobiash: in https://review.openstack.org/535716 why doesn't .remote_url reflect what's on disk?  why can't we use it to determine if we don't have to update the remote url?15:30
tobiashcorvus: the repo object just takes the remote url on construction and not necessary from the git repo on disk15:34
corvustobiash: i can see one case: when we initialize a Repo() object, we don't know what's on disk.  but maybe we could always update it in that case, but otherwise, use remote_url to avoid updating when not needed once we've cached the Repo object.15:34
tobiashcorvus: that's also what I thought but then I thought, that it's just easier to set that regardless15:35
corvustobiash: well, part of the reason they're cached is to reduce some of this overhead15:35
tobiashcorvus: I could also change that to set the remote url regardless in _ensure_cloned15:37
tobiashwould that be better?15:37
tobiash(and add ifunchanged: return to setRemoteUrl)15:40
corvustobiash: i would do it in __init__.  ensure_cloned is called often (in case the operator removes the repo while the process is running).  if ensure_cloned actually clones, we'll automatically get the right value.  the only time we'd be unsure is on __init__.  so i'd just put it right after the ensure_cloned in __init__.15:40
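The pattern corvus describes - sync the remote URL once in `__init__` right after the clone check, and make the setter a no-op when nothing changed - can be reduced to a toy model. This is illustrative only, not Zuul's actual merger code; the attribute names merely echo the discussion:

```python
class Repo:
    """Toy model of a cached merger repo. Method names echo the
    discussion (_ensure_cloned, setRemoteUrl); behaviour is
    illustrative only, not Zuul's actual implementation."""

    def __init__(self, remote_url):
        self.on_disk_url = None   # stands in for the remote url in .git/config
        self.url_updates = 0      # counts actual config rewrites
        self._ensure_cloned(remote_url)
        # At construction time we can't trust what's on disk, so sync the
        # remote URL exactly once here, right after the clone check.
        self.setRemoteUrl(remote_url)
        self.remote_url = remote_url

    def _ensure_cloned(self, remote_url):
        # A real merger re-clones if the operator removed the repo; a
        # fresh clone already carries the right remote URL.
        if self.on_disk_url is None:
            self.on_disk_url = remote_url

    def setRemoteUrl(self, remote_url):
        # Skip the config write when nothing changed (tobiash's
        # "if unchanged: return" guard).
        if self.on_disk_url == remote_url:
            return
        self.on_disk_url = remote_url
        self.url_updates += 1
```

The point of the guard is that `setRemoteUrl` can then be called on every `getRepo` without paying the config-write cost in the common, unchanged case.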
openstackgerrit Tobias Henkel proposed openstack-infra/zuul master: Set remote url on every getRepo in merger  https://review.openstack.org/535716 15:53
tobiashcorvus: like this? ^15:53
corvustobiash: yep!15:54
tobiash:)15:54
openstackgerrit Merged openstack-infra/zuul master: Add tenant project group definition example and definition in the doc  https://review.openstack.org/535730 15:55
openstackgerrit Tobias Henkel proposed openstack-infra/zuul master: Set remote url on every getRepo in merger  https://review.openstack.org/535716 16:01
tobiashcorvus: docstring fixup in the tests ^16:01
*** openstackgerrit has quit IRC16:04
*** tosky has joined #zuul16:07
andreafcorvus pabelanger rcarrillocruz about group vars at nodeset level16:07
dmsimardandreaf: ansible automatically loads the group_vars, host_vars and vars directories in the directory from where the playbook is executed (so for a playbook in foo/playbooks/playbook.yml, it would load foo/playbooks/group_vars, etc.)16:08
rcarrillocruzcorvus: to my understanding, nodepool job 'vars' => ansible vars. Now, i think there's a valid use case to have vars at node level ( == ansible host_vars) and groups level (== ansible group_vars)16:08
dmsimardrcarrillocruz: job vars are in the inventory16:08
rcarrillocruzi got a very similar question today from dmellado , which was about multigate job16:08
pabelangerpersonally, i try to keep variables out of inventory files when possible. And do what dmsimard suggests with group_vars folders16:08
pabelangerhowever, that might mean the job would have to be updated, since playbooks are not shared between jobs16:08
dmsimardpabelanger: depends where the playbook(s) live16:08
rcarrillocruzdmsimard: fair, but net net what i'm after is that nodepool vars are always global16:09
pabelangerright, using group_vars/foo.yaml would have to live next to playbook16:09
rcarrillocruzwondering if it's worth having a zuul job construct to have host_vars/group_vars16:09
rcarrillocruzrather than having that at playbook level16:09
dmsimardrcarrillocruz: what is "global" ? variables in the inventory are global but they are fairly low in terms of precedence16:09
pabelangerI believe devstack works today, but just setting up vars, with no run playbook, the parent has it16:10
dmsimardrcarrillocruz: I believe we've discussed that before, let me search logs16:10
rcarrillocruzasking, cos i got a very similar question today from dmellado16:10
rcarrillocruzre: multigate jobs16:11
rcarrillocruztoday it seems the services are set in 'vars'16:11
andreaf Just re-sharing the link with my use case: https://etherpad.openstack.org/p/zuulv3-group-variables 16:11
pabelangerrcarrillocruz: my issue with that, is it makes it harder to run ansible playbooks locally, without depending on zuul. since variables are stored in zuul.yaml16:11
rcarrillocruzbut if you want to have per'node services, you have to craft that yourself at playbook level16:11
andreafrcarrillocruz16:11
rcarrillocruzand i think the limitation is due to have just 'vars' as global16:11
dmsimardrcarrillocruz: but it doesn't have to be that way16:11
rcarrillocruzpabelanger: that's fair16:11
andreafrcarrillocruz: yeah multinode is one of the main use cases16:11
dmsimardrcarrillocruz: either you can do it at the playbook level or through var files16:11
rcarrillocruzdmsimard: sure, i know it can be done, just raising if the use case is worth that have the ability to express host_vars/group_vars at zuul job def16:12
dmsimardrcarrillocruz: I mean, we've even discussed doing something at the beginning of the playbook like - include_vars: path: foo/{{ inventory_hostname }}.yml16:12
pabelangerhowever, I recently did run into the issue where I needed to use extra-vars, so this topic is interesting :)16:12
andreafrcarrillocruz, pabelanger, dmsimard: variables in group vars are not job specific so using group vars does not fulfill the requirement16:12
corvuspabelanger: as long as we have job-level variables in zuul, i think we've already crossed the threshold of making it harder to run locally.  the solution is still the same though -- create a local inventory with the variables you need (or copy the one zuul created)16:12
dmsimardrcarrillocruz: nevermind there was no real discussion about group_vars before, it was for a use case pabelanger had for putting the same host in two groups which was getting complicated16:13
andreaf if I want to have two jobs that run a different set of services on multinode, for instance, I would have to create different host groups, one for each job, to be able to set different things in group_vars in the repo 16:13
pabelangercorvus: agree, however (personal preference) I've not found a need to use inventory variables when I could instead use playbooks/group_vars16:14
dmsimardpabelanger: I think job-level vars are simpler to use for people not familiar with Ansible16:14
rcarrillocruzright16:14
andreafbesides splitting the job vars definition between job and group vars would make things hard to read16:14
rcarrillocruzi would imagine people more used to ansible will just use pb vars16:14
dmsimardpabelanger: groups_vars/host_vars/vars is a concept only ansible users would know about and it's not necessarily straightforward16:14
pabelangerdmsimard: right, totally agree, however I think we also want users to learn more ansible. Over having zuul grow some of the concepts16:15
*** elyezer has quit IRC16:15
dmsimardrcarrillocruz: I think if we implement something like this in zuul, it would be defined at the nodeset declaration -- like when you declare a host, you have a vars: field, when you declare a group, you have a vars: field16:15
dmsimardrcarrillocruz: and then we would need to plumb that back into a hostvars/groupvars file on the executor or something like that.16:16
rcarrillocruzyeah, that's exactly what i had in mind16:16
dmsimardrcarrillocruz: nevermind the files, we can just use the inventory that we already have16:16
dmsimardpabelanger: yeah we definitely have to be careful about zuul needing to expose every parameter and knob ansible has16:17
*** elyezer has joined #zuul16:17
dmsimardpabelanger: like playbook strategy, serial, max failure rate and whatnot.. could get complicated quickly16:17
dmsimardalthough all the stuff we're talking about here can be set at a playbook level16:17
pabelangerwell, host, serial, etc can be controlled via extra-vars (or secrets in zuul) today. I even have a use case for serial, but that is a different discussion16:18
dmsimardpabelanger: what's your use case for extra-vars ?16:20
andreafdmsimard rcarrillocruz I would consider the possibility of attaching variables to a node-set when it's used in a job, else a job would have to define it's own node set to define group vars on it16:21
pabelanger dmsimard: https://review.openstack.org/526708/ 16:21
corvusi'm inclined to think that because we allow defining variables, and we allow defining nodesets, we should allow defining variables on nodesets.  i think an implementation like what's in the etherpad looks pretty close to ideal.16:22
rcarrillocruz++16:22
dmsimardandreaf: at this point you have to manage different nodesets though.. with unique names and all16:22
dmsimardandreaf: It might get confusing, I don't know16:22
corvusdmsimard: i think what andreaf is saying is that if we permit setting group_vars on the *job*, we don't have to have a lot of named nodesets.16:23
dmsimardandreaf: something you could consider is doing something like a "scenarios" folder in which you have variable files, and then in your playbook you do something like - include_vars: scenarios/{{ scenario }}.yml -- and then "scenario" ends up being defined at the job level16:23
dmsimardcorvus: how would that work in practice ? Zuul would overload the original nodeset a bit like how job variants work ?16:24
corvusdmsimard: i think andreaf is suggesting (in line 59-75 of the etherpad) that we could copy named nodesets and augment them for a specific job16:26
pabelanger yah, I would be a fan of that. It would make http://git.openstack.org/cgit/openstack/windmill/tree/.zuul.d/jobs.yaml#n42 easier to maintain, since all 3 jobs have the same nodeset stanza except for the label 16:28
dmsimardcorvus: ah, "ephemeral" nodeset definition inside the job16:28
corvus(though we could also consider making group_vars and host_vars job attributes and leaving the anonymous nodeset definition alone; i think that would work just as well)16:30
dmsimardcorvus: it's probably more straightforward at the nodeset level -- otherwise you need a dict for the host and group names ?16:33
corvusdmsimard: you still do.  see the etherpad.16:34
dmsimardcorvus: ah, you're considering both you mean, I read that as an either or16:35
rcarrillocruzy, not mutually exclusive16:36
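The two shapes being weighed here - vars attached to an inline nodeset versus group_vars/host_vars as job attributes - might look roughly like this. Purely hypothetical syntax, since neither had been implemented at the time of this discussion:

```yaml
# Option 1 (hypothetical): vars on the groups of an anonymous nodeset.
- job:
    name: multinode-scenario
    nodeset:
      nodes:
        - {name: controller, label: centos-7}
        - {name: compute, label: centos-7}
      groups:
        - name: subnodes
          nodes: [compute]
          vars:
            devstack_services:
              nova-cpu: true

# Option 2 (hypothetical): group_vars as a job attribute, keyed by
# group name, leaving the nodeset definition untouched.
- job:
    name: multinode-scenario
    group_vars:
      subnodes:
        devstack_services:
          nova-cpu: true
```

Either way, Zuul would plumb these into the generated inventory as ordinary Ansible group variables, as dmsimard suggests above.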
* rcarrillocruz afk for a bit16:36
SpamapSdmsimard: for the scenario thing that works fine with just regular job vars.16:58
SpamapSvars: {scenario: deploy}16:59
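The scenario pattern dmsimard and SpamapS describe - an ordinary job var selecting a variable file - could be wired up like this (job name, file paths, and the "scenario" variable are invented for illustration):

```yaml
# In the job definition: pick the scenario with a plain job var.
- job:
    name: deploy-test
    vars:
      scenario: deploy

# At the top of the run playbook: load scenarios/<scenario>.yml.
- hosts: all
  tasks:
    - include_vars:
        file: "scenarios/{{ scenario }}.yml"
```

This keeps the per-scenario detail in variable files under version control, while the job definition stays a one-line switch.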
SpamapSidea for today: I wish there was a short-circuit around gate running the same jobs as check when check and gate are effectively running the same repo state.17:00
SpamapSIn a slower moving repo, it's entirely possible only 1 or 2 patches are ever being check'd per day.. and often it's check->gate immediately. For that scenario, waiting for the tests twice is often annoying.17:00
SpamapSSo I wonder if zuul could like, keep a cache of hashes that passed check, and if they end up right back in gate.. just skip the ones that already passed with that hash.17:01
SpamapSCould set something on a job to say it's ok, like: fast-forward-pipeline: check17:04
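As a toy model (not Zuul code), the cache SpamapS sketches only needs a mapping from a repo-state hash to the set of jobs that already passed - including sibling repo states, per clarkb and corvus below in the log:

```python
import hashlib


class CheckResultCache:
    """Toy sketch of the idea: remember which jobs passed check for a
    given repo state so gate can skip re-running them when the state is
    identical. Illustrative only, not actual Zuul code."""

    def __init__(self):
        self._passed = {}  # repo-state key -> set of job names

    @staticmethod
    def key(repo_states):
        # repo_states: mapping of project name -> commit sha, covering
        # the change and its sibling repos (a change alone isn't enough
        # to identify the tested state).
        blob = ",".join(f"{p}={sha}" for p, sha in sorted(repo_states.items()))
        return hashlib.sha256(blob.encode()).hexdigest()

    def record_pass(self, repo_states, job):
        self._passed.setdefault(self.key(repo_states), set()).add(job)

    def can_skip(self, repo_states, job):
        return job in self._passed.get(self.key(repo_states), set())
```

Any sha changing in any involved repo changes the key, so the cache naturally invalidates itself when the gate's repo state differs from what check tested.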
*** elyezer has quit IRC17:14
*** elyezer has joined #zuul17:14
dmelladorcarrillocruz, just read your message17:33
dmsimardSpamapS: the scenario thing I propose shifts the logic from the job definition to the variable files17:33
dmelladoIt'd be awesome to clarify that, thanks for bringing that up17:33
dmsimardI don't have a strong opinion one way or another, I think I'll really need to hit that use case myself to make up my mind as to what I think would be the best idea17:34
*** jpena is now known as jpena|off17:34
corvusSpamapS: you could just skip check and go to gate17:37
corvusSpamapS: openstack only has a 'must-pass check' requirement because it's big enough and moves fast enough that gate failures are expensive.  for smaller systems, just relying on the gate to reject failures works great.17:38
corvusSpamapS: but also, i like your idea and think it's worth exploring17:39
clarkbis that info the sql db is already trackign? (commit hashes to job results)17:40
dmelladodmsimard: rcarrillocruz17:43
dmelladoso, regarding those options on the subnodes17:43
dmelladoon the devstack-multinode gate17:44
corvusclarkb: i imagine we'd need a bit more info, like sibling repo states17:44
dmelladoI was wondering if there's any plan going on the make that selectable via a default job17:44
dmellado+ I've realized that the support for centos-7 nodes is broken due to virtualenv installation17:44
dmelladoit's done using the package ansible module but the names are diffferent, so kaboom xD17:45
clarkb dmellado: see https://review.openstack.org/#/c/540704/ and its parent 17:45
clarkbvirtualenv install should be getting fixed and 540704 shows you how to swap out different nodesets17:45
pabelanger https://review.openstack.org/541010/ 17:46
dmelladoclarkb but IIUC that last patch would just show a failure17:48
clarkbdmellado: ?17:48
dmelladoclarkb: I put this up for centos/fedora where the package name is different17:48
clarkbit shouldn't because the images come with virtualenv17:48
clarkbso you have to preinstall virtualenv17:48
clarkbis essentially what ianw is saying17:48
dmelladogot it17:48
dmelladoso then I shouldn't install anything17:49
dmelladobut the centos7 image is wrong17:49
clarkbdmellado: I'm not sure I have enough context to answer those questions17:49
clarkbbut 540704 shows things working on our centos7 images17:49
dmelladoclarkb: sure, let me give you a little bit of context then17:49
clarkbans suse and fedora17:49
dmelladoI've got a WIP patch for a multinode gate based on centos717:50
dmelladoand it failed on the check17:50
dmelladobasically because it was looking for a package called virtualenv17:50
dmelladowhich is called, python-virtualenv in fedora and rhel17:50
clarkbyes which is now in the process of being addressed. If you depends on 541010 like 540704 does you should be fine17:51
dmelladoian patch should resolve it as it removed the dependency of it17:51
dmelladocool then17:51
dmelladoI'll abandon m ine17:51
clarkband then 540704 shows you how to mix in different node sets against existing jobs17:51
dmelladoclarkb: thanks! I'll take a look17:52
dmelladoclarkb: now that you're around I've also another question17:54
dmelladois there any way to disable *all services* in devstack_services:17:55
dmelladoand just enable some selected ones?17:55
dmelladoI didn't get to see any sample about taht17:55
dmelladoand it might just be a one-liner17:55
*** chrnils has quit IRC17:55
clarkbI think you have to define your own base job with no services set, unless one of those exists already17:56
clarkbI'm looking at the role17:56
dmelladobasically I'd like to mimic what was ENABLED_SERVICES=17:56
dmelladoand then enable_service foo and so17:57
dmelladoin order to be explicit17:57
corvusyou can disable them individually, but there isn't a way to override a set to disable everything.17:58
dmelladoI see, so then I'll just create a base disabling everything as a parent for that17:59
dmelladothanks!17:59
*** openstackgerrit has joined #zuul18:05
openstackgerrit Merged openstack-infra/zuul master: Retry more aggressively if merger can't fetch refs  https://review.openstack.org/536151 18:05
openstackgerrit Matthieu Huin proposed openstack-infra/zuul master: zuul web: add admin endpoint, enqueue & autohold commands  https://review.openstack.org/539004 18:17
*** sshnaidm|ruck is now known as sshnaidm|off18:18
dmsimarddmellado: what's the use case in using the devstack job if you don't want to enable any service ?18:31
*** tosky has quit IRC18:33
*** harlowja has joined #zuul19:03
rcarrillocruzHe wants to enable in a node by node basis, not disabling everything for the test purpose. So disable in parent, then enable in child19:44
dmsimardrcarrillocruz: might be easier to just spin off a new job without this default configuration instead of trying to override it19:50
clarkb zuulians: trying to work up a rough schedule for the PTG, and about half the proposed infra topics are zuul or nodepool related. Would wednesday and half of thursday, or half of thursday and friday, be better for focused zuul work? 20:56
clarkb my hunch is that wednesday and half of thursday may be better due to travel on friday and generally not being completely exhausted already 20:56
corvusclarkb: i favor earlier for those reasons, but don't have anything else on the calendar, so nothing blocking friday.20:57
pabelangerclarkb: wfm20:59
openstackgerrit Merged openstack-infra/zuul master: Cleanly shutdown zuul scheduler if startup fails  https://review.openstack.org/542159 21:00
openstackgerrit Merged openstack-infra/zuul master: Run zuul-stream-functional on every change to ansible  https://review.openstack.org/542203 21:01
*** pabelanger has quit IRC21:05
*** pabelanger has joined #zuul21:05
corvus clarkb: can you review https://review.openstack.org/543619 and https://review.openstack.org/540489 when you get a sec? 21:08
corvus SpamapS: can you take a look at my revisions on https://review.openstack.org/542518 ? 21:09
rcarrillocruzYeah I am off early on Friday, would prefer wed and thu21:18
*** myoung is now known as myoung|bbl21:33
openstackgerrit Merged openstack-infra/zuul master: Don't decode str in LogStreamingHandler  https://review.openstack.org/543365 21:34
clarkb ok, stuck zuul in the first half of the hacking days: https://ethercalc.openstack.org/cvro305izog2 21:34
openstackgerrit Merged openstack-infra/zuul master: Fix broken fail_json in zuul_console  https://review.openstack.org/543370 21:40
*** elyezer has quit IRC21:45
*** openstackstatus has joined #zuul21:46
*** ChanServ sets mode: +v openstackstatus21:46
*** elyezer has joined #zuul21:47
clarkbcorvus: https://review.openstack.org/#/c/543619/2/zuul/model.py line 1076 won't variants have the same name?21:54
clarkbcorvus: put another way should that be == instead of !=21:55
corvusclarkb: yes they will have the same name -- that's trying to detect when we cross from one job to another and so it resets the attribute on that transition21:56
clarkbcorvus: but its setting it to the parent's job value which may be true?21:56
clarkboh but that is specifically in apply variant21:57
* clarkb needs to warp head around this21:57
corvusclarkb: for example, we start by copying the base job, then apply the unittest job on top of that.  so the first time we call that method, "self" is a copy of the base job, and "other" is "unittests".  1076 is true, therefore we forget whatever value of abstract we already had (which we got from "base") and instead set it to whatever unittest is.21:58
jheskethMorning21:58
corvusclarkb: if there's a second unittest variant, then the next invocation "self" is what we ended up with from the last call.  1076 is false (since the names are both "unittest").  so we either want to stick with the value we had before, or, if the second unittest variant set abstract to true, we should be set to true (once a variant sets abstract, we don't want to unset it)22:00
clarkbcorvus: ok thanks, I think I get it now22:00
clarkbI think I had self and other mixed up22:00
corvusi wrote that code 5 times22:01
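corvus's walkthrough of the variant logic reduces to a small function. A toy reduction (not Zuul's real code) of just the `abstract` handling he describes:

```python
def combine_abstract(self_name, self_abstract, other_name, other_abstract):
    """Toy reduction of the variant logic corvus walks through above
    (not Zuul's real code). 'self' is the job built up so far, 'other'
    is the variant being applied on top of it."""
    if self_name != other_name:
        # Crossing from one job to another (e.g. base -> unittest):
        # forget the inherited value and take the new job's value.
        return other_abstract
    # A later variant of the same job can set abstract, but never unset it.
    return self_abstract or other_abstract
```

The name comparison detects the transition between jobs; within the same job, `or` makes `abstract` sticky once any variant sets it.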
andreaf dmellado, corvus: actually I added a mechanism for that, with devstack_base_services 22:03
andreaf dmellado, corvus: sorry, with the special "base" devstack_service 22:04
corvusandreaf: i don't see devstack_base_services set in the devstack repo22:05
andreaf dmellado, corvus: https://github.com/openstack-dev/devstack/blob/master/roles/write-devstack-local-conf/README.rst 22:05
andreafcorvus, dmellado: devstack_base_services can be used to add or remove features as defined in the test-matrix22:06
corvusandreaf, dmellado: can we move this to #openstack-qa?22:06
openstackgerrit Merged openstack-infra/zuul master: Add abstract job attribute  https://review.openstack.org/543619 22:12
openstackgerrit Merged openstack-infra/nodepool master: Add native distro test jobs  https://review.openstack.org/543268 22:29
*** hashar has quit IRC22:43
*** elyezer has quit IRC23:10
*** elyezer has joined #zuul23:11
openstackgerrit Ian Wienand proposed openstack-infra/nodepool master: Unpause Xenial build for non-src functional test  https://review.openstack.org/544125 23:22
*** jimi|ansible has quit IRC23:33

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!