Tuesday, 2017-07-18

jeblairso, in other words, from my pov, the base job version of this which implements the global limit is something we have to do.  if folks *also* want to further limit unit test jobs, that's fine with me but i don't feel as strongly about it.00:00
pabelangerso, here was the step for the playbook I am writing00:00
jeblairpabelanger: the reason we only have the unittest limit is historical -- it was written when the only jobs we had were unit test jobs.00:00
pabelangerthis might be easier to talk on pbx server00:02
jeblairpabelanger: maybe tomorrow then?  it's past eod for me (and i reckon mordred too)00:02
pabelangeryup, that works00:03
mordredI'll be on a plane tomorrow00:07
mordredbut also it is eod00:07
mordredhappy to do more irc chatting - or if y'all want to pbx it without me that's cool00:07
pabelangersure, let me think about what I need to type out in the morning00:08
mordredpabelanger, jeblair: the main thing from my end is that I'd love for us to _not_ have an openstack-py27 job at all - I'd like to avoid it if we can - even if that means putting something in the base job for now that's not _strictly_ applicable but that either wouldn't hurt or that we might take a todo-list item to add support somewhere for00:09
mordredso if we can, I'd like for the basic jobs like tox-py27 to only have an openstack version if it's just flat unavoidable00:09
pabelangerright, but adding things into base job means something like devstack would also get tox variables00:10
mordredour docs job is a good example - the way we build docs doesn't map to anywhere else00:10
mordredpabelanger: well, by "base" here I mean zuul-jobs/tox00:10
pabelangermordred: right, I'm really trying to resist the need for if/else logic in the zuul-jobs.00:12
pabelangerI think if we go down that path, they might grow pretty big and complex to manage00:12
mordredyah - and I agree with that sentiment in general00:13
pabelangerwhere if we added it into openstack-py27 for example, then we only need to worry about openstack things00:13
mordredhowever, I think a tox-py27 job that has detection of .testrepository, nose and py.test artifacts and can do all the right things with them is a good tox-py27 job00:13
pabelangerSo, a good exercise would be to have bonnyci or tobiash try and use our tox role. Then, see what needs to be added and what it would look like in zuul-jobs00:14
mordredso some amount of complexity in some of these base jobs is a net win00:14
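The artifact detection mordred describes for a generic tox job could look roughly like this; a minimal sketch, where the artifact paths (`.testrepository`, `nosetests.xml`, `.pytest_cache`) are conventional defaults assumed for illustration, not the actual zuul-jobs implementation:

```python
# Sketch of runner-artifact detection for a generic tox job. The
# checked paths are assumptions based on each runner's conventional
# output, not what zuul-jobs actually does.
import os


def detect_test_runner(workdir):
    """Guess which test runner produced artifacts under workdir."""
    if os.path.isdir(os.path.join(workdir, '.testrepository')):
        return 'testr'
    if os.path.exists(os.path.join(workdir, 'nosetests.xml')):
        return 'nose'
    if os.path.isdir(os.path.join(workdir, '.pytest_cache')):
        return 'pytest'
    return None
```

A job built this way can "do all the right things" with whichever artifacts it finds, without per-runner job variants.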
pabelangerYa, I can see that00:14
pabelangermaybe I should be starting with check_sudo_usage00:16
mordredpabelanger: (also, I think these are the hard ones - translating the definitely-openstack stuff is a piece of cake :) )00:16
pabelanger_shrugs_00:16
pabelangermordred: ya, it is a bit of a struggle atm :)00:16
mordredpabelanger: ++00:17
mordredpabelanger: hopefully only a few of them will be ugly like this though00:17
pabelangerYa, having a non openstack zuulv3 would be helpful. :D00:18
pabelangerk, EOD for me too00:19
mordredpabelanger: ++00:34
*** harlowja has quit IRC01:34
*** patrickeast_ has joined #zuul02:06
*** mattclay_ has joined #zuul02:07
*** toabctl_ has joined #zuul02:13
*** patrickeast has quit IRC02:13
*** toabctl has quit IRC02:13
*** rcarrillocruz has quit IRC02:13
*** mattclay has quit IRC02:13
*** patrickeast_ is now known as patrickeast02:14
*** toabctl_ is now known as toabctl02:14
*** mattclay_ is now known as mattclay02:14
*** rcarrillocruz has joined #zuul02:19
*** timrc has quit IRC02:45
*** timrc has joined #zuul02:52
*** timrc has quit IRC02:56
*** timrc has joined #zuul02:58
*** harlowja has joined #zuul03:40
*** harlowja has quit IRC04:54
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Add tenant column to the buildset reporter table  https://review.openstack.org/48425605:33
*** hashar has joined #zuul06:55
*** amoralej|off is now known as amoralej07:44
*** isaacb has joined #zuul07:45
*** isaacb_ has joined #zuul07:56
*** isaacb has quit IRC07:59
tristanCpabelanger: mordred: is it the openstack-infra/zuul-jobs that would be re-usable?08:28
*** isaacb_ has quit IRC08:48
SpamapStristanC: yes.09:25
*** isaacb_ has joined #zuul10:04
*** xinliang has quit IRC10:33
*** xinliang has joined #zuul10:46
*** xinliang has quit IRC10:46
*** xinliang has joined #zuul10:46
*** jkilpatr has quit IRC10:46
*** amoralej is now known as amoralej|lunch11:05
*** jkilpatr has joined #zuul11:07
*** amoralej|lunch is now known as amoralej12:24
*** isaacb_ has quit IRC12:59
*** dkranz has joined #zuul13:05
*** hashar is now known as hasharAway13:15
*** persia__ is now known as persia14:01
pabelangermorning14:05
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Rework tox -e linters  https://review.openstack.org/48483614:50
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Support UUID as builder identifier  https://review.openstack.org/48441415:04
mordredmorning pabelanger !15:07
pabelangero/15:07
Shrewspabelanger: so, in the interest of dropping py2 support from nodepool, what would it take to transition our nodepool builders in production to py3?15:16
pabelangerShrews: I don't think it would be much work, we could either repurpose nb03 / nb04 or just bring up nb01 / nb02 under python3. We are already running xenial now15:19
pabelangerwe need to update puppet-nodepool, but that should be straight-forward too15:19
pabelangerTL;DR: we could do it in a 1/2 day I think15:19
Shrewspabelanger: and probably do 1 at a time, yeah?15:20
pabelangerif we do that, we should merge feature/zuulv3 to master too15:20
pabelangerShrews: right15:20
pabelangerwe can stop 1 builder now15:20
pabelangerto force the other to only build images15:20
Shrewspabelanger: merge feature/zuulv3 TO master? or the other way around?15:21
pabelangerya, we should merge master to feature/zuulv3 first I guess, then back to master15:21
pabelangerbut nb03 / nb04 are using master branch today15:22
Shrewslaunchers are pinned to a tagged release though, correct?15:23
jeblairpabelanger: we can't merge zuulv3 to master -- we're still running old nodepool master.15:23
pabelangerjeblair: oh, right.15:23
pabelangerso, would we consider running nodepool-builders from feature/zuulv3 in openstack-infra?15:23
Shrewswe should do a master -> feature/zuulv3 at some point though15:24
pabelangerShrews: nl01 is running from feature/zuulv315:24
Shrewspabelanger: right, but that's non-production15:24
pabelangerya15:24
jeblairpabelanger: just cherry-pick the builder related changes to master15:24
pabelangerShrews: which launchers are you referring to about pinned15:25
pabelangerjeblair: I don't think that works in this case? Shrews was asking about dropping py27 support15:25
jeblairoh, that's probably major changes to the v2 launcher.15:26
jeblairbut Shrews only asked about production builders15:26
jeblairwhich may be less work15:26
* Shrews confused now15:27
jeblairyep.  i don't know what we're talking about.15:27
Shrewsif we remove v2 support, that affects both production (builders there are using v3), and our dev environment15:27
jeblairShrews: no -- production builders are running master15:28
jeblairso we can remove py27 support from feature/zuulv3 without affecting production launchers or builders15:28
Shrewsjeblair: oh, we merged the builder changes back to master, didn't we?15:28
jeblairShrews: yes15:28
Shrewsah, k. right. so yes, only dev environment is affected then15:29
Shrewswhich means only nl01 needs migrating, i think?15:29
jeblairyep15:30
Shrewsthat's easier15:31
Shrewspabelanger: If you want to point me to the things that need updating, I'm happy to attempt that. Not well versed in puppet, but can take a stab15:34
pabelangerShrews: sure, we need to add python3 support to http://git.openstack.org/cgit/openstack-infra/puppet-nodepool you can see what we did in http://git.openstack.org/cgit/openstack-infra/puppet-zuul. And system-config will then use python3 http://git.openstack.org/cgit/openstack-infra/system-config15:35
Shrewspabelanger: cool thx. let me see if i can make sense of it15:36
Shrewsah, i think i see how the pieces fit15:46
*** bhavik1 has joined #zuul15:46
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Add more information on variables in jobs  https://review.openstack.org/48453015:52
*** hasharAway has quit IRC15:59
*** hashar has joined #zuul15:59
pabelangermordred: so, if I understand right, you do not want to do: https://review.openstack.org/#/c/48393616:00
jeblairmordred, pabelanger: what do we need to move https://review.openstack.org/479390 forward?  should we add that test now?  do we need more reviews?16:01
pabelangermordred: eg: remove NOSE vars from tox role16:01
pabelangerjeblair: mordred: I don't want to block it, but think tests would be helpful.  I don't mind +3 if we want to do a follow up16:01
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add sample base job  https://review.openstack.org/48448516:06
*** bhavik1 has quit IRC16:25
*** hashar is now known as hasharDinner16:30
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Add callback plugin to emit json  https://review.openstack.org/48451516:32
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Only output result details on error  https://review.openstack.org/48451616:34
jeblairi'm taking another stab at the implied role change (where we automatically try to add the job's own repo as a zuul role repo)16:54
pabelangerjeblair: okay, since mordred is likely on a metal tube. Do you mind chatting a bit about the subunit file size?  Right now, that lives in the tox role.  I think that is too specific to openstack right now.  So, my thought was to move that into openstack-zuul-jobs. https://review.openstack.org/#/c/484518/ I actually think this could be made into a role that checks the file size of uncompressed data, errors if too17:05
pabelangerlarge, and gzips if not.  Which might be useful to live in zuul-jobs as a role17:05
pabelangerjeblair: but right now I am blocked a little as I don't know which path to move forward on. The ansible code is written and works17:05
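The role pabelanger describes could be sketched with tasks like the following; a hypothetical fragment, where the variable names `subunit_file` and `max_size` are made up for illustration and are not the real zuul-jobs role:

```yaml
# Hypothetical tasks for the role described above: error when an
# uncompressed file exceeds a threshold, otherwise gzip it.
- name: Check uncompressed file size
  stat:
    path: "{{ subunit_file }}"
  register: file_stat

- name: Fail when the file exceeds the limit
  fail:
    msg: "{{ subunit_file }} is {{ file_stat.stat.size }} bytes; limit is {{ max_size }}"
  when: file_stat.stat.exists and file_stat.stat.size | int > max_size | int

- name: Compress the file
  command: "gzip {{ subunit_file }}"
  when: file_stat.stat.exists
```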
* mordred waves from tube - I'm fine putting things in openstack-zuul-jobs for now so that we don't spin too much on this while I'm less reachable - but I also think pabelanger and jeblair chatting it through higher bandwidth could also be useful and whatever y'all come up with works for me17:08
pabelangermordred: right, we might have this dance for a bit between openstack-zuul-jobs and zuul-jobs for a while longer, but I'd think landing things in openstack-zuul-jobs first makes more sense, and I feel once in zuul-jobs (and people are using it) it will be harder to remove playbooks / roles out of zuul-jobs.17:11
jeblairmordred, pabelanger: okay, i took a look at that change.  i'm guessing the openstack specific parts are: a) should a job fail on subunit too large?  b) what should the threshold be?17:12
jeblairis that a good analysis?  is "should it be gzipped?" also an openstackism, or is that universal?17:13
pabelangerjeblair: right to a and b.17:13
pabelangerto me, gzip should be handled at the log publisher step17:14
pabelangerotherwise, we might have a lot of duplicate code gzipping things in roles17:14
jeblairpabelanger: yeah, i think i agree with that.  easiest to just gzip everything before upload, and i believe mordred has a change to do that.  so let's discard that for now.  that leaves a and b.17:15
pabelangerjeblair: sure, I'll look for the gzip code on review.o.o17:16
jeblairmordred: so how can we make a and b suitable for everyone in zuul-jobs?  do we look for site-local variables for that, and if not found, default the behavior to "off"?17:16
jeblairpabelanger: oh, https://review.openstack.org/483611 was the change i was thinking of, but it's console log only.17:16
mordredjeblair: yes. OR - we omit it completely and deal with job content size differently17:16
jeblairmordred: right, yes, we should back up and ask ourselves whether we want a subunit size limit.17:17
mordred++17:17
jeblairor just a job size limit.17:17
mordredlike, I don't think we specifically care about subunit size, as much as we care that jobs don't eat too much disk17:17
mordredI could be wrong about that though17:18
jeblairpersonally, i want a job size limit and don't care about subunit.  if someone stands up and says infra wants a subunit size limit, i'm fine with adding it.17:18
jeblairfungi, clarkb: ^ ?17:18
pabelangerdidn't we say yesterday that openstack jobs would care if subunit was 50MB?17:19
clarkbI think youay want arbitrary file limit as well as aggregate limit17:19
clarkbbyt not necessarily a subunit specific thing17:19
mordredclarkb: do we specifically care about subunut size ourselves?17:20
pabelangerI can only think openstack-qa might?17:20
mordred(it's in our job now, but if we had job-size-limit would we care about subunit size limit?)17:20
clarkbmordred: I don't know that we do, openstack health might though17:21
jeblairoshealth doesn't store the attachments, does it?17:21
pabelangerI don't think zuul-jobs would care about subunit files size, but we should support it for some openstack project job. That was the assumption I went with starting to work on it17:21
fungisummarizing the question: we have run-tox.sh right now limiting the maximum size of subunit files, and wondering whether that should be a general role vs something specific to openstack community tox jobs?17:24
*** harlowja has joined #zuul17:24
jeblairfungi: yes, but -- we're backing up a step and asking whether it's even required at all17:24
jeblairstorytime17:24
SpamapSjeblair: subunit2sql discards attachments IIRC17:24
jeblairwe added the 50mb limit 6 years ago when the only thing we were running was nova unit tests under jenkins, and someone did something silly and they went haywire17:24
jeblairsome things *might have changed* since then17:25
jeblairas an operator of openstack's ci system, all i really care about is not running out of disk space17:25
fungijeblair: thanks! i was in the process of trying to pickaxe through git history to see when/why it was added myself17:25
jeblairmy inclination would be to say "all jobs can store up to X mb" and leave it at that.17:26
jeblairi think we need *that* regardless17:26
fungiyes, we'd already discussed wanting that feature17:26
jeblairso the first question is:  is that sufficient for us, or do we *also* want something like the 50mb limit for subunit files.17:26
pabelangerSo, easy mode is no and remove it17:26
jeblairmy answer is i think the global limit is sufficient, but i'm happy for us to have the subunit limit as well if others see value.17:27
fungiso i think given the circumstances of the subunit cap addition history, it's reasonable to say the new aggregate log size cap solves the original problem in a more clean and generally-applicable fashion17:27
jeblairSpamapS: ok, so the overall size is probably not a big concern to subunit2sql (attachments tend to be where that comes from)17:28
jeblairpabelanger, mordred: okay, so way forward is don't worry about subunit size for now.  just do an overall size check as part of the log upload.  also gzip at the log upload as well.17:30
jeblairpabelanger, mordred: er... proposal ^ :)17:30
pabelangerokay, I can work on that17:31
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Add job's project as implicit role project  https://review.openstack.org/48272617:41
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Remove subunit file size check for tox role  https://review.openstack.org/48451917:44
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Compress testrepository.subunit in fetch-testr-output  https://review.openstack.org/48489617:44
pabelangerjeblair: mordred: ^ removes file check, moves gzip into fetch-testr-output17:45
pabelangernext up will be to make a subunit2html role17:45
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Compress testrepository.subunit in fetch-testr-output  https://review.openstack.org/48489617:53
*** amoralej is now known as amoralej|off18:11
*** bhavik1 has joined #zuul18:13
*** deep-book-gk_ has joined #zuul18:30
*** deep-book-gk_ has left #zuul18:33
*** tobiash_ has joined #zuul18:34
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Compress testrepository.subunit in fetch-testr-output  https://review.openstack.org/48489618:34
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Write secrets into their own file, not into inventory  https://review.openstack.org/47939018:34
pabelangerjeblair: mordred: I am guessing we don't care if a tox job uses sudo in zuul-jobs but we'd still like it for openstack-infra?18:35
*** bhavik1 has quit IRC18:36
mordredpabelanger: I vote for blocking sudo in both - this is a unittest job we're writing, I think we get to suggest some best practices while doing it :)18:37
SpamapSHeh.. story #2000773 ... still the best way to destroy your FireFox performance.18:38
pabelangermordred: right, we should be able to control that in our playbook for the unit test, since we'd use become: true / false for sudo actions now18:38
pabelangerunless somebody adds sudo commands into tox.ini18:40
pabelangerthen, the job should fail18:40
mordred++18:40
mordredyes. and if they want to do things with sudo - they can write another job to do that :)18:40
pabelangerya18:41
mordredbut I think being opinionated about sudo in tox.ini being bad is a gift we can give to the world ;)18:41
pabelangermordred: basically, trying to see if we need to write jenkins-sudo-grep.sh right now as ansible playbooks for the tox job.18:41
pabelangerto me, sudo is disabled by default and if they use sudo, the job will just fail18:42
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Add unittest for secrets data  https://review.openstack.org/48491118:42
mordredpabelanger: yah - that also seems like a holdover from a time before we could disable sudo18:42
mordredpabelanger: this is fun - we haven't cleaned house in a while18:43
pabelanger++18:43
mordredjeblair, pabelanger: ^^ unit test for secrets.yaml18:44
mordredSpamapS: you like tests18:44
SpamapSyesiree18:45
SpamapSjeblair: so having finally caught up on the openstack running zuulv3 transition plans in that etherpad... I don't see much in https://storyboard.openstack.org/#!/board/41 beyond the devstack-gate roles refactoring that needs to get done before that.18:46
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Remove check_sudo_usage logic from tox  https://review.openstack.org/48491618:47
SpamapSSo I wonder if we should flip our plan, leave zuulv3 as "stuff to say zuulv3 is ready for outside consumption" and make a new tag that is "stuff we need to do before Denver"18:47
SpamapSmordred: I especially like adding assertions to existing tests. :)18:52
SpamapSsays more about coverage IMO18:52
mordred\o/18:54
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491718:58
SpamapSmordred: fail :(19:03
*** hasharDinner is now known as hashar19:07
mordredSpamapS: yah - I forgot - the live tests don't keep all those files around - patch coming19:12
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491719:13
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Add unittest for secrets data  https://review.openstack.org/48491119:15
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Fail early if people attempt to add zuul vars or secrets  https://review.openstack.org/48400019:15
SpamapSso...19:21
SpamapSback some time ago19:21
SpamapSwe had a plan to run control masters outside the bwrap19:21
SpamapSI actually think that's going to be super problematic.19:22
SpamapSMainly because usually we let Ansible run SSH for us.19:22
SpamapSand thus, we get hostnames, users, ports, etc., from the inventory we build for ansible.19:22
SpamapSso, I think I've come up with a plan19:23
SpamapSwhich is to run this inside the bwrap to shutdown controlmasters:   'ansible -m ping -i {inventory} -e ansible_ssh_command="ssh -O exit" all'19:24
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491719:32
mordredSpamapS: as a subprocess essentially at the end of the thing? makes sense to me (ansible '*' -m ping ...)19:32
SpamapSmordred: yeah19:33
SpamapSit doesn't seem to work actually19:33
mordredboo19:38
SpamapSoh bah, it uses execve on that value19:38
SpamapSand it's not -e ansible_ssh_command but env var ANSIBLE_SSH_EXECUTABLE19:38
SpamapSso have to use a wrapper script19:38
SpamapS(lame)19:38
SpamapSyeah works with a wrapper19:39
SpamapSthough ansible exits with an error since the command doesn't actually ssh to the remote host19:40
SpamapSoh hm19:42
SpamapSthere are some mechanics deep in the ssh connection plugin to do this19:42
SpamapSwhat's a "meta task" ??19:42
SpamapSoh nice! there's a meta module19:43
SpamapSthat can influence the connection plugin19:43
SpamapSand has a 'reset_connection'19:43
* SpamapS hopes19:43
SpamapSso maybe we can just pop that into a playbook that we always run after19:44
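The "playbook we always run after" idea SpamapS lands on could be as small as this; a sketch assuming Ansible 2.3 or later, where the `meta` module supports `reset_connection`:

```yaml
# Sketch of a cleanup playbook run after each job playbook: the meta
# module's reset_connection asks each connection plugin to drop its
# persistent connection (the SSH control master, for the ssh plugin).
- hosts: all
  tasks:
    - name: Close persistent connections to all nodes
      meta: reset_connection
```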
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491719:44
mordredSpamapS: maybe last task of the base job?19:49
SpamapSyeah19:50
SpamapSwell19:50
SpamapSno19:50
SpamapSbecause some playbooks run in and out of different bubblewraps19:50
SpamapSit has to be basically another playbook19:50
SpamapSthat we run after every one finishes19:50
mordredah. we need a playbook baked in to zuul that we run after each playbook19:50
mordrednod19:50
SpamapSit sort of works tho19:51
mordredSpamapS: that's super meta19:51
mordredtotally19:51
SpamapSexcept it 'splodes with a TypeError19:51
* SpamapS tries updating ansible19:51
mordredSpamapS: SOOOOOOOOOOO - while you're considering things19:51
mordredSpamapS: (I promise, you're going to love this one)19:52
SpamapShah, 2.3.0.0 exploded with a TypeError19:52
SpamapS2.3.1.0 explodes with an IndexError19:52
mordredSpamapS: v2_playbook_on_stats in zuul/ansible/callback/zuul_stream.py is the last event ansible-playbook triggers at the end of a playbook run19:52
SpamapSactually we could probably do it with our evil action plugins19:53
mordredSpamapS: and has access to the inventory19:53
mordredwell- the callback plugin is the thing that runs in the right context19:53
SpamapSah hm19:53
mordredaction plugins don't know "the playbook is over" - but the callback plugin does19:53
mordredSpamapS: we could, in fact, write a third callback plugin that ONLY does the ssh reset and add it to the list19:54
mordredso that zuul_stream doesn't become any more un-understandable19:54
SpamapSyeah19:55
SpamapSI think that may make a lot more sense.19:55
SpamapSmordred: un-understandable reduces to derstandable19:55
mordred++19:55
mordred:)19:55
SpamapSoh this is dumb19:56
SpamapSso it uses -O stop19:56
SpamapSbut not -O exit19:56
mordredhah19:56
SpamapSso the control master disconnects from the remote end19:56
SpamapSbut does not actually exit19:56
SpamapS>:|19:56
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491719:56
mordredI bet that's the more 'normal' thing to want?19:56
SpamapSwell yeah, it would help with things like having a bunch of phantom users on boxes19:57
SpamapSand not exiting means you can reuse things like loaded SSH keys19:57
mordredpabelanger: ooh! can I make a suggestion on that last patch? (I don't think I would have thought of it if you hadn't written that one)19:57
SpamapSbut >:|19:57
SpamapShowever19:57
SpamapSI think our callback might actually be able to reach in and just do the -O exit19:57
mordred++19:57
* SpamapS will play with that19:57
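The "third callback plugin that ONLY does the ssh reset" mordred suggests might look roughly like this; a hypothetical sketch, not the real zuul code, with the teardown factored into a helper so it can be exercised without Ansible installed:

```python
# Hypothetical callback plugin whose only job is shutting down SSH
# control masters when the playbook finishes. This is a sketch of the
# idea discussed above, not zuul's actual implementation.
import subprocess

try:
    from ansible.plugins.callback import CallbackBase
except ImportError:
    # Allow the sketch to be imported without Ansible installed.
    CallbackBase = object


def close_control_masters(hosts, run=subprocess.call):
    """Ask ssh to exit the control master for each host."""
    for host in hosts:
        # 'ssh -O exit <host>' tells the multiplexing master process
        # for that destination to exit (not just disconnect, as
        # '-O stop' does); failures when no master exists are ignored.
        run(['ssh', '-O', 'exit', host])


class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'ssh_cleanup'

    def v2_playbook_on_stats(self, stats):
        # v2_playbook_on_stats is the last event ansible-playbook
        # fires; stats.processed lists every host the run touched.
        close_control_masters(list(stats.processed))
```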
pabelangermordred: most welcome19:58
mordredpabelanger: instead of post-task in playbook - make the pip freeze a handler that gets triggered from the tasks19:58
mordredpabelanger: that way it always runs at the end - but the role is still self-contained19:58
mordredpabelanger: oh- wait19:58
mordredpabelanger: NEVERMIND - your way is better19:58
mordredpabelanger: because your way the tox role can be used outside of zuul - where that freeze is less likely to be super useful19:59
mordredotoh - your thing does put it into the .tox/logs dir - so it's not like it would HURT20:00
mordredpabelanger: sorry - I may have geeked out slightly too much on that patch :)20:00
mordredpabelanger: another thought - while I'm thinking about it WAY too much ...20:00
pabelangermordred: ya, I think this will be useful to share with devstack also. So trying to think about how we'd share it there too20:00
mordredwell - my next random thought might be helpful there ...20:01
jeblairSpamapS: yeah, i think your ptg tag is a good idea.  i'm about to start reading scrollback on the control socket thing now.  back in a minute.  :)20:02
pabelangerinteresting20:04
pabelangerfinger://ze01.openstack.org/d7bc75a315754c4bae4b4636c3f13f5120:04
pabelangerthat job finished almost 20mins ago20:04
pabelangerbut we can still access info via finger20:04
jeblairSpamapS: i think i don't understand the problem with the plan outlined in 2001072.  you said we let ansible run ssh for us so we get stuff from the inventory.  i believe we will still do that.  the proposal in the story is not to actually ssh anywhere, but rather, just to start the control socket process -- same as the agent process.20:05
jeblairSpamapS: i'm under the impression we can start a control socket process without actually connecting to any hosts.20:06
pabelangermordred: jeblair what are your thoughts about maybe adding a zuul-executor group into our inventory file, so we can reference delegate_to: zuul-executor over localhost. My logic is so we can see which host we ran tasks on in job-output.txt, over localhost: http://logs.openstack.org/17/484917/4/check/tox-linters/577433f/job-output.txt20:07
mordredpabelanger: sorry - lost reception20:07
pabelangerin our case, it would be ze01.openstack.org, not localhost any more20:08
mordredwrite it as a role that does pip-freeze and sets the output as a fact - and that can take an optional path to a virtualenv - and in that role, actually write the freeze as a little python module that imports pip and runs the freeze method and returns the data in a fact variable20:08
mordredthat way you can control which virtualenv or global it's dumping just with ansible_python_interpreter20:08
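The freeze-as-a-fact module mordred describes might be sketched like this; a simplified, hypothetical shape (plain JSON print rather than `AnsibleModule`/`exit_json`) that shells out to `python -m pip freeze` instead of importing pip's version-dependent internals:

```python
# Sketch of a pip-freeze module returning its result as a fact.
# Because it runs "python -m pip freeze" under sys.executable,
# pointing ansible_python_interpreter at a virtualenv's python
# selects that environment's packages, as described above.
import json
import subprocess
import sys


def collect_freeze(interpreter=None):
    """Return the installed-package list for the given interpreter."""
    python = interpreter or sys.executable
    out = subprocess.check_output([python, '-m', 'pip', 'freeze'])
    return out.decode('utf-8').splitlines()


if __name__ == '__main__':
    # A real Ansible module would return this via exit_json.
    print(json.dumps({'ansible_facts': {'pip_freeze': collect_freeze()}}))
```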
mordredpabelanger: WELL - the problem with doing that generally is that makes writing playbooks with hosts: all harder20:08
pabelangermordred: ya, I was actually thinking of patching upstream pip task to support freeze, then we get all that for free20:09
mordredpabelanger: and I think could make it harder for 'normal' users to safely write jobs20:09
mordredpabelanger: ++20:09
jeblairpabelanger: the hostname is in zuul vars20:09
mordredpabelanger: I think that's a great idea - and we can carry a local module in the meantime20:09
*** tobiash_ has quit IRC20:11
pabelangermordred: jeblair: sure, was just something I noticed looking at logs. it makes sense that zuul-jobs would have localhost = zuul-executor group, since they only run on zuul.  But other jobs could just have localhost20:11
jeblairpabelanger: i don't understand that.20:12
*** tobiash_ has joined #zuul20:12
*** tobiash_ has quit IRC20:13
mordredOH20:13
mordredI do20:13
mordredI think we can (and should) solve that differently ... in fact, I've been considering that same problem while hacking on zuul-stream20:14
mordredjeblair: the issue is that the logs report local actions as having run on "localhost" - but the logical thing for some of those tasks is "this is something that ran on theexecutor"20:15
mordredlemme find an eample of where this has bothered me too20:15
jeblairthat i got20:15
jeblairwhat i did not understand is: "it makes sense that zuul-jobs would have localhost = zuul-executor group, since they only run on zuul.  But other jobs could just have localhost"20:16
jeblairpabelanger, mordred: can one of you explain that using different words?20:16
pabelangerthat was me replying to: WELL - the problem with doing that generally is that makes writing playbooks with hosts: all harder20:16
mordredyes20:16
jeblairyep.  i understand that too.20:16
mordredjeblair: different words coming ...20:16
pabelangerif I understood right, somebody with their own playbook as a job would delegate_to: localhost.  Where I am suggesting using delegate_to: zuul-executor in zuul-jobs20:17
pabelangerboth would do the same thing20:17
mordredright - so that in targeted tasks we can mark a task as "running on zuul-executor" when that's what we want to be logged20:17
mordredbut so that we don't just do a wholesale mapping of localhost -> zuul-executor which would confuse humans who are running their own ansible things20:18
mordredpabelanger: I think the biggest issue with any of the ways of doing the thing you're suggesting is that it'll inject localhost into all20:19
jeblairi don't think any of that changes the "hosts: all" situation.  i think having "hosts: all" mean, well, all of the hosts in the nodeset is very convenient and intuitive.20:19
jeblairmost job authors shouldn't actually have to think about the executor.20:19
mordredright20:19
*** jkilpatr has quit IRC20:20
mordredjeblair: what it changes is that if we added an alias into the inventory so that _we_ could write tasks that explicitly 'run on the executor' that we want labelled that way ...20:20
mordredit has the undesired side-effect of adding localhost to 'all' and having all not mean "all the hosts in the nodeset" anymore20:20
mordredwe benefit currently from magic in the inventory which adds localhost to the inventory but NOT to the all group20:21
mordredbut that only works if localhost is not explicitly referenced anywhere in the inventory20:21
jeblairalso, we went through this last month: https://review.openstack.org/47608120:21
mordredjeblair: yes. exactly - this is why I think the thing pabelanger is suggesting as a solution is a thing we should not do - but, I understand the thing that is irking him from logs as it has also irked me20:22
mordredso I'm mostly now noodling to see if there is a different possible solution we could employ20:22
mordredthat would not break anything or confuse people20:23
jeblairmordred: in mean time, want to abandon 081?20:23
mordredyes20:23
mordreddone20:23
mordredoh - speaking of ... bcoca has been landing the new pluggable inventory sources for 2.4 already20:24
mordredI need to look at the API there and see what, if anything, we might need to be aware of20:24
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491720:25
pabelangerjeblair: mordred: right, but with 081 if we create zuul-executor group, ze01.o.o ansible_connection=localhost that would just create a single entry and localhost should still work20:26
mordredpabelanger: that's not the problem20:27
mordredpabelanger: it's that because of the way the default inventory works, adding that would _add_ localhost to the all group - which means hosts: all roles: -tox would start to try to run tox on the executor too20:27
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Add job's project as implicit role project  https://review.openstack.org/48272620:28
mordredand it's important that we be able to do hosts: all - because otherwise playbooks will have a hard time with node-override variants20:28
mordredpabelanger: so we have a happy state currently where all only includes the hosts in the nodeset and not localhost20:28
pabelangermordred: Oh, right. I see what you are saying now.  We'd need all + !localhost20:28
mordred++20:28
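The implicit-localhost behavior being discussed can be sketched in plain Python. This is an illustrative model only, not Ansible's actual inventory code: localhost is always resolvable as a host, but it only joins the `all` group when the inventory names it explicitly.

```python
# Illustrative model of Ansible's implicit-localhost rule (NOT the real
# inventory implementation): "all" contains exactly the explicitly
# listed hosts, while localhost remains usable by name either way.
def resolve_all_group(inventory_hosts):
    """Return the members of the 'all' group for an explicit host list."""
    # Explicitly listed hosts -- including localhost, if and only if
    # someone wrote it into the inventory -- make up "all".
    return list(inventory_hosts)

def resolve_host(name, inventory_hosts):
    """Return a host entry, adding localhost implicitly when needed."""
    if name in inventory_hosts:
        return name
    if name == "localhost":
        return "localhost"  # implicit: reachable, but absent from "all"
    raise KeyError(name)

nodeset = ["ubuntu-xenial"]               # hosts from the Zuul nodeset
print(resolve_all_group(nodeset))         # localhost NOT included
print(resolve_host("localhost", nodeset)) # still reachable by name
```

This is why explicitly adding `ze01.o.o ansible_connection=localhost` to a group would change behavior: the host would now be in `all`, and `hosts: all` plays would start running on the executor.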
pabelangerHmm20:29
mordredpabelanger: I think we'l be better off handling this in the logging layer20:29
pabelangermordred: Ya20:30
mordredpabelanger: since the problem is how the logs are presented20:30
mordredpabelanger: I'll keep it in my brain as I keep poking at log cleanups and let you know if I find a non-crazy way20:30
pabelangercool20:30
jeblairi'm taking 2001105 and 200110620:30
mordredkk20:31
jeblair(report executor errors to user and don't retry)20:31
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491720:31
jeblairi suspect their solutions are connected so i'll do both at once20:31
*** jkilpatr has joined #zuul20:37
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491720:37
pabelangerjeblair: mordred: I think keep-jobs is still enabled20:40
pabelangerthat is why finger URLs still work20:40
jeblairpabelanger: probably so.  i'm done, i'll turn it off and cleanup.20:41
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491720:50
*** dkranz has quit IRC20:51
pabelangermordred: so, facts don't persist over playbook runs, so with our tox-linters job we are missing tox_envlist being set up. I think I'll have to refactor that and have it work like tox-py3520:51
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: WIP: Ansiblify pip freeze logic  https://review.openstack.org/48491720:54
pabelangercool20:59
pabelangerhttp://logs.openstack.org/17/484917/10/check/openstack-doc-build/9eded10/tox/pip-freeze.txt20:59
jeblairlovely21:02
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Ansiblify pip freeze logic  https://review.openstack.org/48491721:04
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Create tox-pep8 jobs  https://review.openstack.org/48494821:04
pabelangerthat should make linters pass21:04
pabelangerbut creates a new problem21:04
pabelangerneed to think a minute more about it21:04
pabelangermordred: jeblair: it would be helpful to land the following: https://review.openstack.org/#/q/status:open+project:openstack-infra/zuul-jobs+topic:ansible-lint21:09
SpamapSjeblair: I was under that impression too, but I think it is false unfortunately.21:16
SpamapSjeblair: Unless I've missed something, you need to pass hostname args to ssh every time.21:17
mordredpabelanger: looking21:17
mordredpabelanger: (what's the new problem?)21:17
SpamapSjeblair: and that actually makes sense. the hostname is used as a hash for the control socket path.21:17
SpamapSwell, host+port+user is21:18
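The socket-path derivation SpamapS describes can be sketched as follows. This is a hypothetical illustration: the real path comes from Ansible's ControlPath template or OpenSSH's `%C` token (a hash over local host, remote host, port, and user), and the exact scheme varies by version. It only illustrates why the same host+port+user tuple must be passed on every invocation to reuse a control master.

```python
import hashlib

# Hypothetical sketch of deriving a control-socket path from the
# connection tuple (NOT the actual Ansible/OpenSSH hashing scheme).
def control_path(host, port, user, directory="~/.ansible/cp"):
    key = "%s-%s-%s" % (host, port, user)
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()[:10]
    return "%s/%s" % (directory, digest)

p1 = control_path("ze01.openstack.org", 22, "zuul")
p2 = control_path("ze01.openstack.org", 22, "zuul")
p3 = control_path("ze01.openstack.org", 22, "root")
print(p1 == p2)  # same tuple -> same socket path, mux reuse works
print(p1 == p3)  # different user -> different socket, no reuse
```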
pabelangermordred: basically https://review.openstack.org/#/c/484948/. having tox-linters do 2 envlists and facts.  I mean, we can maybe add some logic to handle that, write facts to disk, but I created a new job for now.21:19
pabelangermordred: maybe we just add tox-pep8 to openstack ?21:19
mordredpabelanger: yah - for today - I think it would be great to get tox-linters doing the thing you're wanting to make it do21:20
mordredpabelanger: also - I wonder if ansible-test has any helper things for doing what we do in those find commands21:20
pabelangeroh, maybe. I haven't looked there21:21
pabelangerI quickly looked at ansible-review a while back21:21
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Return executor errors to user  https://review.openstack.org/48495321:21
jeblairSpamapS: oh.  :(  well, we think that since we added '--die-with-parent' to bubblewrap, we are no longer seeing the delayed exit issue.  presumably bubblewrap is cleaning up appropriately.21:26
jeblairSpamapS: that turns this into an optimization problem -- it would be nice to have the control socket persist across playbook runs.  but we're no longer seeing the significant delays which made it seem like a requirement earlier.21:27
jeblairmordred, tobiash: implicit role is green now: https://review.openstack.org/48272621:27
SpamapSjeblair: we can still set our own controlpath and such. But it does feel a little bit like a layer violation to start trying to match ansible's ssh args.21:28
SpamapSjeblair: we can, btw, use -N and not run any _commands_ on the remote host.21:28
mordredjeblair: woot!21:28
SpamapSBut yeah, the mux is actually pretty stupid, and basically just hooks up clients who are using the same controlpath to the host that the master was already connected to.21:29
mordredI love tox-py35-on-zuul btw21:29
mordredit just answered a question I was going to ask pabelanger :)21:29
jeblairpabelanger has been replaced by zuul21:29
SpamapSjeblair: anyway, I think this one goes in the "nice to have some day" bin. How about we drop the zuulv3 tag from it?21:30
SpamapSI'll add my analysis to the description.21:30
openstackgerritMerged openstack-infra/zuul-jobs master: Ensure we load roles for linting  https://review.openstack.org/48448821:30
openstackgerritMerged openstack-infra/zuul-jobs master: Include ansible-playbook syntax-check for tox pep8  https://review.openstack.org/48449021:30
openstackgerritMerged openstack-infra/zuul-jobs master: Rename pep8 to linters for tox  https://review.openstack.org/48449121:30
jeblairSpamapS: yeah, given the complexities, i don't think we need to push on it now.21:30
jeblairSpamapS: thanks for digging in21:31
jeblairmordred, pabelanger: https://review.openstack.org/484953 executor errors is also green21:32
mordredjeblair, SpamapS: ANOTHER thing we could consider ...21:35
mordredwhich the ansible core team mentioned to us a while back but we did not do because it didn't solve _that_ problem21:35
mordredis pluggable connection strategies21:35
mordredor pluggable connection types21:36
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Remove export commands from tox based roles  https://review.openstack.org/48393621:36
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Remove subunit file size check for tox role  https://review.openstack.org/48451921:36
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Remove check_sudo_usage logic from tox  https://review.openstack.org/48491621:36
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Ansiblify pip freeze logic  https://review.openstack.org/48491721:36
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Create tox-pep8 jobs  https://review.openstack.org/48494821:36
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Compress testrepository.subunit in fetch-testr-output  https://review.openstack.org/48489621:36
mordredthe connection controls the actual transport layer and the execution strategy plugin controls how ansible uses the connection plugin AIUI21:37
mordredSpamapS: mostly mentioning because it might be another place to mention in the writeup of a place we could hook in to the management of this stuff21:38
pabelangerjeblair: +221:38
SpamapSmordred: indeed, I think the right place to do this might be in the ssh connection plugin itself.21:40
mordredjeblair: feature request re: implicit roles21:40
SpamapShm21:40
SpamapSI wonder if we already have this.21:40
mordredjeblair: on top of what you have, when you do the name inference - it is also a 'standard' thing for ansible galaxy to understand that a repo called ansible-role-foo contains a role named foo21:41
mordredjeblair: so stripping that prefix might also be a friendly thing for us to do21:42
SpamapSthe ControlPath by default is $HOME/.ansible/cp/$HASH_OF_THINGS21:42
SpamapSso if we just bind mounted that dir in.............21:42
SpamapSbut no, we run everything in bwrap now21:43
SpamapSso even if a trusted pre ran first, its ssh control master would get killed because of --die-with-parent21:43
jeblairSpamapS: yeah, and we're successfully (yay) killing all subprocs.  yeah that.21:43
SpamapSI'm happy just leaving this until we get into optimization cycles.21:43
mordredjeblair: "Galaxy strips <em>(ansible[-_.+]*)*(role[-_.+]*)*</em> from the beginning of the name." (found the docs on what it strips)21:43
jeblairSpamapS: yeah, i think that puts us right back at "ping all hosts" outside of the bwrap context.21:44
mordredhttps://github.com/ansible/galaxy/blob/9258039e3fd357ab70e742cd0cbaba3277dd8197/galaxy/templates/account/role_add.html#L17 for the record21:44
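The stripping rule mordred quotes from the Galaxy docs can be exercised directly with Python's `re` module:

```python
import re

# Pattern quoted from the Galaxy docs above: strips leading
# "ansible"/"role" prefixes (with -, _, ., + separators) from a
# repository name to derive the role name.
GALAXY_PREFIX = r"^(ansible[-_.+]*)*(role[-_.+]*)*"

def galaxy_role_name(repo_name):
    return re.sub(GALAXY_PREFIX, "", repo_name, count=1)

print(galaxy_role_name("ansible-role-cloud-launcher"))  # cloud-launcher
print(galaxy_role_name("ansible-role-foo"))             # foo
print(galaxy_role_name("zuul-jobs"))                    # zuul-jobs
```

Since both groups can match zero times, names without a prefix pass through unchanged.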
SpamapSHow many bwraps do you all think we will run on the most common jobs? 3? 4? That's how many extra SSH connection setups we'll start doing.21:44
mordredjeblair, SpamapS: ++21:44
mordred7 or 8 at least21:45
pabelangerjeblair: you can override that however in the requirements.yaml file, if I remember right21:45
pabelangermordred: ^sorry21:45
mordredbase/pre, unittest/pre, tox/pre, run, tox/post, unittest/post, base/post21:45
mordredpabelanger: yes you can - but that's consume-side21:45
mordredand we suport that21:45
mordredbut for implicit 'import git repo into galaxy and expose it to people' galaxy helpfully strips those prefixes21:46
pabelangerI've been thinking maybe we also support that in zuul.yaml, for a single role I call it ansible-role-foo as my git repo, but want zuul to install it to disk as openstack.foo21:46
mordredpabelanger: you can already! done :)21:46
pabelangerOh21:46
SpamapSmordred: yeah, so each of those will have a 3-4 second startup spike to setup a new SSH connection.21:46
pabelangernice21:46
pabelangermordred: didn't know that21:46
mordredSpamapS: yah - let's live until we don't :)21:47
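The overhead being accepted here can be checked with quick back-of-envelope arithmetic: taking mordred's seven-phase job layout and SpamapS's 3-4 second per-connection setup estimate, losing the control master in every bubblewrap adds roughly 21-28 seconds of pure SSH setup per job. A sketch using the figures from the discussion:

```python
# Back-of-envelope estimate of SSH setup overhead when every playbook
# phase runs in its own bubblewrap (so the control master dies each
# time). Phase list and 3-4s cost are taken from the discussion above.
phases = ["base/pre", "unittest/pre", "tox/pre", "run",
          "tox/post", "unittest/post", "base/post"]
low, high = 3, 4  # seconds per fresh SSH connection setup
print(len(phases))                            # 7 phases
print(len(phases) * low, len(phases) * high)  # 21 to 28 extra seconds
```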
mordredjeblair: oh - actually - I was just noodling - but we honestly may want to think about the implied name a little ...21:47
SpamapShttps://storyboard.openstack.org/#!/story/2001072 <-- zuulv3 tag removed. Analysis added.21:47
jeblairpabelanger: see the 'roles' section in https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html21:48
mordredjeblair: https://galaxy.ansible.com/openstack/cloud-launcher/ is in galaxy - it gets installed as 'openstack.cloud-launcher' if you install it - and it's from the openstack/ansible-role-cloud-launcher repo21:48
pabelangerjeblair: Yay, thanks21:49
jeblairmordred: why does it get installed as 'openstack.cloud-launcher'?21:49
mordredjeblair: so there are two customary transforms performed by galaxy to get to a default role name21:49
pabelangermordred: Wow, who did that21:49
pabelangeransible-role-diskimage-builder is there21:49
SpamapSadam_g: https://storyboard.openstack.org/#!/story/2000879 <-- you still looking at this?21:49
pabelangercool21:49
mordredjeblair: that's how they deal with namespacing of published roles - and because they're github-centric, they adopted the org prefix thing21:50
mordredjeblair: obviously we don't want to encode github org prefixes in our stuff21:50
jeblairmordred: right, but that's the *galaxy* prefix.  i don't think we can make any assumptions about git path prefixes.21:51
*** hashar has quit IRC21:51
mordredBUT - we probably want to consider two things a) what zuul's relationship to this is, should or should not be b) as infra/openstack how would we LIKE for galaxy to deal with our repos since they do not 'live' on github but they assume all roles have a prefix21:52
SpamapSpabelanger: https://storyboard.openstack.org/#!/story/2000789 is for using SSL for gearman. Isn't that completed?21:52
jeblairmordred: do we need to solve b) now, because we've discussed this before, and the conversation involves patching galaxy to support non-github repos.21:53
jeblairmordred: i'd rather defer that from this time and channel if possible.21:53
jeblairmordred: sorry, that first was intended to be a question21:53
pabelangerSpamapS: it is!21:53
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Ansiblify pip freeze logic  https://review.openstack.org/48491721:53
* SpamapS is still pretty suspicious about galaxy's usefulness in most cases. :-P21:53
SpamapSpabelanger: sweet, I'll mark it as such21:53
pabelangerSpamapS: yay21:54
mordredjeblair: I agree - I mention it because if we want to provide any prefix mapping at all to be similar to galaxy and try to meet expectations - I don't think we want to hard-encode anything related to "repo prefix" into zuul at all21:54
SpamapSwoot, https://storyboard.openstack.org/#!/story/2000897 also just wasn't properly annotated, and so I've closed that too21:55
* SpamapS is crushing it today21:55
mordredjeblair: so while we don't need to solve the mechanism - it might be useful to think about how we might want to express such a mapping to zuul, if at all21:55
SpamapSwhy write code when you can just mark the bugs irrelevant?21:55
mordredSpamapS: +10021:55
SpamapSmordred: "Make zuul_stream callback plugin more robust": https://storyboard.openstack.org/#!/story/2001085 ... can we .. live without that?21:56
SpamapSKinda feels like we can.21:57
SpamapS"If you're not embarrassed by your first release" ...21:57
mordredSpamapS: 4738 is we can live without21:57
mordredSpamapS: that's just a reminder to follow up with ansible upstream on our discussion21:58
mordredSpamapS: actually - you know what - we should have a board/list of things to track with ansible core over time21:58
SpamapSAgree. zuul-ansible tag? :)21:58
mordredSpamapS: for instance, I'm looking at the inventory plugin interface from 2.4 - I don't think it matters to us - but it's a task that might spawn async work21:59
mordred++21:59
jeblairmordred: most of what we're looking at here are role collection repos where the implied name doesn't matter anyway.  i'm not sure we have any use cases at the moment where the implied name does matter?21:59
SpamapSIf we ever have to rename the project.. Zanzibar would be fun. :)21:59
mordredjeblair: we can certainly cross the bridge when we hit it - I think when we import ansible-role-cloud-launcher is when we'll have a specific use case to see how it feels22:00
SpamapSmordred: ok, so I'm going to drop zuulv3 from that story, and add zuul-ansible to it, and any other stories which are about interfacing upstream with Ansible.22:00
mordredSpamapS: ++22:00
jeblairmordred: are we going to have any playbooks run by zuul that use cloud-launcher?22:01
mordredjeblair: maybe?22:01
mordredjeblair: but let's put a pin in it for now - I agree, it's not urgent today - sorry for the noise22:03
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Remove ansible-role from implied role names  https://review.openstack.org/48496222:04
jeblairmordred: i'd already written that much at least ^ :)22:04
mordredwoot22:05
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Ansiblify pip freeze logic  https://review.openstack.org/48491722:10
pabelangermordred: jeblair: So, if we do that by default, something like https://review.openstack.org/#/c/484526/ will be interesting. Since we have a bindep role for zuul-jobs22:12
pabelangerI know we talked about namespacing zuul-jobs roles22:13
mordredpabelanger: that is, in fact, an excellent example of a single-role repo that might want to be run by zuul and that might also want to be published to galaxy22:13
mordredpabelanger: although when I'm not on a plane I'd like to voice-chat with you about bindep end-to-end story22:14
pabelangermordred: ya, I think to work around conflict we could have name: openstack.bindep for zuul.yaml job22:14
mordredpabelanger: that one is multi-faceted and I think each of us has 1/2 to 2/3 of the facets - so we need to figure out the overlap and also the missing :)22:14
mordredpabelanger: totally. turns out we can be explicit there22:15
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Return executor errors to user  https://review.openstack.org/48495322:15
pabelangermordred: agree, that role just installs bindep from multiple sources, and that is it. our zuul-job bindep actually runs bindep.  I think one could depend on the other22:15
pabelangermordred: or we can be more explicit in the role name: eg: ansible-role-install-bindep22:15
pabelangerbut yes, once you are off plane22:16
mordredpabelanger: ++ exactly - because I think using bindep to install packages is a great story we can start telling folks22:17
SpamapSholy cow.. ansible has.... a lot of open issues22:18
mordredpabelanger: oh - what if we just had the bindep role be both, but had an "install: true" option (or something) so that it'll install bindep if-needed, but without that will just run it as our zuul-jobs role does?22:18
SpamapSI just threw this on the pile: https://github.com/ansible/ansible/issues/27016  :-P22:18
pabelangermordred: maybe? I'm on the fence about merging them into a single role. I like the idea of a dependency however. That way each role can be purpose specific22:21
pabelangermordred: but it is something I'd like to experiment with22:21
pabelangermordred: that's why I proposed the project-config change to see what it would look like as a role dependency22:21
mordredSpamapS: fwiw - they moved a debug line from after the Popen to before it in devel22:23
mordredSpamapS: before line 816 of lib/ansible/plugins/connection/ssh.py they added: display.vvv(u'sending stop: %s' % cmd)22:24
mordredSpamapS: which was after the communicate in 2.322:24
pabelangermordred: jeblair: okay, so here is something crazy. Apparently tox will pip freeze today: http://logs.openstack.org/48/484948/2/check/tox-linters/18091d5/tox/linters-1.log.txt22:24
pabelangermordred: jeblair: so, I guess we don't need to write anything in ansible atm22:25
pabelangerunless we want to use pbr freeze22:25
pabelangerbut not sure the difference22:25
mordredpbr freeze isn't super useful in this context22:26
pabelangerhttp://tox.readthedocs.io/en/latest/config.html#confval-list_dependencies_command22:26
pabelangerpip freeze is the default it seems in 2.422:26
pabelangeralso much easier for jobs to configure that if they want PBR22:27
mordredpabelanger: awesome. let's stop doing it ourselves then22:27
pabelangermordred: ++22:27
mordred\o/22:27
SpamapSmordred: yeah but the IndexError happens because cmd == []22:28
SpamapSI should try it on master22:28
SpamapSor devel I guess22:28
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Remove pip freeze logic  https://review.openstack.org/48491722:28
mordredSpamapS: oh - I wonder if it's a 3.5 bug22:29
mordredSpamapS: cmd = map(to_bytes, self._build_command(self._play_context.ssh_executable, '-O', 'stop', self.host))22:29
mordredSpamapS: like - what if map(to_bytes is failing badly there22:29
mordredwell - I say that - but you shouldn't even be running the command if controlpersist is False, and it should not be True if cmd is empty22:30
mordred(see _persistence_controls)22:30
mordredSpamapS: you have nerd-sniped me22:31
jeblairmordred: abandon https://review.openstack.org/479446 ?22:36
openstackgerritMerged openstack-infra/zuul-jobs master: Add sample base job  https://review.openstack.org/48448522:37
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Require tox_envlist for tox role  https://review.openstack.org/48497522:38
pabelangermordred: jeblair: ^ that stack is ready for some feedback22:38
SpamapSmordred: totally ;)  anyway, tried on devel, still present22:42
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Add job's project as implicit role project  https://review.openstack.org/48272622:44
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Remove ansible-role from implied role names  https://review.openstack.org/48496222:44
jeblairpabelanger: zuul has provided negative feedback on 48497522:48
jeblairi'm going to restart zuulv3.o.o to pick up our recent fixes22:50
jeblairdone22:51
SpamapSmordred: you know what it is? subprocess doesn't take an iterator. It takes a _list_22:52
SpamapSoh hm no, it actually will convert it to a list above, hm22:53
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Require tox_envlist for tox role  https://review.openstack.org/48497522:55
SpamapSmordred: to end the nerd snipe, if I turn it into a list in the reset() method, subprocess works, but then we get a new fun problem:23:02
SpamapSERROR! Cannot reset connection:23:02
SpamapSb'Control socket connect(/home/clint/.ansible/cp/276d6f8b70): No such file or directory\r\n'23:02
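The failure mode being chased here is easy to reproduce in isolation: in Python 3, `map()` returns a one-shot iterator, so any code path that consumes `cmd` before handing it to `subprocess` leaves an empty sequence behind. A minimal reproduction of the iterator pitfall (not Ansible's actual code path):

```python
# Minimal reproduction of the py3 map() pitfall discussed above: a map
# object is a single-use iterator, so consuming it once (e.g. in a
# debug/inspection step) leaves nothing for the real consumer, and a
# later cmd[0]-style access on the resulting empty list raises IndexError.
def to_bytes(s):
    return s.encode("utf-8")

cmd = map(to_bytes, ["ssh", "-O", "stop", "example.host"])
first_pass = list(cmd)   # something inspects/logs the command...
second_pass = list(cmd)  # ...and the next consumer sees nothing
print(first_pass)
print(second_pass)       # []

# The fix SpamapS describes: materialize the list once, up front.
cmd = list(map(to_bytes, ["ssh", "-O", "stop", "example.host"]))
print(cmd[0])  # b'ssh'
```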
jeblairmordred: abandon https://review.openstack.org/479253 ?23:03
* SpamapS lets it go like Elsa23:03
jeblairSpamapS: how many times have you seen Elsa let it go?23:04
* jeblair assumes something like maxint23:05
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Require tox_envlist for tox role  https://review.openstack.org/48497523:06
pabelangerhey, I see a job-output.json file :)23:08
jeblairneat.  ff doesn't like it:  SyntaxError: JSON.parse: expected property name or '}' at line 464 column 18 of the JSON data23:09
jeblairhttp://logs.openstack.org/75/484975/2/check/tox-py35-on-zuul/d1780cf/job-output.json23:09
SpamapSjeblair: we're certainly way past 16-bits23:09
SpamapSand that's just with boys.. who got tired of it already.. ;)23:10
pabelangerjeblair: loaded okay in chrome23:11
jeblairoh that's weird.  now it's fine...23:11
jeblairshrug23:11
jeblairmordred, pabelanger: the log on http://logs.openstack.org/75/484975/2/check/tox-py35-on-zuul/d1780cf/job-output.txt is *much* better, but i think it can still be improved23:14
pabelanger++23:14
mordredjeblair: YES!23:15
mordredfor one the blank line before the OK line isn't necessary23:15
jeblairmordred, pabelanger: i think we can actually get rid of some of the whitespace now... -- like exactly that, mordred.  :)23:15
mordredjeblair: also - I think we can be slightly smarter with task banners ...23:16
jeblairmordred: how so?23:16
pabelanger+1 for smaller task banners23:16
mordredhrm. no -nevermind - my idea was bad23:16
mordredjeblair: I really kind of want to be able to mark shell tasks as streaming or non-streaming23:17
mordredI mean - I don't actually23:17
mordredit wouldn't work - but my display brain wants to be able to go back in time for many of these tiny commands23:17
pabelangerI still prefer the ansible default task banner: PLAY [Bootstrap node.] ********************************************************* for example23:17
pabelangerbut agree whitespace before okay can be removed23:18
mordredpabelanger: is it the stars that you like?23:18
openstackgerritMerged openstack-infra/zuul feature/zuulv3: github: prevent getRepoPermission to raise AttributeError  https://review.openstack.org/47536823:18
mordredpabelanger: or the lack of the phase/playbook info?23:18
mordredor both?23:18
pabelangermordred: lack of variable info mostly. But stars to break it up a bit.23:19
jeblairpabelanger: what do you mean by 'variable info'?23:20
pabelanger TASK [add-build-sshkey : Distribute it to all nodes user={{ ansible_ssh_user }}, state=present, key={{ lookup('file', zuul_temp_ssh_key + '.pub') }}]23:20
pabelangerthat seems overloaded to me23:21
mordredyah. sorry - I'd meant to have removed those already23:21
SpamapSthe stars do break things up visually23:25
SpamapSbut I'm not sure if thats just habitually comfortable, or truly important23:25
SpamapSI'm kind of at the point where I just want ARA pages.23:26
jeblairSpamapS: i'm happy we'll be able to fill that desire while having nice text logs.  :)23:26
SpamapSyeah23:27
SpamapStext for the greppin, ARA for the lookin23:27
* dmsimard ARA senses are tingling23:27
SpamapSHonestly, these look great.23:27
jeblairwe'll make them better.  :)23:28
jeblairwhen it comes to text layout, some of us may be... detail oriented.  :)23:28
dmsimardYou just know jeblair will make an ansibletty or an aratty thing anyway :)23:28
jeblairdmsimard: i wasn't even the first person to write a tui for zuul status.  :)23:29
dmsimardThere's a text interface to zuul status ??23:29
mordredpabelanger, jeblair: what about the space between play and task?23:30
jeblairdmsimard: https://github.com/harlowja/gerrit_view/blob/master/README.rst#czuul23:31
jeblair(that helped push me toward urwid for gertty)23:31
pabelangermordred: I think that is fine23:31
mordreddmsimard: writing up some thoughts on zuul text-logging, ara and tristanC's dashboard is on my todo list - I will also have some questions for you - probably not this week, but probably next week23:31
pabelangermordred: something like http://paste.openstack.org/show/615789/ ? (ignore star for now)23:32
mordredpabelanger: k. I'm going to split the playbook part of the banner to its own line- then trim down the task lines23:32
pabelangerfor whitespacing23:32
dmsimardjeblair: that zuul UI is nice!23:33
dmsimardDefinitely going to try it out.23:33
jeblairmordred: i'd probably drop the space between play and task.  but i think it's more important to drop the intra-task whitespace first.  then i'll probably have a stronger opinion on play->task.23:33
dmsimardmordred: sure23:34
pabelangermordred: hmm ansible-playbook --syntax-check is failing because it thinks zuul_stream is a typo :(23:38
pabelangerwill have to figure out how to fix it23:38
pabelangerI'll play with it more in the morning23:40

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!