jlk | incoming stack of doom | 01:43 |
jlk | I still need to implement the review/status requirements as a trigger requirement, but that should be a fairly easy follow-on patch. Only new code is required for 'status'; all the existing code works for reviews. | 01:44 |
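As an editorial aside, here is a rough sketch of the kind of pipeline requirement being discussed. The key names and values below are illustrative assumptions, not the exact syntax of the patches linked later in this log:

```yaml
# Hypothetical pipeline requirement on GitHub reviews and head status --
# key names and values are placeholders for illustration only.
- pipeline:
    name: gate
    require:
      github:
        review:
          - type: approved            # require an approving PR review
        status: "check:success"       # require a passing status on the PR head
```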
jlk | I spoke too soon, there's another incompatible change on the tip of feature/zuulv3 | 01:51 |
*** jamielennox is now known as jamielennox|away | 01:59 | |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: support github pull request labels https://review.openstack.org/444511 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Allow github trigger to match on branches/refs https://review.openstack.org/445625 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Set filter according to PR/Change in URL https://review.openstack.org/446782 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add support for requiring github pr head status https://review.openstack.org/449390 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Support for dependent pipelines with github https://review.openstack.org/445292 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Adds github triggering from status updates https://review.openstack.org/453844 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Implement pipeline requirement on github reviews https://review.openstack.org/453845 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add basic Github Zuul Reporter. https://review.openstack.org/443323 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Configurable SSH access to GitHub https://review.openstack.org/444034 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Represent github change ID in status page by PR number https://review.openstack.org/460716 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add 'push' and 'tag' github webhook events. https://review.openstack.org/443947 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add 'pr-comment' github webhook event https://review.openstack.org/443959 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Support for github commit status https://review.openstack.org/444060 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Better merge message for GitHub pull requests https://review.openstack.org/445644 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Test gerrit and github drivers in same tenant https://review.openstack.org/448257 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Encapsulate determining the event purpose https://review.openstack.org/445242 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Merge pull requests from github reporter https://review.openstack.org/444463 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Include exc_info in reporter failure https://review.openstack.org/460765 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add support for github enterprise https://review.openstack.org/449258 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Support GitHub PR webhooks https://review.openstack.org/439834 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: GitHub file matching support https://review.openstack.org/446113 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Ensure PRs aren't rejected for stale negative reviews https://review.openstack.org/460700 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add trigger capability on github pr review https://review.openstack.org/449365 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Log GitHub API rate limit https://review.openstack.org/446150 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Comment on PRs if a remote call to merge a change failed https://review.openstack.org/460762 | 02:02 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add cachecontrol to requests to github https://review.openstack.org/461587 | 02:02 |
*** jamielennox|away is now known as jamielennox | 02:13 | |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: Use full path to socat in devstack plugin https://review.openstack.org/461651 | 06:38 |
*** hashar has joined #zuul | 07:11 | |
*** Cibo_ has joined #zuul | 07:34 | |
*** Cibo_ has quit IRC | 07:58 | |
*** jamielennox is now known as jamielennox|away | 08:02 | |
*** tobiash has quit IRC | 08:06 | |
*** tobiash has joined #zuul | 08:07 | |
*** herlo has quit IRC | 08:18 | |
*** jamielennox|away is now known as jamielennox | 08:22 | |
*** herlo has joined #zuul | 08:31 | |
*** auggy has quit IRC | 09:43 | |
*** mattclay has quit IRC | 09:43 | |
*** auggy has joined #zuul | 09:53 | |
*** mattclay has joined #zuul | 09:57 | |
*** jkilpatr has quit IRC | 10:32 | |
*** jkilpatr has joined #zuul | 11:03 | |
*** hashar is now known as hasharAway | 13:26 | |
*** openstackgerrit has quit IRC | 13:48 | |
*** auggy has quit IRC | 14:43 | |
*** robcresswell has quit IRC | 14:43 | |
*** dmsimard has quit IRC | 14:44 | |
*** mattclay has quit IRC | 14:45 | |
*** Shrews has quit IRC | 14:45 | |
*** eventingmonkey has quit IRC | 14:46 | |
*** dmsimard has joined #zuul | 14:46 | |
*** mattclay has joined #zuul | 14:47 | |
*** greghaynes has quit IRC | 14:47 | |
*** auggy has joined #zuul | 14:50 | |
*** eventingmonkey has joined #zuul | 14:51 | |
*** Shrews has joined #zuul | 14:54 | |
*** greghaynes has joined #zuul | 14:57 | |
pabelanger | Shrews: clarkb: regarding https://review.openstack.org/#/c/461239/ is that something we'd want to move forward with? As jeblair mentioned, we could do this with different nodepool.yaml files today. My only concern about multiple nodepool.yaml files for different builders is the risk of copy-paste errors or forgetting to update one file | 15:04 |
pabelanger | I don't have much of a preference now, so I'm happy to follow the herd on this | 15:05 |
Shrews | pabelanger: dunno, that seems a bit awkward at first glance. i could see someone saying "why do i have to name the builder? i could just leave the image out of my config". also, don't you still have the same problem of forgetting to update a config file (oops, i forgot to name this new builder in the config)? | 15:10 |
Shrews | but this is the first i've seen it, so perhaps i should give it more thought | 15:13 |
jeblair | perhaps we should try the multiple config file approach before we discard it? | 15:13 |
pabelanger | Yes, we can try that approach first | 15:14 |
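A minimal sketch of the "multiple nodepool.yaml files" approach being discussed: each nodepool-builder host is started with its own config listing only the diskimages it should build. Image and element names here are illustrative:

```yaml
# builder-01's nodepool.yaml -- this builder only builds the Ubuntu image.
# (A second builder would get its own file listing, e.g., only the SUSE image.)
diskimages:
  - name: ubuntu-xenial
    elements:
      - ubuntu-minimal
      - nodepool-base
```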
*** Cibo_ has joined #zuul | 15:16 | |
clarkb | fwiw I am a fan of directly configuring things | 15:16 |
clarkb | it would be odd to me to configure a bunch of stuff in a file then disable half of it | 15:17 |
jeblair | pabelanger: having said that, the patch looks good, so if we decide that we want to add the feature, i think it's about ready to go. | 15:18 |
pabelanger | great | 15:21 |
jlk | incoming :( | 15:27 |
*** openstackgerrit has joined #zuul | 15:28 | |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: support github pull request labels https://review.openstack.org/444511 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Allow github trigger to match on branches/refs https://review.openstack.org/445625 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add cachecontrol to requests to github https://review.openstack.org/461587 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Set filter according to PR/Change in URL https://review.openstack.org/446782 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add support for requiring github pr head status https://review.openstack.org/449390 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Support for dependent pipelines with github https://review.openstack.org/445292 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Adds github triggering from status updates https://review.openstack.org/453844 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Implement pipeline requirement on github reviews https://review.openstack.org/453845 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add basic Github Zuul Reporter. https://review.openstack.org/443323 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Configurable SSH access to GitHub https://review.openstack.org/444034 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Represent github change ID in status page by PR number https://review.openstack.org/460716 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add 'push' and 'tag' github webhook events. https://review.openstack.org/443947 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add 'pr-comment' github webhook event https://review.openstack.org/443959 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Support for github commit status https://review.openstack.org/444060 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Better merge message for GitHub pull requests https://review.openstack.org/445644 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Test gerrit and github drivers in same tenant https://review.openstack.org/448257 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Encapsulate determining the event purpose https://review.openstack.org/445242 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Merge pull requests from github reporter https://review.openstack.org/444463 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Include exc_info in reporter failure https://review.openstack.org/460765 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add support for github enterprise https://review.openstack.org/449258 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Support GitHub PR webhooks https://review.openstack.org/439834 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: GitHub file matching support https://review.openstack.org/446113 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Ensure PRs aren't rejected for stale negative reviews https://review.openstack.org/460700 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add trigger capability on github pr review https://review.openstack.org/449365 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Log GitHub API rate limit https://review.openstack.org/446150 | 15:28 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Comment on PRs if a remote call to merge a change failed https://review.openstack.org/460762 | 15:28 |
clarkb | pabelanger: as far as suse goes why don't we just spin up enough xenial builders to backfill trusty, delete trusty builders, then add suse? | 15:28 |
jeblair | i thought that was the plan | 15:29 |
pabelanger | clarkb: we could; I was thinking we could bring xenial online first, ensure it worked, then migrate the rest | 15:29 |
dmsimard | jlk: good job on that topic btw, *high five* | 15:29 |
pabelanger | xenial first, with suse... but your way is also fine | 15:29 |
pabelanger | blocked on vhdutils atm | 15:30 |
clarkb | I thought vhd utils got updated? | 15:30 |
pabelanger | failed to build, mordred needs to push up a fix | 15:30 |
clarkb | ah | 15:30 |
pabelanger | but PPA works | 15:30 |
jlk | dmsimard: thanks! Most of it is other people's code. I'm just a shepherd. | 15:30 |
pabelanger | trying to get bubblewrap working now | 15:30 |
jlk | oh son of a..... | 15:31 |
jlk | the very tip of the stack has a merge failure. | 15:33 |
dmsimard | jlk: yay stackalytics stats? | 15:35 |
jlk | jamielennox: I addressed all your concerns in another rebase. jeblair the tip has a merge conflict, but I'm not going to rebase again just for that. I'll let y'all review first. | 15:35 |
jeblair | jlk: ++ | 15:35 |
Shrews | i can't remember a time when i looked at gerrit and didn't see jlk's name consuming the board | 15:38 |
Shrews | :) | 15:38 |
jlk | it's been a long couple of months | 15:38 |
jeblair | i have a separate dashboard for the github patches :) | 15:38 |
jeblair | though, if you want to do that with gertty, you'll need this patch: https://review.openstack.org/447148 | 15:39 |
jeblair | with that, these dashboard queries work: http://paste.openstack.org/show/608610/ | 15:40 |
jeblair | i think i will merge the gertty patch; it's been working. | 15:41 |
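For context, gertty dashboards are defined in ~/.gertty.yaml; a minimal sketch in the shape of the queries pasted above (the query text and key binding here are placeholders, not the contents of that paste):

```yaml
# ~/.gertty.yaml excerpt -- query and key binding are placeholders
dashboards:
  - name: "Zuul v3 GitHub driver reviews"
    query: "status:open project:openstack-infra/zuul"
    key: "f2"
```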
jlk | I should probably jump on that band wagon | 15:43 |
jlk | huh, it's not responding to my 'L' press | 15:46 |
jeblair | jlk: is it maybe just really slow? | 15:47 |
jlk | maybe, hard to tell what it's doing. Could also be that I'm on OSX and it's just laughing at me in the background | 15:47 |
jeblair | the first L does a bunch of db queries which takes 4.5s on my machine. it gets better after that since it caches the results. | 15:49 |
jlk | oh hrm, I've a pre-existing ~/.gertty.yaml file. I wonder if that's causing issues. | 15:49 |
jeblair | jlk: running with '-d' and watching .gertty.log may shed light. | 15:49 |
jlk | cool, giving it a shot | 15:49 |
jlk | ah it's doing a lot of syncing in the background | 15:51 |
jlk | oh dear, it's doing git clones in the background too | 15:51 |
*** Cibo_ has quit IRC | 16:04 | |
SpamapS | quick question that I need to gain some clarity on | 16:04 |
SpamapS | if one has a job that builds, say, a go binary.. and one wants to use that go binary in several other jobs after.. how exactly does that mechanism work? | 16:04 |
* SpamapS has never actually done that | 16:04 | |
clarkb | jlk: fwiw it looks like jenkins is happy with your stack? | 16:12 |
pabelanger | https://launchpad.net/~pabelanger/+archive/ubuntu/bubblewrap | 16:27 |
pabelanger | \o/ | 16:27 |
openstackgerrit | Merged openstack-infra/nodepool master: Use full path to socat in devstack plugin https://review.openstack.org/461651 | 16:28 |
SpamapS | pabelanger: w00t | 16:30 |
SpamapS | pabelanger: where does the currently running zuulv3 config live btw? | 16:30 |
pabelanger | SpamapS: project-config has some configs | 16:31 |
SpamapS | pabelanger: could you point me exactly? thanks | 16:32 |
pabelanger | SpamapS: http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.yaml is what I am thinking of | 16:33 |
openstackgerrit | Clark Boylan proposed openstack-infra/nodepool feature/zuulv3: Use full path to socat in devstack plugin https://review.openstack.org/461846 | 16:34 |
clarkb | Shrews: ^ there is the v3 fix, master fix has been approved | 16:34 |
*** robcresswell has joined #zuul | 16:42 | |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: [WIP] Add bubblewrap to bindep / test-setup.sh https://review.openstack.org/461849 | 16:49 |
jlk | clarkb: it's happy with a lot of it, but some needs a recheck, and I think there is a merge failure at the tip | 16:54 |
clarkb | the merge failure should've reported though? | 16:55 |
clarkb | instead the job ran successfully | 16:55 |
jlk | weird, | 16:56 |
jlk | maybe it was the zuul jobs that barked about that, instead of the jenkins jobs | 16:57 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: [WIP] Add bubblewrap to bindep / test-setup.sh https://review.openstack.org/461849 | 16:57 |
jlk | clarkb: yeah it was zuul not jenkins. See https://review.openstack.org/#/c/461587/ and toggle CI | 16:58 |
jlk | zuul commented, but that doesn't show up unless you toggle CI | 16:58 |
jeblair | SpamapS: we don't yet have a good built-in method for reusing artifacts between jobs. it's something we'd like to improve, but we don't have a design yet. in openstack, we copy artifacts to tarballs.openstack.org and then copy them back (though you need a cleanup mechanism). | 17:04 |
*** harlowja has quit IRC | 17:04 | |
SpamapS | pabelanger: ty | 17:04 |
SpamapS | jeblair: ACK. | 17:05 |
jeblair | SpamapS: in zuulv3, you could use swift and some roles to move artifacts around like that fairly easily, as an entirely user-side solution. | 17:05 |
jeblair | SpamapS: further improvements might include building that into zuul itself, or, moving the unit of work handled by an executor to be "buildset" rather than "build". if you can guarantee a given executor handles all the builds for an item, then you can use the executor itself to move artifacts between jobs. | 17:06 |
SpamapS | jeblair: I think having a role that users can tack on like "publish-artifacts" and "subscribe-artifacts" or something would be nice. | 17:06 |
jeblair | (that just makes the scaling/HA unit more coarse-grained) | 17:06 |
jeblair | SpamapS: yeah. with roles like that, you can use the buildset uuid to key the artifact. | 17:07 |
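A sketch of that user-side approach as a post playbook, assuming the executor exposes the buildset UUID to Ansible (shown here as a zuul.buildset variable, which is an assumption) and that Swift credentials are available on the node:

```yaml
# Hypothetical post playbook: publish a build artifact keyed by buildset UUID
# so later jobs in the same buildset can fetch it. The container name,
# variable names, and artifact path are all illustrative.
- hosts: all
  tasks:
    - name: Upload the built binary to a shared Swift container
      command: >
        swift upload artifacts
        --object-name "{{ zuul.buildset }}/mybinary"
        "{{ ansible_user_dir }}/workspace/mybinary"
```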
SpamapS | I like the idea of using swift or something like it. The simplicity outweighs the cost and exclusiveness. | 17:07 |
jeblair | SpamapS: main challenge is cleanup. when do you delete it? the job graph only runs dependent jobs when parents are successful; we may need to add an option to say "run this child job even if parents are unsuccessful". then you'd get a cleanup job guaranteed to run. | 17:08 |
jlk | clarkb: ah, https://review.openstack.org/#/c/419684/ is in merge conflict :/ | 17:08 |
jlk | actually I think that's a bad change. I don't think it's part of the stack. | 17:09 |
SpamapS | jeblair: yeah seems like we might need to have a zuul equivalent of .addCleanup(..) ;) | 17:09 |
jeblair | heh | 17:09 |
SpamapS | post's always run yeah? | 17:09 |
jeblair | SpamapS: yes, but that's just within a given job | 17:09 |
SpamapS | or maybe that's a suggestion... that there be a way of saying a post job always runs. | 17:10 |
SpamapS | oh | 17:10 |
SpamapS | right | 17:10 |
SpamapS | this is one level up | 17:10 |
jlk | weeeird. | 17:10 |
jeblair | ya | 17:10 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add bubblewrap to bindep / test-setup.sh https://review.openstack.org/461849 | 17:10 |
pabelanger | ^ gets bubblewrap onto our nodes | 17:10 |
jlk | OOOOH that's on master, not the v3 branch. LOL | 17:10 |
SpamapS | jeblair: in addition to 'dependencies:' how about 'cleanups:' ? | 17:10 |
jeblair | SpamapS: that could work | 17:11 |
SpamapS | pabelanger: wewt | 17:11 |
jeblair | pabelanger: \o/ | 17:11 |
pabelanger | SpamapS: any objection if I rebase 453851 | 17:11 |
SpamapS | jeblair: now that I think about it, it gets really important with things like AWS creds used in jobs | 17:11 |
SpamapS | pabelanger: please do! | 17:11 |
clarkb | dependencies is sufficient right? | 17:12 |
clarkb | (I know it seems like making it go both directions is great, but puppet did it that way and it was always a mess, so much so that style guides for many setups forced you to always go one direction) | 17:13 |
jeblair | SpamapS: how so? | 17:13 |
jeblair | clarkb: dependencies is okay if we add an extra dimension to say whether success is required or not (B depends on A *succeeding*, or B depends on A *completing*) | 17:14 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add support for bwrap https://review.openstack.org/453851 | 17:14 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add bubblewrap to bindep / test-setup.sh https://review.openstack.org/461849 | 17:14 |
clarkb | jeblair: oh right as currently the restriction is succeeding to trigger a dep | 17:14 |
jeblair | it's possible that the dependencies with optional success is the more flexible approach; it may let you do more things than "cleanups:" would | 17:15 |
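Purely to illustrate the idea being floated (none of this syntax existed at the time), a cleanup job expressed as a dependency that only requires its parent to have completed, not succeeded:

```yaml
# Hypothetical syntax -- the "require-success" knob is invented here to
# illustrate "B depends on A completing" versus "B depends on A succeeding".
- project:
    check:
      jobs:
        - provision-aws
        - integration-test:
            dependencies:
              - provision-aws
        - cleanup-aws:
            dependencies:
              - name: provision-aws
                require-success: false   # run even if provisioning failed
```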
SpamapS | jeblair: If my job spins up instances on AWS and hands them to a few dependent jobs, I want to make sure they get cleaned up no matter what. | 17:17 |
jeblair | SpamapS: gotcha | 17:17 |
SpamapS | and I'm working on a job just like that right now :-P | 17:17 |
SpamapS | (for github.com/cncf/demo) | 17:17 |
SpamapS | which spins up instances/loadbalancers/etc in google cloud, azure, aws | 17:17 |
SpamapS | Without cleanup jobs, I'll just have to have a bot that cleans up. | 17:18 |
SpamapS | and nodepool support for *alltheclouds* won't work either because the test is "can this provisioning tool thing (terraform+things) still work with actual real AWS" | 17:19 |
SpamapS | jeblair: I think I'll write this up in storyboard so we don't forget. | 17:19 |
*** Cibo_ has joined #zuul | 17:20 | |
jlk | jeblair: any idea why gertty would otherwise work, but seems to ignore L ? | 17:20 |
jlk | no output in the log when I press that | 17:20 |
jeblair | SpamapS: cool, thanks. | 17:22 |
jeblair | jlk: hrm, any chance your config file remaps the 'L' key? | 17:23 |
jlk | I have no keymaps in my config | 17:23 |
jeblair | jlk: very strange; that hasn't changed in a long time | 17:23 |
jeblair | jlk: 'f1' says 'L Toggle whether only subscribed projects or all projects are listed' ? | 17:25 |
jlk | no, that's not showing in F1 | 17:25 |
jeblair | jlk: does that show up with another key or not at all? | 17:25 |
jlk | oh wait, it does on this home screen, trying again | 17:26 |
jlk | F1 says that, but pressing L has no effect | 17:26 |
jeblair | jlk: does the screen title change at all? 'Subscribed projects with unreviewed changes' / 'All projects' ? | 17:27 |
jeblair | (if not, what does it say?) | 17:27 |
jeblair | jlk: also, what version of gertty? (help screen should tell you) | 17:28 |
jlk | says "All projects" at the very bottom | 17:29 |
jlk | does not change if I press L | 17:29 |
jlk | version is 1.4.0 | 17:29 |
jeblair | jlk: the bottom is a navigation bar that indicates history, so if "All Projects" is there, you're probably on a screen other than the project list. the current screen title is at the top. if you hit 'esc' you should get back to the project list (and the bar at the bottom should be empty) | 17:31 |
jlk | ahhh. | 17:34 |
jlk | weeeird. | 17:34 |
jlk | it's interesting that the project list doesn't appear to be the default screen when you launch gertty | 17:35 |
jlk | okay, nodepool and zuul subscribed, are there others I should watch? | 17:36 |
jeblair | jlk: it should be the default screen. you may want to subscribe to infra-specs too. | 17:39 |
jlk | k | 17:39 |
jeblair | i set the relevant topics to 'zuulv3' on the specs, so they should show up in those queries i pasted earlier | 17:39 |
pabelanger | question around config-projects / untrusted-projects. Would openstack-infra ever have our tox jobs in an untrusted-project, like the setup we have in feature/zuulv3 ATM? | 17:40 |
jeblair | pabelanger: yes, almost certainly for project-specific tox envs (like our zuul-nodepool functional tests) | 17:41 |
jeblair | pabelanger: we might possibly even want to put the standard ones in some new untrusted repo, so it's easier to speculatively modify them. | 17:42 |
pabelanger | yes, I guess that is what I might be asking. If we want to break out the playbooks today from http://git.openstack.org/cgit/openstack-infra/zuul/tree/playbooks/tox?h=feature/zuulv3 (note I consider these very specific to openstack-infra), I don't think openstack-infra/project-config is the place (since it is a config-project). What would we consider calling the new repo? openstack-infra/zuul-jobs? | 17:43 |
SpamapS | woot, bwrap change passed tests :) | 17:48 |
jeblair | pabelanger: i think we should reserve "^zuul-.*" for zuul-related repos; so 'infra-jobs', 'infra-roles', 'project-jobs', 'project-roles', 'openstack-zuul-jobs', 'openstack-zuul-roles' i think are better candidates. | 17:48 |
pabelanger | openstack-zuul-jobs seems nice | 17:49 |
jeblair | pabelanger: the zuul stdlib repo(s) might be named 'zuul-jobs' or 'zuul-roles'. | 17:49 |
pabelanger | jeblair: yes, that would make more sense | 17:49 |
pabelanger | let me see if I can get a topic on openstack-infra meeting for today | 17:49 |
*** Cibo_ has quit IRC | 17:58 | |
*** harlowja has joined #zuul | 18:00 | |
*** Cibo_ has joined #zuul | 18:09 | |
pabelanger | jeblair: SpamapS: I think we need to make the bubblewrap driver know about the venv when we run tox jobs | 18:21 |
*** jamielennox is now known as jamielennox|away | 18:22 | |
pabelanger | http://paste.openstack.org/show/608626/ | 18:22 |
*** Cibo_ has quit IRC | 18:24 | |
pabelanger | but, that was running bubblewrap under fedora, so yay | 18:24 |
dmsimard | jlk: You got me curious about the bonnyci implementation you mentioned, is that public ? | 18:48 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Add untrusted-projects ansible test https://review.openstack.org/461881 | 18:52 |
pabelanger | SpamapS: ^fails to run locally for me | 18:52 |
pabelanger | using bubblewrap | 18:52 |
*** hasharAway has quit IRC | 18:58 | |
*** hashar has joined #zuul | 18:58 | |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Use full path to socat in devstack plugin https://review.openstack.org/461846 | 19:12 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Collect test-results for tox jobs https://review.openstack.org/452991 | 20:01 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul feature/zuulv3: Collect test-results for tox jobs https://review.openstack.org/452991 | 20:11 |
*** jkilpatr has quit IRC | 20:22 | |
jlk | dmsimard: the ARA bits? | 20:22 |
* dmsimard nods | 20:22 | |
jlk | https://github.com/BonnyCI/hoist/tree/master/roles/ara | 20:23 |
dmsimard | ah, I've been trying to do a role myself but ENOTIME https://github.com/openstack/ansible-role-ara | 20:24 |
dmsimard | https://github.com/BonnyCI/hoist/blob/master/roles/ara/files/usr/local/bin/ara-prune is interesting :) | 20:24 |
jlk | It dumps down to https://bastion.opentechsjc.bonnyci.org/ara/ | 20:25 |
jlk | yeah, since our playbooks run every 15 minutes in cron, we were building up a rather large pile of data | 20:25 |
dmsimard | Thanks for sharing, I'll dig into it. There's some cool stuff coming in the next version (0.13) -- hopefully this week. | 20:29 |
jlk | nice! | 20:37 |
jlk | You can find us over in #bonnyci if you'd like to chat. rattboi is the one who has done a lot of that work. | 20:37 |
rattboi | dmsimard: yeah, I'm already in #ara as well | 20:40 |
pabelanger | dmsimard: is there a way to list reports from node level? Eg: report last playbook run on a given host | 20:40 |
dmsimard | pabelanger: no, hosts are unique per playbook by design | 20:41 |
dmsimard | It's a bit hard to explain but, basically, ansible has no way to tell if the host is really the same | 20:41 |
dmsimard | say, you have an inventory that goes "webserver ansible_host=127.0.0.2" and then another inventory that goes "webserver ansible_host=255.255.255.256", those two servers would end up grouped together while they are not really the same | 20:42 |
pabelanger | report by hostname of server? | 20:42 |
pabelanger | basically, I'd want: https://raw.githubusercontent.com/voxpupuli/puppetboard/master/screenshots/overview.png | 20:42 |
pabelanger | so we can tell quickly the last time a playbook ran on a given host | 20:42 |
dmsimard | inventory name, hostname, etc. it's all the same, there's no persistence | 20:43 |
pabelanger | wow, that is odd | 20:43 |
dmsimard | There is a persistence between puppet agent and puppetmaster, you have a certificate and everything | 20:43 |
dmsimard | ansible doesn't have this concept | 20:43 |
dmsimard | we discussed this a bit in the early days of ara https://github.com/dmsimard/ara/issues/103 | 20:43 |
dmsimard | because at the beginning, hosts were unique across playbooks | 20:44 |
dmsimard | but it ended up causing problems | 20:44 |
dmsimard | ARA could maybe sort of hack around this and, I don't know, hash a combination of parameters (inventory/hostname/ip/something) but it'd be unreliable and inconsistent | 20:45 |
pabelanger | I wonder how dynamic inventory for ansible does it | 20:45 |
dmsimard | pabelanger: there's still no notion of persistence on dynamic inventories | 20:46 |
pabelanger | we have an ansible-inventory.cache that is persistent | 20:46 |
dmsimard | pabelanger: the fact that your cache is persistent doesn't guarantee you that the machine on the other end is the same (or even still there) | 20:46 |
dmsimard | a puppet agent with the same hostname can come up and try to connect to a puppetmaster and it won't work because there is a certificate mismatch | 20:46 |
pabelanger | dmsimard: we'd know the hostname was the same, because we use hostkey validation on ssh | 20:47 |
pabelanger | otherwise, playbook would not run | 20:47 |
jlk | you _could_ do something interesting with a dynamic inventory that does more signature checking of hosts | 20:47 |
jlk | but yeah, as far as Ansible is concerned, every run is independent from every previous run, and the inventory is looked at as new | 20:47 |
jlk | so long as the connection works, off you go | 20:47 |
dmsimard | jlk: exactly | 20:48 |
pabelanger | right, I'm actually more interested in when things don't match. That usually means an SSH key needs to be added, or the server is offline or dead | 20:48 |
dmsimard | there are some interesting things we can do when/if hosts are unique, this is what it looked like when it was implemented for example: https://youtu.be/k3qtgSFzAHI?list=PLyLLwe4-L1ETFVoAogQqpn6s5prGKL5Ty&t=12 | 20:49 |
jlk | _if_ you're using ssh to connect, and if you have host key validation turned on | 20:49 |
dmsimard | you could click on a host and see the history of playbooks that ran against it and the stats for each playbook | 20:49 |
dmsimard | but, anyway, I feel like that notion of persistence needs to come from upstream ansible, not ara | 20:49 |
dmsimard | ara already patches a lot of holes :) | 20:49 |
dmsimard | pabelanger: tbh you linked puppetboard, which was one of the inspirations for creating ara :) | 20:52 |
pabelanger | dmsimard: yes, but one difference I see is that you don't list reports by host. Even if it is just your 'hosts' table in the database, I would imagine it's straightforward to list the last date/time for a host.name, even with the limitation you have noted | 20:54 |
jlk | I wonder how that would play out in a zuul/nodepool world where the hosts are transient | 20:56 |
jlk | do they all get unique names, or is it kind of the same names over and over? | 20:56 |
dmsimard | iirc it's an incremented counter | 20:58 |
dmsimard | so unique names | 20:58 |
dmsimard | There's maybe 10 digits ? I forget but I do remember inquiring if they felt that was enough :) | 20:58 |
jlk | nod | 20:58 |
*** jamielennox|away is now known as jamielennox | 21:01 | |
*** jkilpatr has joined #zuul | 21:11 | |
*** dkranz_ has quit IRC | 21:12 | |
jeblair | yeah, that comes from zk now. i think they will rollover. however, that's not the name we give to ansible. we tell it the name is 'controller' or 'compute' or something in inventory. so in zuul, we definitely don't want to presume any relationship between different runs. | 21:13 |
jeblair | but in infra, we do. it would be great to see all runs on 'review.openstack.org' | 21:14 |
pabelanger | ++ | 21:24 |
SpamapS | pabelanger: sorry, what's failing because of venv? | 21:27 |
pabelanger | SpamapS: https://review.openstack.org/#/c/461881/ should give an example; basically ansible-playbook and zuul are not in /lib or /bin, but within our venv when we run a test | 21:28 |
pabelanger | this means we cannot find ansible-playbook or our ansible libraries for zuul inside the bubblewrap mount | 21:29 |
dmsimard | jeblair: I'll keep that in mind, I agree it would be nice to have the notion of persistent hosts. | 21:29 |
pabelanger | http://logs.openstack.org/81/461881/1/check/gate-zuul-python27-ubuntu-xenial/80b8aa9/console.html#_2017-05-02_19_01_55_737329 | 21:30 |
dmsimard | Shrews: I wonder, in Tower, are hosts from inventories persistent ? Like, can you see the results of a specific host across different runs ? | 21:30 |
* dmsimard has never used Tower | 21:30 | |
pabelanger | ssh host key should be persistent | 21:30 |
jeblair | SpamapS, pabelanger: not sure how this relates to a potential solution, but this code in nodepool to deal with dib running in a venv may be relevant: http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/builder.py#n516 | 21:33 |
SpamapS | pabelanger: where's our venv? | 21:33 |
SpamapS | because.. simple solution.. put it in /usr/local somewhere | 21:34 |
pabelanger | jeblair: oh, I thought we didn't want to do that | 21:34 |
SpamapS | and install with the scripts path /usr/local/bin | 21:34 |
jeblair | pabelanger: i don't want to do that | 21:34 |
jeblair | pabelanger: but we didn't come up with anything better until dib2.0 with a real python api. once that lands, we can remove it. | 21:34 |
pabelanger | SpamapS: http://logs.openstack.org/51/453851/7/check/gate-zuul-python27-ubuntu-xenial/90c3b00/console.html#_2017-05-02_17_18_37_425678 is our venv | 21:35 |
pabelanger | jeblair: k, I'll ask ianw. 2.0 is now out | 21:35 |
jeblair | it's not urgent | 21:36 |
pabelanger | if we could bind mount our venv, then activate it inside bubblewrap, I think that would do it | 21:36 |
clarkb | does zuul need to activate the venv? | 21:38 |
clarkb | dib is/was special because its a bunch of not python so venvs didn't quite work when executed directly | 21:38 |
clarkb | but zuul and ansible are both python that should respect things correctly | 21:38 |
jeblair | strictly speaking no, i think using '/path/to/venv/bin/ansible-playbook' should work. | 21:38 |
pabelanger | ya, that should work, but how to fix our zuul ansible libraries, they too need to be accessed from bubblewrap mount | 21:39 |
jeblair | but still need to get the venv into bwrap | 21:39 |
jeblair | pabelanger: we copy the zuul ansible libraries into a directory when the executor starts. that directory should be able to be bind mounted in. | 21:40 |
jeblair | pabelanger: state_dir/ansible/ | 21:41 |
SpamapS | that dir is already bind mounted in | 21:42 |
jeblair | yep | 21:42 |
jeblair | and it's an ro-bind, which is good | 21:42 |
SpamapS | so yeah, we can just add something at the same level of /usr | 21:42 |
pabelanger | odd, I couldn't get that to work locally then | 21:43 |
pabelanger | http://paste.openstack.org/show/608626/ | 21:43 |
pabelanger | was the error from ansible | 21:43 |
jeblair | pabelanger: can you get the full traceback? | 21:44 |
pabelanger | sure | 21:44 |
adam_g | anyone know if there's a doc similar to https://git.openstack.org/cgit/openstack-infra/system-config/tree/doc/source/logstash.rst that describes jenkinsless zuul / zuulv3 -> logstash publishing? | 21:44 |
jeblair | adam_g: we haven't invented it yet. that task is up for grabs if you want it. | 21:45 |
jeblair | i'd be happy to discuss a potential design for it | 21:45 |
adam_g | jeblair: oh! how do things end up in logstash since the migration away from jenkins with v2.5? | 21:46 |
jeblair | adam_g: v2.5 emits zeromq events. | 21:46 |
adam_g | ah | 21:47 |
jeblair | very briefly, v3 should have a post playbook that does this. it might be as simple as just submitting a logstash job on to the gearman queue. in fact, that's probably a good place to start. but later, we might evolve it to do the logstash processing/pushes directly. | 21:47 |
pabelanger | jeblair: https://fedorapeople.org/~pabelanger/logs.txt of failure | 21:49 |
jeblair | pabelanger: sorry, i wanted the full traceback. | 21:49 |
jeblair | pabelanger: "To see the full traceback, use -vvv" | 21:49 |
clarkb | jeblair: adam_g I would avoid doing logstash processing/pushes directly as any slowdown would impact job throughput | 21:50 |
clarkb | jeblair: adam_g better to asynchronously schedule that to happen as its able while running as many jobs as possible imo | 21:50 |
adam_g | jeblair: interesting. may circle back to that when im finished with this thing. is that on the storyboard somewhere? | 21:50 |
jeblair | clarkb: yes, though if we can get some of that happening on worker nodes, it could be a net win. there are complications with that. which is why i think it's a nice thing to think about after we just get the event push mechanism in place. | 21:51 |
jeblair | adam_g: i don't think so; it didn't come up in the critical path to get things running. but if you wanted to add a story, i think it warrants a place in the backlog. | 21:53 |
clarkb | that also allows you to easily queue work while you do things like upgrade logstash/elasticsearch | 21:53 |
clarkb | which is handy | 21:53 |
pabelanger | jeblair: please refresh | 21:56 |
*** hashar has quit IRC | 21:57 | |
SpamapS | isn't logstash kind of built to asynchronously let you stash logs and then it processes them later? | 21:58 |
clarkb | SpamapS: logstash itself isn't no | 21:58 |
pabelanger | I think we might need library_dir too? | 21:58 |
clarkb | its bits in bits out | 21:58 |
clarkb | typically though the bits out are a database of some sort that allows you to glean information from whatever went in. statsd, hdfs, elasticsearch, etc | 21:59 |
jeblair | pabelanger: ah, thanks, that makes sense now. it's because zuul itself is not installed in the bwrap. currently, that works because zuul is installed in the testing venv (and in production, it's installed system wide or in a production venv). | 21:59 |
* SpamapS is reading now | 21:59 | |
SpamapS | clarkb: yeah I see that.. they just shove what you give them into sync queues | 21:59 |
SpamapS | so, no workers, no threads, blocking | 21:59 |
clarkb | yup | 22:00 |
jeblair | pabelanger: (basically, the zuul ansible plugins import a helper function from zuul itself) | 22:00 |
SpamapS | weird because that's the opposite of how SOLR worked | 22:00 |
clarkb | so if you tie that to job post processing you essentially lock up those resources while logstash runs | 22:00 |
SpamapS | SOLR would just keep accepting writes until you filled up the spool disk | 22:00 |
jeblair | clarkb: right, which means it's only worthwhile if we are moving that resource contention to something more desirable than a logstash grok farm | 22:01 |
SpamapS | however.. gearman is not a good queue | 22:01 |
SpamapS | it's a good work distribution system | 22:01 |
clarkb | jeblair: sort of, elasticsearch (or $other output) is still going to be a fixed resources | 22:01 |
clarkb | jeblair: so ya distributing logstash might sound good until you hit the ES bottleneck | 22:01 |
jeblair | SpamapS: it's a great queue! :) | 22:01 |
SpamapS | but if the data you're putting into it is precious.. you should have that data somewhere else for re-submission | 22:01 |
clarkb | SpamapS: its actually worked really well for this | 22:01 |
clarkb | SpamapS: yes thats exactly how it works. We don't queue the log contents, we queue the jobs that say logs are in archive location foo please index them | 22:02 |
SpamapS | and if you lose the queue | 22:02 |
SpamapS | scrape the log locations? | 22:02 |
SpamapS | or lose the data? | 22:02 |
clarkb | in our case due to volume we are typically ok with losing a few jobs | 22:02 |
pabelanger | jeblair: agree. I am not sure how to move forward on that for testing | 22:02 |
SpamapS | k | 22:03 |
SpamapS | then yeah it's good for that | 22:03 |
clarkb | we peak at about 1 billion records per day | 22:03 |
clarkb | if we lose a million thats still tiny | 22:03 |
SpamapS | so it goes [write local unstructured during job][copy unstructured to archive spot][submit job to process unstructured into logstash] yeah? | 22:04 |
clarkb | SpamapS: yes | 22:04 |
jeblair | pabelanger: i think the approach of bind mounting the venv (if it exists) and running ansible-playbook with that path is still a viable solution. | 22:04 |
*** smyers has quit IRC | 22:04 | |
jeblair | pabelanger: i believe it would solve that problem. but we should give SpamapS a second to catch up in case he has other ideas | 22:05 |
SpamapS | command: gearman -f logstash_this_log {{ archived_log_path }} | 22:05 |
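Expanded slightly into a post-playbook task; the logstash_this_log function name and archived_log_path variable are carried over from the one-liner above and are assumptions, not an existing interface:

```yaml
# Hypothetical post playbook step: hand the archived log location to the
# log-processing workers via the gearman CLI.
- hosts: localhost
  tasks:
    - name: Submit the archived log path to the logstash gearman queue
      command: gearman -f logstash_this_log "{{ archived_log_path }}"
```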
SpamapS | jeblair: indeed that's a simple solution, very easy to do. | 22:06 |
SpamapS | Other option is moving the venv to /usr/local | 22:06 |
SpamapS | and installing things so the scripts are in /usr/local/bin | 22:07 |
jeblair | SpamapS: not for real (that would require tests to have root), but maybe that could be done with bwrap bind mount trickery? | 22:07 |
SpamapS | Ah hrm. Yeah tests. Hm | 22:07 |
SpamapS | I am running on E at the moment.. I'm going to take a walk down to the coffee shop and think about this. But bind mounting in an explicit venv just for this seems like the simpler way | 22:08 |
pabelanger | cool | 22:08 |
jeblair | ++ | 22:08 |
pabelanger | I also just left a comment on https://review.openstack.org/#/c/453851/ about binding the home directory of the zuul user, if people want to check it out | 22:08 |
pabelanger | feel free to tell me I am out to lunch on it | 22:09 |
jesusaur | clarkb: logprocessor client is the gearman server, right? so a zuulv3 playbook that does more than emit 0mq events to that client would also have to reimplement the gearman server somewhere? | 22:13 |
clarkb | jesusaur: gearman is a standalone service you could just emit the job requests to any gearman server the workers were connected to | 22:14 |
clarkb | jesusaur: in our case the logprocessor client of 0mq is also the gearman server for simplicity but you can just use whatever | 22:14 |
jesusaur | clarkb: oh true, the only thing that would need to be re-implemented is formatting the job metadata before making the gearman request | 22:18 |
clarkb | yup | 22:18 |
jeblair | a post-playbook is well positioned to do that | 22:18 |
*** smyers has joined #zuul | 23:03 | |
SpamapS | Note that usually you don't want "the gearman server" but "the gearman servers" ;) | 23:16 |
SpamapS | clarkb: pabelanger Not sure the ssh-agent thing will work due to perms. Definitely interested in seeing if it does, but might be easier to just have that private key bind mounted into a place where we need it inside the bwrap. | 23:19 |
SpamapS | Kind of makes me think that SSH keys should be generated and fed in dynamically honestly. Otherwise an evil check job may print the private key that Ansible uses. | 23:21 |
Shrews | dmsimard: i know nothing about tower | 23:21 |
SpamapS | dmsimard: the demo I saw of Tower did just that, yes. | 23:21 |
jlk | Is there a method to add a new untrusted-project to configuration? | 23:21 |
clarkb | SpamapS: ya that was the other possibility pabelanger was exploring. I expect entropy to potentially be a problem though | 23:23 |
clarkb | especially in all these virtual environments | 23:23 |
SpamapS | jlk: you edit the config and land it as a - project: | 23:25 |
jlk | yeah I'm enabling a test that I think wants to use a project that didn't initially exist when the config was loaded | 23:25 |
SpamapS | err, an untrusted-projects I mean :-P | 23:26 |
jlk | although now that we init_repo all over the place, I'm not sure what value this test has | 23:26 |
SpamapS | jlk: tests/fixtures/config/zuul-connections-multiple-gerrits/main.yaml might help ? | 23:26 |
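For reference, the tenant config in those fixtures looks roughly like this; adding a new untrusted project means appending it to the list (tenant and project names below are illustrative):

```yaml
# Sketch of a test fixture main.yaml -- names are illustrative
- tenant:
    name: tenant-one
    source:
      gerrit:
        config-projects:
          - common-config
        untrusted-projects:
          - org/project1
          - org/new-project   # the newly added untrusted project
```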
jlk | sure, I can write out a new one and tell zuul to load it, with a new project defined in there, then init the project | 23:27 |
jlk | but ... what is this actually testing, since that method gets used all over the place | 23:27 |
SpamapS | jlk: OH, I think I understand. | 23:27 |
jlk | I think this test was before we required all the repos be pre-known, and so it was just testing "hey a new repo showed up, can you do something with it" | 23:28 |
SpamapS | jlk: there's a bunch of tests in test_scheduler that do self.commitConfigUpdate that might shed light on how mechanically to do it in tests? | 23:28 |
jlk | yup | 23:29 |
jlk | I'm reading the original commit message to see what the thinking was on this test | 23:29 |
SpamapS | clarkb: haveged, for the most part, produces all the entropy you should need for this. | 23:31 |
clarkb | SpamapS: ya wasn't sure if it would or not; said it would need to be tested. I do know without it key generation is basically a no go in many envs | 23:32 |
clarkb | ran into that with pbr's gpg testing | 23:32 |
SpamapS | Yep | 23:32 |
SpamapS | Thus far, haveged's methods have not been disproven or even pooh-poohed by mathematicians.. but I don't know how hard the cryptography community is really looking | 23:32 |
jlk | jeblair: so I'm looking at the test_delayed_repo_init test, which was introduced with 287c06dca6982b7c5b4e67c7d90e4da20adf02e3. What it's trying to do, I don't think is possible, because if we put the repo in tenant config, we try to read from it when loading the config. If the repo doesn't exist, kaboom. | 23:47 |
jlk | jeblair: I could init the repo, load a new tenant config, and things should "work" but that doesn't seem to be the spirit of the test/feature. | 23:48 |
jlk | jeblair: I wonder if this test needs to go away, or if I'm not seeing the intent correctly. | 23:55 |
jeblair | jlk: looking | 23:55 |