Monday, 2017-07-31

01:06 *** jkilpatr has quit IRC
01:20 *** deep-book-gk has joined #zuul
01:21 *** deep-book-gk has left #zuul
01:34 *** xinliang has quit IRC
05:04 *** openstackgerrit has joined #zuul
05:04 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a static driver for Nodepool  https://review.openstack.org/468624
05:04 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement an OpenContainer driver  https://review.openstack.org/468753
05:43 *** GK1wmSU has joined #zuul
05:45 *** GK1wmSU has left #zuul
05:46 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a static driver for Nodepool  https://review.openstack.org/468624
05:52 <tobiash> tristanC: I have a comment and a question on https://review.openstack.org/#/c/488384
05:56 *** _GK1wmSU has joined #zuul
05:57 *** _GK1wmSU has left #zuul
06:07 <tristanC> tobiash: thanks for the review
06:09 <tobiash> tristanC: is the loadConfig executed on every 10s tick?
06:10 <tobiash> tristanC: I think especially for openstack with many clouds they probably want to have a single config object
06:10 <tobiash> tristanC: maybe there is a way to make this more generic (probably in a follow-up)?
06:10 <tristanC> tobiash: it seems like it, yes; the Nodepool run loop does it in the while True
06:11 <tobiash> maybe have a list of drivers (could also simplify get_provider_config) and add a pre-config class method to the driver interface
06:11 <tristanC> I've been testing multinode jobs, and I was wondering if there is a way to limit the nodes requested by a job. Would it make sense to add a max-node setting per tenant, or to restrict a tenant to a specific nodepool pool?
06:12 <tobiash> tristanC: I'm not sure this should be done on the nodepool side; I think it might also make sense to solve this within zuul (e.g. tenant config)
06:13 <tristanC> tobiash: the Config interface could have a reset class method to be called before the reconfigure
06:13 <tobiash> tristanC: sounds good
06:15 <tobiash> tristanC: I've discussed a similar topic (limiting labels to tenants) with jeblair, and for that we agreed it should be handled in zuul, so I think it might make sense to handle the max-nodes per node request also in zuul (even per tenant)
06:16 <tristanC> tobiash: that would work, but how about a max on in-use nodes (not per request)?
06:17 <tobiash> tristanC: can you explain this more?
06:17 <tristanC> tobiash: e.g. a tenant would not consume more than x nodes in parallel
06:18 <tobiash> tristanC: you mention tenant -> zuul
06:18 <tristanC> tobiash: same per job, a tenant job would not be able to request more than y nodes
06:18 <tobiash> tristanC: that would require adding such accounting to zuul
06:19 <tristanC> yes, zuul needs to keep track, or somehow query nodepool for tenant usage
06:20 <tobiash> tristanC: nodepool has no tenant knowledge, and I learned that it shouldn't get it (as it is also used for non-zuul use cases outside)
06:20 <tobiash> tristanC: so zuul needs to keep track (which should not be that hard as it creates/deletes the node requests)
06:22 <tristanC> well, my primary worry is that a new patchset could add a job with an unlimited nodeset
06:23 <tristanC> on another topic, I wonder if it would make sense to allow a nodeset to be created across pools, for example to mix labels from different drivers
06:24 <tobiash> tristanC: that is probably a question for jeblair and Shrews
06:43 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/zuul feature/zuulv3: WIP: Add jobs dashboard  https://review.openstack.org/466561
06:58 *** amoralej|off is now known as amoralej
07:26 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/zuul-jobs master: Add configure-logserver role  https://review.openstack.org/489113
08:15 *** xinliang has joined #zuul
08:17 *** rcarrill1 is now known as rcarrillocruz
08:34 *** hashar has joined #zuul
09:02 *** smyers has quit IRC
09:14 *** smyers has joined #zuul
09:29 <tobiash> tristanC: was your zuul.d config support just for project configuration or also for zuul.conf?
09:30 <tristanC> tobiash: just for the in-repo configuration
09:31 <tobiash> tristanC: ah, ok
10:53 *** bhavik1 has joined #zuul
10:59 *** jkilpatr has joined #zuul
11:00 *** bhavik1 has quit IRC
11:10 *** xinliang has quit IRC
11:23 *** xinliang has joined #zuul
11:23 *** xinliang has quit IRC
11:23 *** xinliang has joined #zuul
12:03 *** hashar is now known as hasharLunchAmper
12:16 *** hasharLunchAmper is now known as hashar
12:21 *** dkranz_ has joined #zuul
13:00 *** amoralej is now known as amoralej|lunch
13:05 *** xinliang has quit IRC
13:39 *** amoralej|lunch is now known as amoralej
14:33 *** openstackgerrit has quit IRC
14:33 *** openstackgerrit has joined #zuul
14:33 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Implement autohold  https://review.openstack.org/486692
14:43 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Implement autohold  https://review.openstack.org/486692
14:52 <clarkb> tristanC: I want to say max node settings came up at the last PTG, with one idea being to cap it at your smallest cloud region (so that any job could be scheduled in any region)
14:52 <clarkb> but then further allow it to be restricted
15:19 <openstackgerrit> Paul Belanger proposed openstack-infra/zuul-jobs master: Simplify run tox task  https://review.openstack.org/487551
15:23 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove duplicated states from zk.py  https://review.openstack.org/489256
15:33 <jeblair> tobiash, tristanC, clarkb: yes, we definitely need a max-nodes-per-job setting, ideally before our PTG cutover, just so that someone doesn't see how many nodes they can request at once.  :)  that should be straightforward.
15:33 <jeblair> tobiash, tristanC, clarkb: i don't think we've discussed a concurrent node limit per tenant, but that sounds like an additional good idea (though not as urgent or simple to implement)
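The max-nodes-per-job setting discussed above could, as a minimal sketch, be a check applied when a job's node request is built. All names here (`NodesetTooLarge`, `check_node_request`, `max_nodes_per_job`) are illustrative, not zuul's actual API:

```python
class NodesetTooLarge(Exception):
    """Raised when a job requests more nodes than the tenant allows."""


def check_node_request(job_name, requested_nodes, max_nodes_per_job):
    # Reject the request up front, so a single proposed patchset cannot
    # tie up an unbounded number of nodes (tristanC's worry above).
    if max_nodes_per_job is not None and len(requested_nodes) > max_nodes_per_job:
        raise NodesetTooLarge(
            "Job %s requests %d nodes; tenant limit is %d"
            % (job_name, len(requested_nodes), max_nodes_per_job))
```

A concurrent per-tenant limit, by contrast, would need running-node accounting in the scheduler, which is why it is called out as harder to implement.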
15:39 <jeblair> tobiash: regarding nonexistent jobs -- can we perform a check in ProjectTemplateParser._parseJobList()?
15:50 <tobiash> jeblair: tried it there, but that doesn't work because there we don't have a complete job list
15:50 <tobiash> jeblair: but for eager validation I found a place in addProjectConfig and addProjectTemplate
15:50 <tobiash> jeblair: https://review.openstack.org/#/c/488758/1/zuul/model.py
15:51 <tobiash> jeblair: that works
15:51 <tobiash> jeblair: but I think both eager and lazy validation have their own pros and cons
15:52 <tobiash> +eager: consistent config enforced
15:53 <tobiash> +lazy: less runtime cost on every config evaluation
15:53 <tobiash> +lazy: no globally broken config (pushing through or removing a repo can lead to a globally broken config with eager validation)
15:58 <jeblair> tobiash: why don't we have a complete job list?  we should parse all jobs before we parse all projects
16:03 <tobiash> jeblair: during my tests it had just the jobs of the same repo (or possibly the jobs it had parsed so far from repos scanned before)
16:06 <jeblair> tobiash: that sounds like a bug -- the order is pipelines, nodesets, secrets, jobs, semaphores (huh that sounds backwards), templates, projects
16:07 <jeblair> specifically so that we can do this kind of validation
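The validation being discussed, reduced to its essence: once all job definitions are parsed, every job name a project references can be checked against the known set. A hedged sketch (the function name and data shapes are made up for illustration; zuul's real parser works on richer config objects):

```python
def find_unknown_jobs(defined_jobs, project_job_lists):
    """Return (project, job) pairs referenced by projects but never defined.

    defined_jobs: set of job names parsed from all config repos.
    project_job_lists: mapping of project name -> list of referenced job names.
    """
    unknown = []
    for project, jobs in sorted(project_job_lists.items()):
        for job in jobs:
            if job not in defined_jobs:
                unknown.append((project, job))
    return unknown
```

Running a check like this only after all jobs are parsed (the ordering jeblair lists: pipelines, nodesets, secrets, jobs, semaphores, templates, projects) is what makes eager validation possible at all.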
16:09 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Fix GithubConnection logging name  https://review.openstack.org/488537
16:14 <tobiash> jeblair: ah, t
16:15 <tobiash> jeblair: possibly the config/untrusted split
16:17 <jeblair> tobiash: hrm, we should perform a complete config load/validation even on changes to config repos (though we don't run with that config -- incidentally, that's another point in favor of config-time checking)
16:18 <tobiash> jeblair: addProjectConfig is called in a later stage where this works fine
16:19 <jeblair> tobiash: yes, but the confusing thing is that it's called right after the place where i'm suggesting we add the check
16:19 <jeblair> tobiash: can you push up the version of your change which failed there so i can take a look at it?
16:20 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: Docs: add a glossary  https://review.openstack.org/489270
16:21 <tobiash> jeblair: don't have it anymore, but you can at least take the tests from the change linked above
16:22 <jeblair> tobiash: is the WIP and non-WIP test the same?
16:22 * tobiash is checking
16:23 <tobiash> jeblair: the test from the WIP tests more stuff
16:24 <jeblair> ok i'll use that
16:37 <tobiash> jeblair: I think I was totally blind... :/
16:40 <tobiash> jeblair: now it looks totally fine in ProjectTemplateParser._parseJobList()
16:42 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: WIP Don't ignore inexistent jobs in config  https://review.openstack.org/488758
16:43 <tobiash> jeblair: hm, it does work with the new test cases, but not with dynamic job addition :(
16:43 <tobiash> jeblair: ^
16:44 <jeblair> tobiash: i think there are "errors" in some old tests
16:44 <jeblair> tobiash: they accidentally remove a job definition which isn't actually used in the test (since, of course, the tests are designed to fail reconfiguration)
16:44 <jeblair> tobiash: so they hit the non-existent job error earlier than the error they are designed to test
16:45 <tobiash> jeblair: test_dynamic_config fails now (which is not supposed to use non-existent jobs)
16:45 * tobiash checking
16:45 <jeblair> tobiash: do you want me to fix up the other tests, or do you want to?
16:46 <tobiash> jeblair: I'll do it (still hoping the tests are broken)
16:46 <jeblair> tobiash: org/project defines "project-test1" which is used by org/project1
16:47 <jeblair> tobiash: so any tests which replace org/project/.zuul.yaml need to keep the "project-test1" job definition, otherwise they break org/project1's config
16:48 <jeblair> tobiash: also left a suggestion on 488758 for how to actually raise the error
16:48 <tobiash> jeblair: hrm, thought I had fixed an error like this last week
16:48 <tobiash> jeblair: ah, ok, that was the other glitch
16:49 * jlk dusts off his IRC client
16:49 <jlk> o/
16:50 *** hashar has quit IRC
16:50 <jeblair> jlk: welcome back!
16:57 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: WIP Don't ignore inexistent jobs in config  https://review.openstack.org/488758
16:57 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: Docs: add a glossary  https://review.openstack.org/489270
16:57 <jlk> So y'all got v3 done while I was gone, right? It's all done now? We're on to v4?
16:58 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: WIP Don't ignore inexistent jobs in config  https://review.openstack.org/488758
16:59 <jeblair> jlk: *much* has happened, though we missed you greatly!  we're working down the punchlist of things we need to have in place before we cut over openstack (which we're hoping to do at the openstack PTG in denver)
17:00 <pabelanger> indeed!
17:00 <jeblair> jlk: mordred and pabelanger are focusing on creating job content (and generating items for our punchlist :)
17:00 <jlk> oh good, I'd love to see that punchlist. Also, sounds like maybe some justification for me going to the PTG
17:00 <jeblair> jlk: and docs are moving along
17:01 <jeblair> jlk: cool, most of it's in storyboard; a few things are in my local buffer because i still need to type them into storyboard; i'll try to refresh that today.
17:02 <jeblair> jlk: tobiash is continuing to find and fix bugs about 3 hours before we would have run into them for openstack, which is awesome :)
17:02 <jlk> neat!
17:02 <jeblair> jlk: our plan at the moment is to perform some trial cutovers saturday/sunday evenings before the ptg, then make the switch monday morning
17:03 <pabelanger> Ya, that reminds me. I need to check flights again for Saturday travel
17:03 *** yolanda has quit IRC
17:04 <jeblair> jlk: we'll have one or two cross-project sessions where we talk about v3, and then sort of have open office hours throughout the week to help people with job creation
17:04 <jeblair> jlk: and, of course, fix issues as they come up
17:04 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: docs: reformat job section with zuul domain  https://review.openstack.org/488955
17:05 <jeblair> jlk: so yes, your presence would be helpful if you are able
17:05 <jeblair> jlk: also, we have openstack's zuulv3 talking to a test repository on github
17:05 <jlk> woot
17:06 <jeblair> which i would show you, but github is just giving me pink unicorns
17:06 <jeblair> ah, here it is: https://github.com/gtest-org/ansible
17:06 <pabelanger> Ya, getting 502 Server Error back from them ATM
17:07 <jeblair> right now may not be the time to do github work.  nice to have other options.  :)
17:07 <jlk> yeah github has been shaky this morning
17:09 <pabelanger> mordred: https://review.openstack.org/#/q/topic:zuulv3-tox-jobs is ready for comments. Removes the last bits of shell script from the tox role.  I still need to add a 'zero jobs' run check into a playbook
17:09 <tobiash> jeblair: :)
17:10 <tobiash> jeblair: patch is almost running, there was slightly more work to get the error message right
17:10 <Shrews> jlk: also, log streaming!
17:11 <jlk> wooo!
17:11 <Shrews> zuulv3.o.o has proper links for that now
17:11 <jeblair> Shrews: oh yes!  lemme recheck a change
17:12 <jeblair> jlk: http://zuulv3.openstack.org/
17:13 <jeblair> jlk: click on one of the jobs running there, and you'll get a streaming console log
17:13 <jeblair> eg http://zuulv3.openstack.org/static/stream.html?uuid=2b15d099cfd14d0bb59fb5e06cbc3af5&logfile=console.log
17:13 <jlk> that's pretty badass!
17:14 <jlk> heh, and then you realize how fun it is to have single ansible tasks that take many many minutes to complete
17:14 <jlk> all your output all at once!
17:18 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Implement autohold  https://review.openstack.org/486692
17:18 <Shrews> ^^^ added link to SB
17:31 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Don't ignore inexistent jobs in config  https://review.openstack.org/488758
17:33 <tobiash> jeblair: ^ now without WIP, and locally passing pep8 and py35 \o/
17:42 <tobiash> doesn't zuulv3.o.o vote anymore?
17:43 <tobiash> it ran jobs on ^ which were all green, but there is only the jenkins vote
17:44 *** amoralej is now known as amoralej|off
17:48 <Shrews> hrm yeah. something seems off
17:54 <pabelanger> https://zuulv3.openstack.org/ online now
17:55 *** yolanda has joined #zuul
18:00 *** yolanda has quit IRC
18:04 <jeblair> pabelanger, Shrews, tobiash: it looks like we're attempting to report gerrit changes to github; that fails, and so items are being removed from queues.
18:04 <jeblair> pabelanger: want to work on updating our pipeline definitions to correct that?
18:04 <jlk> erp!
18:04 <pabelanger> jeblair: Sure, I noticed that on friday
18:05 <pabelanger> jeblair: that means we need a dedicated pipeline for github-check?
18:06 <jeblair> pabelanger, jlk: actually -- that's not supposed to be a config setting, is it.  that's supposed to be handled automatically by the driver?
18:07 <jlk> thinking
18:07 <tobiash> jeblair: for gerrit it was fixed here: https://review.openstack.org/#/c/461981/7/zuul/driver/gerrit/gerritreporter.py
18:07 <tobiash> jeblair: we possibly need that for github too
18:08 <tobiash> jeblair: that way we can share one pipeline for multi-gerrit-multi-github
18:08 <jeblair> tobiash: that sounds likely
18:08 <jlk> b09f421fd023412500f0eb50adbfa6ea80060268
18:09 <jlk> was the commit from jamielennox
18:09 <jlk> let me see what github does
18:09 <jeblair> i'm guessing we only test in one direction, not the other :)
18:09 <pabelanger> jlk: jeblair: ya, I cannot think of a reason for a gerrit patch to report to github. But it would report to mysql regardless of the trigger.
18:09 <jlk> perhaps :(
18:10 <jlk> yeah it's missing for github
18:10 <jeblair> pabelanger: yeah, i think that's part of the reason this should be handled in the driver; the driver does know whether it can report an item or not.
18:10 <jlk> I can bang out a change here.
18:10 <jeblair> pabelanger: and mysql can report *
18:10 <jeblair> jlk: cool, thanks!
18:10 <jlk> is there an open issue on this?
18:11 <jeblair> jlk: not afaik -- i think we just diagnosed it
18:11 <pabelanger> jeblair: Ya, in the driver seems to make sense
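The gerrit-side fix tobiash links guards the reporter by checking the item's source; the analogous github guard might look roughly like this. The classes below are simplified stand-ins (zuul's real reporters receive pipeline items with much richer structure), so treat this as a sketch of the idea, not zuul's code:

```python
class GithubSource:
    """Stand-in for the source a github-connected project belongs to."""


class GerritSource:
    """Stand-in for a gerrit source."""


class GithubReporter:
    def report(self, item):
        # A gerrit-triggered item has no meaning to github, so the driver
        # itself declines to report instead of failing later -- this lets
        # one pipeline mix gerrit and github projects safely.
        if not isinstance(item.change.project.source, GithubSource):
            return None
        return self._post_status(item)

    def _post_status(self, item):
        # Placeholder for the actual status/comment API call.
        return "reported"
```

This is the shape of the "Limit github reporting to github sources" change jlk proposes later in the log.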
18:11 *** harlowja has joined #zuul
18:14 <jlk> the code is done, just looking for the right place to test this. b09f421fd023412500f0eb50adbfa6ea80060268 is oddly about multiple gerrits, not github+gerrit
18:15 <clarkb> jlk: you might also want to look over SpamapS's change to make the shared github secret mandatory for zuul
18:16 <jeblair> clarkb: pending or merged?
18:17 <jeblair> jlk: wow, i was *sure* we had some kind of a test with github and gerrit in the same pipeline, but i can not find one
18:17 <jlk> I'm almost positive we do
18:17 <jlk> one sec
18:18 <jlk> jeblair: tests/unit/test_multi_driver.py
18:18 <jeblair> jlk: thank you!
18:18 <jlk> I'll just add some more tests here regarding reporting.
18:19 <jeblair> jlk: in fact, that test does not assert anything about reporting
18:20 <jlk> yeah :/
18:20 <jeblair> jlk: ah, but it also doesn't share a pipeline
18:21 <jeblair> so we should: a) add some report assertions to that test (it is, in fact, erroring on a different issue right now)
18:21 <jlk> easy enough to add a new pipeline.
18:21 <jeblair> and b) either combine those pipelines into one, or make a new test with a shared pipeline
18:21 <jeblair> jlk: the error it's hitting is that we need to capitalize "Verify" in gerrit reporters now
18:22 <jeblair> we probably missed updating that test since it wasn't failing
18:22 <clarkb> jeblair: I'm not sure
18:22 <jlk> which test are we talking about? multi driver?
18:22 <jeblair> jlk: yes, i ran it in the foreground and saw the error
18:22 <jlk> gotcha
18:23 <jlk> yeah looking here, I think I can just combine them into a single pipeline.
18:23 <jeblair> clarkb, jlk: on the other topic of conversation, i think clarkb is referring to this: https://review.openstack.org/488240
18:23 <clarkb> https://review.openstack.org/#/c/488240/ looks like it merged
18:23 <jlk> I'll go that route, and add some asserts on reporting.
18:23 <jeblair> jlk: kk
18:24 <jlk> oh hrm
18:24 <jlk> that change might make it more difficult to synthesize webhook events running locally
18:24 <jlk> (with curl)
18:25 <jeblair> jlk: when we move the webhook listener into zuul-web and have it submit events to the scheduler over gearman, that will provide a convenient injection point for synthetic events.
18:25 <jlk> I have a tiny python app that sends the event, I can add some signature stuff to it as well.
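The "signature stuff" a sender script would need is standard GitHub webhook behavior: GitHub computes an HMAC-SHA1 of the raw request body keyed with the shared webhook secret and sends it in the X-Hub-Signature header. A minimal helper (not jlk's actual script) for signing synthetic payloads:

```python
import hashlib
import hmac


def sign_webhook_body(secret: bytes, body: bytes) -> str:
    # GitHub sends "X-Hub-Signature: sha1=<hexdigest>" where the digest
    # is HMAC-SHA1 over the exact request body with the webhook secret.
    digest = hmac.new(secret, body, hashlib.sha1).hexdigest()
    return "sha1=" + digest
```

A curl-based test would then add a header like `-H "X-Hub-Signature: sha1=..."` computed with this function over the exact payload file being posted; any change to the body invalidates the signature.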
18:34 *** yolanda has joined #zuul
18:34 *** yolanda has quit IRC
18:34 *** yolanda has joined #zuul
18:35 <tobiash> jeblair: will the mergers/executors in future serve git repos via http (again)?
18:36 <jeblair> tobiash: not publicly, but we have talked about doing so internally so that executors can fetch merged content from mergers without having to repeat the action.
18:37 <tobiash> jeblair: ok, so I shall not rip out zuul_url...
18:37 <jeblair> tobiash: didn't i do that already? :)
18:37 <tobiash> jeblair: I meant the zuul_url from zuul.conf
18:38 <tobiash> jeblair: it's still required in the zuul.conf
18:38 <jlk> isn't the zuul_url used in part by the webapp?
18:38 <jlk> and other things for status url?
18:38 <jeblair> tobiash: ah.  hrm.  we probably should remove it for now; it's meaningless now
18:38 *** yolanda has quit IRC
18:38 <jlk> maybe that usage got removed in the last two weeks :D
18:39 <jeblair> jlk: i think that's something different?
18:40 <jlk> hrm.
18:40 <jlk> must be.
18:43 <tobiash> jlk: did you mean status_url?
18:43 <jlk> jeblair: you were seeing "KeyError: 'verify'" right?
18:43 <jlk> (because changing the pipeline to be 'Verify:' just changes the traceback to KeyError: 'Verify'
18:43 <jlk> )
18:44 <jeblair> jlk: i think so.  i guess there could be something further wrong.
18:44 <jlk> tobiash: that sounds / looks right
18:45 <jlk> oh hah
18:46 <jlk> nope, hrm.
18:48 <jlk> lolol.
18:50 <jlk> s/Verify/Verified/
18:51 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Implement autohold  https://review.openstack.org/486692
18:53 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Implement autohold  https://review.openstack.org/486692
19:01 <jlk> alright, tests pass with combined pipeline and fixed label. I'll expand the tests to assert reporting after I lunch and migrate to a coffee shop
19:07 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: Docs: add a :default: argument to zuul:attr  https://review.openstack.org/489303
19:11 <tobiash> did something change in the test framework?
19:11 <tobiash> just updated to the latest branch tip and my local tests break
19:12 <tobiash> http://paste.openstack.org/show/617056/
19:13 <tobiash> that happens for about 30% of the test cases
19:13 <tobiash> rerunning with --failed reduces the failed tests
19:14 <jeblair> tobiash: SpamapS was attempting a new way of stopping helper threads.  we wait 5 seconds to join the diskaccountant thread on shutdown (including test shutdown).  that error suggests it didn't shut down within that time.
19:14 <jeblair> tobiash: can you try increasing the join timeout and see if it helps?
19:14 * tobiash checking
19:15 <jeblair> tobiash: executor.server.DiskAccountant.stop()
19:15 <jeblair> tobiash: (perhaps with a bunch of tests running, it's being starved a little and takes longer)
19:16 <tobiash> jeblair: rerunning with 15s...
19:17 <tobiash> jeblair: doesn't seem to be this value
19:18 <tobiash> jeblair: tried 50, but the first fail was after about 15s
19:18 <pabelanger> Shrews: don't hate me, but I think we want a reason for autohold (like hold in nodepool). So we can ping operators to clean up autohold things
19:18 <pabelanger> having it as a required field would be nice too. Forces people to add their name
19:19 <Shrews> pabelanger: that's fine by me. i was just implementing based on jeblair's story
19:19 <pabelanger> we could do it in a follow-up patch too
19:19 <jeblair> ya, should be easy to add
19:19 <jeblair> and ++ to requiring it when we add it :)
19:20 *** bhavik1 has joined #zuul
19:20 <jeblair> tobiash: another possibility is that a bunch of tests are failing, but hitting that error during an unclean shutdown
19:21 <jeblair> tobiash: you can whitelist the thread in shutdown() in tests/base.py to check that
19:24 <tobiash> jeblair: ok, at least verified that the job root monitor patch is introducing this
19:32 <tobiash> jeblair: whitelisting fixes all tests except test_cache_hard_links which fails deterministically on my system
19:34 *** bhavik1 has quit IRC
19:37 <tobiash> jeblair: that could be the root cause, as it throws an assertion but doesn't stop the thread, and all later tests in that batch fail because the thread is still there
19:40 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Add required reason for hold  https://review.openstack.org/489366
19:40 <Shrews> pabelanger: jeblair: ^^^
19:41 <Shrews> also fixes the ---count in the example (one too many dashes)
19:43 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove duplicated states from zk.py  https://review.openstack.org/489256
19:47 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Ensure stop of disk accountant on assertion  https://review.openstack.org/489368
19:48 <tobiash> jeblair: the missing stop when the assertion hits caused this
19:49 <tobiash> jeblair: still have to check why the hard links check fails for me, possibly due to having /tmp on tmpfs
19:49 <tobiash> jeblair: will check that
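The bug pattern tobiash found (a failed assertion skips the thread's stop call, and the leaked thread then fails every later test in the batch) is worth seeing in miniature. This toy HelperThread is a made-up stand-in for a background worker like zuul's DiskAccountant, not its actual API; the point is the try/finally:

```python
import threading
import time


class HelperThread:
    """Toy background worker; names here are illustrative only."""

    def __init__(self):
        self._running = False
        self._thread = None

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        while self._running:
            time.sleep(0.01)

    def stop(self):
        self._running = False
        self._thread.join(timeout=5)


def run_check(check):
    """Always stop the helper, even when the check raises AssertionError."""
    helper = HelperThread()
    helper.start()
    try:
        check()
    finally:
        # Without this finally, a failing assertion would leak the thread
        # and trip the "thread still running" check in every later test.
        helper.stop()
```

The "Ensure stop of disk accountant on assertion" change above is the real-world version of moving the stop into a path that runs on failure too.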
19:52 <pabelanger> Shrews: awesome, thanks!
20:04 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: Docs: add a :default: argument to zuul:attr  https://review.openstack.org/489303
20:07 <Shrews> pabelanger: i think we can remove py2 jobs for the nodepool feature/zuulv3 branch now, yeah?
20:08 <Shrews> or is there something else we should do first?
20:08 <pabelanger> Shrews: Ya, I haven't seen any issues with nl01
20:08 <jeblair> Shrews: i'm catching up on the autohold change -- why did you end up changing the parameters in zuul/rpcclient.py from 'tenant' to 'tenant_name'?
20:08 <Shrews> jeblair: at the request of tobiash
20:09 <pabelanger> Shrews: I'd +2 a removal :)
20:09 <jeblair> Shrews: was that in a review comment or irc?
20:09 <Shrews> jeblair: review comment
20:10 <jeblair> i can't find it :(
20:10 <Shrews> jeblair: ps5... also a bit in irc too
20:11 <Shrews> https://review.openstack.org/#/c/486692/5/zuul/zk.py
20:11 <Shrews> oh, maybe not in irc. his last comment is what i was thinking of as an irc discussion
20:12 <jlk> jeblair: can you give me some more details on the error you're seeing in prod with the failure to report? Without my change I have a test case that results in a traceback attempting to report a gerrit change to github, but it doesn't seem to stop that change from being reported to gerrit as well.
20:12 <jlk> ooooh!
20:12 <jlk> this could be because my setup is only reporting on success, rather than on start.
20:13 <jeblair> Shrews: okay, that's for arguments, but i don't think the same applies for on-the-wire json.  changing it there puts it out of sync with the rest of the rpc functions.
20:13 <pabelanger> jlk: sure, I can get you a traceback
20:13 *** persia has quit IRC
20:13 <pabelanger> 1 sec
20:14 <pabelanger> jlk: an example of the error http://paste.openstack.org/show/617060/
20:15 <jlk> um
20:16 <jlk> well that's slightly different
20:16 *** persia has joined #zuul
20:16 <pabelanger> let me check other tracebacks
20:16 <jlk> nah, I think I know what's going on
20:17 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Fix autohold RPC protocol  https://review.openstack.org/489375
20:17 <Shrews> jeblair: how's that? ^^^^
20:17 <pabelanger> jlk: ya, the others seem to be the same. That code you linked from jamielennox looked like the right approach
20:17 <jeblair> Shrews: woot, thx.  will discount that from my review :)
20:18 <jlk> yeah I'm trying to replicate the later part of the issue, where it drops the change out of the queue and doesn't do the next bit
20:18 <jlk> like, what was the symptom that sent y'all hunting?
20:18 <jlk> that's the test case I want to try to make sure we have
20:19 <jeblair> jlk: iirc, the reporter failing bumped the item out of the queue
20:19 <Shrews> jeblair: no worries. just increasing my zuul commit count  :)
20:19 <pabelanger> jlk: Ah, I might be confusing the issue jeblair mentioned earlier with another
20:19 <jlk> jeblair: yeah, was it a start report that did that? Because when I don't have a start report things seem to "work". But it seems like a start event is what's causing the change to go missing
20:20 <jeblair> pabelanger: i think it's the same issue
20:20 <jeblair> jlk: that failure at least was a success report
20:20 <jlk> that's weird, I wonder if it's an ordering issue
20:21 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Fix test_cache_hard_links when on tmpfs  https://review.openstack.org/489377
20:21 <jlk> in my test, it attempts to report success for both drivers, fails on one driver, succeeds on the other
20:21 <jlk> so the test case which looks to make sure each change has the appropriate number of reports still passes
20:21 <jeblair> jlk: it def could be ordering.
20:22 <jeblair> i'm not sure how we arrive at the reporter order; it might be dict key order, which is random
20:23 <jeblair> jlk: zuul logs the order, so you'll note in pabelanger's traceback, github comes first
20:23 <jeblair> 2017-07-31 19:56:40,809 DEBUG zuul.IndependentPipelineManager: success [<zuul.driver.github.githubreporter.GithubReporter object at 0x7f4449594128>, <zuul.driver.sql.sqlreporter.SQLReporter object at 0x7f4449594588>, <zuul.driver.gerrit.gerritreporter.GerritReporter object at 0x7f4449594898>]
20:23 <jlk> let me re-run
20:24 <jlk> ah yeah in my test, the gerrit reporter fires off first, then github
20:25 <jlk> What are the names of your connections in main.yaml?  Presumably not "github" and "gerrit"?
20:25 <jeblair> we may want to put these in separate try/except handlers
20:25 <jeblair> jlk: they are that actually
20:25 <jlk> okay, so ordering is not guaranteed
20:25 <jlk> I can easily catch this if I turn on start reporting though
20:26 <jlk> because it'll toss out the change before it runs the job and it'll never report success
20:26 <jeblair> seems like a fair test
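jeblair's "separate try/except handlers" suggestion, sketched in isolation (this is the general pattern, not zuul's actual pipeline manager code; the function and logger names are invented):

```python
import logging

log = logging.getLogger("example.pipeline")


def report_item(item, reporters):
    """Run every reporter, isolating failures from one another.

    Returns the list of reporters that raised, so the caller can decide
    how to handle the partially failed report.
    """
    failures = []
    for reporter in reporters:
        try:
            reporter.report(item)
        except Exception:
            # One broken reporter must not prevent the others from
            # reporting, regardless of the (unordered) reporter order.
            log.exception("Reporter %s failed", reporter)
            failures.append(reporter)
    return failures
```

With this shape, the ordering problem jlk observed stops mattering: whether gerrit or github fires first, the surviving reporters always run.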
20:27 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Implement autohold  https://review.openstack.org/486692
20:27 *** persia has quit IRC
20:27 *** dkranz_ has quit IRC
20:27 <jeblair> Shrews: all +3
20:28 *** jkilpatr has quit IRC
20:28 *** persia has joined #zuul
20:29 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Remove zuul_url from merger config  https://review.openstack.org/489378
20:29 <Shrews> jeblair: w00t. another zuulv3 checkbox ticked
20:32 <Shrews> jeblair: i'll modify the nodepool client tomorrow for the new fields. Will be a good time to add the --detail flag that i've wanted for the 'list' command, too.
20:32 <jeblair> Shrews: sounds good!
20:35 * tobiash hits eod
20:36 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Add required reason for hold  https://review.openstack.org/489366
20:36 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Fix autohold RPC protocol  https://review.openstack.org/489375
20:36 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Remove duplicated states from zk.py  https://review.openstack.org/489256
20:57 <openstackgerrit> James E. Blair proposed openstack-infra/zuul-sphinx master: Fix package setup  https://review.openstack.org/489395
21:00 <openstackgerrit> James E. Blair proposed openstack-infra/zuul-sphinx master: Fix package setup  https://review.openstack.org/489395
21:00 <jeblair> pabelanger, clarkb: can you +3 that ^?  i'll follow that with a release, then hopefully get zuul-jobs depending on that.
21:01 <pabelanger> +2, give clarkb a moment to look
21:01 <clarkb> ya looking
21:02 <clarkb> done
21:06 <openstackgerrit> Merged openstack-infra/zuul-sphinx master: Fix package setup  https://review.openstack.org/489395
21:11 <openstackgerrit> Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Limit github reporting to github sources  https://review.openstack.org/489399
21:24 <openstackgerrit> James E. Blair proposed openstack-infra/zuul-jobs master: Switch to using zuul-sphinx  https://review.openstack.org/489409
21:24 <jeblair> pabelanger: a +3 on that ^ should also net us a published zuul-jobs doc
21:26 <pabelanger> jeblair: done
21:30 <openstackgerrit> James E. Blair proposed openstack-infra/zuul-jobs master: Switch to using zuul-sphinx  https://review.openstack.org/489409
21:32 <openstackgerrit> James E. Blair proposed openstack-infra/zuul-jobs master: Switch to using zuul-sphinx  https://review.openstack.org/489409
21:34 *** jkilpatr has joined #zuul
21:34 *** yolanda has joined #zuul
21:41 *** yolanda has quit IRC
21:42 <pabelanger> jeblair: jlk: +3 on 489399. I'll be sure to restart zuulv3.o.o once it lands on disk
21:42 <jeblair> pabelanger: thanks!
21:43 <jlk> woo
21:49 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Limit github reporting to github sources  https://review.openstack.org/489399
21:51 <jlk> authoring code going into production feels like a nice way to return from vacation!
21:56 <Shrews> jlk: the way i see it, you owe us 2 weeks of work. best get movin'!
21:56 <jlk> LOL
22:01 <pabelanger> zuulv3.o.o restarted
22:01 <jeblair> zuul meeting time in #openstack-meeting-alt
22:18 <SpamapS> tobiash: in 489377 .. I am a bit confused by the commit message. ???
22:24 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Ensure stop of disk accountant on assertion  https://review.openstack.org/489368
22:47 <pabelanger> 2.5.2.dev1500
22:47 <pabelanger> impressive
22:47 <pabelanger> master only has 1242 commits
22:49 <jeblair> pabelanger: wait, are you saying v2 to v3 has more commits than all of zuul v1 and v2 combined?
22:50 <pabelanger> jeblair: according to github, ya
22:52 <Shrews> neat
22:56 <jeblair> init to the end of zuul v1 was 205 commits; the end of v1 to the (presumed) end of v2 will be about 1040.
22:57 <jeblair> so comparable to, but larger than, the entire v2 effort so far.  however, *much* larger than the 141 commits that took us from v1 to v2.  :)
22:59 <jlk> oh, so 141 to go from v1 to the first of v2, but then another 900-some-odd commits after that for continued v2 dev
22:59 <jeblair> jlk: yep
22:59 <jeblair> v2 had major changes all made continuously
23:02 <jlk> is https://storyboard.openstack.org/?#!/board/41 still the right board to look at for work? Is there a different board for the jobs development?
23:02 <jeblair> jlk: yes, and i don't think there's a board for jobs development.
23:13 <openstackgerrit> James E. Blair proposed openstack-infra/zuul-sphinx master: Remove cap on sphinx  https://review.openstack.org/489431
23:14 <jeblair> pabelanger: mind one more +3 on that ^?  i think we'll want that before we push to add it to global-requirements.
23:15 <pabelanger> done
23:20 <openstackgerrit> James E. Blair proposed openstack-infra/zuul-jobs master: Switch to using zuul-sphinx  https://review.openstack.org/489409
23:21 <openstackgerrit> Merged openstack-infra/zuul-sphinx master: Remove cap on sphinx  https://review.openstack.org/489431
23:26 <pabelanger> cool, release pipeline now live
23:26 <pabelanger> http://zuulv3.openstack.org/
23:26 <pabelanger> I'll start work in the morning on testing
23:27 <jeblair> pabelanger: \o/
23:27 <pabelanger> jeblair: any objections to using openstack-dev/sandbox or openstack-dev/ci-sandbox for tagging?
23:29 <jeblair> pabelanger: sandbox sounds okay to me.  would avoid ci-sandbox.  but maybe ask others in #-infra
23:29 <pabelanger> agree
23:32 <pabelanger> don't think we have apache set up properly
23:32 <pabelanger> http://zuulv3.openstack.org/keys/gerrit/openstack-infra/zuul.pem 404s
23:32 <pabelanger> we'll have to look at rewrite rules in a bit
23:32 <jeblair> pabelanger: ++
23:33 <jeblair> pabelanger: you can try wgetting that from zuul directly on the server to make sure it works
23:42 <pabelanger> jeblair: will have to dig more into it tomorrow; getting 500 internal error back from: http://localhost:8001/openstack/keys/gerrit/openstack-infra/zuul.pub
23:42 *** https_GK1wmSU has joined #zuul
23:43 *** https_GK1wmSU has left #zuul
23:51 <jlk> I think I'm done for the day. Cheers!

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!