Tuesday, 2017-08-29

mordredlooking00:03
mordredjeblair: +2 from me on both (+3 on the doc update)00:04
mordredjeblair: https://review.openstack.org/#/c/498127/ is ready as well, while we're doing things that clean up logging on the executor00:05
mordredjeblair: also - from having written it, I have come to really like yaml/json versions of logging config00:05
mordredas well as the use of dictConfig in code00:05
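The yaml-plus-dictConfig pattern mordred is describing looks roughly like this; a minimal sketch only, with an illustrative file name and contents rather than Zuul's actual logging config:

```python
# Minimal sketch: load a yaml logging config and apply it with dictConfig.
# 'logging.yaml' is an illustrative name, not a real Zuul file.
import logging.config  # note: plain "import logging" does not expose logging.config
import yaml

with open('logging.yaml') as f:
    logging.config.dictConfig(yaml.safe_load(f))

logging.getLogger('zuul.Executor').debug('logging configured from yaml')
```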
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Fix job timeout docs  https://review.openstack.org/49851300:14
jeblairmordred: yeah, i think we should switch to that form00:45
jeblairmordred: couple minor comments on that00:54
mordredjeblair: looking00:54
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Add role for adding launchpadlib credentials  https://review.openstack.org/49863300:58
leifmadsenI'm going to have to look at the zuul meeting agenda ahead of time and participate from the past via etherpad with follow up post-haste01:02
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Write a logging config file and pass it to callbacks  https://review.openstack.org/49812701:06
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Remove default root handler fallback to console  https://review.openstack.org/49863501:07
mordredjeblair: ^^ that second patch addresses your third comment, I added it as a followup so that we could have a place for discussion01:07
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Add role for adding launchpadlib credentials  https://review.openstack.org/49863301:16
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Add trigger-readthedocs job  https://review.openstack.org/49862601:17
mordredjeblair: ZOMG your legacy job succeeded! why did it take 2h? (or should I ignore that for now)01:21
dmsimardclarkb, jeblair: I think https://review.openstack.org/#/c/496935/ is good to go01:22
dmsimarderrrrrr01:23
dmsimardit looks like there's part of that patch missing01:23
mordredjeblair: wow. it seems like it took an hour from start of tempest to end of job01:23
dmsimardhow did that happen01:23
*** xinliang has quit IRC02:36
jeblairmordred: yeah, 1h transfer time for 2.3GB in infra-cloud is, sadly, about what i was expecting.  that's about 5mbps.  :(02:39
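For reference, the arithmetic behind that estimate: 2.3 GB is roughly 18.4 Gbit, and 18.4 Gbit transferred over 3600 s works out to about 5.1 Mbit/s.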
*** xinliang has joined #zuul02:49
*** xinliang has quit IRC02:49
*** xinliang has joined #zuul02:49
dmsimardclarkb: the dvr multinode job is failing even on completely unrelated changes so it's hard to tell if the failure is legit or not :(03:05
dmsimardI checked various unrelated devstack-gate jobs and they're not consistently passing03:05
dmsimardWould need an expert to tell me why they are failing :)03:06
clarkbdmsimard: can you check experimental to see if the normal dvr tempest multinode job passes03:13
dmsimardsure03:13
dmsimardclarkb: this other job was supposed to be more reliable :/03:13
dmsimardtriggered an experimental03:13
dmsimardlet's see tomorrow mornin03:14
*** SotK has quit IRC04:58
*** SotK has joined #zuul04:58
*** hashar has joined #zuul07:34
*** hashar has quit IRC07:34
*** hashar has joined #zuul07:34
*** hashar has quit IRC07:41
*** hashar has joined #zuul07:41
*** electrofelix has joined #zuul08:55
*** jkilpatr has quit IRC10:46
openstackgerritTristan Cacqueray proposed openstack-infra/zuul feature/zuulv3: Add gearman server port configuration  https://review.openstack.org/49875310:47
*** hashar is now known as hasharLucnh10:54
*** hasharLucnh is now known as hasharLunch10:54
*** jkilpatr has joined #zuul11:04
openstackgerritTristan Cacqueray proposed openstack-infra/zuul feature/zuulv3: Add gearman server port configuration  https://review.openstack.org/49875311:08
*** hasharLunch is now known as hashar11:35
*** dkranz has joined #zuul12:51
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Include the prepared projects list in zuul variables  https://review.openstack.org/49861812:53
pabelangerjeblair: mordred: is something like per region zuul-executors something that could help with infracloud bandwidth?14:00
jeblairpabelanger: theoretically, but there's no way to make that happen until zuul v4.14:03
pabelangerwfm!14:04
dmsimardgood morning o/14:07
leifmadseno/14:42
jeblairclarkb, mordred: https://review.openstack.org/496935  lgtm14:47
jeblairclarkb: https://review.openstack.org/498559 has 2x+2; but i wanted you to see it; can you +3 it?14:54
jeblairpabelanger: can you +3 https://review.openstack.org/498270 ?14:55
dmsimardjeblair: +1 on 498270, we definitely need a mechanism to do things like log recovery on jobs that have timed out14:57
dmsimardjeblair: however, at what point does a "post" job time out ? :)14:57
jeblairdmsimard: for now, it's just the same job timeout value (so we have *something*).  but really we should work out how to add separate pre/post timeouts to jobs.14:58
jeblair(the thing we need to decide there is how that works with multiple pre/post playbooks)14:58
dmsimardjeblair: FWIW I might have discussed this with you a while back where retrieving logs on timeout'd jobs in v2 was not really possible except if you "manually" timeout'd from inside the job to allow time for log recovery14:59
* dmsimard looks for logs14:59
jeblairdmsimard: yes, we intend to correct that with v3 :)14:59
dmsimardAh, yup, I was asking about "postbuildscript" http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-08-08.log.html#t2017-08-08T18:58:5415:00
jeblairdmsimard: so to be clear, when 270 lands, we should have the behavior we all desire -- the main playbook will timeout, and then the post playbook will run.  it also has a timeout -- the same value as the main timeout, but with the timer reset.15:00
jeblairthe thing to fix up in the future is to tune that post timeout so a 2h main job can have, say, a 10m post timeout, rather than a 2h post timeout as now.15:01
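A minimal sketch of the behaviour jeblair describes; this is illustrative shell-out code, not the executor's implementation, and the playbook paths are made up:

```python
# Illustrative only: each playbook phase gets its own timer, and the post
# phase runs even if the main phase timed out. Today the post phase reuses
# the same timeout value; a separate, shorter post timeout is the future tuning.
import subprocess

def run_phase(cmd, timeout):
    """Run one playbook phase with a fresh timeout; False on failure/timeout."""
    try:
        return subprocess.run(cmd, timeout=timeout).returncode == 0
    except subprocess.TimeoutExpired:
        return False

job_timeout = 7200  # e.g. a 2h job
run_phase(['ansible-playbook', 'playbooks/run.yaml'], job_timeout)
run_phase(['ansible-playbook', 'playbooks/post.yaml'], job_timeout)  # timer reset
```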
clarkbjeblair: I think some jobs do depend on JOB_NAME but we weren't writing it to reproduce at all before, so this should be fine15:03
jeblairclarkb: ftr i have no problem breaking any job that depends on JOB_NAME.  i think we've been very clear about that.  :)15:04
clarkbya and reproduce isn't working for them anyways15:05
clarkbI have approved15:05
jeblairw00t15:05
jeblairmordred: i've lost context on https://review.openstack.org/489780  what's going on there?15:09
jeblairmordred: also, maybe we should make https://review.openstack.org/490216 a high priority?15:10
*** lennyb has quit IRC15:12
openstackgerritPaul Belanger proposed openstack-infra/zuul feature/zuulv3: WIP: Add publish-openstack-python-docs to post pipeline  https://review.openstack.org/49884015:14
openstackgerritPaul Belanger proposed openstack-infra/zuul feature/zuulv3: WIP: Add publish-openstack-python-docs to post pipeline  https://review.openstack.org/49884015:17
pabelangerjeblair: we recently restarted zuulv3.o.o?15:21
jeblairpabelanger: not that i'm aware.  I'm waiting for you to approve 270 before i restart it.15:21
pabelangerOh, Hmm15:21
pabelangerhttps://review.openstack.org/498621/ a syntax error for some reason in project-config from zuul15:21
pabelangerjeblair: sorry for the delay on 270, +315:22
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Add gearman server port configuration  https://review.openstack.org/49875315:23
*** lennyb has joined #zuul15:23
jeblairpabelanger: that's a curious error; openstack-doc-build seems to exist in openstack-zuul-jobs.15:25
pabelangerjeblair: ya, it's been there for a while, since it is still calling run-docs.sh, but odd that it just started complaining.15:27
pabelangerI'll check debug.log here in a moment15:27
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Run post playbooks on job timeout  https://review.openstack.org/49827015:30
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Delete .pypirc file at end of task  https://review.openstack.org/49884315:36
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Move .pypirc into tmpfs  https://review.openstack.org/49884415:36
mordredjeblair: I've also lost context on 489780 - there was an issue a while ago - but I don't know what - and I'd rather just change all that anyway15:48
mordredjeblair: how about I abandon it15:48
mordredjeblair: also - oy, I will rebase 490216 right now15:49
jeblairmordred: kk15:49
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Prevent execution of locally overridden core modules  https://review.openstack.org/49021615:50
mordredjeblair: ^^ should be good now15:51
mordredjeblair: (the rebase conflict was a single line)15:51
mordredjeblair: fwiw, I pointed the release team at the release patches yesterday just to give them a headsup15:52
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Add integration test for zuul_stream  https://review.openstack.org/49820915:53
mordredpabelanger, jlk, SpamapS: could I sell any of you on +3ing https://review.openstack.org/#/c/498127 https://review.openstack.org/#/c/498209/ and https://review.openstack.org/#/c/484000/ ?15:53
jeblairmordred, pabelanger, jlk: http://paste.openstack.org/show/619789/15:56
jeblairi'm concerned that the failed reconfiguration there may have left zuul with an incomplete configuration15:57
pabelangereep15:58
mordredjeblair: me too - I also started looking at something related to those rate limit errors a day or two ago15:58
mordredjeblair: I BELIEVE whats going on is that when we get a payload we do a look up to get the integration id and then cache that15:58
mordredjeblair: but we're not first filtering to see if it's a project we're configured to do anything with15:58
pabelangerposted this in openstack-infra by mistake: 2017-08-29 15:08:51,880 DEBUG is the first time 'Job openstack-doc-build not defined' happened in debug.log today15:59
pabelangerso, lines up with reconfigure failure15:59
jlklooking15:59
mordredjeblair: which means that things like those weird payloads from unknown repos may be eating our quota15:59
mordredjlk: oh - there's another 'fun' thing github related - which is that we're getting some payloads from forks of ansible that we don't know anything about15:59
mordredwhich makes me wonder if adding an Application to a project means that forks of that project ALSO send things16:00
jlkthat would be interesting16:02
jlkI have to wonder if forking a repo drags along the integration, but we would need to examine these repos and see what's going on16:02
mordredjlk: one of the forks we got something from seems to be a very old fork :(16:03
mordredone sec, I'll get you a  log snippet16:03
mordredjlk: csc0714/ansible and cn-ansible/ansible16:04
mordredjlk: http://paste.openstack.org/show/619794/16:05
jlkMaybe we're interpreting that payload wrong. In the github app configuration page (as the app owner) you have a pane that will show you the raw payloads of everything that's been delivered. Would be interesting to find that particular payload.16:05
jlkEvery repo is _supposed_ to get their own unique ID16:06
jlkso if forking a repo drags along the apps, it should get a new ID for that new fork. And your app would have gotten an "install" payload saying the repo installed the app16:06
mordredjlk: http://paste.openstack.org/show/619795/ are all of the no installation id messages from the current debug log16:07
mordredjlk: what should I look for in the payloads page?16:09
jlkokay that's saying that we're getting a project name that we don't have a mapped install ID for16:09
jlkuh.16:10
jlkmordred: pull up one of the payloads that mentions that project16:11
jeblair2017-08-24 16:39:24,681 ERROR zuul.GithubConnection: No installation ID available for project csc0714/ansible16:11
jeblairthat's the first time one of those shows up (it also shows up with the other 2)16:11
jeblairi don't see anything interesting immediately preceding it in zuul debug logs16:12
jlkwhat I'm thinking is that zuul is getting a payload, it's extracting what it thinks is the source project of the payload and trying to find a mapping of it in the configuration. We're missing that, so the operation I think ends right there.16:14
mordredyah - so - I think it would be helpful to add debug logging of the X-GitHub-Delivery value - that's how the payloads are listed on the web page16:15
jeblairjlk: i don't think it ends there -- http://paste.openstack.org/show/619796/16:15
jlkmordred: while you're on the github page, can you get a listing of your installations?16:15
jeblairthat's why mordred was thinking it's eating our unauthenticated api counts16:15
jlk"Your Installations" tab.16:15
jlkdoes it show the fork?16:15
mordredjlk: no - that only shows the repos that I own that it's attached to16:16
jlkjeblair: oh interesting. That was a theoretical problem that appears to be true.16:16
jlkjeblair: that routine gets hit when we get a "status" event, something setting a github status on a commit hash16:16
jlkwe have to search github for any pull requests that have that hash as the head.16:17
jlkoh hrm. I wonder...16:17
jlkI wonder if in our search, we're creating github client objects to all these other repos, and that's where we're trying to find an install ID and failing16:18
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Add X-Github-Delivery id to debug logs  https://review.openstack.org/49885316:18
mordredjlk, jeblair: ^^ I think that'll help us track back to a given incoming payload16:18
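Roughly the shape of that change (hypothetical handler and parameter names, not the literal patch): log the X-GitHub-Delivery header so a debug line can be matched against the payload list in the GitHub app's admin page.

```python
# Hypothetical sketch: surface the delivery id GitHub sends with every webhook.
def handle_webhook(headers, body, log):
    delivery = headers.get('X-GitHub-Delivery')
    log.debug("Github Webhook Received: %s", delivery)
    # ... normal event dispatch continues here ...
```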
jeblairwell, it looks like we're getting *one* webhook event, followed by 3 "no installation id" errors, followed by one "multiple pulls" exception16:18
jlkmordred: +216:19
jlknod16:19
jlkjeblair: so one webhook event is the status event, which leads to searching github, and maybe making a bunch of calls to new projects. I'm walking that code right now16:19
jlkgetPullBySha is the func16:19
mordredoh - what do you want to bet there is a PR somewhere that someone has mentioned in a different repo (or maybe that those other repos have already merged) and there is some list in the payload and we're pulling the wrong thing?16:19
jeblairmaybe there are not multiple installs, we're getting a legit webhook event from the repo we're watching, but that pr shows up in 4 projects16:19
jlkjeblair: that's my current hunch16:20
mordredyup16:20
jlkYes yes we do16:20
mordredand then since we don't get results for those, we're not caching them, so we're doing the search every time16:20
jlkwe create a new github client object that's based on the owner/project16:20
jlkthat's when we go hunting for an installID16:20
mordredjlk: so we should maybe limit that to owner/project that we know about in our configuration?16:21
jlkugh, I think this is one thing that GraphQL was supposed to make better16:21
jlkso that we could get all the data we need in the search rather than having to poke at the API again.16:21
jlkmordred: yeah that seems like a quick easy limit. If we don't care about it, don't even look it up16:21
mordredjlk: ++16:22
jlkneed to think on this some more16:22
mordredespecially since we're rate limited - and are already storing an installation id cache - that should get us installation ids for the things we care about pretty quickly16:22
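A sketch of the guard being agreed on here, with hypothetical names; the idea is simply to refuse to spend API quota looking up installation ids for projects Zuul is not configured to care about:

```python
# Hypothetical helper: only look up (and cache) installation ids for
# configured projects; anything else is dropped without an API call.
def installation_id_for(project, configured_projects, cache, lookup):
    if project not in configured_projects:
        return None  # e.g. a random fork of ansible we know nothing about
    if project not in cache:
        cache[project] = lookup(project)  # the expensive GitHub API call
    return cache[project]
```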
jeblairokay, a final question to bring this back around to the error that bombed us -- why is getProjectBranches resulting in unauthenticated api calls?16:22
jlkA status event, I really really really wish github would link that status event to a particular repo16:22
mordredjlk: right?16:23
jlkactually16:23
jlkwait a sec16:23
jlkI'm looking at a status event, and it's in the payload a "name" attribute16:24
jlkwhich I'm pretty sure is the repo in question16:24
jlkand there is a "repository" key too with all kinds of info16:24
jlktrying to page more of this stuff into my head space.16:25
jlkjeblair: I'm not ignoring that question just yet16:25
mordredjlk: yah- I see a "base" key with the information on what the PR is targetted to, which also includes a repo key16:26
mordredjlk, jeblair: http://paste.openstack.org/show/619800/16:27
jeblairjlk: that's fine; i was waiting for a context switch to drop that in, and you just tripped me up by having a eureka moment.  :)16:27
mordredthat is a payload (it's not the one in question, but it's a copy-paste of a full payload from the admin panel)16:27
mordredlooks like it has installation id in the payload, as well as repository16:28
mordredjeblair: yes - I believe we got an incomplete configuration- I'm getting "job multinode unknown" here: https://review.openstack.org/#/c/498209/1316:29
jeblairmordred: yeah, i think i understand how that happened; i'm formulating a response16:31
mordredjlk: oh - that's not a status payload16:31
jlkI'm doing an experiment16:32
jlkoh hrm, I think our v3 is not responding :(16:32
mordredhttp://paste.openstack.org/show/619801/16:33
mordredwe may be at ratelimit for the hour16:33
mordredjlk: I'm tailing log - wanna do the experiment again? or were you saying the bonny v3 isn't responding?16:33
jlkbonny16:34
mordrednod16:34
jlkoh I opened it wrong16:34
mordredthat paste above is a status payload, fwiw16:34
mordredalso, fwiw:16:35
mordred2017-08-29 16:34:44,222 DEBUG zuul.GithubConnection: GitHub API rate limit remaining: 12357 reset: 150402637916:35
mordred2017-08-29 16:34:44,360 DEBUG zuul.GithubConnection: GitHub API rate limit remaining: 55 reset: 150402803316:35
mordredI'm guessing that's auth'd and unauth'd ?16:35
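One way to see the two pools mordred is guessing at is GitHub's public /rate_limit endpoint (this is the GitHub API, not Zuul code); authenticated and unauthenticated requests report very different numbers, and search has its own, much smaller pool:

```python
import requests

# Unauthenticated: the core limit is 60/hour. With credentials the same call
# reports the (much larger) authenticated pools instead.
resp = requests.get('https://api.github.com/rate_limit')
resources = resp.json()['resources']
print(resources['core'])    # {'limit': 60, 'remaining': ..., 'reset': ...}
print(resources['search'])  # search is limited separately
```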
Shrewsmordred: comment on https://review.openstack.org/49862316:37
jlkmordred: it might be yea. Using the APP id is one set of limits. We might be able to be more clear about which is which.16:38
jlkDo you have any installs to sandbox like repos on the infra zuul v3?  I don't want to open something against real ansible for our testing16:38
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Add X-Github-Delivery id to debug logs  https://review.openstack.org/49885316:39
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Write a logging config file and pass it to callbacks  https://review.openstack.org/49812716:39
jlkand I think our zuul v3 is not responding16:39
mordredShrews: for such a simple job I can't quite get it right can I? thanks - fixed16:39
mordredjlk: yes - https://github.com/gtest-org/ansible16:40
jlkah okay16:40
jlkmordred: incoming PR, which should generate a few things, including hopefully a pending status event16:44
jlkhttps://github.com/gtest-org/ansible/pull/416:45
jlkis there a configured pipeline?16:45
mordredyup16:45
mordred2017-08-29 16:45:48,285 DEBUG zuul.GithubConnection: Scheduling github event from github: pull_request16:46
jlkdoes that pipeline have a configuration to set status at all?16:46
jlk(where is the pipeline config for this?)16:46
mordredin openstack-infra/project-config16:47
mordredjlk: http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.yaml16:48
mordredjlk: let's restart zuul scheduler with those patches from above and try again16:48
mordredit's too hard to attempt to find the payload on the gh page without the id16:48
jlkokay, I guess that zuul is busy a bit because i haven't seen the status come through16:48
jlk(or we're hitting the bug in question preventing it from happening)16:48
mordredjeblair: restarting zuul scheduler16:49
jlkFunny that there's status: pending  without quotes, but the other two statuses are quoted: 'success' and 'failure'16:49
jeblairjlk: we can clean that up; quotes are optional in all 3 of those cases16:50
jeblair(i'd prefer to drop them)16:50
mordredscheduler restarted with logging and github delivery id patches applied16:52
mordred(so that means we should also get base pre-playbook logged properly again)16:53
jlkokay will re-open the PR16:53
jlkre-opened, you should get the new logging. I'm concerned that it never reports status16:54
*** electrofelix has quit IRC16:57
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Update config cache only after all cat jobs complete  https://review.openstack.org/49887216:57
jeblairmordred, jlk, pabelanger: that ^ should fix the problem where a failed reconfig causes further dynamic reconfigs to fail.  with that, the github error "only" should have caused us to fail to reconfigure (which, of course, is still a critical error)16:58
jlkmordred: any indication from the logs if / where it's failing to process the reopen event far enough to set a pending status?16:58
mordredjeblair: startup issues - working on it16:59
jeblairjlk: http://paste.openstack.org/show/619804/17:00
jeblair(that also suggests that the code on disk and in memory are not in sync)17:00
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Fix badly named function typo  https://review.openstack.org/49887717:00
jlkseems that got cut off, I don't see the full trace17:00
mordredjeblair: that ^^ - sorry17:00
jeblairjlk: that was the full trace -- that's what i meant by the code not being in sync.17:01
jlkah :(17:01
jeblairjlk: the actual line is probably nearby17:01
pabelangerI know I had some issues getting -e git+https://github.com/sigmavirus24/github3.py.git@develop#egg=Github3.py to install properly, wonder if puppet is failing some how now.17:02
mordredok. scheduler running again17:02
pabelangeralso, we really should ask sigmavirus24 for a release. I tried reaching out to him / her, but didn't hear anything back. Maybe others will have a better chance17:02
jlkhrm I wonder if it's the new line that we just added, request.headers.get17:02
jlkI just sent a "recheck" comment, which should trigger the pipeline17:03
mordredcool17:03
jeblairjlk: is this interesting? http://paste.openstack.org/show/619806/17:04
mordred2017-08-29 17:03:35,202 DEBUG zuul.GithubWebhookListener: Github Webhook Received: fe3b9e94-8cdb-11e7-99e4-7f96d0d39b8a17:04
mordredwoot17:04
jlko_O17:04
mordredthat's actually not your thing17:05
jlkjeblair: somewhat. 2 things.17:05
jlkjeblair: 1) we do not currently handle review_comments (because those are different than just a comment on the pull request.)  2) I thought we gracefully dropped things we don't handle events for, looks like there is a problem in that graceful handling17:05
*** hashar is now known as hasharDinner17:06
jlkwell, I think I got the data I need from our v2 install17:09
jlkwhat I wanted to verify is that the "name" key in the status payload is the target repo, not the forked repo. We can narrow our search for PRs by just adding that into the search terms.17:09
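A hypothetical sketch of that narrowing, shown against the raw search API rather than the github3.py calls Zuul actually uses: include the repo from the status payload as a repo: qualifier so the query no longer matches PRs in unrelated forks.

```python
import requests

sha = '8e6c0ca5996bf1057ec346d68ed85eec8b25ca11'
repo = 'ansible/ansible'  # taken from the status payload's repository info
query = '%s repo:%s type:pr is:open' % (sha, repo)
resp = requests.get('https://api.github.com/search/issues',
                    params={'q': query},
                    headers={'Authorization': 'token <api token>'})
pull_urls = [item['html_url'] for item in resp.json()['items']]
```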
mordredjlk: I'm not seeing webhooks for those recheck comments at all17:09
jlkokay I'll close/open17:09
jeblairjlk: http://paste.openstack.org/show/619807/ there's the correct traceback17:10
mordredjlk: also, we should figure out comments if we're not getting them17:10
jlkyes17:10
jlkjeblair: oh that makes some sense. We found no commit, so trying to do something on a None. boo.17:10
jlkprobably because we're failing to find the pr or something17:11
mordredjlk: http://paste.openstack.org/show/619808/17:11
mordredjlk: that's the payload from you reopening the pr17:12
jlkyeah but did it do anything on zuul side?17:12
mordredjlk: yes - it triggered something then hit api-rate limit issue17:13
mordredone sec17:13
mordredjeblair: I can't get the scheduler started with 'service zuul-scheduler start' - but I can if I run it with -d on the command line17:13
mordredjlk: but17:14
mordredgah17:14
mordredjeblair: but I also don't see any errors when it doesn't start - any thoughts?17:14
pabelangerpidfile still exist?17:14
jeblairmordred: systemd may think it's still running; throw some 'stop' and 'status' at it.17:14
openstackgerritJesse Keating proposed openstack-infra/zuul feature/zuulv3: Limit github PR search to project status is from  https://review.openstack.org/49887917:14
jlk^^ that may help a lot17:14
pabelangerya, I've found systemd using init.d script pretty odd. I've been meaning to write systemd unit files for zuul17:15
mordredthat was it17:15
jeblairmordred: (i think especially if you stop it any way other than via systemd, systemd gets confused)17:15
mordredpabelanger: yes please. the half-way-in-between state we're in now is pretty brittle. I don't like systemd, but I like systemd managing my non-systemd init things even less17:15
mordredjeblair: yah17:15
jeblairpabelanger: after ptg please :)17:16
pabelanger:)17:16
mordredjlk: http://paste.openstack.org/show/619810/17:18
jeblairpabelanger: some comments on 49858817:18
mordredjlk: that's the log entries for the pr being opened17:18
jlkSo I suspect what's happening is that a status for ansible/ansible comes in, mentioning a particular sha. Zuul then searches all of github for any pr that is open that has the sha as part of the PR, and it's finding other forks of ansible that have open PRs (why the fork has a PR I don't know) that also have that hash and it's getting confused17:19
jlkmordred: oh, boo. we're just hitting API limits.17:20
mordredjlk: maybe those forks just have the sha in their master branch? like it's a fork that's carrying local changes?17:20
jlkprobably because we're over searching17:20
jlkmordred: I'm not sure yet, I'm going to replicate the search and see what I get17:20
mordredjlk: yah - because of the other fork thing17:20
mordredjlk: cool17:20
jlkgotta find that line somewhere in the IRC log17:20
jeblairjlk: if we narrow that search to just the project we're interested in, can that search use the auth api limits?17:21
mordredjeblair, pabelanger: y'all ok with me restarting ze01 executor to pick up logging changes?17:21
jlkjeblair: maybe?17:21
jlkso I re-did one of the searches we logged about having multiple hits on.17:21
jlkhttps://github.com/search?utf8=✓&q=8e6c0ca5996bf1057ec346d68ed85eec8b25ca11+type%3Apr+is%3Aopen&ref=simplesearch17:21
jlkinterestingly enough, it doesn't find the ansible/ansible one17:22
jlkmaybe because it's closed?17:22
pabelangermordred: no issue here17:22
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Limit github PR search to project status is from  https://review.openstack.org/49887917:22
jlkjeblair: I think I already did that..17:22
jlkoh I see what you're doing.17:23
jeblairjlk: i fixed a small thing in your change so you can keep digging17:23
jlknod17:23
jeblairjlk: it also doesn't return anything from gtest-org/ansible17:24
jlkOkay, so https://github.com/ansible/ansible/pull/26282 was the trouble PR17:24
mordredjeblair: I did 'service zuul-executor stop' on ze01 and still show 5 zuul-executor processes running - should I figure out what they're doing?17:24
jlk11 hours ago there was a status that came in17:24
jlkon that hash17:25
mordredjeblair: stat("/var/lib/zuul/builds/f90ee26921054a2caca95a36a3dbabba/work/logs/job-output.txt17:25
mordredjeblair: I think they're orphaned log streaming processes17:25
jeblairmordred: :(17:25
mordredjeblair: oh - that build dir is still there17:25
jeblairmordred: keep!17:26
jlkand the three open PRs it found were um... huge?17:26
jeblairmordred: i bet 'keep' causes our log streaming cleanup detection to fail17:26
mordredjeblair: yah. I'm guessing keep is on - so at shutdown we're not deleting those dirs so the log streaming doesn't notice that the job has stopped via the log file going away17:26
mordredjeblair: yup17:26
jlksomebody was struggling with github I think. Opening PRs to merge upstream devel with downstream fork devel.17:26
jeblairmordred: anyway, i think you can kill 'em.17:27
mordredjeblair: cool - we can fix that edge case later17:27
jeblairmordred: and i don't need keep anymore; no need to turn it back on when you restart17:27
jlkre search rate limits17:28
jlk"The Search API has a custom rate limit. For requests using Basic Authentication, OAuth, or client ID and secret, you can make up to 30 requests per minute. For unauthenticated requests, the rate limit allows you to make up to 10 requests per minute."17:28
jlkcrap I had a disconnect, what was the last you all saw from me?17:29
mordredjeblair: ok. executor restarted with logging changes and new version of ara17:30
mordredjlk: to make up to 10 requests per minute."17:30
jlkokay. Yeah. So, I think we definitely need to re-work this a bit to use authe'd github17:31
jlkand... and maybe we could stop searching and instead just get a dump of all the PRs of the target repo instead, and examine them to see what the HEADs are17:31
jlkI realize that our Depends-On stuff is using search as well17:32
jlkso every time we deal with a PR we're going to be hitting the search API :(17:32
mordredjlk: yah - that seems less scalable than we'd like17:32
*** Shrews has quit IRC17:33
jlkthankfully we cache that info17:33
jlker rather we cache the PR, it's unclear to me just yet if we'd search EVERY time we get an event regarding the same PR17:33
*** Shrews has joined #zuul17:34
mordredjlk: yah. so - if we did a single search of all PRs when we start and maintain a cache, then update that cache whenever we get an event - we should be able to search almost never at the cost of a big search at start?17:35
mordredjlk: or something17:35
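A rough sketch of that caching idea (hypothetical structure, not Zuul's actual cache): remember which PR a sha resolved to so that repeated status events for the same sha never cost another search-API request.

```python
# Hypothetical sha -> PR cache; 'search' is whatever expensive lookup is used.
_pr_by_sha = {}

def pull_for_sha(repo, sha, search):
    key = (repo, sha)
    if key not in _pr_by_sha:
        _pr_by_sha[key] = search(repo, sha)  # the rate-limited search call
    return _pr_by_sha[key]

# pull_request webhook payloads can update the cache directly, so the
# search only runs for shas that have never been seen before.
```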
jlknot exactly what I was thinking.17:35
jlkI think I can eliminate the search we do when we  get a status event17:36
mordredoh - neat17:36
jlknot touching the Depends-On bit yet17:39
jlkjeblair: don't merge that change, I'm going to try something different17:39
mordredjeblair: v3 is running weird and I think I may want help looking at it17:41
mordredjeblair: https://review.openstack.org/#/c/498877 doesn't seem to be being enqueued - neither via an additional +A or a recheck comment17:41
jlkcrap, gotta go my car is ready. I'll be back at this in 30 minutes or so17:42
*** maxamillion has quit IRC17:53
Shrewshrm, we have some project-config zuulv3 jobs that are misconfigured. luckily, doesn't look like they are used17:57
*** maxamillion has joined #zuul17:57
Shrewshttps://review.openstack.org/49889518:00
mordredShrews: thanks - that was my oops18:05
mordredjeblair: ok - I think I've got things restarted happy - I think we were stuck in a weird place with bogus config due to the ratelimit error which was causing jobs to not trigger or something18:10
mordredjeblair: you may want to look at executor-debug.log - as things were starting up there was a pile of:18:10
mordred2017-08-29 18:09:25,459 DEBUG zuul.ExecutorServer: Finished updating repo github/gtest-org/ansible18:10
mordred2017-08-29 18:09:25,536 DEBUG zuul.ExecutorServer: Got cat job: a6cf8c99cdc4467fa35eeda885e188f318:10
jeblairmordred: hrm; i was just looking into the 498877 error and i didn't see any signs of a bogus config18:11
jeblairmordred: what's unexpected about gtest-org/ansible cat jobs?18:12
mordredjeblair: nothing necessarily - there were just a lot of them one after the other, so it triggered my eyeball 'maybe it's an issue'18:12
jeblairmordred: ansible has a lot of branches18:12
mordredjeblair: maybe it's just that when i've seen similar before in the scheduler log it says something about the branch it's doing - whereas this was just a sequence of cat jobs that seemed identical in the executor log18:13
mordredjeblair: not important - just a thing to know as an admin that that's not a sign something is stuck in a loop18:13
jeblairmordred: well, i don't know what was wrong earlier, but 498877 is working now18:14
mordredjeblair: yes18:14
jeblairmordred: next time it'd be nice to figure out what's going on though18:14
mordredjeblair: I agree18:15
jeblairmordred: next time, please avoid restarting until we've diagnosed the problem18:15
mordredjeblair: will do18:16
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Make a docs-on-readthedocs project-template  https://review.openstack.org/49890318:30
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Make a docs-on-readthedocs project-template  https://review.openstack.org/49890318:43
jeblairmordred: https://review.openstack.org/498877 is all about some post fail18:44
jeblairmordred: is that because of the bug that change is fixing?18:44
jeblairi'm rechecking and trying to catch the error on stream18:47
Shrewsjust happened on https://review.openstack.org/498626 too18:48
ShrewsCould not clean up: 'AraCli' object has no attribute 'ara_context'18:50
Shrews??18:50
jeblairdmsimard: are we using your new point release?18:51
dmsimardjeblair: the new dot release would be 0.14.118:51
jeblair2017-08-29 18:51:41.727936 | localhost | module 'logging' has no attribute 'config'18:51
jeblairthat's the line before what Shrews pasted18:52
dmsimardjeblair: I guess it would need to be updated on the executor ?18:52
dmsimardIt's probably not automatically updated ?18:52
Shrewshttp://paste.openstack.org/show/619822/18:52
dmsimardIt's not installed for every job, right ?18:52
jeblairara==0.14.118:52
jeblairdmsimard: it's updated whenever we update zuul18:52
jeblairwhich is every time we land a change :)18:52
dmsimardok let me see18:53
dmsimardbah there's no logs in that job ?18:53
dmsimardyou're looking from the executor directly ?18:53
jeblairdmsimard: no logs on any job.  Shrews and i were streaming output18:54
jeblairdmsimard: but that's all there is; there's no traceback18:54
dmsimardokay18:54
dmsimardjeblair: the log config just landed in zuul too right ?18:54
* dmsimard looks at that patch again18:54
dmsimardthis one https://review.openstack.org/#/c/498127/18:55
openstackgerritDavid Moreau Simard proposed openstack-infra/zuul feature/zuulv3: Add integration test for zuul_stream  https://review.openstack.org/49820918:55
dmsimardrebased this patch which has integration tests to see what happens out of curiosity18:56
jeblairdmsimard: no job will upload logs at this point18:56
dmsimardhttp://zuulv3.openstack.org/static/stream.html?uuid=102ad366d10a4c74ac138646cc2bd2c1&logfile=console.log18:56
jeblairdmsimard: but if that job outputs something useful in the stream, you'll see it there18:57
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Remove bindep_command and bindep_fallback references  https://review.openstack.org/49891318:57
jlkback now18:58
jeblairremote:   https://review.openstack.org/498915 Import logging.config as well as logging18:59
jeblairmordred, dmsimard: ^18:59
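The underlying Python detail, for anyone following along: logging.config is a submodule, so importing logging alone does not make it available unless something else in the process has already imported it; 498915 adds the explicit import.

```python
import logging
# logging.config.dictConfig(...)  # AttributeError: module 'logging' has no
#                                 # attribute 'config' -- the error seen above

import logging.config             # the explicit import 498915 adds
logging.config.dictConfig({'version': 1})
```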
dmsimardbleh19:00
dmsimardjeblair: FWIW, full logs http://paste.openstack.org/raw/619824/19:00
dmsimardthe integration test passed19:00
dmsimardbut the "real" post job failed19:00
dmsimardnot sure what's going on there19:00
dmsimarder, for some reason paste.o.o didn't pick up the whole thing19:01
dmsimardhttps://paste.fedoraproject.org/paste/jE8DDlAuHKmkssVK2cZnVw/raw19:01
jeblairdmsimard: does that job run "ara generate html" ?19:01
dmsimardjeblair: it does: 2017-08-29 18:59:14.738256 | TASK [Generate ARA html]19:02
jeblair(infra team meeting starting now in #openstack-meeting ; i have a zuulv3 topic about git caching)19:02
jeblairdmsimard: does it use the new logging config?19:02
dmsimardThe first one passes (where the controller node runs it) and the second one fails (where the executor runs it)19:02
dmsimardjeblair: I assume so, I rebased the patch19:02
dmsimardjeblair, mordred: added a comment in https://review.openstack.org/#/c/498915/19:16
mordreddmsimard, jeblair: I think I know why the integration test works and prod doesn't19:19
dmsimardgo on19:19
mordredwe pass the env var to the invocation of ansible-playbook which sets it up properly for the callback plugin19:20
dmsimardbecause that's the kind of issue I want to avoid with integration tests19:20
mordredbut we do not pass it to the generate html command19:20
dmsimardnot sure I follow who's doing what, you mean the executor doesn't currently pass the required env var when running the generate command and we do it in the integration tests ?19:25
mordredhrm. no - I'm back to being confused. we pass and don't-pass things consistently across the two19:27
mordredwe don't set the env var for generate html in either place19:28
mordredand we set the env var for ansible-playbook in both places19:28
jlkhrm, what would I need to put into a logging config so that it just spews stuff to stdout?  It used to do this a while ago, but since I last ran my dockers, everything is by default going into a log file19:29
dmsimardmordred: where does this end up running, the executor ? https://github.com/openstack-infra/project-config/blob/master/playbooks/base/post-logs.yaml19:42
mordreddmsimard: yes. that runs on the executor - but via ansible, so env vars passed to the invocation of ansible-playbook should not pass through19:42
openstackgerritPaul Belanger proposed openstack-infra/zuul-jobs master: Create upload-afs role  https://review.openstack.org/49858819:43
dmsimardmordred: I guess we emulate the same thing here where the logging config is set for the ansible-playbook task but not the generate task https://review.openstack.org/#/c/498209/14/playbooks/zuul-stream/functional.yaml19:44
mordreddmsimard: yah19:44
mordredjlk: we actually JUST landed a change to how that works19:45
dmsimardmordred: we're absolutely positive both zuul and ara are up to date on the executor, was it reloaded ? does it need a *restart* or something ?19:45
dmsimardmordred: otherwise I'm missing something obvious as to why it works in the integration job and not "fo real"19:45
mordreddmsimard: the executor has been restarted after being re-installed19:45
mordredyah. this is a thing that very much confuses me19:45
dmsimardmordred: the log config generation, does that happen automatically when running the executor ?19:46
dmsimardmordred: because we seem to be doing it explicitly in the job19:46
mordredyes. the executor generates the log config19:46
mordredand then passes the path to it via the env var19:46
dmsimardand I guess the logging patch is effective on the executor because we're no longer missing the first couple lines from the output and we're not seeing the alembic/migration/ara logging19:48
jlkhaha I have a file named /var/log/zuul/{server}.log19:48
dmsimardjlk: oops19:48
dmsimardbad substitution somewhere :)19:49
dmsimardthat makes me remember the days where ansible created a literal '$HOME' file19:49
dmsimardmordred: yup, I just re-read the entire integration job log and I don't see any hints.19:50
mordredjlk: https://etherpad.openstack.org/p/sI8hoI3Tah19:50
mordredjlk: that should get you "log everything to stdout"19:51
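Roughly the kind of config the etherpad points at; an illustrative dictConfig equivalent (not necessarily the etherpad's format or exact contents) that sends everything to stdout at DEBUG:

```python
import logging.config

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        'console': {'format': '%(asctime)s %(name)s %(levelname)s: %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler',
                    'stream': 'ext://sys.stdout',
                    'formatter': 'console',
                    'level': 'DEBUG'},
    },
    'root': {'handlers': ['console'], 'level': 'DEBUG'},
})
```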
jlkand that goes in a logging config file somewhere?19:51
jeblairjlk, mordred: if you pass -d shouldn't it go to stdout anyway?19:52
jeblair(i'm trying to figure out why going to stdout *and* backgrounding are desirable)19:53
jlkhrm.19:53
jlkyeah I'm running -d and -c /path/to/conf19:53
mordredjeblair: yes. with -d it SHOULD go to stdout19:53
jlkyou'd thing -d it should. but it's not19:53
jlkit's defaulting to writing to a {server}.log file19:54
mordredwell- it certainly shouldn't do that :)19:54
dmsimardmordred: any way we can test that jeblair's patch fixes the issue before I go ahead and rush a release ?19:54
mordredjlk: does your zuul.conf file you're pointing to with -c have a logging config in it?19:55
jlkno, those lines are commented out19:55
mordreddmsimard: I mean - I have tested that trying to use logging.config without importing it is an error19:55
jlk(side note, just realized that a webhook_token is now required. drats)19:55
mordredjlk: we also updated the names of the parameters - not sure if that happened while you were away or not19:56
mordredjlk: app_key= app_id= and webhook_token= now19:56
dmsimardmordred: I believe you, just still wish we could reliably reproduce this in the integration test (which is the whole point)19:56
jeblairmordred: i spot the logconfig error19:56
dmsimardmordred: can we run the real *executor* instead of just "ansible-playbook" 619:57
dmsimards/6/?/19:57
mordreddmsimard: that's a WAY more complex thing to do - very unlikely we'll get that done before PTG19:57
dmsimardmordred: the fact that we can do it is already good news, but sure, after the ptg19:58
openstackgerritJames E. Blair proposed openstack-infra/zuul feature/zuulv3: Fix typo in ServerLoggingConfig  https://review.openstack.org/49892219:58
jeblairmordred, jlk: ^ that should fix the {server} thing19:58
mordredjeblair: I agree - but I'm confused as to why jlk hit that if he doesn't have logging config configured19:59
dmsimardmordred: but I mean, by exercising the executor instead of ansible-playbook, we would probably be able to reproduce the error19:59
jeblairmordred: that's the default?19:59
jlkthat fixes the file names indeed20:00
jlkwhich is certainly part of the problem :D20:00
mordred            if hasattr(self.args, 'nodaemon') and not self.args.nodaemon:20:00
mordred                logging_config = logconfig.ServerLoggingConfig()20:00
mordredoh - well there it is20:00
jlkah20:01
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Fix a backwards boolean comparison for nodaemon logging  https://review.openstack.org/49892320:01
jlklogic reverse20:01
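For context on the flip: the pasted check takes the shown branch only when nodaemon is false, i.e. when daemonizing, so a foreground -d run falls through to the other branch, the one that sets up the /var/log/zuul/{server}.log file defaults - hence log files instead of stdout; the fix merged as 498923 inverts the comparison so -d gets console output and only the daemonized case uses the file-based defaults.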
jlkfiring up here with that change20:02
mordredjeblair: ok. with those two patches that issue shoudlbe fixed.20:02
jlkhrm.20:03
jlkso with that change, I still get log files written to20:03
jlkrather than console out20:03
mordredjlk: when you run with -d ?20:03
jlkyeah20:04
jlk /srv/zuul/bin/zuul-scheduler -d -c /run/secrets/zuul.conf20:05
mordredjlk: can you slam in a print in setup_logging in zuul/cmd/__init__.py to see what   hasattr(self.args, 'nodaemon') and  self.args.nodaemon are?20:06
jlksure20:06
jlkerm20:09
mordredjeblair: I have set keep on on ze0120:09
jlkwell. I don't have log files written out20:09
mordredjlk: but you still have no output on stdout?20:09
jlkyeah20:10
mordredgrump20:10
jlknot even the print statement. Not sure where that went20:10
mordredjlk: are your dockers eating stdout somehow perhaps?20:10
jlkI don't think so, I get plenty of stdout for zookeeper, and I was getting tracebacks on stdout20:10
jlkmaybe did the default level of logging change?20:11
jlklike I need to add more verbosities ?20:11
jlkthere does not appear to be a command line option for more verbosities :(20:11
mordredjlk: zuul/ansible/logconfig.py line 83 - bump that to DEBUG (and sorry for the hassle here)20:13
mordredjlk: we have different levels set for different things, but then the console output is filtering to warning20:14
jlkno worries, I had getting my dockers going again as a backburner that I let sit too long20:14
jlkhasattr is True, and the args.nodaemon is True20:15
jlkboth boolean20:15
mordredok. I think it's the other thing20:15
jlktrying the debug20:16
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Set better defaults for server logging  https://review.openstack.org/49892820:16
jlkoh yeah that looks familiar20:16
jlkalthough20:16
jlkevery message is duplicate20:16
mordredjlk: ^^ that patch does a few more things20:16
jlkthere's two lines printed for every entry20:16
mordredyah - one sec20:17
jeblairmordred: can you make the defaults for -d be debug?  that was the old behavior20:18
jeblair(and i think desirable)20:18
mordredjlk: https://review.openstack.org/#/c/49863520:18
mordredjeblair: yes I can - one sec20:18
dmsimardmordred, jeblair: 0.14.2 pushed20:19
pabelangerknown issue? Could not clean up: 'AraCli' object has no attribute 'ara_context'20:20
jeblairpabelanger: yes.  scrollback.20:21
pabelangerThanks, see it now20:21
jeblairpabelanger, mordred, dmsimard: i will update and restart ze0120:21
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Raise default logging level to debug if nodaemon is passed  https://review.openstack.org/49893120:22
dmsimardjeblair: 0.14.2 on pypi but surely not on -infra mirrors yet, not sure if the executor node uses the mirrors20:23
jeblairdmsimard: nope, pyp20:23
dmsimardok20:23
jeblairactually, we shouldn't need a restart should we?20:23
dmsimardjeblair: thanks for finding (and fixing!) the issue20:23
jeblairi upgraded ara20:24
dmsimardfor the restart, not entirely sure how it's loaded into the executor.20:25
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Remove default root handler fallback to console  https://review.openstack.org/49863520:25
mordredjeblair: I don't tihnk we're doing the copy trick with ara yet20:25
mordred26885120:26
mordredgah20:26
mordredjeblair: which is to say I do not think you need to restart20:26
jlkmordred: those three recent changes look good here20:27
mordredwoot20:27
dmsimardjeblair: so, circling back to v3 integration testing with ara -- this is the kind of issue I hope to be catching either through executor integration tests (which, to my understanding, we don't have yet), zuul_stream tests (WIP) and integration tests directly from ara's gate20:27
dmsimardI feel bad about ara breaking the gate a few times for what seems like things that could've been relatively easily caught20:27
*** jkilpatr has quit IRC20:28
dmsimardthe other challenge is properly gating what ends up in zuul-jobs (especially things from base roles that include roles from zuul-jobs)20:30
mordreddmsimard: yes - I totally agree with that (and I'm sad that the integration test we set up did not catch this, and I still want to know why)20:31
mordredlike - it's not like we didn't actually set up a specific job just for this case ... which makes me extra sad it broke20:31
jeblairdmsimard: no worries.  we know what we signed up for.  i'm still happy to fix it after the ptg20:34
jeblairthe job i'm watching just passed ara-emit-html20:34
jeblairhttp://logs.openstack.org/77/498877/1/check/tox-pep8/79f316d/ara/20:35
dmsimardyay20:35
dmsimardwell, that's a relief20:35
mordredjeblair: woot!20:35
jlkwell, poop.20:42
jlkI can't see where scheduler is sitting and spinning on processing of github events20:42
*** jkilpatr has joined #zuul20:47
dmsimardclarkb, jeblair, mordred: so now that inventory/overlay will be landing, any other tree you want me to bark at ?20:52
dmsimardfix_disk_layout is also about to land, just missing a +3 https://review.openstack.org/#/c/496935/20:52
mordreddmsimard: done20:54
dmsimardI was thinking just now that we should probably consider pinning to Ansible<2.4 instead of Ansible >=2.3.0.020:56
dmsimardSo they don't break stuff in our faces, makes sense ?20:56
clarkbin d-g its an ==20:57
clarkbor do you mean with zuul?20:57
dmsimardIn Zuul, yes20:58
dmsimardzuul currently has >=2.3.0.020:58
openstackgerritMerged openstack-infra/zuul-jobs master: Add trigger-readthedocs job  https://review.openstack.org/49862620:59
dmsimard2.4 changes a LOT of things, I haven't yet spent time getting ara to work with it but I have non-voting jobs on devel (2.4) and they're broken20:59
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Fix badly named function typo  https://review.openstack.org/49887721:00
dmsimard2.4 is still planned for mid september, with a first release candidate sept 6th21:00
* dmsimard sends a patch, we can discuss it there if need be21:00
jlkblah anybody got tips on signing the json data going into a requests call?21:02
openstackgerritDavid Moreau Simard proposed openstack-infra/zuul feature/zuulv3: Pin Ansible to <2.4  https://review.openstack.org/49894121:05
jlkn/m got it21:06
*** hasharDinner is now known as hashar21:07
jlkSo I guess there is a question21:25
jlkthat I'm about to answer with data21:26
jlkOKAY GOOD NEWS21:31
jlkfetching all the pulls of a PR does not decrease the rate limit by the # of pulls.21:32
jlkand getting the same event a second time hits all the good cache points and we don't reduce the rate limit at all21:32
jeblairjlk: yay!21:34
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Pin Ansible to <2.4  https://review.openstack.org/49894121:34
*** olaph1 has joined #zuul21:42
*** olaph has quit IRC21:42
openstackgerritJesse Keating proposed openstack-infra/zuul feature/zuulv3: Improve function to find PR from commit status  https://review.openstack.org/49895721:46
jlkjeblair: mordred ^^ that should help somewhat with search API limits and avoid trying to hit repos we don't care about.21:48
mordredjlk: looks great - other than tests failing21:56
*** olaph1 is now known as olaph22:05
*** dkranz has quit IRC22:08
*** hashar has quit IRC22:09
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add mirror-workspace-git-repos role  https://review.openstack.org/49896722:23
jlkoh hah22:23
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add start-zuul-console role  https://review.openstack.org/49896822:26
openstackgerritJesse Keating proposed openstack-infra/zuul feature/zuulv3: Improve function to find PR from commit status  https://review.openstack.org/49895722:32
mordredjlk: that'll do it :)22:53
openstackgerritJesse Keating proposed openstack-infra/zuul-jobs master: Add a role to remove an ssh private key  https://review.openstack.org/49853022:57
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add mirror-workspace-git-repos role  https://review.openstack.org/49896723:11
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add start-zuul-console role  https://review.openstack.org/49896823:11
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add mirror-workspace-git-repos role  https://review.openstack.org/49896723:14
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add start-zuul-console role  https://review.openstack.org/49896823:14
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add mirror-workspace-git-repos role  https://review.openstack.org/49896723:16
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add start-zuul-console role  https://review.openstack.org/49896823:16
mordredjeblair: jlk's patch https://review.openstack.org/498957 is good to go and should help with our ratelimit fun23:18
mordredjlk, jeblair: also, so that we don't lose track of the stack, https://review.openstack.org/#/c/498635 and the 3 before it are ready - and the first three each have a +2 from one of you23:22
mordredjeblair: with your latest copying patch, you should be able to make the devstack-legacy job base on base-test right? or were you thinking of synthetically testing that instead23:23
jeblairmordred: i was going to rebase devstack-legacy23:23
mordredjeblair: cool23:25
pabelangerhttps://review.openstack.org/498588/ is ready for review again, that is our upload-afs role23:25
pabelangerhttps://review.openstack.org/498621/ is our new publish-openstack-python-docs job too23:26
pabelangeractually, I still see an issue with 498621, let me update quickly23:26
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Improve function to find PR from commit status  https://review.openstack.org/49895723:33
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Fix a backwards boolean comparison for nodaemon logging  https://review.openstack.org/49892323:36
mordredpabelanger: speaking of - https://review.openstack.org/#/c/498623/ has a publish-to-pypi project-template - since by the time we land it we'll have all the bits we need for that23:36
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Set better defaults for server logging  https://review.openstack.org/49892823:37
pabelangermordred: nice23:37
mordredpabelanger: so - if I can work down my current stack, I believe we'll be 100% done with publish-to-pypi - complete with constraints patches and release announcements!23:38
pabelangermordred: woot!23:38
pabelangerI am hoping afs jobs won't be far behind23:39
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Raise default logging level to debug if nodaemon is passed  https://review.openstack.org/49893123:39
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Remove default root handler fallback to console  https://review.openstack.org/49863523:40
openstackgerritMerged openstack-infra/zuul-jobs master: Create upload-afs role  https://review.openstack.org/49858823:41
mordredpabelanger: +10023:41
mordredjeblair, pabelanger: it's still in check, but https://review.openstack.org/#/c/498903 should be good to go now (pre-reqs have landed so recheck works)23:42
openstackgerritMerged openstack-infra/zuul-jobs master: Add mirror-workspace-git-repos role  https://review.openstack.org/49896723:42
openstackgerritMerged openstack-infra/zuul-jobs master: Add start-zuul-console role  https://review.openstack.org/49896823:43
openstackgerritMerged openstack-infra/zuul-jobs master: Make a docs-on-readthedocs project-template  https://review.openstack.org/49890323:50
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Fix typo in ServerLoggingConfig  https://review.openstack.org/49892223:50
