Friday, 2018-02-02

corvusSpamapS: remote:   https://review.openstack.org/540184 Send a NOOP after registering a function00:12
SpamapScorvus: reading gearmand.. looks like it does not wake up workers on registration.00:19
SpamapSOnly SUBMIT's00:19
SpamapSor PRE_SLEEP's00:19
SpamapS(as in, it will respond to a PRE_SLEEP with a NOOP if there are jobs in the worker's functions)00:19
corvusSpamapS: oh interesting, so geard currently has the same behavior00:20
SpamapSso geard appears to be doing the right thing.00:20
corvusor the consistent thing, if not the right thing :)00:20
SpamapSI think it makes sense too00:20
SpamapSCAN_DO doesn't mean "and I have nothing to do", so one can reasonably expect either GRAB_JOB* or PRE_SLEEP after a CAN_DO.00:20
corvusSpamapS: which, current behavior, or noop-after-cando?00:20
* SpamapS has reversed himself :-P00:20
SpamapSI think current behavior makes sense, now that I've thought about it.00:21
SpamapSThe protocol seems more or less silent on this, as usual. I don't see that it would hurt to NOOP after CAN_DO, but ... I think it's ok to wait for the PRE_SLEEP/GRAB_JOB.00:22
corvusSpamapS: i still think noop-after-cando makes sense, for the same reason.  they are independent.  :)  but that's probably because i see the state machine as persisting across a cando packet.  ie, if you're 'sleeping' before cando, then you're still sleeping after cando.  if you see a cando packet as indicating the connection is 'awake', then i agree, it makes sense for the connection to go back to sleep.00:24
corvusor put another way, the question is -- once the client goes to sleep, can only the server wake it up, or can it wake itself up?00:26
SpamapSI believe it can wake itself up.00:27
corvusso 2 pre_sleeps in a row are okay?00:27
SpamapSBut they don't tend to. :)00:27
corvusSpamapS: feel free to -1 that change, and i can make another so that when a client does registerFunction, it also does a pre_sleep to poke the server to poke it back00:28
SpamapSthe gearmand code doesn't mind if you sleep twice. it will just do the same thing.00:28
corvus(i think that's the other thing that makes me favor noop-on-cando -- it's a bit more efficient, since the server has all the info)00:29
SpamapSI mean, it's also worth noting, a sleeping worker is perfectly fine to just GRAB_JOB00:30
SpamapSthe point of the NOOP is to say "that might be successful now"00:30
SpamapSSending NOOP doesn't actually change the state of the worker to "not sleeping".00:31
SpamapSin fact, GRAB_JOB is the only thing that changes to that state.00:31
corvusthat's a good point00:38
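The wake-up semantics worked out above can be sketched as a toy model. This is illustrative only, not the gearmand or geard implementation: the packet names (CAN_DO, SUBMIT_JOB, PRE_SLEEP, NOOP, GRAB_JOB) match the protocol, but the classes and data structures are invented for clarity.

```python
class ToyGearmanServer:
    """Toy model of the gearman server-side wake-up behavior discussed above."""

    def __init__(self):
        self.queues = {}          # function name -> list of queued jobs
        self.sleeping = set()     # connections that sent PRE_SLEEP
        self.registrations = {}   # connection -> set of function names

    def can_do(self, conn, function):
        # Registering a function does NOT wake anyone (current
        # gearmand/geard behavior): no NOOP is sent here.
        self.registrations.setdefault(conn, set()).add(function)

    def submit_job(self, function, job):
        # SUBMIT_JOB triggers a NOOP to sleeping workers registered for
        # this function; per the discussion above, the NOOP itself does
        # not change the worker's state -- only GRAB_JOB does.
        self.queues.setdefault(function, []).append(job)
        return [c for c in self.sleeping
                if function in self.registrations.get(c, set())]

    def pre_sleep(self, conn):
        # PRE_SLEEP gets an immediate NOOP if work is already queued for
        # any of the worker's functions; otherwise the worker sleeps.
        # Sleeping twice in a row is harmless.
        for function in self.registrations.get(conn, set()):
            if self.queues.get(function):
                return 'NOOP'
        self.sleeping.add(conn)
        return None

    def grab_job(self, conn):
        # GRAB_JOB is the only packet that leaves the sleeping state,
        # and a sleeping worker may send it at any time.
        self.sleeping.discard(conn)
        for function in self.registrations.get(conn, set()):
            if self.queues.get(function):
                return self.queues[function].pop(0)
        return 'NO_JOB'
```

A run through the model reproduces the points made above: CAN_DO sends no NOOP, double PRE_SLEEP is fine, and NOOP is only a hint that a GRAB_JOB might now succeed.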
openstackgerritMerged openstack-infra/zuul master: Support the fragment form of Gerrit URLs  https://review.openstack.org/53970500:49
SpamapSGearman is like the game of go. Simple rules. Lifetime to master.00:51
SpamapSso far so good with my 44-repo job....00:58
SpamapScorvus: ^^ however, creating zuul refs definitely makes the executor very busy when those jobs start.01:00
*** rlandy|biab is now known as rlandy01:05
*** rlandy has quit IRC01:31
*** yolanda_ has joined #zuul02:06
*** yolanda has quit IRC02:07
*** lennyb has quit IRC03:22
openstackgerritChangBo Guo(gcb) proposed openstack-infra/nodepool master: update ggupported python version in setup.cfg  https://review.openstack.org/54023103:26
*** harlowja_ has quit IRC04:09
openstackgerritChangBo Guo(gcb) proposed openstack-infra/nodepool master: update supported python version in setup.cfg  https://review.openstack.org/54023104:30
openstackgerritJames E. Blair proposed openstack-infra/zone-zuul-ci.org master: Zuul: Remove project name  https://review.openstack.org/54024904:44
*** bhavik1 has joined #zuul04:51
*** bhavik1 has quit IRC04:53
*** harlowja has joined #zuul04:56
openstackgerritMerged openstack-infra/zuul master: Don't use run_lock in executor's merger  https://review.openstack.org/54011205:36
*** xinliang has quit IRC05:52
*** xinliang has joined #zuul06:04
*** threestrands has quit IRC07:05
*** yolanda_ has quit IRC07:23
*** ngocng has joined #zuul08:36
*** ngocng has quit IRC08:37
*** jpena|off is now known as jpena08:50
openstackgerritChangBo Guo(gcb) proposed openstack-infra/nodepool master: Refactor method node_list in status.py  https://review.openstack.org/54032509:45
*** xinliang has quit IRC10:56
*** AJaeger has quit IRC11:02
*** AJaeger has joined #zuul11:03
openstackgerritMerged openstack-infra/nodepool master: update supported python version in setup.cfg  https://review.openstack.org/54023111:08
openstackgerritMerged openstack-infra/zuul-jobs master: Make subunit processing more resilient  https://review.openstack.org/53443111:27
*** xinliang has joined #zuul11:30
openstackgerritMerged openstack-infra/zuul-jobs master: Change the list of extensions to a dict  https://review.openstack.org/53968311:50
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool master: zk: use kazoo retry facilities  https://review.openstack.org/53553712:18
openstackgerritMerged openstack-infra/zone-zuul-ci.org master: Zuul: Remove project name  https://review.openstack.org/54024912:20
*** jpena is now known as jpena|lunch12:40
*** JasonCL has quit IRC13:23
*** JasonCL has joined #zuul13:24
*** JasonCL has quit IRC13:28
*** rlandy has joined #zuul13:31
ShrewsFWIW, I believe I have a fix for the provider wedge problem implemented. However, deleting an unused node at pause time affects soooo many tests, so I'm trying to get that cleaned up.13:43
Shrewsit's not fun13:43
*** jpena|lunch is now known as jpena13:44
*** jpena is now known as jpena|off13:50
*** JasonCL has joined #zuul14:02
*** jpena|off is now known as jpena14:05
openstackgerritAndrea Frittoli proposed openstack-infra/zuul-jobs master: Add no_log to verbouse tasks  https://review.openstack.org/54040114:36
*** JasonCL has quit IRC14:45
hughsaundersHey all, odyssey4me mentioned a few days ago that we are interested in integrating Jenkins with nodepool. I'm investigating this and my initial question is what are the operations that I need to submit to nodepool via zk in order to allocate, use and release nodes?14:47
odyssey4meShrews mordred electrofelix ^ :)14:49
hughsaundersLooking at zuul's nodepool module there are functions for request, lock/unlock, return, use and accept. From that I'd guess the flow is request, accept, lock, use, unlock, return?14:49
*** JasonCL has joined #zuul14:51
Shrewswow, this must be "let's all code to the zuul-nodepool api week"14:56
*** JasonCL has quit IRC14:56
Shrewshughsaunders: it's a bit complicated, and not *really* documented yet because it's still not cemented fully, but a lot of it is described in http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html14:57
Shrewsbut from the zuul side, those seem to be the basic actions. corvus can provide more detail14:59
odyssey4meShrews The actions are easy to see, but the flow of a request is what hughsaunders is trying to ascertain15:00
odyssey4methe lifecycle of the node, so to speak15:00
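The lifecycle being asked about can be sketched in-memory. In the real system these operations are ZooKeeper znode reads, writes, and locks; the state names here (requests: requested -> pending -> fulfilled; nodes: ready -> in-use -> used) follow the zuulv3 spec linked below, but the classes are invented for illustration.

```python
class FakeNodepool:
    """Plays the nodepool side: fulfills a request with ready nodes."""

    def __init__(self):
        self.nodes = {}
        self._next = 0

    def fulfill(self, request):
        request['state'] = 'pending'
        for _ in request['node_types']:
            node_id = str(self._next)
            self._next += 1
            self.nodes[node_id] = {'state': 'ready', 'lock': None}
            request['nodes'].append(node_id)
        request['state'] = 'fulfilled'


class Requestor:
    """Plays the requestor (zuul/Jenkins) side: request, lock, use, return."""

    def __init__(self, pool):
        self.pool = pool

    def request(self, node_types):
        # File a node request and wait for nodepool to act on it.
        return {'state': 'requested', 'node_types': node_types, 'nodes': []}

    def accept(self, request):
        # Once the request is fulfilled, lock each node and mark it
        # in-use before running anything on it.
        assert request['state'] == 'fulfilled'
        for node_id in request['nodes']:
            node = self.pool.nodes[node_id]
            assert node['lock'] is None   # take the zk lock
            node['lock'] = 'me'
            node['state'] = 'in-use'

    def return_nodes(self, request):
        # When done, mark nodes used and release the locks; nodepool's
        # cleanup then deletes unlocked 'used' nodes.
        for node_id in request['nodes']:
            node = self.pool.nodes[node_id]
            node['state'] = 'used'
            node['lock'] = None
```

So the flow guessed at above (request, accept, lock, use, unlock, return) is roughly right, with lock/mark-in-use happening at accept time and unlock happening as part of the return.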
hughsaundersShrews: didn't you get the invite?15:04
Shrewsfor... ?  i guess not15:05
electrofelixhughsaunders: https://etherpad.openstack.org/p/zuulv3-jenkins-integration might be a good place to start, think mordred wrote it?15:06
hughsaundersfor zuul-nodepool api week ;-)15:06
hughsaunderselectrofelix: thanks, will have a read15:07
electrofelixhughsaunders: are you going to be over for the PTG?15:07
hughsaunderselectrofelix: sadly not15:07
electrofelixwhat timezone do you work in? At the very least, if I make it there and someone else is planning on looking at this area, planning some discussion around times that would suit would probably help15:08
hughsaundersUTC15:08
electrofelixthat keeps it simple so ;-)15:08
hughsaunders:)15:08
hughsaundersI was thinking of a similar approach to the existing plugins idea in the etherpad.15:10
hughsaundersIt could be refined a little with https://github.com/jenkinsci/groovy-events-listener-plugin15:10
electrofelixhughsaunders: ah, didn't know there was a proper plugin for events for that, that would certainly allow for a POC using groovy to be put together15:11
hughsaundersyeah15:11
hughsaundersjust need to know the operations that np+zk require15:11
electrofelixI think for optimal integration we need to handle static slaves, dynamic ones from other plugins, and nodepool-provided slaves15:14
electrofelixhaving a separate nodepool provider should make it a little easier to generically handle any dynamic slave (assuming it's possible to easily check if there is one available from any of them)15:15
hughsaundersI would have thought nodepool only need to know about nodepool provided slaves..15:16
electrofelixI'm thinking the zuul trigger plugin part should regard the nodepool provider as just another slave provider, so need to keep some of the interactions separated15:17
hughsaundersah ok, I didn't have triggering jobs on my radar15:17
electrofelixjust thinking out loud, I think the interactions described above are separate15:17
hughsaundersI want nodepool to maintain a pool of nodes which become jenkins slaves, then jenkins allocates jobs to them and when the job finishes the node is cleaned up. The build that uses the node is triggered by something else - could be cron or a pull request webhook etc15:18
* hughsaunders should read and edit before submitting15:19
hughsaundersslack has made me even lazier than before15:19
electrofelixah, ok, so you're just coming from the perspective of using nodepool from Jenkins, not the integration with zuul part?15:19
hughsaundersyeah15:20
hughsaunderswe're heavily invested in Jenkins at the moment, and nodepool could make node management easier.15:20
hughsaundersMigration to zuul might happen but if it does, its a long way off.15:20
electrofelixthat could work well, because I think we want to solve the integration with zuul, as we only have static slaves at the moment15:20
electrofelixbut would like to use nodepool in the future as well15:21
hughsaunderswe're using dynamic slaves, but a mess of groovy and ansible allocates a slave within each job, which is slow and error prone.15:21
electrofelixif someone else is worrying about the nodepool piece, means I've one less piece to worry about and just need to make sure that we can easily use the nodepool plugin :D15:22
hughsaunders:)15:23
openstackgerritMerged openstack-infra/zuul-jobs master: Add no_log to verbouse tasks  https://review.openstack.org/54040115:23
hughsaunderselectrofelix: I was thinking of having a python daemon between zk and jenkins, so that the groovy parts can be very thin. Would that architecture work for you?15:29
hughsaundersThe jenkins groovy events plugin hits the python daemon's api, that then does all the needfuls with zk. The python daemon also listens to zk events and calls the Jenkins api to register nodes.15:30
openstackgerritSam Betts proposed openstack-infra/zuul master: Add ZUUL_OVERRIDE_BRANCH support for legacy jobs  https://review.openstack.org/54041815:30
hughsaundersOr maybe nodes self register via a DIB element, but that might expose keys.15:31
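The intermediary-daemon idea above can be sketched as a tiny event router: the Jenkins groovy hook posts lifecycle events to the shim, which translates them into node request operations, and zk-side events become Jenkins node registrations. Everything here is hypothetical: the event shapes, callback names, and class are invented for illustration.

```python
class NodepoolShim:
    """Hypothetical shim between a Jenkins events hook and zk/nodepool."""

    def __init__(self, request_fn, register_fn):
        self.request_fn = request_fn      # files a node request (zk side)
        self.register_fn = register_fn    # registers a Jenkins slave
        self.pending = []                 # requests awaiting fulfillment

    def on_jenkins_event(self, event):
        # e.g. a queued build with a label we manage -> request a node
        if event['type'] == 'build.queued':
            self.pending.append(self.request_fn(event['label']))

    def on_zk_event(self, event):
        # e.g. a request was fulfilled -> register its nodes as slaves
        if event['type'] == 'request.fulfilled':
            for node in event['nodes']:
                self.register_fn(node)
```

The groovy side then only needs two thin HTTP calls into this daemon, which keeps all zk handling (including the persistent connection) in one Python process.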
*** JasonCL has joined #zuul15:38
*** JasonCL has quit IRC15:39
*** JasonCL has joined #zuul15:43
electrofelixhughsaunders: as a POC I think that would be fine to prove that it works, but for it to become a solution people would be willing to install and use in general, it probably should all be in a plugin in Jenkins that talks directly to zk15:52
hughsaundersYeah, it would be much easier for a user to just install a plugin15:52
electrofelixFor the most part it's really just that you want to keep the calls to zk simple? Could write a series of calls in groovy and place them into a shared library in Jenkins that the events groovy code just utilizes? https://jenkins.io/doc/book/pipeline/shared-libraries/#writing-libraries15:58
hughsaundersI'm not sure how the scoping works with that, shared libraries are definitely available to pipeline-dsl jobs, but I'm not sure if they are available in the context of the groovy events plugin, will experiment.16:05
*** dmellado has quit IRC16:11
*** dmellado has joined #zuul16:14
tobiashelectrofelix: keep in mind that you need a persistent connection to zk, otherwise you lose locks16:26
electrofelixhughsaunders: groovy scripts run within a system context will have access to any libraries that jenkins can load, so I'd be reasonably certain that it should be possible to grab just about anything else within Jenkins to be able to load stuff16:33
electrofelixI'm pretty sure the events plugin has to run the groovy within a system context to have access to the events in the first place16:33
hughsaundersI'm guessing something that needs a background thread - to maintain the zk connection - will have to be its own java plugin, rather than riding on the existing groovy events plugin.16:34
electrofelixthat does complicate it16:37
electrofelixanyway, I'd go with whatever you're comfortable as a starting POC to confirm things, with a view that it will probably make sense to have it as a plugin in jenkins that talks direct to zookeeper rather than through an intermediary service16:39
electrofelixhughsaunders: https://cdn.rawgit.com/jenkinsci/groovy-events-listener-plugin/master/src/main/resources/org/jenkinsci/plugins/globalEventsPlugin/GlobalEventsPlugin/help-onEventGroovyCode.html mentions a PLUGIN_STARTED event that could be used to launch a background thread16:45
electrofelixhughsaunders: and https://github.com/jenkinsci/groovy-events-listener-plugin/blob/master/src/main/site/examples/2_IncludeExternalGroovyScript.groovy covers loading common groovy code from some location16:46
* SpamapS reading some fascinating, relatable backscroll16:48
SpamapShughsaunders: I'm in a similar boat. Company full of Jenkins jobs and just one team on Zuul.. the migration may never happen honestly, but we do need to do some cross-repo stuff like yesterday ;)16:48
hughsaunderselectrofelix: interesting, thanks and SpamapS I feel your pain16:49
SpamapSAnd we have a similar "hmm, maybe nodepool for jenkins..."16:50
SpamapSbecause we use the jenkins openstack plugin16:50
SpamapSand... there are... issues.16:50
* SpamapS does like that this is coming full circle16:50
corvusi guess my question is -- why the focus on enabling jenkins, rather than converting to ansible?16:50
electrofelixSpamapS: I knew I recalled something used to do this before ... ;-)16:50
SpamapScorvus: we have thousands of lines of Groovy defining jobs16:51
SpamapSit's just..16:51
electrofelixcorvus: it's pretty difficult to convince management that it's worthwhile switching16:51
SpamapSWhat I'd like to do instead is enable zuul to send things through jenkins.16:51
electrofelixat least difficult to convince them we should rewrite everything to switch in one go16:51
SpamapSOne idea I've had is just to have a job with pre playbooks that sets up a node as a jenkins master, and feeds in the configuration.16:51
hughsaundersI just want to rip out the node management code from my groovy libs and let nodepool do it..16:52
corvusyeah, rewriting everything is hard.  but there must be work that can be done to automate conversion.  we did some of it for openstack.16:52
electrofelixIf it's possible to get easy integration between Jenkins <--> zuulv3 <--> nodepool, then I think the advantages of using .zuul.yaml in projects will convince people to switch organically16:52
corvusif we're talking about writing new code, why not do that?16:52
SpamapSAnd then the job runs in that jenkins master, and produces whatever result, and then it's torn down.16:52
SpamapScorvus: you know as well as anyone how awful jenkins config and job definition are intertwined if people aren't religious about using jjb or something like it.16:52
SpamapSAlso I believe some of these users, shockingly, actually like jenkins pipeline.16:53
SpamapSGroovy isn't the worst language ever. The pipeline UI is pretty good, showing jobs progressing through steps and allowing retrying of steps that fail. It's not how *I* work, but I get why some aren't excited about watching ansible text fly by.16:54
hughsaundersgroovy isn't the worst lang, but jenkins cps groovy is close.16:56
hughsaundersIf it was actual groovy it would be okish16:56
SpamapSIIRC, people are also using Kotlin in the same context now.16:56
SpamapSAnd then there's the pipeline DSL16:56
electrofelixcorvus: I think where we are, we're more likely to switch completely to zuulv3 if I can get us to the point that we're on zuulv3 without needing to migrate all the jobs16:56
SpamapSWe've been doing an experiment where we just use jenkins-cli.jar to kick a job from a tiny node.16:57
SpamapSIt ran into Jenkins plumbing issues but that is maybe the simplest way to take advantage of jenkins jobs and then write zuul jobs based on them.16:57
SpamapSLike, we have a job that builds RPMs and puts them in a yum repo. So we were thinking, kick that off from a zuul job, and then your zuul job can use the rpms.16:58
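The "kick a Jenkins job from a zuul job" idea above amounts to hitting Jenkins' remote trigger endpoint. The `/job/<name>/buildWithParameters` path is Jenkins' real remote access API; the helper below only builds the URL (a real call also needs authentication and CSRF-crumb handling), and the job name and parameters in the example are made up.

```python
from urllib.parse import urlencode

def jenkins_build_url(base, job, params):
    """Build the remote-trigger URL for a parameterized Jenkins job."""
    # Sort the parameters so the URL is deterministic.
    query = urlencode(sorted(params.items()))
    return '%s/job/%s/buildWithParameters?%s' % (base.rstrip('/'), job, query)
```

A zuul job could then POST to that URL (via the uri module or a small script) and poll the resulting build for completion before consuming its artifacts.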
SpamapShey16:58
SpamapSslack has ruined me too16:58
clarkbas someone that only slacks via irc it looks like irc to me :)17:01
SpamapSYou're missing out, and I'm not being cheeky. Edit button++.17:02
mordredSpamapS: s/edit button++// ... WFM17:02
SpamapSI need mah gifs.17:02
clarkbalso when they have outages irc tends to stay up17:03
corvusokay, this channel has gone really far off topic17:03
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Provider wedge fix  https://review.openstack.org/53931617:11
Shrewsclarkb: tobiash: I had to change some tests in that ^^ that I believe you two wrote. If you could carefully look at those changes to make sure they're still valid, I'd appreciate it.17:12
Shrewsi mean, the tests pass locally, but i think what they're testing is still right17:13
tobiashShrews: ok, looking in a minute17:13
Shrewsoh, i see one thing i need to change17:14
tobiashOk, I'll wait ;)17:15
Shrewsand of course, that change breaks a test... *sigh*17:17
Shrewstobiash: ok, nm. it's good as is17:27
*** dmellado has quit IRC17:34
Shrewscan review https://review.openstack.org/537932 get some love today?17:36
*** harlowja has quit IRC17:37
*** jpena is now known as jpena|off17:38
corvusShrews: is pass okay there, or should we log.exception it?17:41
Shrewscorvus: i could go either way on it. I don't think we care it's missing at that point since we intend to decline it17:42
Shrewscorvus: i can add that though. more info never hurts17:43
corvusShrews: yeah, i'm mostly thinking if that comes in handy debugging some weird situation.... is there a more narrow exception we can check for there -- a 'node not present' exception?17:44
corvusShrews: cause maybe that could be just a log.debug line, and then the broader exception handler could do log.execption.17:45
Shrewscorvus: no, Exception is raised if it doesnt exist currently17:46
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Handle missing request during a decline.  https://review.openstack.org/53793217:46
openstackgerritMerged openstack-infra/nodepool master: Invalidate flavor and image cache on 400 errors  https://review.openstack.org/44121517:47
Shrewspretty important bug fix though b/c we were continually declining the same req over and over in production, which made the log crazy large17:47
corvusShrews: ya17:48
corvusShrews: well, if this exception log annoys us, we can add a more specific exception and reduce it later17:48
corvustobiash: can you re-review https://review.openstack.org/537932 when you get a chance17:49
Shrewsi don't expect to see this one often, tbh17:49
tobiashcorvus: done17:50
Shrewscorvus: tobiash: thx17:51
tobiashcorvus: do you have some performance monitoring of zk in place?17:51
corvustobiash: no :(17:51
corvusmaybe17:51
corvushrm, i'm not sure how to link to a host in the new version of cacti17:52
corvusgimme a sec17:52
tobiashwe're currently building up a monitoring for our deployment and I'm currently in process of adding zk metrics17:53
corvushttp://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=94&nodeid=node1_94&host_group_data=17:54
corvustobiash: i believe zk is the only thing running on that host at the moment17:54
corvusso we don't have any direct metrics of zk itself, only indirect metrics of the host17:54
tobiashwow 20gb for zk17:55
corvusthere may be some leftover stuff on disk17:55
corvus(i'm checking that)17:56
tobiashoh, nm read 20gb and thought ram ;)17:56
corvusheh, no the whole system is using about 900MB17:57
tobiashthe ram usage looks pretty much like ours17:57
corvusthe network traffic is not insubstantial.17:57
tobiasha bit more than I suspected18:01
tobiashbut no problem for local networking18:01
corvusftr, /var/lib/zookeeper is 568M18:02
*** harlowja has joined #zuul18:21
corvusShrews: the handler is going to keep running and therefore, delete one more node on each pass through, correct?  so if a request requires 2 nodes, it will delete 2, after 2 passes?18:30
corvus(re 539316)18:30
openstackgerritAndreas Jaeger proposed openstack-infra/zuul-jobs master: Revert "Change the list of extensions to a dict"  https://review.openstack.org/54048018:33
*** dmellado has joined #zuul18:35
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Executor: Don't start too many jobs at once  https://review.openstack.org/54016218:39
openstackgerritAndreas Jaeger proposed openstack-infra/zuul-jobs master: Change the list of extensions to a dict  https://review.openstack.org/54048518:43
openstackgerritMerged openstack-infra/nodepool master: Handle missing request during a decline.  https://review.openstack.org/53793218:51
Shrewscorvus: (back from lunch) it should pause, delete, unpause, create node and repeat those steps for each node it needs.18:54
Shrewscorvus: thanks for looking at it. it's a significant enough change, i want others to try to poke holes in it. so far, i've not found any, but i may be too close to it18:55
Shrewscorvus: of course, if it can't delete anything, it will continue as normal18:56
Shrewswaiting for quota to be freed through normal means18:57
corvusShrews: the return on line 496 suggests that it won't unpause immediately after deleting, only if, after the next pass through the loop, it determines it has quota, will it unpause19:02
Shrewscorvus: yeah, it's not in the same loop19:04
corvusShrews: the return means it has to go back to the provider looping through all the request handlers again, right?19:04
Shrewscorvus: you mean looping through all of the nodes?19:05
corvusShrews: iiuc, the provider loops through all the request handlers, and the request handler loops through all the requested nodes.  so the return sends it back to the provider request-handler loop19:06
Shrewsoh, yes19:06
Shrewsnothing changes with that19:06
corvusShrews: so i think it's (start node loop), pause, delete, (back to provider loop), (resume node loop), unpause, create, pause  [repeat]19:07
Shrewssame as what i stated, just slightly more detailed  :)19:08
Shrewsbut yeah19:08
corvusShrews: the main thing i wanted to double check is -- there will be a delay after the node deletion before the unpause (i think that's fine -- delete is async anyway, this will give it time to happen), but despite being paused, we're going to get back to that point in the loop and try again?19:08
Shrewsi suppose it depends on how the quota code deals with nodes in DELETING state. i see what you're saying though... maybe we try to delete more than we really need, right?19:10
corvusShrews: that's a secondary concern i was about to bring up, yes.19:10
Shrewshrm, that's a valid concern19:11
corvusShrews: i think the primary concern is: if we need to delete 2 ready nodes, will we?  (i think the answer is yes because despite the return in 496, we come back to that point again later).  then the secondary concern is: If it takes a long time (>provider loop interval) to delete 1 node, will we keep deleting ready nodes until one of them finally is deleted?  i think the answer to that is yes too, because of19:11
corvusthe async delete.19:11
corvusone might argue that's the best behavior, but i guess it depends on how well the provider loop time matches the expected node deletion time.19:12
Shrewscorvus: i agree. i didn't see that hole. glad you did. not sure how to address that yet19:12
Shrewsi don't want it to delete more unused nodes than it has to19:12
corvusShrews: maybe deletoldestunusednode could check for deleting nodes, and noop if there is a deleting node < $some_timeout old19:13
corvusShrews: well, that doesn't deal well with the multi-node request case19:13
corvusShrews: so i guess it'd have to be more like "delete the number of nodes needed for the request, unless there are nodes already deleting that haven't been in deleting state more than 5 minutes"19:14
Shrewsi think it will. the spotted DELETING node should disappear, then we can actually delete for any other nodes, yeah19:15
corvusShrews: downside of that is it makes the waitfornodeset loop a bit more complicated -- it can't just delete them one at a time, it has to calculate its deficit, right?19:15
Shrewswe could still treat it one-at-a-time i think19:15
corvusif we did, it'd be slow though?  a 5 node request would delete each of those nodes one at a time before fulfilling.19:16
corvusmaybe that's okay since this is kind of an edge case?19:16
Shrewscorvus: i think that's ok, imo19:16
corvusyeah, i think we can live with it.  it doesn't have to be optimized, it just has to eventually get itself out of the corner.19:16
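The rule the discussion converged on can be sketched directly: when a paused handler needs quota, delete the oldest unused READY node, one at a time, but no-op while a recently started deletion is still in flight, so slow deletes don't cascade into deleting more ready nodes than necessary. The node representation and the 5-minute timeout are illustrative, not nodepool's actual code.

```python
import time

DELETE_TIMEOUT = 300  # seconds; assumed value, not nodepool's

def delete_oldest_unused_node(nodes, now=None):
    """Return the node marked for deletion, or None if we should wait."""
    now = time.time() if now is None else now
    # A young in-flight delete means quota should free up soon: no-op.
    for node in nodes:
        if (node['state'] == 'deleting'
                and now - node['state_time'] < DELETE_TIMEOUT):
            return None
    # Otherwise pick the oldest ready, unallocated node to delete.
    ready = [n for n in nodes
             if n['state'] == 'ready' and not n['allocated_to']]
    if not ready:
        return None
    victim = min(ready, key=lambda n: n['state_time'])
    victim['state'] = 'deleting'
    victim['state_time'] = now
    return victim
```

As noted above, for a multi-node request this deletes one node per pass through the provider loop, which is slow but acceptable for an edge case: it only has to eventually get out of the corner, and the timeout prevents a stuck delete from blocking progress forever.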
Shrewscorvus: i'll WIP that and add that bit. thanks19:17
corvuscool, thx19:17
corvusSpamapS: 18:58 < openstackgerrit> James E. Blair proposed openstack-infra/gear master: Automatically send GRAB_JOB after CAN_DO  https://review.openstack.org/54048919:17
SpamapSsweet19:17
SpamapScorvus: will review later today19:18
corvusSpamapS: how's that look as an alternative to yesterday's thing?19:18
corvusSpamapS: thx19:18
corvusi will have pep8 fixed by the time you look at it :)19:18
mordredcorvus, Shrews, SpamapS, jimi|ansible: you'll be happy to know that in addition to zuul correctly catching some bugs in https://github.com/ansible/ansible/pull/20974 - it has also now run successfully - which means we can land a PR I've been waiting to land for over 6 months because I wanted to see it integration tested first19:18
jimi|ansible\o/19:19
corvusmordred: you play the long game :)19:19
AJaegercongrats!19:19
Shrewsmordred: awesome!19:20
corvusmordred: when i click on the yellow circle, it seems to say openstack-zuul hasn't completed checks yet?19:21
ShrewsWe can now anticipate many questions from folks on how to parse our job output when their module change causes a failure. \o/19:21
mordredcorvus: I just pushed a new patchset19:21
mordredthe last one passed integration tests but failed a unittest due to a missing method on a fake object19:22
corvusmordred: ah, the missing info for me is that you pushed a new patchset... do amends not show up in the timeline?19:23
clarkbcorvus: thats one of the major grumps with github aiui. When you rebase you effectively lose old review history19:23
mordredcorvus: they do now actually19:24
mordredif you look underneath the last comment from openstack-zuul19:24
clarkboh they've fixed it?19:24
mordredyou should see  my face and a line "Add a module_utils OpenStack Cloud constructor …"19:24
* Shrews sees mordred's face everywhere19:25
Shrewsmostly nightmares19:25
corvusmordred: yeah, but i didn't see that anywhere else, like between the -20 and -4 hour comments, so i didn't think it meant that was chronological19:25
mordredI think it gets replaced19:26
corvusmordred: but i guess when checks are done, the newest comment will appear after the commit?19:26
mordredlike - the old versions of the commit don't show up any more19:26
mordredcorvus: yes19:26
corvusok.  that of course makes perfect sense.  it's just that things *disappearing* from a timeline undercut the value of a timeline as a timeline from a UI point of view.  it makes me question the nature of time.19:27
mordredcorvus: as well you should19:27
corvusbut i think i grok the rules of this artificial quantum realm now19:28
mordredof course, now Shrews is lucky and gets to review that monster ... and hopefully maybe rcarrillocruz too19:28
corvusgood thing there's no rush.  ;)19:28
openstackgerritMerged openstack-infra/zuul-jobs master: Revert "Change the list of extensions to a dict"  https://review.openstack.org/54048019:28
*** electrofelix has quit IRC19:31
tobiashShrews, corvus: the quota code treats nodes in DELETING state like normal READY nodes19:34
corvustobiash: ok. i think that confirms we should make the change we discussed19:35
Shrewsyeah. and another test to verify the delete behavior19:36
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Executor: Don't start too many jobs at once  https://review.openstack.org/54016219:36
corvustobiash: can you review my 2 comments on 535711?19:55
tobiashlooking19:57
tobiashcorvus: responded20:02
tobiashregarding the path I can change that20:02
tobiashregarding drivers hooking it mordred had a followup to refactor that: https://review.openstack.org/#/c/53701420:03
tobiashregarding the path I just thought this was discussed before I adopted the change so I left it as is20:04
tobiashbut I'm fine with maintaining the old path20:04
corvustobiash: ah, thanks.  that followup is after the confusing .json change... mordred can we rebase 537014 earlier in the stack?  i'd like to have a driver-based zuul-web asap and build off of that.20:05
corvustobiash: oh, i don't remember wanting to change the path...  do you have a reference to jog my memory?20:05
tobiashcorvus: that was just my assumption as I didn't know about such a conversation ;)20:06
corvustobiash: oh, you mean it was in jlk's original change20:07
tobiashyes20:07
tobiashso my assumption was that should be more generic than 'connection'20:07
jlkhrm, I remember some of this convo20:07
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Provider wedge fix  https://review.openstack.org/53931620:07
corvusjlk: https://review.openstack.org/#/c/535711/8//COMMIT_MSG specifically20:08
jlkand thought process.20:08
tobiashbut no idea about that, is 'connection' a synonym to 'driver' in zuul context?20:08
SpamapSmordred: zomg, zuul made your day better.20:09
corvustobiash: no, but a given connection has exactly one driver20:09
SpamapSthat's awesome.20:09
corvustobiash: so connection implies driver20:09
SpamapSto me, connections are definitions that are used for triggers and/or reporters. drivers are things that define connections.20:10
jlkoh20:11
jlkcorvus: I thought maybe it was because we might have another connection driver with a similar name?20:11
jlkmaybe?20:11
* jlk thinks some more20:11
tobiashso the current path is /connection/<name_as_in_zuul_conf>/...20:12
tobiashand the change currently replaces /connection/ by /driver/github/20:12
mordred a connection is a logical representation of a remote system. a driver is code that implements that....20:12
jlkMaybe I felt wrong takign the /connection/ path entirely20:12
tobiashso stay with connection?20:13
jlkeither way, it should totally be discussed , reasoned about, etc..20:13
mordredso a zuul might have a connection to public github and a connection to a private github and a connection to a gerrit20:13
mordredthe first two would use the github driver, the third would use the gerrit driver20:13
jlkit might have multiple connections to public github20:13
corvusso /connection/my_private_github/ uniquely identifies both a connection (ie, remote system) and the driver which is used to interface with it.20:13
tobiashthe connection name is encoded in the path currently so the driver name is not necessary20:14
corvusif you have multiple connections, it'd just be /connection/github1 and /connection/github220:14
mordredagree. /connection/{connection} should be sufficient20:14
mordredand yes - I can rebase the confusing stack20:15
corvusmordred: i was thinking if we can do the last change first -- 537014 -- that would be ideal20:16
mordredcorvus: let me see how bad that rebase is :)20:17
jlkyeah I honestly can't remember why I went with that path, and I didn't comment in the code for it (go me!) so I have no argument here.20:17
mordredtobiash: you ok with me rebasing this stack real quick? don't want to step on work you're doing20:17
tobiashmordred: go ahead20:18
corvusmordred: er20:18
tobiashcorvus: I think I know why the change currently has the driver in the route20:18
tobiashit needs that information to query the gearman about github20:19
corvusmordred: i was thinking rebase 537014 on 536780... so it shouldn't affect the 3 changes in tobias's stack20:19
tobiashso maybe that gets easier with mordred's change20:19
tobiashmaybe these need to be squashed and reworked20:19
tobiashhave to look at mordred's change again20:19
corvushrm... ok...  well, if it's easier, we can also land all these patches together -- ie, break the endpoint, then change it back with the refactor.20:20
tobiashcorvus: or put mordred's change (without github) in front of it20:21
corvusand as long as we land them in rapid succession, we don't have to deal with an api break20:21
corvusi assumed mordred's change depended on 53571120:22
tobiashpartly20:22
tobiashit also changes the sql connection20:22
tobiashand primarily adds the framework for that into zuul-web if I understood correctly20:22
corvusright, i'm just thinking that if we land 535711, 536773, 536780, 537014, and a new change to change the route back to the old form, we should be able to do that with minimal updates.20:23
corvusassuming that mordred comes back and says that rebase isn't too crazy.  ;)20:24
tobiashok, I'm fine with that too ;)20:24
tobiashthe path change might be easy there: https://review.openstack.org/#/c/537014/3/zuul/web/handler.py20:25
corvusyep, at that point it's just deleting a line :)20:25
tobiashyeah, that order is probably the easiest20:26
mordredcorvus, tobiash: sorry - brain-juggling the rebase ... and yes, corvus, I agree with rebase 537014 on 53678020:33
mordredbut there's some internal dependencies - so I'm doing it in a few passes20:34
tobiashmultipass rebase... nice strategy :)20:35
mordredok. I *think* I've got it20:56
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add facility for plugins to register web routes  https://review.openstack.org/53701420:56
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add /info and /{tenant}/info route to zuul-web  https://review.openstack.org/53701120:56
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add support for configuring graphite_url  https://review.openstack.org/53701220:57
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Move WebInfo config processing into zuul.model  https://review.openstack.org/53701320:57
tobiashmordred: ah, already with the correct path :)20:58
tobiashmordred: the commit message doesn't fit the corrected path20:59
mordredtobiash: whoops20:59
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add facility for plugins to register web routes  https://review.openstack.org/53701421:00
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add /info and /{tenant}/info route to zuul-web  https://review.openstack.org/53701121:00
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add support for configuring graphite_url  https://review.openstack.org/53701221:00
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Move WebInfo config processing into zuul.model  https://review.openstack.org/53701321:00
mordredtobiash: fixed :)21:00
corvussweet, i'll review that now21:01
mordredI also marked 537011 WIP because it needs tests21:02
tobiashmordred: commented on 53701421:09
corvus(i have comments in progress)21:09
tobiashnot sure if I'm correct there but that's what I think21:09
corvusmordred: reviewed -- mostly some small things.  i think tobiash is right about the path.  also, we need tests for the rest api.  that can come next.21:14
openstackgerritAndrea Frittoli proposed openstack-infra/zuul-jobs master: Change the list of extensions to a dict  https://review.openstack.org/54048521:16
mordredcorvus, tobiash: re: path - I can see where the confusion lies - but there's two different classes there ...21:21
mordredclass SqlWebHandler(BaseWebHandler) <-- the sql handler subclasses BaseWebHandler - whereas the github driver subclasses BaseDriverWebHandler (which should be renamed to BaseConnectionWebHandler)21:21
mordredthe BaseDriverWebHandler pre-pends connection info always - the BaseWebHandler doesn't21:22
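[Editor's note: an illustrative Python sketch, not Zuul's actual implementation, of the distinction mordred draws here. The class names come from the discussion; the method name `routePath` and the constructor signature are invented for the example. BaseWebHandler registers routes as given, while BaseDriverWebHandler prepends the connection name.]

```python
class BaseWebHandler:
    """Handlers like the SQL handler register their routes unprefixed."""

    def __init__(self, connection):
        self.connection = connection

    def routePath(self, path):
        # No prefix: the route is served exactly as registered.
        return path


class BaseDriverWebHandler(BaseWebHandler):
    """Connection-backed handlers (e.g. the GitHub webhook handler)
    get their routes prefixed with the connection name."""

    def routePath(self, path):
        # Prepend /connection/<name> so e.g. /payload becomes
        # /connection/github/payload.
        return '/connection/{}{}'.format(self.connection, path)


if __name__ == '__main__':
    sql = BaseWebHandler(connection='db')
    gh = BaseDriverWebHandler(connection='github')
    print(sql.routePath('/status'))    # /status
    print(gh.routePath('/payload'))    # /connection/github/payload
```

This is why tobiash's earlier comment about the path did not apply: only the driver-side base class adds the connection prefix, so non-connection handlers like the SQL one are unaffected.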
corvusmordred: oh yep i fell into that trap21:22
tobiashoh, yes21:23
tobiashthen ignore my comment21:23
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add facility for plugins to register web routes  https://review.openstack.org/53701421:28
corvusi'm +2 on the stack of 421:34
tobiash\o/21:35
tobiasheod for me, cya21:41
fungianybody who enjoys reviewing massive amounts of whitespace will just love https://review.openstack.org/54008221:41
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Provider wedge fix  https://review.openstack.org/53931621:42
*** myoung is now known as myoung|bbl21:42
clarkbfungi: looks like you have enough reviews for approval21:43
mordredcorvus: the commit message is incorrect on the first patch ... do we care?21:43
mordredcorvus: otherwise I think we can land the stack21:43
corvusmordred: it's correct for that patch, right?  it's just that we change it back 3 patches later?21:44
corvusfungi: aprvd21:45
fungithanks corvus. i thought maybe the whitespace demanded greater care ;)21:46
corvusfungi: heh you have tricked me into a whitespace review ;)21:46
fungiand it's not even april 1 yet21:46
openstackgerritMerged openstack-infra/zuul-website master: Convert Arcana files from DOS to UNIX line endings  https://review.openstack.org/54008221:48
openstackgerritMerged openstack-infra/zuul-website master: Link page source from the index footer  https://review.openstack.org/54008321:48
*** rlandy has quit IRC21:51
corvusmordred: 537014 made zuul sad.  i left a comment pointing out the error21:56
mordredcorvus: I make zuul sad all the time22:01
* corvus warms up the hot cocoa22:02
corvusmordred: you gonna fix that or should i push up the fix?22:19
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Add facility for plugins to register web routes  https://review.openstack.org/53701422:20
corvusnm22:20
mordredcorvus: oh - sorry - I made the snarky comment about how I break things - but then didn't actually, you know, fix it22:24
mordredsigh22:24
corvusmordred: we all have our special skills ;)22:24
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Look for subunit2html - not an empty variable  https://review.openstack.org/54054722:31
mordredcorvus: zuul unhappy again. I'll get it this time22:41
openstackgerritAndrea Frittoli proposed openstack-infra/zuul-jobs master: Change the list of extensions to a dict  https://review.openstack.org/54048522:42
mordredhahaha22:47
mordredrebase issue22:47
openstackgerritMonty Taylor proposed openstack-infra/zuul master: Add facility for plugins to register web routes  https://review.openstack.org/53701422:48
corvusmordred: i guess those tests work!22:48
mordredcorvus: indeed!22:54
openstackgerritMerged openstack-infra/zuul-jobs master: Look for subunit2html - not an empty variable  https://review.openstack.org/54054722:59
corvusi'm going to carry over tobias's +2 on that and start merging these23:17
mordredcorvus: ++23:45

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!