Friday, 2018-10-26

*** goern has quit IRC00:56
dmsimardI really like where the UI is going02:37
dmsimardlive status, searchable jobs, builds, descriptions02:37
dmsimard++ zuul-web02:37
dmsimardhappy to see the progress bar padding made it haha02:39
dmsimarda meager contribution02:39
*** bhavikdbavishi has joined #zuul02:47
*** pwhalen has quit IRC04:25
*** bhavikdbavishi1 has joined #zuul05:47
*** evrardjp_ has joined #zuul05:51
*** jpenag has joined #zuul05:53
*** ianw_ has joined #zuul05:54
*** bhavikdbavishi has quit IRC05:55
*** Diabelko has quit IRC05:55
*** gothicmindfood has quit IRC05:55
*** jpena|off has quit IRC05:55
*** evrardjp has quit IRC05:55
*** chkumar|off has quit IRC05:55
*** ianw has quit IRC05:55
*** jlvillal has quit IRC05:55
*** aluria has quit IRC05:55
*** bhavikdbavishi1 is now known as bhavikdbavishi05:55
*** ianw_ is now known as ianw05:55
*** Diabelko has joined #zuul06:03
*** chandankumar has joined #zuul06:06
*** bhavikdbavishi has quit IRC06:48
*** evrardjp_ is now known as evrardjp07:37
*** goern has joined #zuul07:42
*** hashar has joined #zuul07:54
*** SotK has joined #zuul08:06
*** panda|off is now known as panda08:32
*** rfolco|rover has quit IRC08:35
*** electrofelix has joined #zuul09:58
*** ssbarnea has joined #zuul10:12
*** bhavikdbavishi has joined #zuul10:28
*** bhavikdbavishi has quit IRC10:32
*** bhavikdbavishi has joined #zuul10:34
*** jpenag is now known as jpena10:36
*** bhavikdbavishi has quit IRC10:38
*** ssbarnea has quit IRC10:49
*** AJaeger_ has joined #zuul10:57
*** AJaeger has quit IRC10:59
*** jpena is now known as jpena|lunch11:01
*** EmilienM is now known as EvilienM11:24
*** panda is now known as panda|lunch11:29
*** hashar is now known as hasharAway11:31
*** jpena|lunch is now known as jpena11:57
*** panda|lunch is now known as panda12:24
*** rfolco|rover has joined #zuul12:34
*** rlandy has joined #zuul12:36
openstackgerritSimon Westphahl proposed openstack-infra/zuul master: Use branch for grouping in supercedent manager  https://review.openstack.org/61333512:44
*** AJaeger_ is now known as AJaeger13:18
dmsimardI created a short video demo of the Zuul web interface and the workflow/interaction with ara for troubleshooting jobs: https://www.youtube.com/watch?v=Rg4vdc0ptgA13:28
dmsimardI wanted to start out with a gif but it ended up being too long eh13:28
*** chandankumar is now known as chkumar|off13:33
openstackgerritMonty Taylor proposed openstack-infra/zuul master: DNM Link to change page from status panel  https://review.openstack.org/61359314:10
mordredcorvus, tristanC: ^^ I tried that locally, and I think it's a terrible idea after having looked at it, but I thought I'd push it up so you could look at it too14:10
mordredcorvus, tristanC: after having poked/looked - I think I've convinced myself that if we link to the change page from code review it's enough. if we do link to it from the status page, I think adding a little icon in the change panel that looks like an expand/full-screen icon would be the way to go - but I think that might get busy14:12
fungicorvus: is the idea about basing priority on project-group to alleviate the situation where a dependent pipeline has three changes from one project followed by one change from another project and we end up prioritizing jobs for the changes in position 1 and 4 but then the change in position 4 is still stuck waiting for the changes in position 2 and 3 to get tested anyway before it can merge (possibly even14:17
fungifurther delaying 4 when 3 eventually gets tested and fails, and so 4 has to be retested anyway)?14:17
fungiit seems like performance wise for dependent pipelines it would make sense to prioritize node requests for the first change(ish) in each queue, then the second in each queue, and so on, but maybe i'm misunderstanding the proposal somewhat14:19
mordredfungi: I think, based on my reading, that the last thing you said (prioritize node requests for first change) is what basing priority on project-group (as determined by membership in shared queue) would be achieving14:22
fungiyeah, i thought so too. just not sure why it was mentioned as a "we might even want to..."14:22
corvusmordred, fungi: yes.14:23
fungiseems like if we don't the result would be worse than what we have already14:23
corvusfungi: mostly because i haven't fully thought out all of the different tweaks we might need in the algorithm(s).  i was mostly focused on a mechanism that could do something hand-wavey like this.  i think you make a compelling case for why, in fact, at least in a dependent pipeline, we need to do exactly that.  :)14:24
corvusmaybe the result too is that the different pipeline managers behave slightly differently wrt priority.  dependent does that, independent is strictly per-project, etc.14:24
fungiokay, cool. just wanted to be sure i wasn't misunderstanding. for independent pipelines i agree a per-project round-robin ought to work out fine14:24
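The per-queue round-robin that fungi and corvus converge on above can be sketched as follows. This is a toy illustration of the idea only, not Zuul's scheduler code; the function name and data shapes are invented for the example:

```python
def round_robin_priorities(queues):
    """Assign node-request priorities round-robin across change queues.

    queues: one list of change identifiers per shared change queue.
    Returns (change, priority) pairs where the head of every queue gets
    priority 0, the second change in every queue gets priority 1, and
    so on (lower value = served sooner).
    """
    result = []
    for queue in queues:
        for position, change in enumerate(queue):
            result.append((change, position))
    # Stable sort: all queue heads come before all second entries, etc.,
    # preserving the original queue order within each priority level.
    result.sort(key=lambda pair: pair[1])
    return result
```

With three changes queued for one project and one for another, the lone change in the second queue is served alongside the first queue's head instead of waiting behind all three: `round_robin_priorities([["A1", "A2", "A3"], ["B1"]])` yields `A1` and `B1` at priority 0 before `A2` and `A3`.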
openstackgerritMonty Taylor proposed openstack-infra/zuul master: quick-start: add a note about github  https://review.openstack.org/61339814:25
mordredcorvus: fixed the sphinx issue in that ^^14:26
corvusmordred: re the change link -- i think your expand-icon idea may be worth exploring... (i haven't seen the result of the DNM change you pushed up yet, but i will look when it's available)14:27
corvusmordred: i certainly do understand the potential ui problems with linking to the change page and agree it warrants careful consideration :)14:27
corvusmordred: maybe putting it in the top-right part of the box (so it's not near the change number) will help indicate it's something that will "embiggen the box" rather than link to the change itself.14:29
Shrewscorvus: i don't think you'd have to add a new priority field to the znode. just alter the precedence value from, say, 200, to 200-<priority>.14:33
Shrewsnodepool wouldn't need to change at all then14:33
Shrewsexcept for caching things14:33
Shrews(at least, i don't *think* it'd have to change for that. it just does a reverse sort of that field, iirc)14:35
corvusShrews: right, but the priority needs to be able to change (possibly many times) after the request is created.14:37
tristanCmordred: i also agree, from the main status page, using filters is more efficient than loading a dedicated per-change status page. it seems like the only use for such a page is to be referenced from the code review system.14:37
Shrewsoh, hrm14:37
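The snag corvus points out is that a ZooKeeper sequential znode's name is fixed at creation, so a priority baked into the node name (Shrews's `200-<priority>` scheme) could never be re-prioritized later. A separate, writable priority field sidesteps that. A minimal sketch of the sorting consequence, using an invented request model rather than the real Nodepool schema:

```python
class FakeRequest:
    """Simplified stand-in for a node-request znode (hypothetical model,
    not the real Nodepool data structure)."""

    def __init__(self, sequence, priority):
        self.sequence = sequence  # fixed once the znode is created
        self.priority = priority  # mutable: can be rewritten any time


def sort_requests(requests):
    # With priority as its own field, the scheduler can re-prioritize an
    # existing request and simply re-sort; sequence number still breaks
    # ties, preserving FIFO order within each priority level.
    return sorted(requests, key=lambda r: (r.priority, r.sequence))
```

Bumping the priority of an already-created request then changes where it sorts: lower `priority` wins, and equal priorities fall back to creation order.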
mordredcorvus: ya - Im mostly concerned about number of pixels the icon would take up14:37
corvustristanC: see my comment on the parent change for why we should try to add it.14:37
mordredcorvus: but that was what I was thinking ... upper right corner of box14:38
fungithe former network engineer in me wonders if we can learn anything from traditional fair queuing and flow control algorithms for network traffic which could apply to the zuul problem space...14:38
* fungi should reacquaint himself with the altq implementations in *bsd14:40
corvusmordred: looks like https://review.openstack.org/613161 is passing if you want to get that moving again14:48
corvusoh sweet, http://logs.openstack.org/27/613027/4/check/zuul-quick-start/113c721/container_logs/ works14:49
mordredcorvus: I was just reviewing that in fact14:49
mordredcorvus: I think once that lands we can revert the revert of the pep8 patch - and add a flag to it so that people can disable it14:49
*** ssbarnea|bkp2 has quit IRC14:51
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Add the process environment to zuul.conf parser  https://review.openstack.org/61282414:51
corvusmordred: ya, also we should have something copy the zuul return file into the logs... either in the pep8 role or maybe the base job?  i looked and i think it would be awkward to have zuul_return itself do that.14:52
corvusmordred: (or, temporarily, have the pep8 role call zuul_return twice -- you can pass a path argument, so the second invocation could write it into the log dir)14:53
mordredcorvus: maybe that's a good thing to do at the start14:54
Shrewstobiash: that test failure in the node cache change is interesting14:54
corvusSpamapS: ^ we should get logs on 612824 now -- though also see my comment on that about adding unit test coverage14:54
tobiashShrews: I saw that this morning and I think that's two test races I need to address but didn't have time today to do so14:55
corvusfyi -- i'm about to be afk until wednesday14:55
corvusoh, i wonder if 'docker logs' emits things to stdout *and* stderr14:57
corvusbecause http://logs.openstack.org/27/613027/4/check/zuul-quick-start/113c721/container_logs/gerrit.log looks really sparse14:58
corvusso maybe that needs to be >file 2>&114:58
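corvus's hunch is that `docker logs` replays the container's stdout and stderr on the corresponding streams of the `docker logs` process itself, so redirecting only stdout (`>file`) silently drops the stderr half; the shell fix is `>file 2>&1`. The same merge can be illustrated in Python, where `stderr=subprocess.STDOUT` plays the role of `2>&1` (the child command here is a stand-in, not the actual quick-start script):

```python
import os
import subprocess
import sys
import tempfile

# A child process that writes to both streams, like `docker logs`
# replaying a container's stdout and stderr.
child = [sys.executable, "-c",
         "import sys; print('out line'); print('err line', file=sys.stderr)"]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "container.log")
    with open(path, "w") as log:
        # stderr=subprocess.STDOUT is the Python spelling of `2>&1`:
        # both streams land in the same file, so no lines go missing.
        subprocess.run(child, stdout=log, stderr=subprocess.STDOUT)
    with open(path) as log:
        contents = log.read()
```

Without the `stderr=` argument the file would contain only `out line`, which would explain a log that "looks really sparse".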
*** ssbarnea has joined #zuul15:00
Shrewstobiash: i might poke at it a bit after lunch15:00
*** hasharAway is now known as hashar15:12
SpamapScorvus: Indeed, I was kind of feeling bad about not having unit test coverage. ;)15:52
*** gothicmindfood has joined #zuul15:54
tobiashShrews: the test_node_caching fail is because I was just too stupid to correctly use iterate_timeout16:13
tobiashShrews: and the second one is a test race16:13
openstackgerritTobias Henkel proposed openstack-infra/nodepool master: Support node caching in the nodeIterator  https://review.openstack.org/60464816:20
tobiashthat should fix the test_node_caching ^16:20
logan-when a with_items is used in a zuul playbook it seems like the console log only receives output for the first item; subsequent items just show "ok: Item: <item> Runtime: 0:03:36.089373" -- has anyone else seen this? I am running a somewhat older sha of Zuul (July-ish) so this might have been addressed later on16:20
tobiashlogan-: yes I see this too in our master++ deployment, but didn't have time yet to do a deeper analysis16:22
tobiashlogan-: that's likely a bug in the zuul_stream callback plugin16:22
logan-tobiash: thanks, agreed regarding the callback. ara report shows the full output16:26
mordreditem iteration in callbacks is ... fun16:27
dmsimardmordred: it's not16:27
dmsimard:D16:27
*** hashar is now known as hasharAway16:28
tobiashdmsimard: you're the expert on this, maybe you can see what's wrong ;)16:29
mordreddmsimard: yah. by "fun" I did, in fact, mean "not fun"16:29
dmsimardlogan-: have a link to an example ?16:30
openstackgerritTobias Henkel proposed openstack-infra/nodepool master: Support node caching in the nodeIterator  https://review.openstack.org/60464816:31
tobiashShrews: and I think that should resolve that test race ^16:31
dmsimardtobiash: I can look, I'm biased but I don't use the actual job-output.txt file very much :P16:35
openstackgerritTobias Henkel proposed openstack-infra/nodepool master: Support node caching in the nodeIterator  https://review.openstack.org/60464816:41
Shrewstobiash: :)16:42
tobiashShrews, corvus: I just crafted a test case that creates 200 node requests and waits until all are fulfilled. On master that took 174s, with node caching 49s on my laptop with local zk on tmpfs16:53
Shrews\o/16:56
tobiashShrews, corvus: I have a further idea to optimize: we currently run updateNodeStats on every create and delete of a server, which is pretty costly even if cached. What do you think about doing that asynchronously with a rate limit of, let's say, every 10s?16:58
tobiashthat actually costs 30s in my test case (the 49s was with deactivated updateNodeStats). With functional updateNodeStats the cached time was 79s17:00
logan-dmsimard: no sorry, I don't. it's on an internal repo. but it seems easy to repro.. just with_items over a command task and you'll see output from the first command but not the rest17:00
Shrewstobiash: i'd be fine with that change17:03
tobiashShrews: currently this is done by each thread that launches or deletes a node. Would you move that to a stats thread of each provider and just notify it on node creation/deletion?17:05
*** jpena is now known as jpena|off17:06
Shrewstobiash: i might consider something a bit simpler. have the reporter cache results until X seconds have passed, then report to statsd17:14
Shrewsbut i haven't thought that all the way through17:15
tobiashShrews: you mean adding a global cache per provider in the stats reporter?17:15
Shrewstobiash: yeah17:15
tobiashok, that is simple and would work17:16
tobiashI'm just unsure if we want to introduce a global var for that17:16
Shrewsi always prefer the simpler approach... unless it doesn't work  :)17:17
tobiashbut on the other hand I don't want to add the nodepool object to every nodelauncher17:17
Shrewsit doesn't have to be global. could be a class attr17:17
tobiashdoes that work? It's just a base class of nodedeleter and launcher17:18
Shrewspart of StatsReporter17:18
tobiashwill test that17:18
Shrewsit'd have to be a class attribute, not instance attr17:19
tobiashShrews: the only problem I see with this approach is that if we hit the cache and then don't have an event for a long time we end up with wrong stats17:20
tobiashShrews: so we might need to kick off a thread that updates the stats after 10s17:21
tobiashhrm, that might get more complicated than I thought17:22
tobiashI'll try something17:23
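The class-attribute rate limit Shrews suggests can be sketched like this. Everything here is hypothetical (it is not the real nodepool `StatsReporter`); the clock is injectable so the behavior is easy to test:

```python
import time


class StatsReporter:
    """Sketch of a rate-limited stats reporter mixin.

    The last-report timestamp is a *class* attribute, as Shrews
    suggests, so every launcher/deleter instance that mixes this in
    shares a single rate limit per process -- no per-instance state
    and no reference to the global nodepool object needed.
    """

    _last_report = 0.0   # shared across all instances
    _interval = 10.0     # minimum seconds between statsd updates

    def __init__(self, clock=time.monotonic):
        self._clock = clock  # injectable for testing

    def update_node_stats(self):
        now = self._clock()
        if now - StatsReporter._last_report < StatsReporter._interval:
            return False  # rate-limited: skip this update
        StatsReporter._last_report = now
        # ... real code would emit the node counts to statsd here ...
        return True
```

Note this sketch has exactly the flaw tobiash identifies above: if the last event within an interval is swallowed by the rate limit and no further events arrive, the reported stats stay stale until something else triggers an update, so a real implementation needs a deferred flush as well.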
*** electrofelix has quit IRC17:32
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool master: Cleanup down ports  https://review.openstack.org/60982917:41
*** toabctl has quit IRC19:19
*** hasharAway is now known as hashar19:20
*** toabctl has joined #zuul19:21
openstackgerritClark Boylan proposed openstack-infra/zuul master: Small script to scrape Zuul job cpu usage  https://review.openstack.org/61367419:35
*** j^2 has quit IRC19:40
openstackgerritMerged openstack-infra/zuul master: quick-start: add a note about github  https://review.openstack.org/61339819:42
openstackgerritJeremy Stanley proposed openstack-infra/zuul master: Add reenqueue utility  https://review.openstack.org/61367619:44
openstackgerritTobias Henkel proposed openstack-infra/nodepool master: Support node caching in the nodeIterator  https://review.openstack.org/60464820:03
openstackgerritTobias Henkel proposed openstack-infra/nodepool master: Rate limit updateNodeStats  https://review.openstack.org/61368020:03
openstackgerritTobias Henkel proposed openstack-infra/nodepool master: Rate limit updateNodeStats  https://review.openstack.org/61368020:09
tobiashShrews: fixed some further test races and implemented the nodestats rate limit ^20:10
*** openstackstatus has quit IRC20:42
*** openstack has joined #zuul20:46
*** ChanServ sets mode: +o openstack20:46
*** hashar has quit IRC20:57
*** pwhalen has joined #zuul21:20
*** jimi|ansible has quit IRC21:41
*** rlandy has quit IRC21:41
*** pcaruana has quit IRC23:10
*** jesusaur has quit IRC23:45

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!