Tuesday, 2013-10-01

*** reed_ has quit IRC00:00
jog0jeblair: I don't think elastic-recheck picked up on 122400100:06
jog0and it's a very frequent bug right now00:07
clarkbjog0: is that the new one?00:10
jog0clarkb: yeah00:16
jog0I ran elasticsearch locally and it worked00:16
jog0clarkb: so something is funny on the server side00:16
clarkbjog0 how does the bot know to reread the file?00:16
clarkbdoes it just reread each time?00:17
*** dkliban has joined #openstack-infra00:17
jog0clarkb: yup00:19
*** mrodden has joined #openstack-infra00:20
*** dcramer_ has quit IRC00:20
openstackgerritChris Hoge proposed a change to openstack-infra/config: Adding puppet-vswitch project to Stackforge  https://review.openstack.org/4902000:27
clarkbjog0: puppet isn't running on the server00:28
clarkbjog0: I am going to run it in noop mode to see how much stuff it will change, then run it if it comes back clean00:28
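A minimal sketch of that kind of dry run, assuming the usual agent invocation on these servers:
    # show what puppet would change without applying anything
    sudo puppet agent --test --noop
    # if the reported changes look sane, apply for real
    sudo puppet agent --test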
clarkbjeblair: fungi: any opinion on whether or not puppet can be started?00:28
fungiclarkb: i think it should be caught up, but didn't want to start it out from under jeblair while he might still be looking at it00:30
jog0clarkb: cool thanks00:30
fungilast i looked he was still tailing logs00:30
jog0is it possible to manually update the git repo00:30
clarkbfungi: I see that --noop shows log configs will be changed00:31
clarkbfungi: from INFO to DEBUG00:31
clarkbjog0: it is, I will do that instead00:31
jog0actually with EOB on west coast it shouldn't matter too much if we wait00:31
openstackgerritChris Hoge proposed a change to openstack-infra/config: Adding puppet-vswitch project to Stackforge  https://review.openstack.org/4902000:31
jog0whoa gate is 49 haha00:31
jeblairclarkb, fungi: i'm clear for you to start puppet00:31
fungiclarkb: ahh, yeah that patch merged since my last test apply00:31
fungisounds fine to restart to me then00:32
clarkbjeblair: ok thanks, I will restart it00:32
clarkbI also manually updated the repo00:32
jeblaircool00:32
clarkbjog0: should work now00:33
pabelangerGents, do you mind giving a once over to https://review.openstack.org/49020 when you have a moment?  It is a stackforge import of a puppet module. Thanks in advance00:33
pabelangerand gals too00:33
jog0clarkb: thanks00:34
*** sarob has quit IRC00:34
*** sarob has joined #openstack-infra00:35
clarkbjeblair: we seem to be spending a lot more time with nodepool VMs in deleting state recently. is that all rackspace?00:35
jeblairclarkb: primarily gate flapping, secondarily nodepool has a bug in the cleanup cron so if it hits an error deleting, they can stay around for days00:37
jeblairclarkb: i've been running "for x in `nodepool list |grep "delete   | [2-9][0-9]\\."|awk '{print $2}'`; do echo $x; nodepool delete $x; done" about once a day to keep up with that until i fix it.00:38
*** davidhadas has quit IRC00:38
jeblair(or some variant)00:38
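A more readable form of that cleanup loop; it assumes `nodepool list` prints the node ID in the second column and that the grep matches nodes sitting in the delete state with a two-digit age (presumably hours):
    # delete nodes stuck in the "delete" state for a long time (assumed column layout)
    for x in $(nodepool list | grep 'delete   | [2-9][0-9]\.' | awk '{print $2}'); do
        echo "$x"
        nodepool delete "$x"
    done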
*** sarob has quit IRC00:39
*** davidhadas has joined #openstack-infra00:39
*** hogepodge_ has quit IRC00:42
*** nosnos has joined #openstack-infra00:50
dstufftWere there any PyPI timeouts today?00:53
jeblairdstufft: https://jenkins02.openstack.org/job/gate-requirements-install/259/console is the only one i see with a quick glance00:55
jeblairdstufft: https://jenkins01.openstack.org/job/gate-requirements-install/ and https://jenkins02.openstack.org/job/gate-requirements-install/ being the best places to look for that00:55
dstufftjeblair: you might want to try bumping the pip timeout btw00:56
dstufftby default it's 15s00:56
*** anteaya has quit IRC00:56
jeblairdstufft: where's that set?00:57
jeblairdstufft: (and that affects "SSLError: The read operation timed out" ?)00:57
dstufftjeblair: same as any pip variable, --timeout on the cli, PIP_TIMEOUT =, or ~/.pip/pip.conf timeout =00:57
dstufftjeblair: I don't know for sure, it's an idea00:58
dstufftjeblair: i'm still trying to trace the problem that's affecting thrift00:59
dstufftand a few others00:59
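A sketch of the three equivalent ways dstufft lists for raising pip's default 15s timeout; the 60s value simply mirrors the change proposed just below, and the package name is a placeholder:
    # one-off, on the command line
    pip install --timeout 60 somepackage
    # via environment variable
    export PIP_TIMEOUT=60
    # or persistently in ~/.pip/pip.conf
    printf '[global]\ntimeout = 60\n' >> ~/.pip/pip.conf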
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Set pip timeout to 60s when using pypi  https://review.openstack.org/4909301:00
jeblairdstufft: ^01:00
dstufftjeblair: cool, hopefully it helps. I'm still gonna keep poking at it though, just in case01:01
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Set pip timeout to 60s when using pypi  https://review.openstack.org/4909301:01
*** rockyg has quit IRC01:01
*** amotoki has joined #openstack-infra01:03
*** emagana has quit IRC01:04
*** thingee is now known as thingee_zzz01:05
*** sarob has joined #openstack-infra01:05
*** sarob has quit IRC01:10
*** changbl has joined #openstack-infra01:12
*** MarkAtwood has quit IRC01:13
*** sarob has joined #openstack-infra01:13
*** ryanpetrello has quit IRC01:14
mordreddstufft, jeblair: I am in favor of your products. can you send me some brochures?01:16
mordredjeblair: btw - while your head is in that space01:16
mordredjeblair: there is another change related to consuming select_mirror.sh in d-g01:17
*** sarob has quit IRC01:18
*** ryanpetrello has joined #openstack-infra01:20
*** UtahDave has quit IRC01:20
*** MarkAtwood has joined #openstack-infra01:20
*** esker has joined #openstack-infra01:21
*** dkliban has quit IRC01:22
*** nati_ueno has quit IRC01:22
*** dguitarbite has joined #openstack-infra01:28
*** melwitt1 has quit IRC01:30
*** boris-42 has quit IRC01:33
*** ryanpetrello has quit IRC01:39
*** ryanpetrello has joined #openstack-infra01:42
*** MarkAtwood has quit IRC01:46
Alex_GaynorSo something seems to be broken on the python2.6 builders01:48
Alex_GaynorIf you look at the gate pipeline, they're all failing01:49
*** tizzo has quit IRC01:49
Alex_Gaynormordred, clarkb, fungi: ^01:50
Alex_GaynorAlso, zuul doesn't seem to respond super well to this many failures, 1285 results in the queue right now01:50
*** rockyg has joined #openstack-infra01:54
*** ryanpetrello has quit IRC01:56
*** sarob has joined #openstack-infra02:06
*** UtahDave has joined #openstack-infra02:10
*** sarob has quit IRC02:10
*** mrda has quit IRC02:15
dimsAnyone know if zuul is "stuck"? 5 reviews stuck in "check" though all the jenkins jobs have finished02:20
*** sarob has joined #openstack-infra02:26
*** sarob has quit IRC02:31
*** AlexF has joined #openstack-infra02:31
*** schwicht has quit IRC02:31
*** jerryz has quit IRC02:34
*** dims has quit IRC02:40
jgriffithnew one I think: https://jenkins02.openstack.org/job/gate-cinder-python26/1591/console02:40
*** dguitarbite has quit IRC02:40
jgriffithSo just curious, is there any sort of plan for getting the gate issues resolved at this point?02:42
jgriffithie do I still need to try and keep +A frozen?02:42
*** sarob has joined #openstack-infra02:44
*** dcramer_ has joined #openstack-infra02:47
*** sarob has quit IRC02:49
*** dolphm has joined #openstack-infra02:55
*** AlexF has quit IRC03:00
dolphmis something dead? "reverify" doesn't seem to be doing anything03:00
dolphmand a couple changes failed with this in the past hour http://logs.openstack.org/13/48413/2/gate/gate-horizon-python26/f79f074/console.html03:00
*** dguitarbite has joined #openstack-infra03:01
markmcclainlooks like there's a problem with py26 tests03:05
markmcclainmany of them seem to be failing in the gate03:05
* mordred looking03:07
*** rcleere has joined #openstack-infra03:08
*** yaguang has joined #openstack-infra03:08
mordreddolphm, Alex_Gaynor, jgriffith, markmcclain - it looks like it may be moving again?03:09
dolphmmordred: i can't get https://review.openstack.org/#/c/46193/ gating again03:09
mordredoh - wait03:09
mordredit looks like it may be slaves off of jenkins0203:10
mordredclarkb, fungi: you guys around?03:10
mordredI think it may have all been one slave03:11
mordredthat was unhappy03:11
mordredbut was failing quickly03:11
dolphmso it probably chewed through a lot of jobs?03:12
mordredso consuming a LOT of the jobs and failing them03:13
jgriffith:(03:13
jgriffithlong day for sir Jenkins03:14
mordredyah03:14
mordredsrrsly03:14
mordredI gotta tell you - he gets sad with the resets03:14
jgriffithmordred: perhaps he's going on strike03:15
jgriffithjenkins says: "I've had enough of this abuse...."03:15
*** che-arne has quit IRC03:15
mordredjgriffith: I don't blame him03:18
jgriffithmordred: me neither... I'd suggest a night off and I'll buy him a drink03:19
* hub_cap sneaks up to the bar03:19
* mordred throws a goat at hub_cap03:19
* hub_cap flogs mordred with a blotched axe03:19
hub_capi have /slap too ;)03:19
*** DennyZhang has joined #openstack-infra03:20
*** sandywalsh has quit IRC03:23
*** senk has joined #openstack-infra03:24
*** yaguang has quit IRC03:25
*** basha has joined #openstack-infra03:26
dolphmis new stuff getting into the gate queue?03:27
*** dguitarbite has quit IRC03:29
jeblairAlex_Gaynor: the result queue doesn't mean very much these days; it should be removed soon; it's little more than an indication that zuul should run the queue processor, so it could really be a flag.  at any rate, as long as there is work to do, it will do it, and once things settle down, it burns through the 'queue' very quickly.03:38
dolphm(disregard my last comment, the delay before starting jobs just seems longer than i'm used to)03:39
*** bnemec has quit IRC03:39
jeblairmordred: which slave?03:41
dolphmjeblair: example- http://logs.openstack.org/93/46193/19/gate/gate-keystone-python26/3f95a7f/console.html03:42
markmcclainIs there an easy way to express a revert through?03:42
*** jhesketh_ has joined #openstack-infra03:42
*** bnemec has joined #openstack-infra03:43
markmcclainreverting should stop this https://bugs.launchpad.net/neutron/+bug/122400103:43
uvirtbotLaunchpad bug 1224001 in neutron "test_network_basic_ops fails waiting for network to become available" [Critical,In progress]03:43
jeblairmarkmcclain: no, it's on our wishlist03:43
*** senk has quit IRC03:46
* markmcclain loves logstash right now03:49
*** sandywalsh has joined #openstack-infra03:49
jeblairmarkmcclain: have you seen the bot in #openstack-qa?  the elastic-recheck script is announcing both failures that it classifies as well as unknown ones03:50
markmcclainno.. I forgot to add that room to my auto join list03:51
jeblairmarkmcclain: http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2013-10-01.log03:51
markmcclainjeblair: I think my favorite thing about logstash is that it made finding the problem commit super easy03:52
jeblairmarkmcclain: ++03:54
jeblairclarkb: ^ fan mail!03:54
*** AlexF has joined #openstack-infra03:55
clarkbwoot03:56
*** esker has quit IRC03:58
*** esker has joined #openstack-infra04:04
openstackgerritDarragh Bailey proposed a change to openstack-infra/jenkins-job-builder: Use yaml local tags to support including files  https://review.openstack.org/4878304:07
openstackgerritDarragh Bailey proposed a change to openstack-infra/jenkins-job-builder: Provide default ConfigParser object  https://review.openstack.org/4879004:07
*** DennyZhang has quit IRC04:08
openstackgerritDarragh Bailey proposed a change to openstack-infra/jenkins-job-builder: Add repo scm  https://review.openstack.org/4516504:17
*** AlexF has quit IRC04:22
*** AlexF has joined #openstack-infra04:31
openstackgerritDarragh Bailey proposed a change to openstack-infra/jenkins-job-builder: Raise exceptions when no entrypoint, macro or template found  https://review.openstack.org/4910204:36
openstackgerritDarragh Bailey proposed a change to openstack-infra/jenkins-job-builder: Raise exceptions when no entrypoint, macro or template found  https://review.openstack.org/4910204:37
*** Ryan_Lane has quit IRC04:38
*** darraghb has joined #openstack-infra04:38
*** crank has quit IRC04:41
*** tvb has joined #openstack-infra04:46
*** senk has joined #openstack-infra04:47
mordredjeblair: it was centos6-1004:49
darraghbquery on https://review.openstack.org/#/c/45165/, should the JJB code default to generating all XML elements even if the default is blank/no-value, or should it default to the plugin behaviour on which XML tags to set?04:50
*** senk has quit IRC04:51
*** esker has quit IRC04:53
*** tvb has quit IRC04:58
mordreddarraghb: there was a discussion around that a little while ago and I don't remember the outcome04:59
mordredI tihnk mgagne and zaro were talking about it maybe?04:59
darraghbmordred: any irc logs kept online to search?05:02
mordreddarraghb: yup!05:03
darraghbnvrmind found them I think: 'eavesdrop'05:03
mordredhttp://eavesdrop.openstack.org/irclogs/%23openstack-infra/05:03
darraghbnuts, google isn't indexing the openstack-infra logs05:05
darraghbmordred: any guestimate on how long ago that conversation came up?05:05
*** afazekas has quit IRC05:05
*** jhesketh_ has quit IRC05:06
*** SergeyLukjanov has joined #openstack-infra05:08
mordreddarraghb: gah... no :)05:08
mordred:(05:08
*** Ryan_Lane has joined #openstack-infra05:08
darraghbright so, wget the lot and grep it is05:10
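A rough sketch of what that looks like; the wget flags and the grep pattern are assumptions, not a recorded command:
    # mirror the channel logs locally and grep them (flags and search term assumed)
    wget -r -np -nd -A '*.log' http://eavesdrop.openstack.org/irclogs/%23openstack-infra/
    grep -l 'blank XML' *.log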
mordrednice choice!05:11
mordredalso, I'll try to remember to poke mgagne and zaro in the morning05:11
*** Ryan_Lane has quit IRC05:13
*** SergeyLukjanov has quit IRC05:14
*** SergeyLukjanov has joined #openstack-infra05:15
*** pmoosh has quit IRC05:16
*** AlexF has quit IRC05:24
*** thingee_zzz is now known as thingee05:32
*** afazekas has joined #openstack-infra05:35
*** slong has quit IRC05:35
*** slong has joined #openstack-infra05:35
*** jeremyb has quit IRC05:36
*** jeremyb has joined #openstack-infra05:36
*** Ryan_Lane has joined #openstack-infra05:39
darraghbCan't find anything concrete on whether it should default to sending blank XML tags where the plugin normally doesn't set them in the UI unless a specific value is set by the user05:43
*** jeremyb has quit IRC05:46
*** jeremyb has joined #openstack-infra05:46
*** senk has joined #openstack-infra05:47
*** Ryan_Lane has quit IRC05:47
*** afazekas is now known as afazekas_no_irq05:49
*** senk has quit IRC05:54
*** tvb|afk has joined #openstack-infra05:54
*** tvb|afk has quit IRC05:54
*** tvb|afk has joined #openstack-infra05:54
*** darraghb has quit IRC05:59
*** pblaho has joined #openstack-infra06:00
*** UtahDave has quit IRC06:14
*** SergeyLukjanov has quit IRC06:15
*** vipul is now known as vipul-away06:16
*** odyssey4me has joined #openstack-infra06:18
*** blamar has quit IRC06:20
*** AlexF has joined #openstack-infra06:29
*** vipul-away is now known as vipul06:30
*** slong has quit IRC06:33
*** Ryan_Lane has joined #openstack-infra06:39
*** flaper87|afk is now known as flaper8706:43
*** Ryan_Lane has quit IRC06:46
*** senk has joined #openstack-infra06:50
ttxmordred: translations get updated faster than the gate can merge them.06:50
*** AlexF has quit IRC06:54
*** senk has quit IRC06:54
ttxclarkb, jebalir: in case you're still around, Zuul status page looks broken to me (nothing but the graphs in there)06:54
ttxjeblair: ^06:54
*** jcoufal has joined #openstack-infra07:11
*** thingee is now known as thingee_zzz07:11
*** salv-orlando has joined #openstack-infra07:15
*** SergeyLukjanov has joined #openstack-infra07:18
*** DinaBelova has joined #openstack-infra07:19
*** markmc has joined #openstack-infra07:35
*** Ryan_Lane has joined #openstack-infra07:40
*** fbo_away is now known as fbo07:42
*** Ryan_Lane has quit IRC07:44
*** jpich has joined #openstack-infra07:50
*** hashar has joined #openstack-infra07:54
*** dizquierdo has joined #openstack-infra07:55
*** johnthetubaguy has joined #openstack-infra07:57
*** DinaBelova has quit IRC08:00
*** AlexF has joined #openstack-infra08:04
*** salv-orlando has quit IRC08:05
*** DinaBelova has joined #openstack-infra08:08
*** yassine has joined #openstack-infra08:11
*** tvb|afk has quit IRC08:14
*** tvb|afk has joined #openstack-infra08:14
*** tvb|afk has quit IRC08:14
*** tvb|afk has joined #openstack-infra08:14
*** kpavel has joined #openstack-infra08:16
*** derekh has joined #openstack-infra08:18
*** AlexF has quit IRC08:19
*** kpavel has quit IRC08:19
AJaegerIs jenkins or Zuul down?08:20
BobBallProbably08:21
BobBallor possibly not - why do you think they are?08:21
AJaegerBobBall, https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals,n,z08:22
AJaegerThere's been no progress for 90 mins - the validation jobs normally need less than 10 minutes each08:22
BobBallgate queue is very long ATM08:22
AJaegerBobBall, for example https://review.openstack.org/#/c/48945/08:22
BobBallMaybe it is broken08:23
BobBallThat job isn't showing in Zuul08:23
BobBallbut the gate is very long as I said08:23
AJaegerYeah, it's slower than usual for sure, even earlier08:25
AJaeger5 hours ago it took 40 mins until "starting gating jobs" appeared at https://review.openstack.org/#/c/48738/08:25
AJaegerthis sometimes takes seconds...08:25
*** dizquierdo has quit IRC08:27
AJaegerAny idea why openstack-manuals are not shown at all on http://status.openstack.org/zuul/ ?08:40
*** Ryan_Lane has joined #openstack-infra08:40
ttxstatus/zuul up again08:41
*** Ryan_Lane has quit IRC08:44
*** tkammer has joined #openstack-infra08:45
*** AlexF has joined #openstack-infra08:47
*** flaper87 is now known as flaper87|afk08:55
*** flaper87|afk is now known as flaper8708:56
*** Clabbe has quit IRC09:03
*** dizquierdo has joined #openstack-infra09:04
*** Ryan_Lane has joined #openstack-infra09:10
*** AlexF has quit IRC09:12
*** Ryan_Lane has quit IRC09:15
ttxbut looks like events and results are not really processed09:18
ttxthings merged 30 minutes ago but did not reach the post pipeline yet09:19
ttxapproved reviews are not being queued to the gate either09:20
*** senk has joined #openstack-infra09:22
*** senk has quit IRC09:26
*** dizquierdo has quit IRC09:33
*** dizquierdo has joined #openstack-infra09:37
ttxhmm, looks like it suddenly unstuck09:50
*** luhrs1 has joined #openstack-infra09:51
*** luhrs1 is now known as che-arne09:51
*** DinaBelova has quit IRC09:59
*** sandywalsh has quit IRC10:10
*** Ryan_Lane has joined #openstack-infra10:11
*** Ghe_Holidays is now known as GheRivero10:14
*** Ryan_Lane has quit IRC10:15
*** flaper87 is now known as flaper87|afk10:17
*** pcm_ has joined #openstack-infra10:19
*** pcm_ has quit IRC10:21
*** pcm_ has joined #openstack-infra10:21
*** senk has joined #openstack-infra10:22
*** sandywalsh has joined #openstack-infra10:23
*** schwicht has joined #openstack-infra10:23
*** SergeyLukjanov has quit IRC10:25
*** senk has quit IRC10:27
*** SergeyLukjanov has joined #openstack-infra10:27
*** flaper87|afk is now known as flaper8710:29
*** DinaBelova has joined #openstack-infra10:30
*** darraghb has joined #openstack-infra10:33
*** AlexF has joined #openstack-infra10:35
darraghbjeblair: need an indication on whether the assumption in https://review.openstack.org/#/c/49102/ that there should only be 1 or no entry points returned is valid10:41
AJaegerttx, indeed it's moving forward now10:41
*** Ryan_Lane has joined #openstack-infra10:41
*** Ryan_Lane has quit IRC10:45
*** basha has quit IRC10:54
*** sergmelikyan has joined #openstack-infra10:54
sergmelikyanHi guys, not so long ago mordred updated repos with PBR and removed d2to1 from setup_requires section.10:56
sergmelikyanAlso this commit added a #!/usr/bin/env python hashbang; is it used/required by CI?10:56
*** hashar has quit IRC11:00
sergmelikyanAnd what is GLOBAL REQUIREMENTS REPO ?11:01
*** AlexF has quit IRC11:04
sergmelikyanlast question is obsolete :( Next time I will search first :(11:04
*** AlexF has joined #openstack-infra11:05
*** hashar has joined #openstack-infra11:05
*** boris-42 has joined #openstack-infra11:07
*** hashar_ has joined #openstack-infra11:13
*** hashar has quit IRC11:13
*** hashar_ is now known as hashar11:13
*** ojacques has quit IRC11:14
*** ojacques_ has joined #openstack-infra11:14
*** ojacques_ is now known as OlivierSanchez11:14
*** che-arne has quit IRC11:16
*** flaper87 is now known as flaper87|afk11:16
*** dims has joined #openstack-infra11:18
*** luhrs1 has joined #openstack-infra11:19
*** luhrs1 is now known as che-arne11:19
*** darraghb has left #openstack-infra11:21
*** flaper87|afk is now known as flaper8711:21
*** senk has joined #openstack-infra11:23
sdaguettx: I think that with the amount of work rescheduling resets, things are a little slower than you might imagine11:26
ttxsdague: ack... it's a lot slower than when the gate was so heavily loaded around FPF, though11:27
*** senk has quit IRC11:27
sdagueyeh, but nnfi means it reschedules on the reset immediately11:27
ttxsdague: right. That creates weird artifacts, like the post pipeline only being fed once every 4 hours11:28
ttxI don't really think that's healthy11:28
ttxif you look at the post pipeline trends graph on zuul status page, you can see how weird it is11:29
*** michchap has quit IRC11:29
sdagueyeh, though there was also a spike of check jobs at the same time11:30
ttxnothing got there for like 5 hours and then we got 30 changes fed to it at once11:30
*** michchap has joined #openstack-infra11:30
ttxthe check line was pretty much empty all morning11:30
ttx(european morning)11:30
*** primemin1sterp has joined #openstack-infra11:31
ttxso I wouldn't blame the check line.11:31
ttxthe check line got fed at around the same moment as the post pipeline11:31
sdaguehonestly, it's probably worth figuring out how to instrument zuul for behavior we think is aberrant11:31
*** primeministerp has quit IRC11:31
sdaguebecause just changing scheduling again will push up some other issue (like the check starve)11:32
sdaguewe're actually running in a weird config right now because of reacting to the check starve11:32
ttxsdague: probably boils down to the way the events get processed11:32
sdagueand the implications for when priorities are set11:32
sdaguewhich is only when something enters a queue11:32
ttxI don't think it really resulted in slowing things down, just "looks" weird11:33
ttxalso we lost zuul status page for about 3 hours11:33
*** nosnos has quit IRC11:33
ttxthe page was still fetching json but would not display queues11:33
ttxno clear error in js consoles11:33
ttxI blamed the complexity of the gate tree11:34
ttxsdague: i'm considering only allowing gate fixes and RC1-blocking fixes, until we get a few RC1s out. I.e. stop random fixes from preventing us from issuing rc111:35
ttxsdague: do you think that could help ?11:35
sdaguettx: are you sure that post didn't just get pushed to the bottom of the page?11:36
sdaguewhen the train map gets wide, the 3rd column drops11:36
sdaguebelow it11:36
ttxsdague: the page was pretty empty. Only showed graphs11:36
ttxand the "Queue lengths:" line with empty counts11:37
sdaguettx: oh, yeh, so that means zuul stopped responding to json requests11:37
*** matsuhashi has quit IRC11:37
sdaguewhy... no idea11:37
ttxsdague: the json was fetched alright11:37
sdaguethe json was empty?11:37
ttxHmmm... all I can tell is that it was returning 200. That said I could fetch it myself. Used Dan Smith's tool to parse the json11:38
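For comparison, fetching and pretty-printing the same JSON by hand is trivial; the exact URL the status page polls is an assumption here:
    # grab the JSON the status page polls and pretty-print it (URL assumed)
    curl -sS http://zuul.openstack.org/status.json | python -m json.tool | less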
sdagueok11:39
ttxI blamed some timeout in the page processing that would not generate any error in console11:39
sdagueyeh, could be11:39
ttxand then you fetch the next one before you could fully display the previous one11:39
sdagueoh, yeh, that could be11:39
sdaguewe refresh rather more than we need11:40
ttxsdague: another fun fail: translations being re-imported (creating another patchset) while the gate is almost done processing the change11:40
ttxthey get refreshed every day, and these days one day is not enough to get through the gate11:41
ttxoops.11:41
sdaguettx: so on the gate fix issue, the challenge we've been having is that teams aren't prioritizing the project races that are causing the issues11:41
*** Ryan_Lane has joined #openstack-infra11:41
*** Ryan_Lane has joined #openstack-infra11:41
ttxsdague: i'm looking through the bugs right now11:42
ttxsee if I can help to prioritize correctly. The trick is the bug status is often not reflecting reality11:42
*** OlivierSanchez is now known as osanchez11:42
*** markmcclain has quit IRC11:42
sdaguewell, the neutron bugs that are bouncing the gate I've set to Critical and rc111:42
*** markmcclain has joined #openstack-infra11:43
ttxLooking at https://bugs.launchpad.net/tempest/+bug/1226337 right now, trying to understand if it needs more than the cinder fix11:43
uvirtbotLaunchpad bug 1226337 in cinder "tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern flake failure" [Undecided,Fix committed]11:43
*** markmcclain1 has joined #openstack-infra11:44
*** markmcclain has quit IRC11:44
*** osanchez has quit IRC11:45
ttxsdague: was wondering if the "Last seen" was calculated from the recheck time or the test failure time :)11:45
sdaguettx: recheck time I think11:45
*** Ryan_Lane has quit IRC11:46
sdagueif we have a good logstash requery we can track it down11:46
*** ryanpetrello has joined #openstack-infra11:46
*** dcramer_ has quit IRC11:46
ttxsdague: would be nice to know if that cinder fix closed it11:46
ttxsince that was our biggest offender11:46
sdaguethat's not our biggest offender11:46
sdagueand I suspect that's probably good now11:47
ttxok, one of our 180+ retries offenders11:47
*** sandywalsh has quit IRC11:47
*** ryanpetrello has quit IRC11:47
*** OlivierSanchez has joined #openstack-infra11:47
sdaguethis is our biggest offender - https://bugs.launchpad.net/neutron/+bug/122400111:48
uvirtbotLaunchpad bug 1224001 in neutron "test_network_basic_ops fails waiting for network to become available" [Critical,In progress]11:48
*** OlivierSanchez is now known as osanchez11:48
ttxsdague: I was going by the retry count11:48
sdague165 instances in last 24 hrs - http://logstash.openstack.org/#eyJzZWFyY2giOiJcInRlbXBlc3Quc2NlbmFyaW8udGVzdF9uZXR3b3JrX2Jhc2ljX29wcyBBc3NlcnRpb25FcnJvcjogVGltZWQgb3V0IHdhaXRpbmcgZm9yXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MDQ1NzY5NzMxOX0=11:48
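The long fragment in that logstash URL is just base64-encoded JSON describing the query, so the search string and timeframe can be inspected locally (GNU base64 shown):
    # decode the query parameters embedded in the URL fragment
    echo 'eyJzZWFyY2giOiJcInRlbXBlc3Quc2NlbmFyaW8udGVzdF9uZXR3b3JrX2Jhc2ljX29wcyBBc3NlcnRpb25FcnJvcjogVGltZWQgb3V0IHdhaXRpbmcgZm9yXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MDQ1NzY5NzMxOX0=' | base64 -d | python -m json.tool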
sdagueso the problem is, with neutron issues, everyone just does a recheck no bug11:49
ttxha. ha.11:49
*** markmc has quit IRC11:49
*** yassine_ has joined #openstack-infra11:52
*** yassine has quit IRC11:56
*** dolphm has quit IRC11:58
*** sandywalsh has joined #openstack-infra12:01
*** dkranz has joined #openstack-infra12:01
*** MoXxXoM has quit IRC12:01
ttxsdague: for 1224001, apparently it was narrowed down to enabling tenant isolation in order to get tempest neutron tests to run in parallel (bug 1216076)... any idea if the original disease was actually worse than the cure ?12:03
uvirtbotLaunchpad bug 1216076 in tempest "Neutron jobs won't pass tempest running in parallel" [Critical,Fix released] https://launchpad.net/bugs/121607612:03
*** tkammer has quit IRC12:03
sdaguettx: yeh, I was just working through that12:04
ttxsdague: ok, then I'll stop distracting you :)12:04
sdagueso if neutron can't run under tenant isolation... that's kind of a problem. Basically means that it can't take simultaneous requests12:04
sdaguewhich, realistically is what we are seeing12:04
sdagueso my current idea12:04
sdaguerevert that patch12:04
ttxsdague: yes, that would still be a bug in Neutron alright. Just trying to unfuck the gate12:05
sdagueadd 2 neutron tests to the neutron jobs that run under isolation, so the neutron team could still work through it12:05
sdaguewhich would also have the added benefit of getting us 4 tempest jobs on neutron changes12:05
*** dims has quit IRC12:07
ttxsdague: sounds like a plan12:07
openstackgerritSean Dague proposed a change to openstack-infra/config: add tenant issolation jobs to neutron patches  https://review.openstack.org/4914412:07
sdagueso the only issue with that plan....12:08
*** dims has joined #openstack-infra12:08
sdagueit will take a full day for something to move up through the gate12:08
ttxsdague: unless we kill it. If that solves like 50% of resets that's worth it12:09
*** dizquierdo has left #openstack-infra12:09
sdaguewell - https://review.openstack.org/#/c/49143/ is for review12:09
*** Ryan_Lane has joined #openstack-infra12:12
sdaguettx: also https://review.openstack.org/#/c/49144/ - which would be the zuul config changes12:12
*** mriedem has joined #openstack-infra12:15
*** Ryan_Lane has quit IRC12:16
*** hashar has quit IRC12:18
*** flaper87 is now known as flaper87|afk12:20
*** esker has joined #openstack-infra12:21
mriedemmordred: i think this has to be pushed through the gate since it's quantumclient - it's blocking all stable branch patches: https://review.openstack.org/#/c/49006/12:22
*** tkammer has joined #openstack-infra12:22
*** senk has joined #openstack-infra12:23
ttxyeah, a pile of 4 just got through12:24
*** esker has quit IRC12:26
*** senk has quit IRC12:27
sdaguettx: well the neutron job passed on that devstack one12:29
sdaguefungi: you up yet?12:29
ttxpeople need to stop on this sleeping habit12:30
dimslol12:32
*** dkliban has joined #openstack-infra12:32
*** rfolco has joined #openstack-infra12:32
*** flaper87|afk is now known as flaper8712:33
*** tizzo has joined #openstack-infra12:35
mriedemsdague: am i wrong about this? ttps://review.openstack.org/#/c/49006/12:36
mriedemin that it won't pass tempest because it's quantumclient12:37
mriedemoops https://review.openstack.org/#/c/49006/12:37
sdaguemriedem: yeh, I think anything on the quantumclient branch needs special handling12:38
* sdague can't wait until we drop grizzly support12:38
mriedemsdague: has to wait 6 months12:38
mriedemyou could hibernate!12:39
*** odyssey4me has quit IRC12:39
ttxcan't wait until we drop folsom support.12:39
mriedemttx: speaking of: https://review.openstack.org/#/c/48788/12:40
mriedemi thought folsom only got security fixes now, is that right?12:40
*** thomasm has joined #openstack-infra12:40
mriedemthomasm: good morning, are you going to abandon this? https://review.openstack.org/#/c/48299/12:41
ttxmriedem: it no longer gets point releases. That doesn't prevent critical bugfixes from being proposed to the branch12:41
ttxnot convinced that would be one of them though12:41
mriedemttx: ok. once icehouse is released though then people are on their own for folsom patches right?12:42
thomasmmriedem, Oh, man, I forgot to do that. sorry about that!12:42
ttxyes, definitely12:42
*** markmc has joined #openstack-infra12:42
mriedemthomasm: np, just cleaning up my review backlog :)12:42
thomasm=]12:42
thomasmdone12:42
*** Ryan_Lane has joined #openstack-infra12:43
mriedemthanks12:43
thomasmSure thing. Thanks for reminding me!12:43
*** pblaho has quit IRC12:44
AlexFbut really?12:47
*** Ryan_Lane has quit IRC12:47
AlexFsorry… please disregard...12:47
thomasmHey jog0, would you mind checking out https://review.openstack.org/#/c/48351/ when you get a change? It's that devstack fix for RPC notifier when CM is enabled.12:47
thomasmFixed up the commit mesage, like asked.12:47
thomasmmessage*12:48
thomasmchance*12:48
thomasmwhoa can't type this morning12:48
sdaguethomasm: jog0's on the west coast at a conference, so I'd expect he'll be on a little later12:48
thomasmsdague, Ohhh okay. Cool. Thanks! =]12:49
fungisdague: 'sup?12:50
* fungi peruses scrollback12:50
sdaguefungi: so this https://review.openstack.org/#/c/49143/ has a good chance of alleviating the gate resets12:51
sdaguebut if I just put it into the gate, it probably won't bounce to the top until late this afternoon12:52
sdagueso what kind of magic should we do to get that to the top of the queue12:53
* fungi looks and thinks12:56
koolhead17mordred: around12:57
fungialso, i've only read about half the scrollback so far... did we get any more problem slaves like centos6-10 throwing java exceptions in the log and not running tests?12:57
sdaguefungi: not that I've noticed this morning12:57
*** jcoufal has quit IRC12:58
*** dprince has joined #openstack-infra12:58
fungisdague: okay, so there are two possible ways for jumping the queue with 49143 (maybe more than that, but these are the options i know)...12:58
fungione is to manually merge that change to tempest and bypass the gate, risking the potential that it's not actually gate-worthy even though check jobs succeeded12:59
sdagueso it's a revert for devstack, so I think manual merge would be fine13:00
fungithe other is to stop zuul and dump the queue, then reenqueue all pending changes with that one at the front of the list13:00
fungier, right devstack. not tempest13:00
* fungi is still on coffee #213:00
sdagueno worries13:00
sdagueso because it's a revert, I think direct merge is ok13:00
sdaguebecause we know we worked with the old code, as we did until thursday13:01
fungii can take care of that, in that case13:01
sdaguegreat, the sooner the better13:01
fungigimme a few minutes to make sure i do it "correctly" and don't just cause myself new problems to spend my morning fixing13:01
*** julim has joined #openstack-infra13:01
sdagueyep, no worries :)13:01
*** rockyg has quit IRC13:02
sdaguethen a couple of gate resets later we should have everyone on the code, and the gate will flow again13:02
*** anteaya has joined #openstack-infra13:03
fungiyeah, just thinking through that to make sure there's not something i'm missing. it might cause other queued changes to lib/tempest to fail to merge because of conflicts, but seems generally safe13:03
*** sandywalsh has quit IRC13:03
*** weshay has joined #openstack-infra13:04
sdagueyeh, it should be pretty isolated13:05
fungisdague: are there any other devstack cores around to get a second +2 just for cya?13:05
sdagueI can live with merge conflicts13:05
sdaguedtroyer is the only other one13:05
sdaguereally13:05
fungik13:05
sdaguehonestly, we +A stuff through in emergencies, so consider it +A13:05
*** davidhadas has quit IRC13:06
sdagueI just didn't want to actually send it to the gate13:06
*** tizzo has quit IRC13:06
fungihere goes...13:08
fungiand it's merged13:08
sdaguegreat, thank you13:08
sdagueoh, sweet, the top of the queue reset13:09
sdaguewhich means we might get this all in one go on the new code13:09
sdagueoh, never mind, it reset 7 mins ago13:10
* fungi resumes catching up on scrollback to see if there are any other fires which need putting out13:10
sdagueI think that's the only fire13:10
*** adalbas has joined #openstack-infra13:10
ttxfungi: agree that's theonly fire13:11
ttxa few weird things like the zuul status page not showing queues, or weird sudden processing of results resulting in spikes in the post pipeline after hours of inactivity13:12
ttxbut that was not strictly blocking13:12
*** Ryan_Lane has joined #openstack-infra13:13
*** yassine has joined #openstack-infra13:15
fungiright, sounds like the "refresh faster than we can draw" theory is a good possibility there13:15
sdaguefungi: when you get a chance - https://review.openstack.org/#/c/49144/ is sort of the 2nd half of 4914313:16
ttxfungi: all I can say is that it was fetching the status.json alright... and no JS error in the console13:16
sdagueto bring the jobs back for neutron13:16
fungialso, on translation import derpage, we've been trying to get up with tom because something seems to have changed with transifex accounts for cinder and nova such that we haven't been successfully sending the updated potfiles back for no idea how long now13:16
sdagueso we don't just make RC by turning off all the tests13:16
*** dcramer_ has joined #openstack-infra13:16
sdagueok, 47984,2 just reset13:17
fungihopefully i'll catch him at tonight's conference call if nothing else13:17
sdaguewhich means everything in the gate will now be on the new code for neutron tests (or already passed their neutron tests)13:17
*** hashar has joined #openstack-infra13:17
*** Ryan_Lane has quit IRC13:18
sdagueso this is the moment of truth13:18
*** zul has quit IRC13:18
*** dkliban has quit IRC13:19
*** sandywalsh has joined #openstack-infra13:19
*** yassine_ has quit IRC13:19
*** AlexF has quit IRC13:20
*** esker has joined #openstack-infra13:22
*** hashar has quit IRC13:24
*** che-arne has quit IRC13:25
*** ryanpetrello has joined #openstack-infra13:28
*** alexpilotti has joined #openstack-infra13:30
openstackgerritSean Dague proposed a change to openstack-infra/config: add tenant isolation jobs to neutron patches  https://review.openstack.org/4914413:30
sdaguethanks jd__ for spelling comments :)13:31
*** AlexF has joined #openstack-infra13:31
jd__that's all I can do :)13:32
sdagueok, first gate reset since that revert is the pypi CDN 'slode13:32
sdagueso that's good13:32
*** zul has joined #openstack-infra13:32
*** hashar has joined #openstack-infra13:33
*** acabrera has joined #openstack-infra13:33
*** acabrera is now known as alcabrera13:34
*** blamar has joined #openstack-infra13:34
*** luhrs1 has joined #openstack-infra13:41
*** davidhadas has joined #openstack-infra13:41
*** luhrs1 is now known as che-arne13:42
*** jcoufal has joined #openstack-infra13:42
clarkbttx: delay in post processing possibly due to gearman queue priority. the backlog of check and gate must be serviced first13:42
ttxclarkb: the numbers in "queue length" were definitely looking bad. Like 150 events and 90 results or something13:43
*** Ryan_Lane has joined #openstack-infra13:44
ttxclarkb: you can still see the calm and then the peak on the post pipe trend line on zuul status page13:44
sdaguettx: first neutron reset in the gate, but it's a different bug13:45
ttxclarkb: do we have any leverage to throw more muscle into gearman queue processing ?13:45
openstackgerritDavid Peraza proposed a change to openstack/requirements: Adding sqlalchemy db2 dialect dependencies  https://review.openstack.org/4874513:45
ttxsdague: so far so good13:45
sdagueso I think one of the NNFI failings right now is the fact that failing jobs get automatically readded if jobs above them reset13:46
*** tkammer has quit IRC13:47
*** dkranz has quit IRC13:47
sdaguethe current issue is we've got 3 requirements changes in the gate13:47
ttxsdague: if there is a way to anticipate failure, https://review.openstack.org/#/c/49029/ was -2ed so we already know it won't merge13:47
sdaguettx: that doesn't keep it out of the gate13:48
sdaguejust prevents it from merging13:48
ttxcan we kill it and win 12 min ?13:48
*** Ryan_Lane has quit IRC13:48
sdaguettx: that won't be a reset though13:48
sdagueit will just stall13:49
*** rcleere has quit IRC13:49
sdagueso right now the requirements changes have a 50% chance of failing because the pypi CDN is broken, and it's going to take a week or two for them to roll out fixes13:50
clarkbsdague: ya the assumption is the failure closest to the head of the queue causes subsequent failures. this doesn't apply well to a flaky gate13:50
sdagueclarkb: it's not a flakey gate!13:50
sdagueit's bugs in the servers13:50
sdagueI'm going to hit anyone that uses the terms flakey gate or flakey tests from here on out :)13:50
clarkbsdague right less deterministic failures somewhere13:51
* anteaya removes flakey from her list of words, except to describe pie crust13:51
clarkbthat better? :)13:51
sdague:)13:51
sdagueyes13:51
sdagueso the assumption that we made previously was that the race in the server that caused a reset was introduced in the change that reset13:51
sdaguebut in experience we know that's rarely the case13:52
sdaguethese are almost always races that were introduced way before the patch that failed13:52
jgriffith+113:53
fungiwell, i think the ordering is still tuned for deterministic failures, really13:53
sdaguefungi: right13:53
fungiand honestly i have trouble imagining how we could tune it more efficiently for non-deterministic failure modes13:53
sdaguebut the system is now at a complexity, and the testing is now actually evolving into something closer to a real world env, where strict determinism is gone13:54
sdaguefungi: when something fails, kick it out entirely13:54
*** dkliban has joined #openstack-infra13:54
*** changbl has quit IRC13:54
fungithat still suggests an assumption that the race in the server that caused a reset was introduced in the change that reset13:54
sdaguebasically the problem is we effectively have "auto recheck" on things in the queue if they are still running any test job when something ahead of them fails13:55
sdaguewhich actually means we aren't keeping track of at least 50% of our races13:55
sdaguebecause they are hidden in the fact that the job auto rechecked multiple times in the gate13:56
*** dcramer_ has quit IRC13:57
sdagueif we kicked them, then we'd make folks go through the normal process of registering the race13:57
fungii expect the end result to be the same, statistically13:57
sdaguewell, the bias is actually that the longer a test you have, the more likely you are to get autorechecked13:58
sdagueso neutron actually gets a lot of autorechecks13:58
sdaguebecause of the long py26 job13:58
*** AlexF has quit IRC13:59
fungiyes, but it's only the recheck which happens when all other changes ahead succeed which counts13:59
fungiand that set of jobs runs only once for any given change14:00
sdagueso take those requirements jobs, they are going to keep failing14:00
sdaguewe've got 3 in a row in the gate14:00
sdagueand at 50% fail rate, everything behind it is basically "give up, wait until dinner"14:00
sdagueand they've failed a lot already14:00
fungiright, so the net result is that we should have some automated means of preventing approvals for requirements changes? that will be the end result anyway14:01
sdaguebut we are auto rechecking them, vs. kicking them out and making a manual call to put them back in14:01
*** tkammer has joined #openstack-infra14:01
sdagueor, if you fail, stick you at the end of the queue14:01
sdaguethe next time a reset happens14:02
fungibasically, the longer the integrated gate queue is, the greater the chance that changes tested more heavily against jobs with nondeterministic failures will require manual intervention (because they get more opportunities to be kicked out)14:02
*** markmc has quit IRC14:03
sdaguefungi: sure, but then we can have some human control over patches14:03
fungihowever, when the queue is short, the same changes don't get kicked out as aggressively because they have those same jobs run fewer times, at least under your proposal14:03
sdaguefungi: yes14:04
*** jcoufal has quit IRC14:04
sdaguehowever, on average, the queue will be shorter14:04
sdaguebecause we are shedding things out14:04
sdagueand making people put them back in14:04
openstackgerritFelipe Reyes proposed a change to openstack-infra/jenkins-job-builder: Add tests to the assignedNode option  https://review.openstack.org/4915814:04
sdaguewhich would also let a gate fixing bug fix have a better chance of moving through the queue14:04
fungii think that shorter queue comes at the expense of a lot of additional developer time and a slowing down of the overall pace of development. i guess that could be argued as a positive outcome, but i'm unconvinced14:05
*** markmcclain1 has quit IRC14:05
mriedemhmm, speaking of translations...14:06
mriedembabel.core.UnknownLocaleError: unknown locale 'tl_PH'14:06
mriedemthat's coming from cinder using babel 0.9.614:06
fungiand moving them to the back of the queue on failure risks that the developers will never get feedback if the gate is long enough that it never empties, but maybe that will happen less often the fewer failing changes are closer to the head14:07
*** thedodd has joined #openstack-infra14:07
sdaguepart of the issue is that our entire categorization system of races and rechecks doesn't work unless the change leaves the queue14:08
*** davidhadas_ has joined #openstack-infra14:09
sdagueunrelated precise31 looks bad14:09
sdaguehttps://jenkins01.openstack.org/job/gate-python-cinderclient-pep8/148/console14:09
fungichecking14:09
fungiahh, yep14:10
*** AlexF has joined #openstack-infra14:10
fungii've offlined it while i prod it a littole14:10
fungilittle14:10
*** davidhadas has quit IRC14:11
sdaguewhat about an intermediate solution where you only get to be auto-rechecked if you fail a shared job?14:12
sdagueor more specifically if you fail a job unique to you, you don't get to be put back into the gate automatically on a reset above you. I think that could be figured out programmatically14:13
mriedemttx: given your discussion with russellb about translations, i think this is going to be a problem for rc1: https://bugs.launchpad.net/cinder/+bug/123369414:14
uvirtbotLaunchpad bug 1233694 in cinder "babel 0.9.6 can't compile tl_PH locale" [Undecided,New]14:14
sdaguemriedem: we're trying to get babel 1.1 into requirements, it's in the gate :)14:14
mriedemsdague: link to review?14:15
sdaguehttps://review.openstack.org/#/c/48739/14:15
*** jcoufal has joined #openstack-infra14:15
*** mkoderer has quit IRC14:15
fungisdague: agreed, i think zuul could fairly easily confirm whether there were no changes ahead of you which are being tested with the same job14:15
mriedemsdague: thanks14:15
*** Alex_Gaynor has quit IRC14:15
*** tvb|afk has quit IRC14:18
sdaguefungi: so it would be really nice to have something that auto downs nodepool nodes on a Java stack trace14:19
sdagueso it doesn't kill 6 changes before we notice14:19
mordredsergmelikyan: the #!/usr/bin/env python is not used/required - it's just in the boilerplate setup.py we copy around now14:20
mordredsergmelikyan: is it problematic for you?14:20
*** tizzo has joined #openstack-infra14:21
fungisdague: nodepool already does that, effectively14:22
fungisdague: the two slaves mentioned today are static/reused virtual machines, not under nodepool's control14:23
sdagueok, so precise31 failed at least 3 patches14:23
sdagueoh, ok, so how about those?14:23
mordredmriedem: ok14:23
fungisdague: jenkins would need some way of interpreting the failure and taking that node out of service. would probably be a jenkins plugin14:24
sdaguefungi: how do you do it now?14:24
sdaguemanually14:24
fungisdague: i log into the jenkins webui and click a button14:24
*** dcramer_ has joined #openstack-infra14:24
sdagueso jenkins has a REST interface14:24
sdagueI wonder if we could look at the static jobs and automate that14:24
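A sketch of what automating that might look like against Jenkins' REST interface; the credentials are placeholders, the node name is just the slave mentioned above, and whether this fits the auth setup is an open question:
    # take a suspect static slave offline via the Jenkins REST API
    # (user/token are placeholders; a CSRF crumb may also be required)
    curl -X POST -u "$JENKINS_USER:$JENKINS_API_TOKEN" \
        --data-urlencode 'offlineMessage=java exception in console log, needs investigation' \
        'https://jenkins01.openstack.org/computer/precise31/toggleOffline'
    # note: toggleOffline flips the state, so check the node is currently online first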
fungisdague: it's certainly possible, but would probably fail up to several jobs before it was taken out of service14:25
sdagueok, still probably faster than human eyes :)14:25
fungiand the current plan, i think (last i heard) was to switch the static slaves to nodepool management anyway14:26
fungiwhich is probably a better use of our time14:26
sdagueok, cool14:26
*** tizzo has quit IRC14:27
anteayamordred: could you take a peek at a pm i sent you?14:27
sdagueagreed :) just trying to figure out all the fails that are doing us damage now14:27
fungithere's a non-zero chance right now that the recent rise in static node failures is due to us pushing jenkins too hard, so having each master start taking all of its slaves offline without a human in the loop to go fix them would potentially take us in the wrong direction as available testing resources go14:27
*** pblaho has joined #openstack-infra14:28
fungiwe're taking a bit of a wait-and-see approach since the jenkins master restarts over the weekend, but may decide this is a sign we need more masters and have scaled the current ones to too many slaves apiece14:28
mordredeek. that's a great scrollback to wake up to14:29
fungimordred: it was mostly about you14:29
fungi(okay, not really)14:30
mordredI think I might be leaning towards sdague in terms of kicking things out once they report failure14:30
mordredalthough I'm baffled about how to get more priority on these14:30
*** dizquierdo has joined #openstack-infra14:31
mordredsdague, ttx: perhaps a strongly worded email similar to the above thing sdague said about it not being a flaky gate or flaky tests?14:31
ttxmordred: we discussed it earlier. The correct wording is "f*cked gate" and "gate-f*cking bugs"14:32
ttxthe gate is perfectly fine and honest, it's just getting abused14:32
mordredthe gate is, in fact, showing us the current state of the software14:33
mordredit's just doing it in aggregate14:33
openstackgerritSean Dague proposed a change to openstack-infra/config: add tenant isolation jobs to neutron patches  https://review.openstack.org/4914414:33
ttxmordred: I think those issues are getting some attention. They're just tricky issues, so everyone prefers to look at another problem14:33
mtreinishsdague: ^^^ so greenlighting tenant isolation on neutron caused some of the issues?14:34
sdaguemtreinish: that's the guess14:34
ttxclarkb: did you just throw some muscle at events processing ? the q is down to 014:34
ttxfor the first time in the last 10 hours at least14:34
sdaguebased on temporal correlation14:34
clarkbttx nothing from me. I am not actually here this is an illusion14:35
ttxsomething definitely happened14:35
anteayaclarkb: ghost tooth14:35
*** markmcclain has joined #openstack-infra14:37
mriedemsdague: looks like babel 1.1 isn't going to cut it, there is a bug in there for zh_CN14:37
sorenmordred: devpi is working out quite well. Thanks for the recommendation.14:37
mriedembabel.core.UnknownLocaleError: unknown locale 'zh_CN'14:37
mriedemhttp://babel.pocoo.org/docs/changelog/#version-1-314:37
jeblairttx: if you send me the status.json i can try to figure out why the status page didn't work14:38
jeblairthe post queue is bursty because the gate queue is being reset so much, zuul spends most of its time processing that.14:38
mtreinishsdague: that's pathetic, and we expect it to have a full gate at some point in the future?14:38
ttxjeblair: I happened to have saved it14:39
mordredsoren: w00t!14:39
*** davidhadas has joined #openstack-infra14:39
mordredmriedem: so you're saying we're going to need Babel 1.3?14:40
ttxjeblair: sent14:40
jeblairttx: event queues build up because the gate queue is being reset so much, zuul spends most of its time processing that.  :)  as soon as it settles for a bit, zuul works through the backlog quickly.14:40
mriedemmordred: yes14:40
mriedemmordred: added a note here: https://review.openstack.org/#/c/48739/14:40
mriedemi'll update my cinder bug14:40
*** Alex_Gaynor has joined #openstack-infra14:41
jeblairsdague, mordred: if you're very concerned with the pypi failures and the requirements changes, perhaps this would help: https://review.openstack.org/#/c/49093/14:42
*** davidhadas_ has quit IRC14:43
mriedemmordred: i just confirmed that babel 1.3 compiles zh_CN14:43
*** tkammer has quit IRC14:43
*** tkammer has joined #openstack-infra14:44
*** mgagne has joined #openstack-infra14:44
*** mgagne has joined #openstack-infra14:44
mordredjeblair: yes. I'm hoping that it will14:44
jeblairso let's merge it already? :)14:44
fungijeblair: i already approved14:44
jeblairfungi: thanks14:45
*** Ryan_Lane has joined #openstack-infra14:45
fungiit was on top of my pile to look at from overnight gate-things14:45
fungialso, https://review.openstack.org/49144 should probably get a high review priority so that neutron folk don't spend too long thinking their issues are resolved14:47
mriedemmordred: also confirmed that babel 1.3 compiles tl_PH, so i'll push up a patch to openstack-requirements for bumping babel to 1.314:47
mordred+2 from me - I'll let jeblair +A it14:47
mordredmriedem: you wanna just hijack the 1.1 patch?14:48
mriedemmordred: hijack how?14:48
mriedempush another patch to it?14:48
mordredyah14:48
mordredsdague: in our earlier conversation about ejecting things from the gate14:48
mriedemsdague: mordred: i'm not sure if my CLA will allow that - sdague - do you know?14:49
mriedem(pushing a change to a non-ibm patch)14:49
*** Ryan_Lane has quit IRC14:49
mordredsdague: we could, if we happen to see patches that have failed in the middle of the gate, just push commit message amending patches to them which will get them ejected (evil hack)14:49
sdaguemordred: yeh, I know, but I haven't fully scripted that hack yet :)14:50
mordred sdague still doesn't really help with the recheck machinery though14:50
mordredsdague: :)14:50
sdaguegate-snipe14:50
sdaguemriedem: you should be fine14:51
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Move savanna under openstack org  https://review.openstack.org/4849114:51
fungimordred: having zuul notice cdrv=-2 or aprv=0 comments in the gerrit stream on changes it's got in a dependent pipeline and dequeue them might be a good feature request14:51
mriedemsdague: ok, i'll save the transcript :_14:51
mriedem:)14:51
sdaguemriedem: it's not even code ;)14:51
mriedemsdague: yeah, i feel i have to err on the side of caution though14:52
*** changbl has joined #openstack-infra14:52
*** rcleere has joined #openstack-infra14:54
sdagueyeh, no worries. I can snag it if you like.14:57
openstackgerritMatt Riedemann proposed a change to openstack/requirements: Raise Babel requirements to >= 1.3  https://review.openstack.org/4873914:59
openstackgerritSean Dague proposed a change to openstack/requirements: Raise Babel requirements to >= 1.3  https://review.openstack.org/4873914:59
sdaguedoh14:59
sdaguewell, that was a thing14:59
mriedemsdague: jinks14:59
mriedem*jinx14:59
sdaguewell, it looks like I've got patch 3, it will be funny to see how jenkins handles all of that15:00
mriedemsdague: if you don't mind, i'd like to push mine up so it addresses the cinder bug15:00
mriedemin the commit message15:00
*** matty_dubs|gone is now known as matty_dubs15:00
sdaguemriedem: sure15:00
sdagueso pull the review again and do another --amend15:00
sdaguethen it should sit on top15:01
*** jcoufal has quit IRC15:02
*** pblaho has quit IRC15:02
*** boris-42 has quit IRC15:02
sdaguejeblair: I don't understand the jenkins fail on my layout patch - http://logs.openstack.org/44/49144/3/check/gate-config-layout/8a686d1/console.html15:03
*** DinaBelova has quit IRC15:04
openstackgerritMatt Riedemann proposed a change to openstack/requirements: Raise Babel requirements to >= 1.3  https://review.openstack.org/4873915:04
sdagueas it looks like I have all the symbols cross listed right, unless i've gone blind15:04
openstackgerritA change was merged to openstack-infra/config: Set pip timeout to 60s when using pypi  https://review.openstack.org/4909315:04
jeblairsdague: needs to be added to the devstack-jobs job group at the end of devstack-gate.yaml15:05
jeblairsdague: (so that they are instantiated with check- and gate- variants)15:05
*** dkranz has joined #openstack-infra15:05
jeblairsdague: left comment15:06
*** yamahata has joined #openstack-infra15:06
sdaguegotcha15:06
openstackgerritSean Dague proposed a change to openstack-infra/config: add tenant isolation jobs to neutron patches  https://review.openstack.org/4914415:07
sdagueok, fixed15:07
sdaguejeblair: so what do you think about the idea in zuul that if you've failed a job which no one else above you in the queue is running, you don't get automatically restarted when a reset above you happens15:08
fungigah, yeah i overlooked that when i pointed out the other issues with the patch15:09
*** yamahata has quit IRC15:09
fungii guess we all did. thank you gate-config-layout job! ;)15:10
sdagueheh15:10
sdaguefungi: https://jenkins02.openstack.org/job/gate-ceilometer-python26/1282/console - crc fail?15:11
fungifun. i'll see if that tarball's still busted15:12
*** basha has joined #openstack-infra15:13
fungipossible that we have a race when a tarball is being uploaded while another job tries to download it15:13
jeblairsdague: i'm still thinking about it, but so far i'm not a big fan; it reverses a lot of the understanding about what zuul should be doing, so i don't want to embark on that lightly.15:13
ttxmordred, jeblair: I was discussing with sdague the need to temporarily restrict the gate to RC1 and gate fixes, but it seems things are flowing slightly better now ?15:13
freyeswhois mgagne15:13
mgagneI am me15:14
sdaguejeblair: so here's the thing15:14
freyesmgagne: sorry, I was trying to do /whois foo to check if you where available15:14
jeblairsdague: we _do_ see patches affecting following patches from time to time, and in the case of flakey failures, i'm not sure ejecting and reporting them more often is going to help.  it just means that people roll the dice more often than a machine.15:15
sdagueneutron is currently only passing about 4% of the tempest load a nova to land a change15:15
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Add jobs for extra and dib repos  https://review.openstack.org/4916715:15
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Remove unused savanna rtfd jobs  https://review.openstack.org/4916815:15
*** therve has quit IRC15:15
sdagueoh, different thing :)15:15
freyesmgagne: I updated the change 48661 (add git shallow clone support) in case you wanna take a second look :)15:15
fungisdague: that tarball's fine, so either it got corrupted in transit or it got downloaded while it was being overwritten15:15
jeblairsdague: parse error on "neutron is currently only passing about 4% of the tempest load a nova to land a change"15:15
sdaguejeblair: it's not related to this conversation15:15
*** Ryan_Lane has joined #openstack-infra15:16
jeblairoh.  stricken from record then.  :)15:16
sdaguejeblair: so basically right now patches have a window of "auto recheck"15:16
sdaguewhile they are auto rechecking, developers don't get notified of the races that are being exposed in their test run15:16
sdagueelastic-recheck doesn't get to see that data15:16
sdagueall that race behavior is hidden15:17
jeblairi don't really look at it that way; it's the same behavior we've always had in zuul, which is that your patch is rejected only if it fails on its own merit15:17
sdaguebut I think it made the assumption that the reason for a fail was related to something in the queue15:17
jeblairwe had a time when for one small case, patches were rejected _not_ based on their own merit (when there were merge conflicts), and we got, rightly so i think, complaints because of that15:17
jeblairsdague: that does not have to be hidden from e-r15:17
jeblairsdague: i suggested a while back that e-r should consume the zmq event stream15:18
jeblairsdague: and then it can look at all the jobs it wants15:18
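A minimal sketch of what consuming that zmq event stream could look like from elastic-recheck's side, assuming the Jenkins zmq-event-publisher conventions of the time; the endpoint address, port 8888, and the "onFinalized <json>" message shape are assumptions here, not taken from this log:

    import json
    import zmq

    context = zmq.Context()
    sub = context.socket(zmq.SUB)
    sub.connect("tcp://jenkins01.openstack.org:8888")  # assumed publisher endpoint
    sub.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to all events

    while True:
        message = sub.recv().decode("utf-8")
        event, _, payload = message.partition(" ")
        if event != "onFinalized":
            continue
        build = json.loads(payload)
        # Every finished build is visible here, including runs a later gate
        # reset discards, so their log URLs can be examined even though they
        # are never linked from a Gerrit comment.
        print(build.get("name"), build.get("build", {}).get("status"))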
sdaguebut it can't look at their logs15:18
jeblairwhy not?15:18
jeblair_all_ logs are saved15:18
sdagueeven the reset ones?15:18
jeblairyep15:18
*** therve has joined #openstack-infra15:19
sdaguewe just don't link them anywhere?15:19
jeblaircorrect.  you can still see them in the logs.o.o directory listings15:19
*** Ryan_Lane has quit IRC15:20
fungiand i believe you will still find them when querying logstash as well15:20
sdagueok, so can we change that, and actually report to gerrit when resets happen, and the location of the logs?15:20
jeblairsdague: i think we would be lynched.15:20
fungii think that would generate waaaay lots of gerrit comment spam unless we held that until the end, in which case we'd end up with potentially over-limit comments instead15:21
jeblairthe number of comments in gerrit would make it unusable...15:21
sdagueit's not spam, it's real failures15:21
jeblairit's possible real failures, based on a future set of changes which may not merge...15:21
sdaguewe have to get out of this mindset that the fails need to be swept under the rug15:21
sdagueat this point in time it's 90% real failures at least15:21
jeblairwe're not sweeping them under the rug; they just aren't confirmed15:22
sdaguethey are made invisible15:22
sdagueyou can't confirm invisible things15:22
sdagueand it makes people think the system is more reliable than it is15:23
jeblairi don't think anyone thinks the system is reliable15:23
sdagueso attacking races doesn't get prioritized15:23
mroddenso Elastic Recheck is my new best buddy15:23
*** Ryan_Lane has joined #openstack-infra15:24
jeblairsdague: consider it this way: currently the only test run that matters as far as reporting to gerrit goes is the "last" one -- the one where the patch is at the head of the queue.  with non-deterministic behavior, it has a certain % chance of merging.15:24
*** senk has joined #openstack-infra15:25
*** basha has quit IRC15:25
*** datsun180b has joined #openstack-infra15:26
jeblairsdague: your proposal is to make the _first_ test run the one that matters.  if that first run hits a failure, the patch gets ejected.  the same percentage chance applies there...15:26
jeblairsdague: except that if your patch has the bad luck to be at the end of the queue, it may get a chance to fail over and over again15:26
fungithis is why i was saying earlier that i don't think it will change the end result, statistically-speaking, other than to make the outcome less accurate15:27
*** julim has quit IRC15:27
jeblairsdague: so basically, you start with the same chance of failing or merging, and your proposal increases the chance of failure due to the length of the queue15:27
jeblairthe length of the queue is not an indication of the quality of your patch15:27
jeblairso it's being tied to an irrelevant variable15:27
fungiincreasingly so the longer the queue is ahead of you15:27
fungiyup, agreed15:27
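A back-of-the-envelope illustration of that point, with an assumed (not measured) spurious-failure rate:

    # If a job hits a spurious failure with probability p on any single run,
    # a patch that is re-run after every reset ahead of it gets k attempts,
    # and the chance of being ejected at least once is 1 - (1 - p)**k.
    p = 0.10  # assumed per-run spurious failure rate
    for k in (1, 5, 10, 20):
        print("%2d runs -> %.0f%% chance of at least one spurious failure"
              % (k, 100 * (1 - (1 - p) ** k)))
    # 1 run -> 10%, 20 runs -> ~88%: deep in the queue, eject-on-first-failure
    # is dominated by queue length rather than by the quality of the patch.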
*** Cyril2 has joined #openstack-infra15:27
sdagueok, fair15:27
*** senk has quit IRC15:29
ttxcome on 42640, you can do it15:30
ttxwe need some channel to place bets on merges in the gate15:30
ttxfunnier than snail races15:30
sdagueso I realize the crux of my problem is the lack of a promotion feature in the gate15:30
sdagueif we have 50 changes in the gate queue, and we have a race fix, we're going to wait 8 hrs for a merge15:31
jeblairsdague: move this patch to top?15:31
*** basha has joined #openstack-infra15:31
sdaguejeblair: yes15:31
jeblairsdague: yeah, that's high on my list15:31
sdagueif we kicked fast, at least the gate queue would turn over in order quicker15:31
sdagueso things would progress15:31
sdagueor if we had some probability matrix, on a gate reset reorder based on chance to succeed15:31
sdagueor lacking that, on smallest lines of change15:32
jeblairsdague: i think that an infra-root controlled method of promoting changes could be done relatively easily15:32
*** fungi has quit IRC15:33
jeblaircreating a ui to deal with that is slightly more complicated (zuul's web ui is read-only at the moment; dealing with authn/z etc is a bit of overhead that needs to be written)15:33
jeblairwe could continue down the 'magic incantations in gerrit comments' road, but that just feels weird.15:33
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Add jobs for extra and dib repos  https://review.openstack.org/4916715:33
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Remove unused savanna rtfd jobs  https://review.openstack.org/4916815:33
*** fungi has joined #openstack-infra15:33
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Add support for multiple bug matches in a failure  https://review.openstack.org/4917115:34
sdaguejeblair: what about another gerrit column15:34
jeblairi really wish we had gerrit buttons that just emitted events15:34
jeblairmaybe that's worth thinking about as a gerrit plugin.15:34
zulmordred:  ping so we are getting rid of d2to1 now?15:34
mordredzul: god yes. we've been getting rid of it for months15:36
mordredzul: upstream became unresponsive and there was a critical bug15:36
*** basha has quit IRC15:36
mordredzul: so we merged it into pbr and took over15:36
zulmordred:  ah15:36
jeblairsdague: the column might work (i'm not sure if you can have a fully-optional column), but that feels weirder than magic comments (it's there all the time, unlike comments which are there only when you need them)15:36
mordredjeblair: I think magic comments sounds great15:36
mordredjeblair: also, yes to exploring gerrit plugins that make buttons that emit events -especially if we can acl those15:37
jeblairokay, i can try to get over my aversion to a magic-comment-based ui.  :)15:38
*** dkranz has quit IRC15:39
*** AlexF has quit IRC15:40
*** julim has joined #openstack-infra15:42
sdague:)15:42
sdagueso before adding more magic comments....15:43
sdaguewe really need acls on magic comments15:43
jeblairsdague: i believe they exist15:43
sdaguebecause with the abuse of recheck no bug, I really don't want to open up "jump the queue" to anyone15:43
jeblairsdague: i believe wikimedia added support for having zuul only process certain comments from certain email addresses15:44
jeblairsdague: also, i think you just picked the comment text.  :)15:44
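For illustration, a rough sketch of what such a restriction might look like in a Zuul layout.yaml of that era; the comment_filter/email_filter option names, the pipeline name, and the regexes are assumptions, not taken from a real deployment:

    pipelines:
      - name: promote
        manager: IndependentPipelineManager
        trigger:
          - event: comment-added
            comment_filter: '(?i)^\s*promote\s+\d+(,\d+)?\s*$'
            email_filter: '^.*@openstack\.org$'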
openstackgerritA change was merged to openstack/requirements: Raise/Relax WebTest requirement to match Keystone  https://review.openstack.org/4889815:47
openstackgerritA change was merged to openstack/requirements: os/requirements should be in sync with itself  https://review.openstack.org/4906415:47
mordredOMG. A change merged to requirements15:48
mordredand the WebTest change merged!15:48
mordredwoot!15:48
* mordred goes to see if a bunch of patches just got submitted by jenkins15:48
*** primemin1sterp has quit IRC15:49
jeblairmordred: i don't think the post jobs have run for that yet15:49
*** thomasbiege has joined #openstack-infra15:49
annegentlemordred: ttx: no technical committee meeting today, right?15:49
annegentleand jeblair that was a funny buzzkill :)15:49
*** svarnau has joined #openstack-infra15:50
annegentleI could hear mordred's excitement deflating from here15:50
*** primeministerp has joined #openstack-infra15:50
clarkbpoor mordred15:50
ttxannegentle: right15:50
annegentlettx: ok thanks15:51
ttxheyy.. we might actually have a rc1 today. #20 and #31 in queue are icehouse openings.15:52
ttx(keystone & glance)15:52
*** yassine has quit IRC15:52
*** dkranz has joined #openstack-infra15:52
fungifinally got around to checking out what was going on with precise31, but was able to relaunch the slave agent on it and it ran jobs successfully, so no idea what was going on there15:53
fungibut apparently precise27 and 39 are dead, dead, deadski15:53
fungigonna have to hard reboot them i think15:53
fungissh timeout15:53
*** SergeyLukjanov has quit IRC15:54
*** dprince has quit IRC15:54
*** thomasbiege has quit IRC15:54
*** senk has joined #openstack-infra15:54
*** senk has quit IRC15:56
*** emagana has joined #openstack-infra15:58
jeblairmordred: propose-requirements-updates: LOST15:58
*** senk has joined #openstack-infra15:58
sdaguejeblair: so now that this is actually working - can you re +A it - https://review.openstack.org/#/c/49144/ ?15:59
sdaguethen I'll run off to lunch / biking for a bit and stop annoying folks for a while :)15:59
jeblairsdague: done!16:00
*** matty_dubs is now known as matty_dubs|lunch16:01
*** adalbas has quit IRC16:02
jeblairmordred: i think gearman-plugin may have missed it for some reason.  i hit 'save' on the job in the jenkins ui and it's registered now.16:03
mordredjeblair: blast!16:03
mordredjeblair: can we inject a post event for that job? I'd love to see it run (and I really do need to actually learn how to do job injection)16:04
jeblairmordred: you could use the script in zuul/tools to manually submit the gearman job with the correct parameters16:04
mordredfunny, you answered the question as I asked it16:04
mordredjeblair: do I connect to zuul.o.o and run it from there?16:04
openstackgerritA change was merged to openstack-infra/config: add tenant isolation jobs to neutron patches  https://review.openstack.org/4914416:04
fungimordred: from the shell on zuul, yes16:04
jeblairmordred: yeah, do it on-host on zuul.o.o; the script has --help to tell you the params to use16:04
jeblairmordred: and just look here for the values: https://jenkins.openstack.org/job/post-mirror-python33/130/parameters/?16:04
fungimordred: you can look in my .bash_history to see recent invocations too16:05
jeblairmordred: and https://jenkins.openstack.org/job/post-mirror-python33/131/parameters/?16:05
jeblair(since there were 2 reqs changes)16:05
mordred./trigger-job.py --job propose-requirements-updates -project openstack/requirements --pipeline post --refname master --oldrev 910f4f2d58bf3811d889d38b3e2aae575f83a13f --newrev 921c2f99afda3287d3530d5c45c93b598563f90716:06
mordreddoes that look right?16:06
mordred--project16:06
mordredI probably don't need oldrev and newrev, do I, since it's a post job16:07
*** crank has joined #openstack-infra16:08
jeblairmordred: i'd include them all16:11
*** dkliban has quit IRC16:12
*** boris-42 has joined #openstack-infra16:12
mordredjeblair: kk16:12
jeblairi'm pretty sure newrev is required16:12
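A rough sketch of what a manual submission like that could boil down to, using the gear client library to hand Zuul's Gearman server a build:<jobname> function with the Zuul parameters as a JSON payload; the server address and the exact ZUUL_* parameter names below are assumptions (defer to trigger-job.py --help and the Jenkins parameter pages linked above):

    import json
    import uuid
    import gear

    client = gear.Client()
    client.addServer("127.0.0.1")  # assumed: geard running locally on zuul.o.o
    client.waitForServer()

    params = {
        "ZUUL_UUID": uuid.uuid4().hex,
        "ZUUL_PIPELINE": "post",
        "ZUUL_PROJECT": "openstack/requirements",
        "ZUUL_REF": "master",
        "ZUUL_NEWREV": "921c2f99afda3287d3530d5c45c93b598563f907",
    }
    job = gear.Job(b"build:propose-requirements-updates",
                   json.dumps(params).encode("utf-8"))
    client.submitJob(job)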
Ryan_Lanehttps://review.openstack.org/#/c/47933/ <-- automatically abandoned :(16:12
jeblairRyan_Lane: sorry about that; i'm not a fan of auto-abandon myself; the script never seems to know when people are really busy.16:14
Ryan_Laneheh16:14
* Ryan_Lane nods16:14
Ryan_LaneI'm turning into a true californian. I'm using passive aggressiveness to poke for reviews rather than a poke16:14
*** thomasbiege has joined #openstack-infra16:15
*** adalbas has joined #openstack-infra16:15
*** osanchez has quit IRC16:15
*** mkoderer_ has joined #openstack-infra16:18
*** thomasbiege has quit IRC16:19
mordredRyan_Lane: watch out, they're going to take your Nola card away16:19
*** SergeyLukjanov has joined #openstack-infra16:20
*** DinaBelova has joined #openstack-infra16:20
*** DinaBelova has quit IRC16:20
*** flaper87 is now known as flaper87|afk16:21
portantemordred: do you know why "Jenkins" proposed a change in the Swift code base for various requirements?16:23
jeblairmordred: ! :)16:24
mordredportante: I do!16:24
fungiportante: success!16:24
portanteoh good16:24
mordredportante: it's a new magical feature and this is its first incarnation of success!16:24
portantemagical, oh, my, dragons, and castles, too?16:24
portante;)16:24
fungiportante: url? want to see how it came out16:24
mordredportante: we have rolled out automation of proposing syncs with the global requirements list any time a change lands to said list16:24
portanteneat16:25
portantesure, url would be great16:25
mordredhttps://review.openstack.org/#/q/topic:openstack/requirements,n,z16:25
fungiahh, even better16:25
jeblairswift devs don't seem to be receiving it very well16:25
mordrednope16:25
mordredI guess we should have expected that16:25
mordredI think "LOL... no" pretty  much sums it up16:26
jeblairmordred: it's always great to cap off months of collaborative work with someone laughing at you.16:26
portante;)16:26
portantethese things take time16:26
mordredno they don't16:26
mordredthese things have been in exactly this state since the project began16:27
*** DennyZhang has joined #openstack-infra16:28
*** DennyZhang has quit IRC16:28
portantecan you give "it" a name other than Jenkins when it posts a review, and possibly have it add an initial comment that describes what triggered it?16:29
mordredportante: it's pretty much identical to the translations update16:29
mordredI don't really think it needs a bunch of explanation - there is a global requirements list that the project has decided it will align to. this is a sync from that list. jenkins proposed the change. what more information would you want?16:30
portantewhen you say, "that the project" do you mean the individual project like swift, or the project meaning the greater openstack project?16:31
mordredOpenStack16:32
mordredswift is a sub-project of OpenStack16:32
*** DennyZhang has joined #openstack-infra16:32
portanteokay16:32
fungiboth precise27 and 39 wound up needing a hard reboot. 27 came up and was able to launch the slave agent and run jobs successfully, but i couldn't get the agent to launch on 39 until i deleted the slave from jenkins and re-added it from scratch16:35
funginow it's running jobs successfully, and we're back up to our full complement16:35
jeblairmordred: the requirements propose job starts at the same time that the mirror update job starts, yeah?  that might cause a problem if proposed requirements changes can't actually fetch their new requirements yet...16:36
openstackgerritDiane Fleming proposed a change to openstack-infra/config: Updated api.yaml for the netconn-api v1.0 and v2.0 refs - pom location changed  https://review.openstack.org/4903016:36
fungiunless those jobs are exempted from the mirror selection16:36
jeblairfungi: i'm increasingly of the opinion that nodepool for everything is the way to deal with this...16:37
fungijeblair: for slave failures? yes, agreed16:37
jeblairfungi: i don't think we want that; because we'd have to exempt _all_ jobs... and all jobs for all changes after them, etc...16:37
* fungi nods... i wonder if we can tell that all three mirror jobs have completed before proposing the change16:38
*** DinaBelova has joined #openstack-infra16:38
*** Ajaeger1 has joined #openstack-infra16:38
fungijob dependencies in zuul are limited to a tree for the moment, right?16:39
mordredI quit16:39
mordredportante, notmyname: I'm not going to talk to the other swift devs anymore. I will happily talk to you16:39
mordredit makes me too angry16:39
mordredjeblair: good point16:40
fungiportante had great constructive comments on https://review.openstack.org/4920616:42
mordredawesome16:42
mordredI honestly do prefer constructive comments. thank you portante16:42
fungiwhich ultimately should be addressed with a change to openstack/requirements and another to openstack/swift i think16:42
fungialso i think this underscores a strength of this update mechanism (as we saw with translations to some degree too)... control over how and when to approve the change continues to rest with the devs on each of the projects being integrated16:44
mordredfungi: ++16:44
fungiso it's not like release or infra are cramming these commits down any project's throat16:45
fungiceilometer already approved their update16:48
*** matty_dubs|lunch is now known as matty_dubs16:49
fungii'd be a little worried that there could be some resistance to the 4-line setup.py getting an hp copyright entry, though it doesn't particularly bother me16:52
*** derekh has quit IRC16:52
*** tkammer has quit IRC16:52
*** dkliban has joined #openstack-infra16:53
*** emagana has quit IRC16:55
hub_capfungi: mordred one less thing for me to do, +2!16:58
*** hashar has quit IRC16:58
zaroseems like the ci-puppetmaster.o.o and puppet-dashboard.o.o are down?16:58
hub_capand itll show me when i have requirements that arent in global (mordred i thought u pushed reviews for pexpect?)16:59
hub_caphttps://review.openstack.org/#/c/49207/1/requirements.txt16:59
*** emagana has joined #openstack-infra17:00
*** davidhadas has quit IRC17:00
*** julim_ has joined #openstack-infra17:00
*** Ryan_Lane has quit IRC17:01
*** davidhadas has joined #openstack-infra17:01
*** DennyZhang has quit IRC17:02
*** julim has quit IRC17:03
mordredhub_cap: I did - but it hasn't landed17:04
hub_capokey mordred17:05
mordredhub_cap: (so don't +2 it yet)17:05
*** afazekas_no_irq has quit IRC17:05
hub_caplooks like the review caused some issues w/ our shiz and cant be merged, so 1) ill fix that code in question, and 2) ill update the requirements as well17:05
mordredhub_cap: that said - I am hoping this will bring to people's attention when openstack/requirements is incompatible with their project17:05
mordredhub_cap: thank you!17:05
hub_capis ther a way to retrigger the jenkins job?17:05
hub_caponce ive cleaned up17:05
hub_capand yes im very happy that i dont have to see that my shits not inline w/ global- anymore17:06
hub_cap^ ^ by some manual process17:06
uvirtbothub_cap: Error: "^" is not a valid command.17:06
mordredportante: do you think adding a paragraph to the commit message about where the list came from and to propose changes to it if an entry here seems wrong would be helpful?17:06
hub_capu virt bot i hate u17:06
*** fbo is now known as fbo_away17:06
mordredhub_cap: yah. just recheck no bug the change17:06
hub_capsweet! and itll regen the diffs too???17:08
*** reed has joined #openstack-infra17:08
hub_capmordred: im renaming my project openstack-trove ;)17:08
zarofungi: do you know why those urls are not resolving?  they are referenced in the openstack wiki.17:08
mordredhub_cap: nope. the diffs will be the diffs17:09
mordredhub_cap: but if you update global requirements, it will submit a new patch to that change17:09
hub_capok cool, that makes sense for the pexpect thing17:09
hub_capbut wrt to my failures cuz maybe we arent using the right lib (lets say the mock version, which is an issue w/ our client), and i go and make that change in my review to fix it17:10
fungizaro: the puppet-dashboard is being rebuilt (anteaya is still working on it). the puppet master has not had a reachable web interface for as long as i've been on the project17:11
*** mrmartin has joined #openstack-infra17:11
mordredhub_cap: so, in that case, just put the relevant requirement fix in your requirements file in your change17:11
mordredhub_cap: git should be able to figure out the merge17:11
hub_capoh duh17:12
* hub_cap overlooks the obvious17:12
anteayazaro: you can see the beta puppet-dashboard here: http://15.185.153.222/17:12
anteayaI'm working on puppeting it to bring it into the fold17:13
anteayafold as in a place to keep sheep safe17:13
fungiclearly we need more sheep17:13
hub_capflock: openstack sheep-aas17:14
hub_caplol flock? wtf i need breakfast17:14
anteayahub_cap: and some vowels17:14
* anteaya hands hub_cap some vowels17:14
hub_cap<317:15
anteaya:D17:15
* fungi imagines a flock of sheep flying overhead17:16
anteayawith no limits17:16
fungiflock of pigs works better, i think17:16
*** gyee has joined #openstack-infra17:17
jog0the bots are taking over: https://review.openstack.org/#/c/49195/17:18
jog0mordred: ^17:19
Alex_Gaynor`hah17:19
anteayamaybe but if the pigs are flying, ttx is where a Hawaiian shirt and posing for a portrait17:20
anteayaautograph pen at the ready17:20
pleia2flying sheep? it's only tuesday :)17:20
hub_caphttp://3.bp.blogspot.com/_moh5MaCXCBw/SVBGf63ZJWI/AAAAAAAAAt4/GlI0pes8LXE/S1600-R/gir.pig.flying.jpg17:20
anteaya*wearing17:21
clarkbjog0: I think that means we can just drink beer all day. The bots are doing the real work17:22
mordredjog0: haha17:22
*** hogepodge has joined #openstack-infra17:23
*** sandywalsh has quit IRC17:23
fungiahh, gir, you never fail to amuse me17:24
jeblairjog0: that is a riot!17:24
fungiit's almost as if they're having a conversation17:25
fungisome projects may have been too eager to approve the requirements sync... https://review.openstack.org/4918917:27
*** prad has joined #openstack-infra17:29
jog0we have created a monster17:29
*** nati_ueno has joined #openstack-infra17:33
*** Cyril2 has quit IRC17:34
*** ArxCruz has quit IRC17:34
zaroanteaya: thnx. looks very nice.17:35
*** ArxCruz has joined #openstack-infra17:38
anteayathansk17:40
anteayathanks17:40
*** jerryz has joined #openstack-infra17:41
hub_capjog0: its only a monster when it starts -1'ing17:44
openstackgerritMatthew Treinish proposed a change to openstack/requirements: Add tempest to projects list  https://review.openstack.org/4922517:46
jog0hub_cap: as long as we don't have +2 bots ...17:47
fungijog0: i may be a +2 bot ;)17:48
dimslol17:49
*** hogepodge has quit IRC17:54
*** DinaBelova has quit IRC17:55
*** hogepodge has joined #openstack-infra17:56
*** tvb|afk has joined #openstack-infra17:57
*** chuckieb has joined #openstack-infra17:59
*** chuckieb|2 has joined #openstack-infra18:02
*** sarob has joined #openstack-infra18:03
*** DinaBelova has joined #openstack-infra18:04
openstackgerritSean Dague proposed a change to openstack-infra/elastic-recheck: don't look for things only in FAILURE logs  https://review.openstack.org/4922818:04
*** chuckieb has quit IRC18:05
mriedemsdague: i had to read that commit message a few times ^ :)18:05
sdagueheh18:05
*** gyee has quit IRC18:11
*** ericw has quit IRC18:11
mriedemmordred: hey, hate to bug you, just wondering about this? https://review.openstack.org/#/c/49006/18:12
mriedemmaybe automate force merge for the quantumclient branch? :)18:13
*** chuckieb has joined #openstack-infra18:16
openstackgerritDavid Peraza proposed a change to openstack/requirements: Adding sqlalchemy db2 dialect dependencies  https://review.openstack.org/4874518:17
mordredmriedem: sorry - I'll get to it in just a little bit :(18:19
mriedemmordred: no problem, i saw something about beer drinking and i'm assuming hell raising18:19
*** davidhadas has quit IRC18:20
*** chuckieb|2 has quit IRC18:20
Ajaeger1Hi Infra team, is there a chance to get https://review.openstack.org/#/c/47691/ and https://review.openstack.org/#/c/47897/ reviewed and approved, please?18:26
*** mrmartin_ has joined #openstack-infra18:28
*** mrmartin has quit IRC18:30
clarkbAjaeger1: reviewed, thank you for the new patchset18:30
*** rnirmal has joined #openstack-infra18:31
Ajaeger1clarkb: thanks for the review18:31
*** chuckieb has quit IRC18:32
*** mrmartin__ has joined #openstack-infra18:36
*** mrmartin_ has quit IRC18:39
*** Ryan_Lane has joined #openstack-infra18:40
*** qba73 has joined #openstack-infra18:41
Ryan_Lanemordred: heh, for real. I need to be way more aggressive to keep that18:41
*** DinaBelova has quit IRC18:44
*** osanchez has joined #openstack-infra18:52
*** ryanpetrello has quit IRC18:53
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Correct unattended upgrades origins assignment  https://review.openstack.org/4923518:53
*** thingee_zzz is now known as thingee18:54
annegentlepleia2: around? Could I get the text/slides of your workshop you did? Looking for more inspiration18:55
pleia2annegentle: the only slides I had were the generic ones I pulled from /marketing so folks knew what openstack was18:56
*** osanchez has quit IRC18:57
*** OlivierSanchez has joined #openstack-infra18:57
pleia2annegentle: http://princessleia.com/journal/?p=8526 has some more details about what I did, plus the handout that had some manual nova commands I had people try after I walked them through basics of starting a VM in horizon18:57
pleia2http://princessleia.com/presentations/CodeChixworkshopOpenStackDocumentation.pdf is the handout, you're welcome to use/modify/whatever18:58
*** derekh has joined #openstack-infra18:58
jeblairit's meeting time!18:59
lifelessin 60 seconds :P18:59
annegentlepleia2: thank you!18:59
pleia2annegentle: you're welcome, happy to help18:59
*** jerryz has quit IRC18:59
*** jerryz has joined #openstack-infra19:00
* pleia2 puts on 2 meetings hat19:00
annegentlepleia2: oh how cool you put the PDF of the Ops Guide on there, great thinking19:00
pleia2annegentle: yeah, I figured it was the best "hit the ground running" docs we have19:00
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Disable the salt minion data cache on master  https://review.openstack.org/4923819:04
*** melwitt has joined #openstack-infra19:05
*** che-arne has quit IRC19:06
*** ryanpetrello has joined #openstack-infra19:09
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Add support for multiple bug matches in a failure  https://review.openstack.org/4917119:10
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Make the salt master also a minion  https://review.openstack.org/4924019:10
*** davidhadas has joined #openstack-infra19:10
*** DinaBelova has joined #openstack-infra19:11
*** saper_ has joined #openstack-infra19:18
*** locke1051 has joined #openstack-infra19:19
*** annegentle_ has joined #openstack-infra19:21
*** fboz has joined #openstack-infra19:22
*** fboz is now known as fbo19:22
*** jeblair_ has joined #openstack-infra19:22
*** niska` has joined #openstack-infra19:24
*** jeblair has quit IRC19:24
*** DinaBelova has quit IRC19:24
*** jeblair_ is now known as jeblair19:24
*** GheRiver1 has joined #openstack-infra19:25
*** ttx` has joined #openstack-infra19:25
*** adam_g` has joined #openstack-infra19:26
*** lillie has joined #openstack-infra19:26
*** lillie is now known as Guest7410419:26
*** locke105 has quit IRC19:27
*** adam_g has quit IRC19:27
*** annegentle has quit IRC19:27
*** GheRivero has quit IRC19:27
*** olaph has quit IRC19:27
*** oubiwann has quit IRC19:27
*** niska has quit IRC19:27
*** fbo_away has quit IRC19:27
*** dtroyer has quit IRC19:27
*** EmilienM has quit IRC19:27
*** saper has quit IRC19:27
*** Guest31783 has quit IRC19:27
*** ttx has quit IRC19:27
*** Adri2000 has quit IRC19:27
*** davidhadas has quit IRC19:27
*** DinaBelova has joined #openstack-infra19:28
*** davidhadas has joined #openstack-infra19:28
*** ttx` is now known as ttx19:28
*** adam_g` is now known as adam_g19:28
*** ttx has quit IRC19:28
*** ttx has joined #openstack-infra19:28
*** oubiwann has joined #openstack-infra19:28
*** oubiwann is now known as Guest4960019:29
*** EmilienM has joined #openstack-infra19:30
*** dtroyer has joined #openstack-infra19:31
*** dizquierdo has left #openstack-infra19:32
*** vipul is now known as vipul-away19:35
*** davidhadas has quit IRC19:35
*** davidhadas has joined #openstack-infra19:36
*** Ryan_Lane has quit IRC19:37
openstackgerritA change was merged to openstack-infra/elastic-recheck: don't look for things only in FAILURE logs  https://review.openstack.org/4922819:37
*** olaph has joined #openstack-infra19:38
*** dcramer_ has quit IRC19:43
*** hashar has joined #openstack-infra19:43
openstackgerritSean Dague proposed a change to openstack-infra/elastic-recheck: add check_success tool  https://review.openstack.org/4924619:44
*** davidhadas has quit IRC19:45
*** davidhadas has joined #openstack-infra19:45
*** w_ has joined #openstack-infra19:52
*** olaph has quit IRC19:54
ttxthat "elastic recheck" dude is smarter than most our devs19:54
fungiat least they haven't been replaced by a small shell script. more like a modest-sized python script19:54
*** emagana has quit IRC19:54
anteayattx just handing out the compliments today19:54
ttxanteaya: I get more grumpy as we get closer to release19:55
anteayaoh goody19:55
anteayaI have more to look forward to19:55
ttxanteaya: my contract says I should be like that19:55
anteayaha ha ha19:55
pleia2hah19:55
jog0ttx: https://review.openstack.org/#/c/49195/ the bots are taking over19:55
anteayagood thing you are complying with your contract19:55
*** cody-somerville has quit IRC19:56
ttxjog0: now if the bots could fix the gate-fcking issues, we'd live in a perfect world19:56
jog0amen!19:57
ttxKeystone RC1 just delayed another 5/6 hours, thanks to bug 123040719:58
uvirtbotLaunchpad bug 1230407 in neutron "VMs can't progress through state changes because Neutron is deadlocking on it's database queries, and thus leaving networks in inconsistent states" [Critical,Confirmed] https://launchpad.net/bugs/123040719:58
*** boris-42 has quit IRC19:58
anteaya:(19:58
ttxGlance RC1 might hit in 35min in the best case scenario19:59
*** boris-42 has joined #openstack-infra19:59
anteayago go Glance RC120:00
* anteaya hands ttx a cat for petting as good luck20:00
*** DinaBelova has quit IRC20:01
*** thingee is now known as thingee_zzz20:02
anteayattx this would be your Glance patch you are waiting on? https://review.openstack.org/#/c/49029/20:03
ttxanteaya: don't name it ! Bad luck !20:05
*** vipul-away is now known as vipul20:05
* anteaya throws salt over her shoulder and swings a chicken20:06
*** cody-somerville has joined #openstack-infra20:06
*** cody-somerville has quit IRC20:06
*** cody-somerville has joined #openstack-infra20:06
*** enikanorov-w has quit IRC20:06
*** derekh has quit IRC20:07
*** enikanorov-w has joined #openstack-infra20:08
*** rnirmal has quit IRC20:08
lifelessttx: hey20:09
lifelessttx: I will send a mail on this but20:09
lifelessttx: whats your view on an API (currently on stackforge) doing releases20:09
lifelessttx: can we do it, does it interfere with incubation, what do we need to do over and above what we'd do for regular projects?20:10
ttxlifeless: you should be able to do whatever you want20:10
ttxlifeless: devil is in the details of course :)20:10
jeblairlifeless: what api do you have on stackforge?  didn't we move tripleo to openstack/ ?20:10
lifelessjeblair: tuskar20:11
lifelessjeblair: I plan to ask for the those projects to move once they have solid CI in place20:11
lifelessjeblair: (which means tempest etc IMO, and that probably comes behind the bm testing story given what tuskar does)20:12
jeblairlifeless: did tripleo have a scope expansion?20:13
lifelessjeblair: Not AFAIK20:13
jeblairlifeless: but tuskar repos are now part of tripleo?20:13
ttxjeblair: tuskar could actually be seen as part of the tripleO mission.20:13
lifelessjeblair: so tuskar fits under the original tripleo mission20:14
lifelessjeblair: deployment isn't 'get the code on machines' - thats not hard and not that interesting20:14
jeblairlifeless: it's okay, you don't have to convince me.  :)20:14
lifelessjeblair: sure :) just answering the questions20:15
mordredjeblair: answer is "yes, tuskar is managed by tripleo program now"20:15
lifelessjeblair: we did propose and then announce after discussion on the -dev list20:15
lifelessjeblair: should we have copied -infra?20:15
jeblairi guess i didn't expect a program to absorb a new project without a tc discussion, but that was clearly a lack of imagination on my part.  :)20:16
lifelessjeblair: (I'd like to avoid surprising you with this sort of thing)20:16
jeblairlifeless: well, we do normally like to help  :)20:16
jeblairlifeless: we should get the launchpad projects normalized and go ahead and move the git repos20:16
jeblairlifeless: we just scheduled a repo move this weekend for savanna20:16
lifelessjeblair: ok, cool - that would be great20:16
mordredjeblair: if I was smarter, I would have brought up tuskar when you asked that question20:17
lifelesswe have no plans to rename them at the moment20:17
lifelessjust shift their parent dior20:17
lifelessdir20:17
mordredright. that's a rename for us20:17
jeblairwe could probably lump tuskar in too, if it's ready; but i don't want to rush you if you want to think about renaming them or anything. but hey you just answered that.  :)20:17
lifelesssure, I was heading off questions about the final name20:17
lifelesswill you be sending announcements about the renames happening, or should I inform folk specifically ?20:18
lifelessand thanks20:18
jeblairlifeless: i'll send an announcement20:18
mordredlifeless: what would be great is if someone from tripleo could make a patch to infra renaming from stackforge/tuskar* to openstack/tuskar* - so that we make sure that we get everything that's appropriate20:19
*** julim_ has quit IRC20:19
mordredlifeless: keep in mind that once the repos move under openstack/ they are subject automatically to the restricted mirror and openstack/requirements20:19
jeblairlifeless: oct 5 1600 utc is the current scheduled time; devs will need to update git remotes then20:19
jeblairmordred, lifeless: oh, yes, if the mirror/reqs would be a problem, we can delay until that's ready.20:20
*** dmakogon_ has joined #openstack-infra20:20
*** johnthetubaguy has quit IRC20:20
SergeyLukjanovbtw the only requirement that we have in savanna that is not in openstack/requirements is python-savannaclient20:22
*** vipul is now known as vipul-away20:24
lifelessjeblair: I think we should wear the pain up front20:24
lifelessI will mail the list now to alert folk20:24
freyeshashar: hi, if you have some time to review my updated patch set (https://review.openstack.org/#/c/48661) would be great20:25
mordredSergeyLukjanov: we should probably go ahead and add a patch for that to o/r20:25
*** vipul-away is now known as vipul20:25
SergeyLukjanovmordred, yeah, I'll do it now20:25
hasharfreyes: not tonight sorry :/20:25
mordredhashar, mgagne, zaro, jeblair: did you see darraghb's question about jjb from last night?20:25
freyeshashar: no worries :)20:25
ryanpetrelloso in light of last week's pecan/wsme fiasco20:25
mordred20:22:52  mordred | having them have relationships, and also being able to upload things to places and then promote them20:26
mordred20:23:11  mordred | so, I think it might be worthwhile thinking about, especially as we think about multi-branch mirrors20:26
mordred20:23:31  mordred | it also has an offline mode where it's fine serving what it's already got if it can't talk to the network upstream20:26
ryanpetrellowe were wanting to set up something in CI that would gate pecan and wsme against eachother20:26
hasharfreyes: I noticed some activity, I might have commented one or two time but not that much :/  Been quite a busy day :)20:26
mgagnemordred: no, I was out of the office. IRC? review?20:26
ryanpetrelloI know we do this for official openstack projects20:26
ryanpetrelloanybody know how this works?20:26
mordredmgagne: hashar 04:50:15        darraghb | query on https://review.openstack.org/#/c/45165/, should the JJB code default to generating all XML elements even if the default is20:27
mordred                         | blank/no-value, or should it default to the plugin behaviour on which XML tags to set?20:27
mgagnemordred: checking20:27
hasharfreyes: ah your is actually quite simple. let me look20:27
clarkbryanpetrello: setup a job that tehy share, when you do that zuul treats them as dependent (the job needs to be added to the gate pipeline)20:27
clarkbryanpetrello: you can look at gate-tempest-devstack-vm-full for an example20:27
freyeshashar: indeed :) only trailing white spaces removal20:27
hasharmordred: regarding job, I am not sure whether we want to output empty XML element.  I think there is a bunch of existing modules that do that already.20:27
mordredryanpetrello: the trick is basically to have a snippet of code which checks out multiple repos of code appropriately - let me point you at the relevent bit from devstack-gate...20:28
hasharfreyes: look at my cover message :-)  The code from tests/base.py is copy pasted from tests/publishers/test_publishers.py20:28
hasharfreyes: so you will want to make tests/publishers/test_publishers.py to rely on the base class you introduced.20:29
openstackgerritSergey Lukjanov proposed a change to openstack/requirements: Allow use of python-savannaclient-0.3a3  https://review.openstack.org/4925220:29
freyeshashar: yes, I want to make it reusable, I have another patch to add some tests20:29
jeblairmordred, ryanpetrello: having that bit of code be a generic slave script that accepted a project list as arguments might be useful.20:29
mordredjeblair: I was just thinking the same thing20:30
ryanpetrello+120:30
freyeshashar: ok, I'll refactor it, but I'll split my patch, one to add base.py, another one for git shallow clone20:30
mordredryanpetrello: https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n7120:30
mordredthere is the non-generic part20:30
hasharfreyes: that would be ideal!20:30
mordredbut even it isn't bad20:30
freyeshashar: super, thanks for reviewing it20:30
mordredryanpetrello: $PROJECTS is pre-set to the list of projects to clone20:30
mordredryanpetrello: in your case, it would be PROJECTS="stackforge/wsme stackforge/pecan"20:30
ryanpetrelloright20:31
ryanpetrellookay, thanks!20:31
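A sketch of the "generic slave script that accepts a project list" idea, assuming the devstack-gate convention of a space-separated PROJECTS value plus ZUUL_URL/ZUUL_PROJECT/ZUUL_REF supplied by Zuul; checkout_projects is a hypothetical helper and is much simpler than the real devstack-vm-gate-wrap.sh logic:

    import os
    import subprocess

    def checkout_projects(projects, base="https://git.openstack.org"):
        zuul_project = os.environ.get("ZUUL_PROJECT")
        zuul_url = os.environ.get("ZUUL_URL", base)
        zuul_ref = os.environ.get("ZUUL_REF")
        for project in projects.split():
            dest = os.path.basename(project)
            subprocess.check_call(
                ["git", "clone", "%s/%s" % (base, project), dest])
            if zuul_ref and project == zuul_project:
                # Pull in the speculative ref Zuul prepared for the change
                # under test; the other projects stay at branch tip.
                subprocess.check_call(
                    ["git", "fetch", "%s/%s" % (zuul_url, project), zuul_ref],
                    cwd=dest)
                subprocess.check_call(
                    ["git", "checkout", "FETCH_HEAD"], cwd=dest)

    checkout_projects("stackforge/wsme stackforge/pecan")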
hasharfreyes: and thank you for enhancing the tests!20:31
*** tvb|afk has quit IRC20:31
mordredryanpetrello: I'm running around a little crazy today - but would be happy to help you sort out doing that - because I agree with jeblair having a more cookie-cutter answer to this question would be a nice building block20:31
*** sarob has quit IRC20:31
ryanpetrellomordred: yea, that would be awesome20:32
*** sarob has joined #openstack-infra20:32
*** flaper87|afk is now known as flaper8720:32
hasharfreyes: I will have a look at JJB changes tomorrow. Thx :)20:33
openstackgerritlifeless proposed a change to openstack-infra/config: Document bootstrapping of Gerrit ACLs.  https://review.openstack.org/4501120:33
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Add jobs for extra and dib repos  https://review.openstack.org/4916720:36
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Move savanna under openstack org  https://review.openstack.org/4849120:36
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Remove unused savanna rtfd jobs  https://review.openstack.org/4916820:36
*** sarob has quit IRC20:36
openstackgerritlifeless proposed a change to openstack-infra/config: Docs on bringing up Jenkins in new infrastructures.  https://review.openstack.org/4521620:38
openstackgerritlifeless proposed a change to openstack-infra/config: Document spinning up a derived zuul.  https://review.openstack.org/4516420:38
openstackgerritlifeless proposed a change to openstack-infra/config: Document running custom slaves in ones own infra.  https://review.openstack.org/4534620:38
openstackgerritlifeless proposed a change to openstack-infra/config: Non-openstack-ci support for launch/dns.py.  https://review.openstack.org/4498020:38
*** Ryan_Lane has joined #openstack-infra20:38
lifelessok thats super weird20:39
lifelessI pushed up a change20:39
*** dcramer_ has joined #openstack-infra20:39
lifelessI9bb037c31e192f1d719cfdd3a1d59053eba0c5b020:39
lifelessand gerrit doesn't see it20:39
lifelessbut it's at the bottom of the stack I just updated20:39
*** emagana has joined #openstack-infra20:40
*** thomasm has quit IRC20:40
fungilifeless: supposed to be the parent of Ic1d5a511 i take it?20:42
openstackgerritSergey Lukjanov proposed a change to openstack/requirements: Add savanna to tracked projects list  https://review.openstack.org/4925420:42
lifelessyeah20:42
fungi35cfe04c4d82fc89cadd5e7ff73690d7d1858aa8Move tuskar to openstack.20:42
*** qba73 has quit IRC20:42
lifelessthats the one20:43
fungigerrit doesn't report a change with that sha... a draft perhaps?20:43
lifelessNFI20:44
lifelessI'll make a spurious change to it and repush20:44
fungii'll see if i can find it in the db20:44
openstackgerritlifeless proposed a change to openstack-infra/config: Move tuskar to openstack.  https://review.openstack.org/4925520:45
openstackgerritlifeless proposed a change to openstack-infra/config: Docs on bringing up Jenkins in new infrastructures.  https://review.openstack.org/4521620:45
openstackgerritlifeless proposed a change to openstack-infra/config: Document spinning up a derived zuul.  https://review.openstack.org/4516420:45
lifelesssorry about the noise..20:45
openstackgerritlifeless proposed a change to openstack-infra/config: Document running custom slaves in ones own infra.  https://review.openstack.org/4534620:45
openstackgerritlifeless proposed a change to openstack-infra/config: Non-openstack-ci support for launch/dns.py.  https://review.openstack.org/4498020:45
funginormally gerrit wouldn't allow you to parent a change on a ref it doesn't know20:45
openstackgerritlifeless proposed a change to openstack-infra/config: Document bootstrapping of Gerrit ACLs.  https://review.openstack.org/4501120:45
lifelessthere it goes20:45
lifelessI think I ticked a bug20:45
lifelessI will describe it in LP for you to consider20:45
fungiyou wouldn't be the first ;)20:45
fungiyeah, says patchset 1, so wasn't a draft, at least not with that change-id20:46
* mordred blames SpamapS20:47
*** thingee_zzz is now known as thingee20:47
mordredSpamapS is better than anyone else at doing bizarre things with patch uploads20:47
openstackgerritlifeless proposed a change to openstack-infra/config: Docs on bringing up Jenkins in new infrastructures.  https://review.openstack.org/4521620:48
openstackgerritlifeless proposed a change to openstack-infra/config: Document spinning up a derived zuul.  https://review.openstack.org/4516420:48
openstackgerritlifeless proposed a change to openstack-infra/config: Document running custom slaves in ones own infra.  https://review.openstack.org/4534620:48
openstackgerritlifeless proposed a change to openstack-infra/config: Non-openstack-ci support for launch/dns.py.  https://review.openstack.org/4498020:48
openstackgerritlifeless proposed a change to openstack-infra/config: Document bootstrapping of Gerrit ACLs.  https://review.openstack.org/4501120:48
openstackgerritlifeless proposed a change to openstack-infra/config: Move tuskar to openstack.  https://review.openstack.org/4925520:48
*** gyee has joined #openstack-infra20:50
openstackgerritA change was merged to openstack-infra/config: Use Jenkins templates for old manual jobs  https://review.openstack.org/4769120:51
lifelesshttps://bugs.launchpad.net/openstack-ci/+bug/123384620:51
lifelessfungi: ^20:51
uvirtbotLaunchpad bug 1233846 in openstack-ci "pushing a new commit under an abandonded one lead to a phantom base" [Undecided,New]20:51
lifelessI think that should be reproducable20:51
fungihaha, yes possibly20:52
fungithis is likely something reported/fixed upstream after 2.420:52
*** sarob has joined #openstack-infra20:52
*** dkliban has quit IRC20:53
mordredfungi: one of these days we'll get to upgrade, and it's going to be sexy20:54
lifelessmordred: jeblair: anyhow, https://review.openstack.org/49255 - the tuskar move for you20:55
*** Ryan_Lane has quit IRC20:56
openstackgerritAndreas Jaeger proposed a change to openstack-infra/config: Publish atom.xml for manuals  https://review.openstack.org/4789720:58
*** dizquierdo has joined #openstack-infra20:58
*** w_ is now known as olaph21:00
hasharlifeless: thanks for the doc update :-]21:01
openstackgerritAndreas Jaeger proposed a change to openstack-infra/config: Create new gates for openstack-manuals  https://review.openstack.org/4812821:01
*** thomasm has joined #openstack-infra21:02
*** harlowja has quit IRC21:02
* hashar waves21:02
lifelesshashar: o/21:03
*** weshay has quit IRC21:04
*** dcramer_ has quit IRC21:04
lifelesssorry21:04
*** hashar has quit IRC21:04
openstackgerritlifeless proposed a change to openstack-infra/config: Docs on bringing up Jenkins in new infrastructures.  https://review.openstack.org/4521621:04
openstackgerritlifeless proposed a change to openstack-infra/config: Document spinning up a derived zuul.  https://review.openstack.org/4516421:04
openstackgerritlifeless proposed a change to openstack-infra/config: Document running custom slaves in ones own infra.  https://review.openstack.org/4534621:04
openstackgerritlifeless proposed a change to openstack-infra/config: Non-openstack-ci support for launch/dns.py.  https://review.openstack.org/4498021:04
openstackgerritlifeless proposed a change to openstack-infra/config: Document bootstrapping of Gerrit ACLs.  https://review.openstack.org/4501121:04
openstackgerritlifeless proposed a change to openstack-infra/config: Move tuskar to openstack.  https://review.openstack.org/4925521:04
*** Ajaeger1 has quit IRC21:06
*** rfolco has quit IRC21:06
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Bug 1230407 was at least two problems, split apart.  https://review.openstack.org/4926121:08
uvirtbotLaunchpad bug 1230407 in neutron "VMs can't progress through state changes because Neutron is deadlocking on it's database queries, and thus leaving networks in inconsistent states" [Critical,Confirmed] https://launchpad.net/bugs/123040721:08
*** gyee has quit IRC21:08
*** OlivierSanchez has quit IRC21:10
*** gyee has joined #openstack-infra21:11
*** mrmartin__ has quit IRC21:11
*** OlivierSanchez has joined #openstack-infra21:15
*** julim has joined #openstack-infra21:18
*** jcoufal has joined #openstack-infra21:19
*** weshay has joined #openstack-infra21:19
jog0looking for a review for https://review.openstack.org/#/c/49261/21:21
jog0elastic-recheck21:21
jog0sdague clarkb mtreinish ^21:21
*** jpich has quit IRC21:22
sdaguejog0: got my +221:22
mtreinishjog0: where are the flake8 results :)21:22
jog0mtreinish: good question21:26
*** Adri2000 has joined #openstack-infra21:27
*** ryanpetrello has quit IRC21:29
*** che-arne has joined #openstack-infra21:30
*** flaper87 is now known as flaper87|afk21:30
jog0mordred: https://review.openstack.org/#/c/49179/121:32
jog0looks like that needs to be pushed everywhere21:32
mordredjog0: ++21:33
jog0that's your bread and butter, want to blast it out?21:33
*** dkranz has quit IRC21:33
*** alcabrera has quit IRC21:36
openstackgerritSean Dague proposed a change to openstack-infra/config: add the extra neutron jobs to neutronclient changes  https://review.openstack.org/4926321:36
sdaguemordred, jeblair can you look at that ^^^21:37
sdagueit would be helpful in knowing if we fixed the neutronclient race issue with nova vs. neutron21:38
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Add mox fixture to baste TestCase  https://review.openstack.org/4926421:38
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Make test_required_files.py real unit tests  https://review.openstack.org/4926521:38
sdagueotherwise we are rechecking a lot and getting a small amount of data each go21:38
openstackgerritFelipe Reyes proposed a change to openstack-infra/jenkins-job-builder: Moved get_scenarios() function to a base module to make it easier to reuse  https://review.openstack.org/4866121:42
*** harlowja has joined #openstack-infra21:42
mriedemcan anyone tell me if this is a 'recheck no bug'?  looks like a pypi hiccup: https://review.openstack.org/#/c/48739/21:43
fungimtreinish: really? we have baste test cases now? ;) (returning the nitpick favor)21:43
*** changbl has quit IRC21:43
openstackgerritFelipe Reyes proposed a change to openstack-infra/jenkins-job-builder: Moved get_scenarios() to base module to make it easier to reuse  https://review.openstack.org/4866121:43
openstackgerritFelipe Reyes proposed a change to openstack-infra/jenkins-job-builder: Moved get_scenarios() function to a base module to make it easier to reuse  https://review.openstack.org/4926721:44
openstackgerritFelipe Reyes proposed a change to openstack-infra/jenkins-job-builder: Added support for Git shallow clone parameter  https://review.openstack.org/4926821:44
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Make test_queries save stdout in tox testr  https://review.openstack.org/4908021:44
mordredfungi: you should ALWAYS test the base21:44
mordreddamn21:44
mordredcouldn't even do the joke21:44
mordredfungi: you should ALWAYS test the BASTE21:44
mtreinishfungi: heh, oops21:45
fungimriedem: the devstack failures on that change do not look like pypi download problems21:46
mtreinishalthough I like baste vs base, it's grown on me21:46
mriedemfungi: sdague pointed out it's this: https://bugs.launchpad.net/openstack-ci/+bug/123259221:46
uvirtbotLaunchpad bug 1232592 in openstack-ci "pypi CDN bugs cause SSL errors when trying to install packages" [Undecided,New]21:46
*** dizquierdo has left #openstack-infra21:46
fungimriedem: yes, there's a cdn problem which causes trouble retrieving thrift, from what we've seen, but it's only one of the three failures on that test run21:47
fungithe two devstack-tempest job failures on it are unrelated to that issue21:47
mriedemfungi: yeah, the others are this: https://bugs.launchpad.net/tempest/+bug/123374321:48
uvirtbotLaunchpad bug 1233743 in tempest "TestStampPattern.test_stamp_pattern fails with timeout" [Undecided,New]21:48
mriedemi didn't really want to recheck on that since i know that's not the issue21:48
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Make test_required_files.py real unit tests  https://review.openstack.org/4926521:48
fungimriedem: i'd probably recheck with that bug number rather than no bug, since at least one of the failures causing you to recheck does have an actual bug associated with it21:48
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Add mox fixture to base TestCase  https://review.openstack.org/4926421:48
mriedemi mean, given the change21:48
mriedemfungi: yeah, definitely, it just wasn't the best21:49
mriedemwhich is why i asked21:49
openstackgerritA change was merged to openstack-infra/elastic-recheck: Bug 1230407 was at least two problems, split apart.  https://review.openstack.org/4926121:49
uvirtbotLaunchpad bug 1230407 in neutron "VMs can't progress through state changes because Neutron is deadlocking on it's database queries, and thus leaving networks in inconsistent states" [Critical,Confirmed] https://launchpad.net/bugs/123040721:49
*** matty_dubs is now known as matty_dubs|gone21:51
*** SergeyLukjanov has quit IRC21:54
*** johnthetubaguy has joined #openstack-infra21:55
anteayaso send this one through the queue again with reverify bug #1233743?21:57
uvirtbotLaunchpad bug 1233743 in tempest "TestStampPattern.test_stamp_pattern fails with timeout" [Undecided,New] https://launchpad.net/bugs/123374321:57
anteayahttps://review.openstack.org/#/c/49029/21:57
*** cody-somerville has quit IRC21:57
*** SergeyLukjanov has joined #openstack-infra21:58
openstackgerritClark Boylan proposed a change to openstack-infra/config: Remove gate-django_openstack_auth-noop job.  https://review.openstack.org/4927421:59
openstackgerritA change was merged to openstack-infra/config: add the extra neutron jobs to neutronclient changes  https://review.openstack.org/4926321:59
anteayawell I did it, hope it was right22:00
*** pcm_ has quit IRC22:00
*** johnthetubaguy has quit IRC22:01
*** SergeyLukjanov has quit IRC22:01
anteayattx :(22:01
*** thomasm has quit IRC22:01
ttx<- sad panda22:02
anteayayeah22:02
*** nati_ueno has quit IRC22:02
anteayano kidding22:02
anteayattx so you are traveling the end of this week?22:03
ttxI'm at a conference in Paris Thursday afternoon /Friday morning22:04
anteayanice22:04
ttxand in a train Thursday morning and Friday afternoon22:04
anteayaokay so you will have time to send an email kicking off the TC nomination process on the Friday?22:05
ttxanteaya: I can certainly do it, but wouldn't mind if you did22:05
anteayaI can do it22:05
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Make test_required_files.py real unit tests  https://review.openstack.org/4926522:05
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Add mox fixture to base TestCase  https://review.openstack.org/4926422:05
ttxanteaya: that would be one less thing I need to think about. would be great :)22:06
anteayacan you email me the salient points and I'll do a first draft?22:06
anteayanp22:06
anteayaor just pull from the wikipage?22:06
annegentle_hi all, is there a sandbox repo for practicing with git review or did I imagine it?22:06
ttxanteaya: it's pretty much the same as the PTL election, pull from the wiki page and send me a draft if you want to be doubleplussure22:06
anteayaI want to be doubleplussure so I will22:07
anteayaand the following week I will basically be offline from Wednesday afternoon onward22:07
anteayaflying to Budapest and then first day of the conf on the Friday22:07
anteayaso starting the elections is all you, my friend22:08
ttxanteaya: will do22:08
*** boris-42 has quit IRC22:08
anteayagreat22:09
*** esker has quit IRC22:09
*** mriedem has quit IRC22:09
anteayaannegentle_: I do think review-dev is the server you are looking for: http://git.openstack.org/cgit/openstack-infra/config/tree/manifests/site.pp#n4522:10
*** esker has joined #openstack-infra22:10
*** cody-somerville has joined #openstack-infra22:11
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Cleanup tests  https://review.openstack.org/4876422:13
*** esker has quit IRC22:14
*** nati_ueno has joined #openstack-infra22:15
*** Reapster_ is now known as Reapster22:16
*** rcleere has quit IRC22:17
*** michchap has quit IRC22:17
annegentle_anteaya: SWEET22:17
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Enforce E501 max-line-length at 100 chars  https://review.openstack.org/4876322:18
*** michchap has joined #openstack-infra22:18
anteayaannegentle_: \o/22:20
*** datsun180b has quit IRC22:20
*** MarkAtwood has joined #openstack-infra22:23
*** thedodd has quit IRC22:25
*** flaper87|afk is now known as flaper8722:27
fungiannegentle_: openstack-dev/sandbox22:28
fungialso22:28
*** luhrs1 has joined #openstack-infra22:30
*** jerryz has quit IRC22:31
*** emagana has quit IRC22:31
*** emagana has joined #openstack-infra22:32
*** che-arne has quit IRC22:32
*** mriedem has joined #openstack-infra22:32
*** markmcclain has quit IRC22:33
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Add query for bug 1232748  https://review.openstack.org/4928022:34
uvirtbotLaunchpad bug 1232748 in nova "[Postgres] DB error: (OperationalError) could not translate host name "localhost" to address" [Undecided,Confirmed] https://launchpad.net/bugs/123274822:34
*** ryanpetrello has joined #openstack-infra22:39
*** nati_ueno has quit IRC22:45
*** vipul is now known as vipul-away22:47
*** vipul-away is now known as vipul22:47
*** OlivierSanchez has quit IRC22:48
*** ryanpetrello has quit IRC22:50
*** dmakogon_ has quit IRC22:50
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Cleanup README  https://review.openstack.org/4928122:51
*** sarob_ has joined #openstack-infra22:54
*** emagana has quit IRC22:55
openstackgerritA change was merged to openstack-infra/elastic-recheck: Add support for multiple bug matches in a failure  https://review.openstack.org/4917122:55
openstackgerritA change was merged to openstack/requirements: Align our setup.py with ourselves  https://review.openstack.org/4905922:55
*** shardy is now known as shardy_afk22:55
*** emagana has joined #openstack-infra22:55
*** sarob has quit IRC22:58
openstackgerritSean Dague proposed a change to openstack-infra/elastic-recheck: add requirements-install to watch test runs  https://review.openstack.org/4928322:58
*** sarob_ has quit IRC22:59
clarkbfyi I am restarting elastic-recheck on logstash.o.o at jog0's request. it apparently got stuck22:59
sdaguecool22:59
clarkbjog0 is working on a work around and I am aware of a larger issue in how we are (not) indexing console.html completely all the time22:59
sdaguewe probably need a heartbeating mechanism on the bot so puppet can know it's alive23:00
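A minimal sketch of that heartbeating idea: the bot touches a file each time through its main loop and an external watcher (cron, puppet, or nagios — the watcher side is assumed) restarts the service when the file goes stale. The path and threshold are made up for illustration:

    import os
    import time

    HEARTBEAT_FILE = "/var/run/elastic-recheck/heartbeat"  # hypothetical path

    def beat():
        # Update the mtime so a watcher can tell the main loop is still
        # making progress; call this once per iteration of the bot's loop.
        with open(HEARTBEAT_FILE, "a"):
            os.utime(HEARTBEAT_FILE, None)

    def is_stale(max_age=600):
        # Watcher side: treat a missing or old heartbeat as "restart me".
        try:
            return time.time() - os.path.getmtime(HEARTBEAT_FILE) > max_age
        except OSError:
            return True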
*** emagana has quit IRC23:00
*** sarob has joined #openstack-infra23:01
*** jcoufal has quit IRC23:05
*** nati_ueno has joined #openstack-infra23:08
clarkbrestarted. This looks like potentially a larger problem (due to asynchronous console.html copies) I will keep brainstorming for a better solution to it (current hacks are work arounds)23:12
*** dims has quit IRC23:13
*** reed has quit IRC23:13
*** sarob has quit IRC23:15
*** sarob has joined #openstack-infra23:15
*** MarkAtwood has quit IRC23:16
openstackgerritJoe Gordon proposed a change to openstack-infra/elastic-recheck: Stop using While True to wait for ElasticSearch  https://review.openstack.org/4876623:18
*** slong has joined #openstack-infra23:21
*** prad has quit IRC23:22
*** alexpilotti has quit IRC23:23
openstackgerritClark Boylan proposed a change to openstack-infra/config: Increase log worker retry timeout length.  https://review.openstack.org/4928923:23
*** dims has joined #openstack-infra23:27
*** flaper87 is now known as flaper87|afk23:30
*** vipul is now known as vipul-away23:38
*** sarob has quit IRC23:38
*** sarob has joined #openstack-infra23:39
*** vipul-away is now known as vipul23:41
*** AlexF has joined #openstack-infra23:42
*** sarob has quit IRC23:43
*** rfolco has joined #openstack-infra23:47
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Inspect the Gearman queue for immediate demand  https://review.openstack.org/4851723:50
jeblairclarkb, fungi, mordred: it's possible that's slightly mind-bending.23:51
*** DennyZhang has joined #openstack-infra23:52
jeblairclarkb, fungi, mordred: i'm sorry about that if it is.23:52
*** nati_ueno has quit IRC23:54
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Inspect the Gearman queue for immediate demand  https://review.openstack.org/4851723:54
*** rfolco has quit IRC23:56
*** jerryz has joined #openstack-infra23:56
