Friday, 2013-08-16

clarkbhurray00:04
*** zul has joined #openstack-infra00:10
*** zul has quit IRC00:11
lifelessmordred: are you going to follow up on https://code.launchpad.net/~mordred/python-fixtures/agressive-loggers/+merge/150237 ?00:11
*** zul has joined #openstack-infra00:13
*** vipul is now known as vipul-away00:13
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Use nodepool stats for graph  https://review.openstack.org/4224500:14
*** ryanpetrello has joined #openstack-infra00:16
clarkbwoot logrotate seems to work as expected. I am applying the changes to etherpad.o.o now00:17
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Require a target name when instantiating a node  https://review.openstack.org/4224600:17
openstackgerritA change was merged to openstack-infra/config: Switch etherpad_lite backups to mysql_backup.  https://review.openstack.org/4179100:19
*** colinmcnamara has quit IRC00:23
jeblaira job has completed, and the node was deleted and removed from jenkins!00:26
clarkbsuccess. jeblair should light a cigar and pour a glass of whiskey00:28
jeblairooh, and it just deleted a node that didn't come online: Exception: Timeout waiting for ssh access00:29
jeblairclarkb: maybe i'll light that glass of whiskey now!00:30
clarkbjeblair: I am going to leave comments on the first nodepool puppet change if you want to wait on fixing the lint stuff00:30
jeblairclarkb: ok00:30
*** dims has quit IRC00:30
*** ryanpetrello has quit IRC00:30
*** UtahDave has quit IRC00:31
*** vipul-away is now known as vipul00:32
*** Ryan_Lane has joined #openstack-infra00:33
*** ryanpetrello has joined #openstack-infra00:33
*** sarob_ has joined #openstack-infra00:35
clarkbjeblair: done00:38
*** sarob has quit IRC00:39
jeblairlifeless: just so you know, nodepool is basically two year old code that is already in production, but needs to be made into a daemon.  i know it has lots of things that could be improved, but i'm not aiming for perfect now.00:39
*** sarob_ has quit IRC00:40
jeblairlifeless: i'm aiming for 'runs in production and can provide the needed test nodes before the rush of changes that will happen in a couple of days for the H3 feature freezes'00:40
clarkbpuppet has been started again on etherpad and etherpad-dev00:40
lifelessjeblair: sure; I'm not core and don't know the code yet.00:41
lifelessjeblair: you and other core folk need to assess whether a suggestion I make is important at this stage or not.00:41
*** ryanpetrello has quit IRC00:41
lifelessjeblair: the use of symbolic constants rather than magic numbers for errno, for instance, is trivial but makes a big difference00:42
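(For illustration, the errno point above as a minimal Python sketch; the function is hypothetical, not nodepool code.)

    import errno

    def read_config(path):
        try:
            with open(path) as f:
                return f.read()
        except (IOError, OSError) as e:
            # Magic-number version: "if e.errno == 2" tells the reader nothing.
            # The symbolic constant says what is actually being tested for.
            if e.errno == errno.ENOENT:
                return None  # a missing config file is fine, fall back to defaults
            raise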
jeblairlifeless: from what i've seen, your suggestions are good, and almost certainly correct; i'm likely to ignore/defer some of them though at the moment.00:44
jeblairlifeless: and i don't want you to be put off by that00:44
lifelessjeblair: oh I won't be; per clarkb and mordreds request, it's in my daily review scan - anything missing a review will get one from me daily :)00:44
lifelessjeblair: thanks for being clear though; much appreciated00:44
clarkblifeless: and thank you for taking the time to look00:44
jeblairlifeless: (it's also extremely difficult to test; i think the biggest bug is it needs a test suite, which will make cleanup changes a lot more palatable)00:44
jeblairlifeless: indeed00:44
clarkb++00:44
*** colinmcnamara has joined #openstack-infra00:44
jeblairlifeless: fwiw, zuul is much more mature and can stand up to that kind of scrutiny00:45
*** dims has joined #openstack-infra00:45
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Make the local script directory configurable  https://review.openstack.org/4223300:49
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Require a target name when instantiating a node  https://review.openstack.org/4224600:49
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Use MySQL  https://review.openstack.org/4223400:49
*** gyee has quit IRC00:50
fungioof! i missed the fun and whiskey lighting00:50
clarkbjeblair: 42234/2 addresses the comment i was going to leave on 42234/1 (import sys)00:51
fungitaking a look at those at least00:51
openstackgerritDan Bode proposed a change to openstack-infra/config: Add puppet-pip  https://review.openstack.org/3983300:52
openstackgerritJames E. Blair proposed a change to openstack-infra/nodepool: Make the target name required in the schema  https://review.openstack.org/4225100:52
*** ^d is now known as ^demon|away00:54
clarkbjeblair: what is the target name? is that the name used in jenkins?00:55
jeblairclarkb: no, it's jenkins's name; eg jenkins01 or jenkins0200:55
*** sarob has joined #openstack-infra00:56
jeblairclarkb: http://paste.openstack.org/show/44284/00:56
clarkbthanks00:57
clarkbjeblair: what do you think about having a default value for script-dir?00:58
clarkbthe old code used 'scripts'00:58
jeblairclarkb: that change was a from-production patch because it's basically contextless when installed in production.  /etc/nodepool/scripts is about the only thing that makes sense to me as a default; but i'm not sure it's a big imposition to require it in the config file.00:59
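(Sketch of the trade-off being discussed, using a made-up config loader rather than nodepool's real one: a default keeps deployments working without the key, while requiring it keeps the config explicit.)

    import yaml

    def load_script_dir(path):
        with open(path) as f:
            config = yaml.safe_load(f)
        # With a default, omitting the key silently picks a location:
        script_dir = config.get('script-dir', '/etc/nodepool/scripts')
        # Requiring it instead would simply be:
        #   script_dir = config['script-dir']   # KeyError if not configured
        return script_dir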
lifelesswhere is the global requirements repo ?01:00
clarkbjeblair: it isn't, I am fine with it as is. lifeless' comment about making target_name non NULL in the DB schema is good01:00
jeblairlifeless: openstack/requirements01:01
clarkblifeless: $HOST/openstack/requirements where $HOST is one of https://review.o.o https://github.com git://github.com git://git.o.o or https://git.o.o01:01
jeblairclarkb: so good i followed it up with a change to do that01:01
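(The NOT NULL point, shown against a hypothetical SQLAlchemy model; nodepool's actual table definition lives in nodedb.py and is not reproduced here.)

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Node(Base):
        __tablename__ = 'node'
        id = Column(Integer, primary_key=True)
        # nullable=False makes the database itself reject rows without a
        # target, instead of trusting every caller to always supply one.
        target_name = Column(String(255), nullable=False)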
jeblairclarkb: haha01:01
lifelessjeblair: thanks01:01
jeblairclarkb: also https://review.o.o/p01:01
jeblairclarkb: and git://git.o.o/cgit01:02
clarkbhttps://review.openstack.org/#/c/42251/1..1/nodepool/nodedb.py does not open for me01:02
clarkboh wait nevermind01:02
fungiheh01:02
clarkbsomehow I convinced it to tell me there were diffs between patchsets 1 and 101:02
fungirange of 001:02
clarkb+2 -201:02
lifelessclarkb: it's A FEATURE01:02
*** mjfork has quit IRC01:06
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Add nodepool host  https://review.openstack.org/4223201:07
*** sarob has quit IRC01:09
openstackgerritlifeless proposed a change to openstack/requirements: Ignore the ending of commit messages.  https://review.openstack.org/4225501:09
openstackgerritlifeless proposed a change to openstack/requirements: Uncap testscenarios.  https://review.openstack.org/4225601:09
openstackgerritlifeless proposed a change to openstack/requirements: New fixtures release.  https://review.openstack.org/4225701:09
openstackgerritlifeless proposed a change to openstack/requirements: Ignore common files that should never be added.  https://review.openstack.org/4225801:09
*** sarob has joined #openstack-infra01:09
jeblairyay more things for nodepool to do. :)01:11
clarkbthe number of tests run today has been insane01:11
* fungi concurs01:11
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Increase nodepool size to 30 (5/provider)  https://review.openstack.org/4225901:11
*** Ryan_Lane has quit IRC01:12
jeblairthat's for later ^01:12
jeblairi'm about to head to dinner01:12
jeblairbut if it holds up, i think tomorrow we should be able to merge that and turn off the devstack-gate node launchers01:13
*** sarob has quit IRC01:14
jeblairhttp://graphite.openstack.org/render/?from=-24hours&height=170&until=now&width=310&bgcolor=ffffff&fgcolor=000000&target=color%28alias%28sumSeries%28stats.gauges.nodepool.target.*.devstack-precise.*.ready%29,%20%27devstack-precise%27%29,%20%27green%27%29&title=Available%20Test%20Nodes&_t=0.8664466904279092#137661566791801:14
jeblairthere's the new graph, btw ^01:14
lifelessis there some way to express 'this change in project A depends on other change in project B' ?01:16
lifelessfor CI specifically01:17
jeblairlifeless: no; we just use words01:17
lifelesspost-CI release version constraints should do it01:17
*** sarob has joined #openstack-infra01:17
lifelessjeblair: ok. I am thinking about how to avoid pointless test runs.01:17
jeblairlifeless: https://bugs.launchpad.net/openstack-ci/+bug/102187901:18
uvirtbotLaunchpad bug 1021879 in openstack-ci "have zuul handle cross-repo-dependencies" [Medium,Triaged]01:18
lifelessjeblair: e.g. 'change in dib which needs a bumped requirement but global requirements hasn't changed yet - don't even /run/ this branch until that lands'01:18
lifelessjeblair: thanks01:18
fungigerrit is annoying at 640x480. i need to step up my timetable on upgrading these goggles01:20
*** jhesketh has quit IRC01:22
*** jhesketh has joined #openstack-infra01:23
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Remove devstack launch nodes and jobs  https://review.openstack.org/4226301:31
*** mriedem has joined #openstack-infra01:37
clarkbthat was fun. kernel update seemed to break my nvidia driver + X11 settings01:45
fungieek. i've been sticking with intel video chipsets because of years of hating nv and ati on 'nix01:47
jheskethAny zuul devs in here?01:48
clarkbjhesketh: yes01:48
clarkbfungi: I hit it with a sufficiently big hammer and it seems happy now, but for a while there I was a bit confused01:48
clarkbfungi: I was going to fall back on the intel hd 3000 if I had to01:49
jheskethclarkb: so I'm wondering about getting zuul to do more but I'm concerned about scaling.. For example, perhaps results should be separated from triggers so that we can have a different result action defined in the layout01:49
jheskethsuch as emailing a certain list for a particular gate01:49
jhesketh*emailing results01:50
jheskethso in the layout.yaml we'd define posting results to gerrit and XYZ (or any other plugin etc)01:50
jheskethwhat are your thoughts?01:50
clarkbcoming from gerrit land (which is what zuul has been focused on) triggers and results are tightly coupled01:51
*** anteaya has quit IRC01:51
clarkbif gerrit causes some work to be done we let gerrit know when it is complete.01:52
clarkbthat said I think this view is changing https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml#L25-L3801:52
clarkbnotice that the gerrit trigger section there only concerns itself with what should cause jobs to run01:53
clarkbthen further along we have the result reporting details01:53
clarkball that to say, yes I can see decoupling triggers from reporting results01:53
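(A rough sketch of what decoupled reporting could look like; this is not Zuul's actual plugin API, and the gerrit client object is assumed, not a real library.)

    class Reporter(object):
        """Anything that can receive the outcome of a pipeline run."""
        def report(self, change, results):
            raise NotImplementedError

    class GerritReporter(Reporter):
        def __init__(self, gerrit):
            self.gerrit = gerrit   # assumed client exposing a review() method

        def report(self, change, results):
            # One comment summarizing every job run for the change.
            message = '\n'.join('%s: %s' % (job, result)
                                for job, result in sorted(results.items()))
            self.gerrit.review(change.project, change.number, message)

    # A pipeline could then carry any list of reporters (gerrit, email, ...)
    # independent of whatever trigger started the jobs.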
jheskethright, valid points though01:53
jheskethI guess it'd just need to be structured in a way that the result handlers know what to do if triggered from a different source01:54
jheskethactually, I guess only the gerrit result handler would be coupled to a gerrit trigger01:54
jheskethclarkb: so I guess the other discussion is whether or not it's good for zuul to be doing so much.. for example, would it be better for workers to have knowledge on how to process results01:55
jheskeththat way we distribute load01:56
jheskethI'm also concerned with how zuul can scale with gerrit given that it receives all events01:56
clarkbI am not too worried about load, our zuul is fairly heavily loaded right now but it keeps up.01:56
clarkbBut pushing more things into gearman is good in general01:56
jheskethI wonder if there is scope for a master zuul that receives all gerrit events and sends them off to other zuul servers to distribute to workers01:57
jheskethso in your opinion, if I wanted email reports from a particular worker that I'm designing would it be better for that worker to be emailing the results out or letting zuul collate and distribute?01:58
jhesketh(I eventually also want to get this working as a gate on infra)01:58
clarkbjhesketh: I am not sure we have a need for email in that way with openstack infra02:00
clarkbjhesketh: gerrit emails people when it gets comments and in other cases we have jenkins email people02:00
clarkbjhesketh: and email is cheap, if zuul were to do it it would go into the local spool and exim/postfix/sendmail/etc would handle it02:00
clarkbso I think it is fine if zuul itself handles email02:01
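(What handing mail to the local spool looks like in practice; a minimal sketch assuming a standard MTA listening on localhost, with made-up addresses.)

    import smtplib
    from email.mime.text import MIMEText

    def mail_results(to_addr, subject, body):
        msg = MIMEText(body)
        msg['Subject'] = subject
        msg['From'] = 'zuul@example.org'   # placeholder sender
        msg['To'] = to_addr
        # Hand the message to the MTA on localhost and return immediately;
        # queueing, retries and delivery are then exim/postfix's problem.
        server = smtplib.SMTP('localhost')
        server.sendmail(msg['From'], [to_addr], msg.as_string())
        server.quit()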
jheskethsure02:01
clarkbI would run all of this by jeblair before spending too much time on it though02:01
jheskethokay02:01
jheskethclarkb: taking a tangent for a bit, what does zuul do with results after it has published the reports?02:02
clarkbjhesketh: after it has left a comment in Gerrit?02:03
jheskethyeah02:03
*** sarob has quit IRC02:03
clarkbI believe they get garbage collected02:03
jheskethone of the reasons I think this may be helpful is because if I put my gate into the testing pipeline with no results being pushed back to gerrit I have to visit the worker to know if it was successful or not (in my understanding)02:03
*** sarob has joined #openstack-infra02:03
jheskethas won't it disappear from zuul as soon as my worker has finished with it02:04
clarkbcorrect, if the pipeline the job runs in is not reporting to gerrit you will have to check the worker to see what happened02:04
clarkbwe have had jenkins email people when that is the case and we want to distribute results02:05
clarkbjhesketh: http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=23&page=2 has several load related graphs if you are interested. and I think today was one of the busiest days zuul has had02:05
jheskethokay sure02:05
jheskethshiny :-)02:05
jheskethso the scenario where we want gerrit to trigger work but don't want the results in the gerrit comment is a good case for having decoupled report modules02:06
jheskethbut given jenkins currently does that at the worker level (by just emailing) perhaps my worker should too02:06
*** yaguang has joined #openstack-infra02:06
clarkbit's definitely one way to approach the problem. The biggest problem with jenkins emailing results is it emails on a per-job basis02:07
clarkbit is often nice to see all of the jobs related to an event02:07
*** sarob has quit IRC02:08
jheskethright, and I'm guessing the html_description can't do that if you are hiding tests from gerrit02:08
*** sarob has joined #openstack-infra02:16
*** nati_ueno has quit IRC02:18
fungialso jenkins e-mail is not extremely flexible/extensible/configurable last i looked02:19
*** colinmcnamara has quit IRC02:24
*** colinmcnamara has joined #openstack-infra02:25
*** dims has quit IRC02:28
*** melwitt has quit IRC02:39
*** yaguang has quit IRC02:40
*** dguitarbite has joined #openstack-infra02:53
*** xchu has joined #openstack-infra02:54
*** jfriedly has quit IRC02:55
*** yaguang has joined #openstack-infra02:58
*** mriedem has quit IRC02:58
*** jhesketh has quit IRC03:10
*** jhesketh has joined #openstack-infra03:12
openstackgerritlifeless proposed a change to openstack/requirements: New fixtures release.  https://review.openstack.org/4225703:19
openstackgerritlifeless proposed a change to openstack/requirements: Ignore common files that should never be added.  https://review.openstack.org/4225803:19
*** colinmcnamara has quit IRC03:31
*** SergeyLukjanov has joined #openstack-infra03:32
*** colinmcnamara has joined #openstack-infra03:32
*** adalbas has quit IRC03:36
*** adalbas has joined #openstack-infra03:37
*** kspear has quit IRC03:38
*** xchu has quit IRC03:44
*** xchu has joined #openstack-infra04:01
*** ^demon|away has quit IRC04:02
yaguanghi, I'm seeing an error in jenkins: No distributions matching the version for netaddr>=0.7.6 (from -r /home/jenkins/workspace/gate-nova-docs/requirements.txt (line 18))04:02
*** ^demon|away has joined #openstack-infra04:03
yaguangthe url is here https://jenkins01.openstack.org/job/gate-nova-docs/924/console04:03
*** vogxn has joined #openstack-infra04:03
*** ^demon|away has quit IRC04:04
fungiyaguang: there was a network connectivity issue around 1600 utc yesterday where we were seeing that error04:07
yaguangfungi, the issue still exists. I just submitted a patch and hit the same error04:08
fungiyes, i see your log is from less than an hour ago04:10
fungii'll reopen the bug report04:10
*** UtahDave has joined #openstack-infra04:10
fungiyaguang: for reference, bug 121275104:14
uvirtbotLaunchpad bug 1212751 in openstack-ci "netaddr could not be downloaded / installed" [Medium,Triaged] https://launchpad.net/bugs/121275104:14
*** afazekas_zz is now known as afazekas04:23
*** jhesketh has quit IRC04:23
*** jhesketh has joined #openstack-infra04:24
*** jhesketh_ has joined #openstack-infra04:28
*** jhesketh has quit IRC04:29
*** dguitarbite has quit IRC04:29
*** colinmcnamara has quit IRC04:33
*** nayward has joined #openstack-infra04:41
*** colinmcnamara has joined #openstack-infra04:46
*** sarob has quit IRC04:53
*** sarob has joined #openstack-infra04:54
*** sdake_ has joined #openstack-infra04:57
*** afazekas_ has joined #openstack-infra04:58
*** sarob has quit IRC04:58
*** colinmcnamara has quit IRC05:00
*** SergeyLukjanov has quit IRC05:05
*** xchu has quit IRC05:19
*** jerryz has quit IRC05:26
*** nicedice has quit IRC05:26
*** xchu has joined #openstack-infra05:28
*** kspear has joined #openstack-infra05:35
*** boris-42 has joined #openstack-infra05:40
*** jhesketh_ has quit IRC05:45
*** jhesketh has joined #openstack-infra05:46
*** fifieldt_ has joined #openstack-infra06:06
*** thomasbiege has joined #openstack-infra06:07
*** thomasbiege has quit IRC06:11
*** dkliban has joined #openstack-infra06:12
*** jhesketh has quit IRC06:29
*** jhesketh has joined #openstack-infra06:32
*** odyssey4me has joined #openstack-infra06:42
*** jerryz has joined #openstack-infra06:44
*** yaguang has quit IRC06:48
*** jerryz has quit IRC06:57
*** jerryz has joined #openstack-infra06:58
*** yaguang has joined #openstack-infra07:00
*** afazekas has quit IRC07:06
*** afazekas_ is now known as afazekas07:06
*** yaguang has quit IRC07:07
*** odyssey4me has quit IRC07:07
*** odyssey4me has joined #openstack-infra07:08
*** jerryz has quit IRC07:08
*** odyssey4me2 has joined #openstack-infra07:12
*** odyssey4me has quit IRC07:12
*** llu has joined #openstack-infra07:13
*** llu has left #openstack-infra07:13
*** odyssey4me has joined #openstack-infra07:14
*** odyssey4me2 has quit IRC07:16
*** llu has joined #openstack-infra07:24
lluhi guys, I can't access the link https://jenkins01.openstack.org/job/gate-nova-docs/936/ after seeing the gate-nova-docs failure on zuul.openstack.org for my patch (#35764). Anyone know why?07:33
*** vogxn has quit IRC07:34
*** Ryan_Lane has joined #openstack-infra07:38
*** yaguang has joined #openstack-infra07:39
*** dina_belova has joined #openstack-infra07:42
*** psedlak has quit IRC07:44
*** dina_belova has quit IRC07:46
*** dina_belova has joined #openstack-infra07:47
*** SergeyLukjanov has joined #openstack-infra07:57
*** jerryz has joined #openstack-infra07:59
*** vkuz has quit IRC07:59
*** Ng_holiday is now known as Ng08:00
*** jpich has joined #openstack-infra08:03
*** boris-42 has quit IRC08:05
*** psedlak has joined #openstack-infra08:07
*** vogxn has joined #openstack-infra08:08
*** DennyZhang has joined #openstack-infra08:10
*** jerryz has quit IRC08:11
*** vogxn has quit IRC08:12
*** BobBall_Away is now known as BobBall08:18
lluIs gate-nova-docs check test broken? http://logs.openstack.org/64/35764/17/check/gate-nova-docs/4e87ead/console.html, the pip installation keeps failing to find netaddr>=0.7.608:20
Ryan_Lane:D08:33
Ryan_Lanethe vote window for rating openstack conference talks jumps after the page loads08:34
Ryan_Lane3 stars becomes a 0 star click08:34
morganfainbergRyan_Lane: oh thats fun.08:34
openstackgerritSerg Melikyan proposed a change to openstack-infra/config: Fix ACL for Murano projects  https://review.openstack.org/4165008:34
morganfainbergRyan_Lane: the next challenge is when the box runs away from the mouse cursor!08:35
*** fbo_away is now known as fbo08:37
*** boris-42 has joined #openstack-infra08:37
*** DennyZhang has quit IRC08:37
Ryan_Laneheh08:39
openstackgerritSerg Melikyan proposed a change to openstack-infra/config: Fix ACL for Murano projects  https://review.openstack.org/4165008:40
openstackgerritSerg Melikyan proposed a change to openstack-infra/config: Added murano-common project  https://review.openstack.org/4163408:40
jd__woh woh08:45
jd__I sense unstability in the gates08:45
*** lucasagomes has joined #openstack-infra08:45
jd__an unknown force is disturbing our tests08:45
*** yolanda has joined #openstack-infra08:46
yolandahi, good morning08:46
jd__we should call the infra team08:46
* jd__ looks up to the sky08:46
jd__hi yolanda08:46
yolandai've been progressing with my gerrit/zuul integration, but now when i submit a change it's stuck on status Submitted, Merge Pending08:46
yolandathe logs don't show any error, any idea if i'm missing something?08:46
yolandahi jd08:46
*** Ajaeger1 has joined #openstack-infra08:52
Ajaeger1hi, is jenkins broken?08:52
* Ajaeger1 just got twice a failure and the link by Jenkins does not work: https://review.openstack.org/#/c/42300/08:53
*** ruhe has joined #openstack-infra08:53
BobBallit appears so08:53
BobBallno jenkins jobs running at all08:53
BobBalland the gate jobs are looking weird08:53
BobBallunfortunately the whole of the infra team are asleep08:54
BobBallseems as thought jenkins01.openstack.org is down for some reason08:55
Ajaeger1BobBall, thanks08:56
Ajaeger1BobBall, is there a way to inform everybody about this?08:56
BobBallApart from sending a mail to openstack-dev? ;)08:58
BobBallbut I'm not sure what the problem is08:58
BobBallI'm only guessing08:58
BobBalljobs are referring to jenkins01 but it's not responding to me08:58
BobBallcould be something strange though08:58
*** odyssey4me2 has joined #openstack-infra09:00
*** odyssey4me3 has joined #openstack-infra09:02
*** odyssey4me has quit IRC09:03
*** odyssey4me2 has quit IRC09:05
Ajaeger1BobBall, could you send one to openstack-dev, please? you did the hard work of figuring it out ;)09:13
jd__I've opened https://bugs.launchpad.net/openstack-ci/+bug/121299009:17
uvirtbotLaunchpad bug 1212990 in openstack-ci "Disturbance in the force render tests UNSTABLE" [Undecided,New]09:17
openstackgerritRoman Podolyaka proposed a change to openstack-infra/config: Modify running of sqlalchemy-migrate tests  https://review.openstack.org/3930409:18
* jd__ dials 555-JNKNS-BRKN09:18
*** dina_belova has quit IRC09:26
*** DennyZhang has joined #openstack-infra09:30
*** UtahDave has quit IRC09:47
*** thomasbiege has joined #openstack-infra09:49
*** morganfainberg is now known as morganfainberg_a09:49
*** thomasbiege has quit IRC09:50
*** morganfainberg_a is now known as morganfainberg09:52
*** jhesketh has quit IRC09:53
*** morganfainberg is now known as morganfainberg|a09:54
*** morganfainberg|a is now known as morganfainberg09:55
*** morganfainberg is now known as morganfainberg|a09:56
*** rpodolyaka has joined #openstack-infra10:03
*** zaro has quit IRC10:06
*** ruhe has quit IRC10:10
*** zaro has joined #openstack-infra10:26
*** dina_belova has joined #openstack-infra10:26
*** xchu has quit IRC10:27
*** dina_belova has quit IRC10:31
*** pcm_ has joined #openstack-infra10:32
*** pcm_ has quit IRC10:32
*** pcm_ has joined #openstack-infra10:33
*** dina_belova has joined #openstack-infra10:34
*** dina_belova has quit IRC10:34
*** rpodolyaka has left #openstack-infra10:38
*** mjfork has joined #openstack-infra10:39
*** dina_belova has joined #openstack-infra10:40
*** dina_bel_ has joined #openstack-infra10:40
*** dina_belova has quit IRC10:40
*** dina_bel_ has quit IRC10:42
*** yaguang has quit IRC10:43
comstudwho broked jenkins10:43
BobBallnot me10:44
BobBallI blame Ajaeger110:44
* jd__ pleads not guilty10:44
*** DennyZha` has joined #openstack-infra10:45
*** ruhe has joined #openstack-infra10:45
*** DennyZhang has quit IRC10:46
BobBallhow best to raise the alarm comstud ?10:47
BobBallor do we just wait for hours until someone comes back online10:47
*** DennyZha` has quit IRC10:56
*** derekh has joined #openstack-infra10:57
fungi#status alert some sort of gating disruption has been identified--looking into it now11:04
fungimmm, joy. et tu statusbot?11:04
BobBallhehe11:05
BobBalllooks like jenkins01 is down11:05
BobBallthat was taking a number of the jobs11:05
BobBallmight not be the cause though of course11:05
BobBallbbl11:05
*** openstackstatus has joined #openstack-infra11:06
fungi#status alert some sort of gating disruption has been identified--looking into it now11:07
openstackstatusNOTICE: some sort of gating disruption has been identified--looking into it now11:07
*** ChanServ changes topic to "some sort of gating disruption has been identified--looking into it now"11:07
mordredmorning fungi11:08
*** odyssey4me3 has quit IRC11:08
*** dims has joined #openstack-infra11:08
mordredoh god. there's a disturbance in the force?11:08
fungimordred: for some definitions of "good"11:08
fungia great disturbance in the force, as if thousands of jenkins jobs cried out...11:10
fungiyeah11:10
dimsi see 'em a sea of red11:11
fungijenkins is in the process list on all three jenkins servers with start times from yesterday11:13
dimscan't seem to get to logs.openstack.org either11:14
fungiand yet apache is running on static.o.o where that's served11:15
fungiis something rotten in the state of rackspace?11:16
*** odyssey4me3 has joined #openstack-infra11:16
dimsfungi, managed to get url to one jenkins run - https://jenkins02.openstack.org/job/gate-nova-pep8/507/console11:17
*** mriedem has joined #openstack-infra11:17
dims2013-08-16 10:54:12.503 | [SCP] Connecting to static.openstack.org11:18
dims2013-08-16 10:54:14.221 | [SCP] Trying to create /srv/static/logs/16/4231611:18
dims2013-08-16 10:54:14.227 | ERROR: Failed to upload files11:18
fungihttps://status.rackspace.com/ says everything is peachy11:18
fungicrap11:19
fungi/dev/mapper/main-logs         1.5T  1.5T     0 100% /srv/static/logs11:19
mordredfungi: I just arrived from a redeye international flight - so let me know if ... poop11:19
dims2013-08-16 10:54:14.230 | at be.certipost.hudson.plugin.SCPSite.mkdirs(SCPSite.java:314)11:19
mordredI was going to say "tell me how to help"11:19
dimsright, mkdirs is failing that would do it11:19
fungi#status alert the log server has filled up, disrupting job completion--working on it now, ETA 12:30 UTC11:21
openstackstatusNOTICE: the log server has filled up, disrupting job completion--working on it now, ETA 12:30 UTC11:21
*** ChanServ changes topic to "the log server has filled up, disrupting job completion--working on it now, ETA 12:30 UTC"11:21
*** nayward has quit IRC11:22
fungiat least this time all i need to do is tack an additional cinder volume onto that vg and grow the lv/fs11:23
*** lucasagomes is now known as lucas-hungry11:24
mordredfungi: yay!11:24
*** lucas-hungry has left #openstack-infra11:26
fungiit's times like this when i'm glad i make cold brew coffee concentrate and keep it in the fridge11:27
fungifor those following along at home, i'm basically following http://ci.openstack.org/static.html11:28
openstackgerritafazekas proposed a change to openstack-infra/devstack-gate: Using the jenkins user for tempest run  https://review.openstack.org/4210111:30
dimsfungi, nice. thanks11:34
*** yamahata has quit IRC11:35
*** thomasbiege has joined #openstack-infra11:42
fungioh joy... encountered the same xen page allocation failure trying to attach the block device as jeblair saw11:42
fungirebooting static.o.o because the second nova nova volume-attach also faulted11:47
*** thomasbiege has quit IRC11:48
*** dina_belova has joined #openstack-infra11:53
fungii've got /srv/static/logs doing an online resize to 2 terabytes now, so this train has just about sailed back into the garage11:56
*** weshay has joined #openstack-infra11:57
*** dina_belova has quit IRC11:57
fungi"The filesystem on /dev/main/logs is now 538171392 blocks long."11:58
fungiFilesystem             Size  Used Avail Use% Mounted on11:58
fungi /dev/mapper/main-logs 2.0T  1.5T  545G  74% /srv/static/logs11:58
mordredawesome. nicely done fungi12:00
fungi#status log server has a larger filesystem now--rechecking/reverifying jobs, ETA 12:30 UTC12:00
fungi#status alert log server has a larger filesystem now--rechecking/reverifying jobs, ETA 12:30 UTC12:00
openstackstatusNOTICE: log server has a larger filesystem now--rechecking/reverifying jobs, ETA 12:30 UTC12:00
*** ChanServ changes topic to "log server has a larger filesystem now--rechecking/reverifying jobs, ETA 12:30 UTC"12:00
*** radix has joined #openstack-infra12:01
*** markmc has joined #openstack-infra12:01
*** CaptTofu has quit IRC12:02
*** CaptTofu has joined #openstack-infra12:02
*** CaptTofu has quit IRC12:04
*** CaptTofu has joined #openstack-infra12:04
*** ArxCruz has joined #openstack-infra12:04
*** CaptTofu has quit IRC12:05
*** CaptTofu has joined #openstack-infra12:05
BobBallfungi? Xen page allocation?12:08
fungiBobBall: yeah, when we try to attach a new cinder volume in rackspace we've on occasion seen dmesg report a page allocation failure and then the block device never appears12:11
BobBallGot some logs from that?12:12
BobBalli.e. the actual failure from dmesg :)12:12
fungii got that, then detached/deleted it, created a new one, tried to attach it, got the same failure again, rebooted the vm and the new volume showed up attached12:12
BobBallit's that whole reboot-in-the-middle step that destroys the logs, right? ;)12:12
fungiBobBall: yeah, though i'd almost guarantee it's the same as the one jeblair opened a trouble ticket on a couple months back12:12
fungiBobBall: well, the syslog should have it when i get a moment to dig it out12:13
fungithe root fs wasn't full, just the place where we upload build logs12:13
BobBall1.5TB is a lot of build logs12:13
fungiBobBall: we compress them aggressively, and delete them once they're 6 months old12:14
fungiso yes, that is a *lot* of logs12:14
BobBallConsidered a compressed filesystem rather than individual logs?12:14
BobBallI guess the _vast_ majority of the logs are repetitive among groups12:14
BobBallor even multiple logs compressed as one and served from some form of applet would work12:15
fungiit's possible. the current plan is to relocate those into a swift object store instead though12:15
BobBallfair enough12:15
fungiwe just need to write a frontend to emulate apache's mod_autoindex12:16
fungiand rework our uploads to go straight to swift and then update the index on the "log server" (which will then only contain a log of the logs)12:17
BobBallbtw - diff subject... why does zuul sometimes not have any stylesheet?  Seems to be at least 1/4 of the time12:18
*** mberwanger has joined #openstack-infra12:18
fungisounds like your browser may not be grabbing it, either because something is lagging or failing to serve on the apache/filesystem side, or because of a problem on your end or packet loss in between12:20
*** odyssey4me3 has quit IRC12:20
fungii assume the page source still shows the link for it. maybe try reloading the css file a few times and see if you get errors?12:20
BobBallThe requested URL /bootstrap/css/bootstrap.min.css was not found on this server12:21
BobBallsame problem time and time again :D12:21
BobBallhttp://zuul.openstack.org/bootstrap/css/bootstrap.min.css doesn't exist, as it suggests12:21
BobBallsame with http://zuul.openstack.org/bootstrap/css/bootstrap-responsive.min.css (of course)12:22
fungiBobBall: yeah, you want http://status.openstack.org/zuul12:22
BobBalldon't suppose that zuul.openstack.org is loadbalanced and maybe some are set up and not others? ;)12:22
BobBallomg... I'm that stupid...12:22
fungihttp://zuul.openstack.org/ is a demo of its built-in example status interface12:22
*** ruhe has quit IRC12:22
BobBallwhen it works I use the right URL and when it doesn't I use the wrong URL!12:23
fungibut we don't ship bootstrap with zuul, and we don't use bootstrap in openstack's status interface for it12:23
BobBallI am suitably embarrassed.12:23
BobBallthanks12:23
* fungi spends his life embarrassed12:23
fungino worries12:24
*** ruhe has joined #openstack-infra12:24
*** odyssey4me3 has joined #openstack-infra12:28
fungiokay, sifting through build failures, the first sign of trouble i see in job results seems to start around 0830z12:30
BobBallbtw, is it worth adding a cron job to check things like disk usage or jenkins job usage and give the infra team a warning when it's at some high percentage?12:30
*** SergeyLukjanov has quit IRC12:30
*** woodspa has joined #openstack-infra12:31
fungiit's been brought up before, and we can discuss it in tuesday's meeting if you like12:32
*** mberwanger has quit IRC12:33
*** mberwanger has joined #openstack-infra12:34
fungiultimately i think we'd be better off with an extensible monitoring system if we decide to go that route, but many, many years as a data center ops person probably clouds my opinions there12:34
*** mberwang_ has joined #openstack-infra12:35
fungiso far consensus has been that we benefit more from finding ways to proactively make important systems resistant to failure rather than spend time reacting to failure alerts12:36
BobBallyeah12:36
BobBallthat's the cloudy way ;)12:36
fungiand i think there may be some concern that starting to rely on a monitoring system will steer us to always reacting to alerts rather than just removing the risks indefinitely12:37
mordred++12:37
mordredfor instance - the swift engineering stuff12:37
*** Ajaeger1 has left #openstack-infra12:37
fungiyup, precisely12:37
mordredalso - our devs are like the best monitoring system ever :)12:37
BobBallhaha12:37
BobBall08:46 < jd__> I sense unstability in the gates12:38
fungiyeah, seriously. i woke up, saw people in irc complaining that i sleep too much (just kidding) and was able to fairly immediately find the issue12:38
BobBall16 minutes ;) not bad ;)12:38
*** mberwanger has quit IRC12:38
fungibut the down side there is that "devs noticing something is broken" is reaction to a failure rather than warning of an impending failure12:39
fungi#status ok still rechecking/reverifying false negative results on changes, but the gate is moving again12:41
openstackstatusNOTICE: still rechecking/reverifying false negative results on changes, but the gate is moving again12:41
*** ChanServ changes topic to "Discussion of OpenStack Developer Infrastructure | docs http://ci.openstack.org | bugs https://launchpad.net/openstack-ci/+milestone/grizzly | https://github.com/openstack-infra/config"12:41
*** thomasbiege has joined #openstack-infra12:41
mordredfungi: do you have a script for those recheck/reverfys?12:43
mordredfungi: I see you do that from time to time and I always wonder if it's something I can learn how to do sensibly?12:43
mordredor if you just brute force it12:44
openstackgerritKiall Mac Innes proposed a change to openstack-infra/config: Add the openstackstatus bot to #openstack-dns  https://review.openstack.org/4233512:47
*** dims has quit IRC12:48
*** thomasbiege2 has joined #openstack-infra12:48
openstackgerritKiall Mac Innes proposed a change to openstack-infra/config: Add the openstack bot to #openstack-dns  https://review.openstack.org/4233612:48
*** thomasbiege has quit IRC12:49
openstackgerritafazekas proposed a change to openstack-infra/devstack-gate: Using the jenkins user for tempest run  https://review.openstack.org/4210112:50
openstackgerritMonty Taylor proposed a change to openstack-dev/pbr: Rework run_shell_command  https://review.openstack.org/4233712:50
openstackgerritMonty Taylor proposed a change to openstack-dev/pbr: Ensure that setup_requires are installed  https://review.openstack.org/4233812:50
*** sandywalsh has quit IRC12:52
fungimordred: i mostly go through and clicky-clicky because i need to look and see if it failed in the right window, if someone else already left a recheck or reverify, if they did it correctly, and i usually also remove the negative check votes from jenkins too12:53
*** dina_belova has joined #openstack-infra12:53
fungitough to automate, though probably not impossible12:53
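(If someone did automate it, a hedged starting point: this assumes Gerrit's ssh "query" and "review" commands and skips all the human judgement described above, which is the hard part.)

    import json
    import subprocess

    GERRIT = ['ssh', '-p', '29418', 'review.openstack.org', 'gerrit']

    def open_changes(query):
        # "gerrit query" emits one JSON object per line, plus a stats record.
        out = subprocess.check_output(
            GERRIT + ['query', '--format=JSON', '--current-patch-set', query])
        for line in out.decode('utf-8').splitlines():
            record = json.loads(line)
            if 'currentPatchSet' in record:
                yield record

    def leave_comment(change, comment):
        ref = '%s,%s' % (change['number'], change['currentPatchSet']['number'])
        # The extra quotes keep the message as one argument on the remote side.
        subprocess.check_call(
            GERRIT + ['review', '--message', '"%s"' % comment, ref])

    # e.g. leave_comment(change, 'recheck bug 1212751') only after checking by
    # hand that the failure really was the log server filling up.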
*** mberwang_ has quit IRC12:56
openstackgerritRoman Podolyaka proposed a change to openstack-infra/config: WIP: Run Nova DB API tests on MySQL and PostgreSQL  https://review.openstack.org/4214212:56
*** dina_belova has quit IRC12:57
*** ruhe has quit IRC12:58
fungii also generally try to remember to unsubscribe myself from the change when i recheck/reverify in situations like this, otherwise my review queue ends up full of mud13:00
*** SergeyLukjanov has joined #openstack-infra13:00
*** jjmb has quit IRC13:01
*** anteaya has joined #openstack-infra13:01
mordredfungi: oh - you can do that?13:02
fungimordred: after leaving a recheck or reverify comment, i hit the little X button next to my empty vote (and next to jenkins -1 vote if it was a recheck)13:03
fungithat keeps it out of my changes list13:03
mordredahhh. nice. i should start doing that13:03
BobBallSpeaking of which, why can I remove some people and not others?  e.g. on https://review.openstack.org/#/c/42144/ I can remove Cristopher but not Mate13:04
*** thomasbiege2 has quit IRC13:05
*** dims has joined #openstack-infra13:05
*** sandywalsh has joined #openstack-infra13:07
mordredhttps://review.openstack.org/#/c/42335/ <-- we should probably at some point make that a list of things somewhere13:09
mordredBobBall: hrm. interesting. no idea13:09
markmcyou know what's really annoying?13:10
*** russellb is now known as rustlebee13:11
markmcwell, gerrit's auto-completion for adding people to a review13:11
markmcthat's annoying13:11
*** comstud is now known as bearhands13:11
* markmc constantly adds the wrong people13:11
markmcbut what's super annoying, is you can't remove them from the review then13:11
markmconly they can remove themselves13:11
BobBallI was going to say when you can't stab a pea with your fork and it falls off the table... but yes, that's also annoying13:11
markmcBobBall, a cheeky two year old half-heartedly trying to eat peas with a fork13:12
markmcand they *all* fall off the table13:12
markmcnow that's annoying13:12
* markmc gets it all out there13:12
*** mberwanger has joined #openstack-infra13:12
mordredwow.13:12
* mordred knows what really bugs markmc now13:12
BobBallhaha13:13
markmcmordred, you're worse than a two year old13:13
mordredmarkmc: I'm SO much worse than a two year old13:13
markmcheh13:13
mordredalthough - funny story.13:13
mordredI just did a thing in Brazil and was flying back home last night13:13
*** thomasbiege has joined #openstack-infra13:14
mordredand the guy sitting next to me heard me talking on the phone to my mom before we took off about what I'd done13:14
mordredwhen I hung up the phone, he turns to me and says, so, you do OpenStack?13:14
mordredand then told me that he's just started doing openStack at VMware13:14
mordredand is working with Dan13:14
mordredthe world is VERY VERY small13:14
markmcheh, that's fun13:14
markmcnot some random "I read about that in the NYT"13:14
mordrednope. like, he's actually working on it13:15
mordredtold me about things they're running internally that I probably shouldn't know about and whatnot13:15
mordredbut actually fun to talk to13:15
fungii had the same experience on a flight back from the havana summit, some guy next to me was on his way back from a vacation in colorado and worked for netapp in rtp. i talked him into starting to come to the local user group meetings13:15
mordredvery interesting perspective - he's been doing vmware since it was an upstart thing that people ran under their desks and didn't tell anyone about - finds it weird to now be the bad guys13:16
fungithough i guess not the exact same experience, since he wasn't working on openstack yet13:16
openstackgerritDavid Caro proposed a change to openstack-infra/jenkins-job-builder: Fixing override-votes for gerrit trigger  https://review.openstack.org/4234113:16
*** dina_belova has joined #openstack-infra13:16
mordredfungi: nice - the people I talk to on planes who are tech and are not doing openstack already glaze over quickly13:16
*** zul has quit IRC13:16
mordredbtw - I should probably tell people13:17
mordredI've unwatched jjb13:17
mordredI do not feel competent to review most of the patches13:17
fungiwell, i was just trying to get work done and he saw openstack logos on my netbook screen and started asking all sorts of questions13:17
mordredI will review something if specifically asked, but in general, I think the jjb team is doing a great job13:17
fungiwell this is no good. i'm intermittently unable to load the web interface for jenkins0113:21
fungiproxy timeouts from its web service13:21
*** mberwanger has quit IRC13:21
*** dprince has joined #openstack-infra13:22
*** rfolco has joined #openstack-infra13:22
openstackgerritMonty Taylor proposed a change to openstack-infra/reviewstats: Clint's name is different on gerrit  https://review.openstack.org/4234313:23
*** dansmith is now known as Steely_Dan13:24
*** pentameter has joined #openstack-infra13:24
*** thomasbiege has quit IRC13:24
openstackgerritA change was merged to openstack-infra/config: Added murano-common project  https://review.openstack.org/4163413:25
*** thomasbiege has joined #openstack-infra13:25
fungiwell, not so much intermittently now13:26
*** dina_belova has quit IRC13:26
fungiyeah, there are a ton of jobs hung at 100% in the zuul status page and they all were running on jenkins0113:27
fungithe server seems to be basically idle (load average of 0.00 now) but jenkins is still running in the process list13:28
fungii suspect it's gone to lunch13:29
fungiunfortunately, i suspect even if i force it to restart, zuul is going to think those jobs are still running?13:29
*** xchu has joined #openstack-infra13:30
fungijeblair: would know, but it's probably still too early for him13:30
jd__thanks fungi btw13:30
fungijd__: you're welcome13:30
fungiwell, restarting the jenkins process on jenkins01 and seeing what zuul thinks once that's back up and responding13:32
openstackgerritA change was merged to openstack-infra/reviewstats: Clint's name is different on gerrit  https://review.openstack.org/4234313:32
fungi#status alert the earlier log server issues seem to have put one of the jenkins servers in a bad state, blocking the gate--working on that, ETA 14:00 UTC13:34
openstackstatusNOTICE: the earlier log server issues seem to have put one of the jenkins servers in a bad state, blocking the gate--working on that, ETA 14:00 UTC13:34
*** ChanServ changes topic to "the earlier log server issues seem to have put one of the jenkins servers in a bad state, blocking the gate--working on that, ETA 14:00 UTC"13:34
fungijenkins didn't stop cleanly on the server either. giving it a few minutes to react to the sigterms before i give up and send more drastic sigs13:35
*** openstackgerrit has quit IRC13:36
*** openstackgerrit has joined #openstack-infra13:37
*** xBsd has joined #openstack-infra13:37
fungizomg zuul is awesomesauce13:39
mordredyeah it is13:40
fungii had to beat jenkins01 senseless to get the old jvm to diaf13:40
mordredfungi: are we holding off on landing nodepool patches for jeblair ?13:40
openstackgerritA change was merged to openstack-infra/gitdm: add Deutsche Telekom colleagues and update domain-map  https://review.openstack.org/4215513:40
*** simonmcc has quit IRC13:40
fungimordred: i'm holding off nothing. i think i reviewed them all last night before i passed out, but if memory serves i spotted an issue in the image/provider yaml file which needs fixing13:41
mordredfungi: was just asking before the first one I reviewed had 2x +2 and no aprv13:41
mordredso I wasn'tsure if we were letting jeblair control landing13:41
fungioh, yeah i didn't approve any of them in case there was some subtle ordering which needed to be observed between projects there13:42
fungiand also because it was late and i don't think anyone was around in case something broke from me merging a change prematurely13:43
*** thomasbiege has quit IRC13:43
*** simonmcc has joined #openstack-infra13:45
mordredyah13:45
*** thomasbiege has joined #openstack-infra13:45
*** SergeyLukjanov has quit IRC13:45
sandywalshmordred, hey! Is there a way for me to see all my review submissions in gerrit? (not my code reviews, but my branch submissions). I'm looking for an old patch that got squashed out of my local repo.13:46
*** simonmcc has joined #openstack-infra13:47
openstackgerritDavid Caro proposed a change to openstack-infra/jenkins-job-builder: Fixed timeout wrapper  https://review.openstack.org/4234813:48
*** burt has joined #openstack-infra13:48
*** SergeyLukjanov has joined #openstack-infra13:51
fungi#status ok the gate seems to be properly moving now, but some changes which were in limbo earlier are probably going to come back with negative votes now. rechecking/reverifying those too13:53
openstackstatusNOTICE: the gate seems to be properly moving now, but some changes which were in limbo earlier are probably going to come back with negative votes now. rechecking/reverifying those too13:53
*** ChanServ changes topic to "Discussion of OpenStack Developer Infrastructure | docs http://ci.openstack.org | bugs https://launchpad.net/openstack-ci/+milestone/grizzly | https://github.com/openstack-infra/config"13:53
*** beagles is now known as seagulls13:54
*** bnemec is now known as oldben13:54
*** ruhe has joined #openstack-infra13:54
jeblairfungi: mrmrmmmmm13:55
fungijeblair: actually, there's still a problem13:55
fungijeblair: i just noticed that now the only running changes are on 01 and everything zuul still sees running on 02 is stuck at 100% and i can't get to its web interface now13:55
fungiso same thing i was seeing earlier on 01. seems it eventually hit 02 as well, just took longer13:56
jeblairfungi: 1 sec before you restart13:59
fungijeblair: definitely. i'm holding off so you can grab sufficient diags13:59
jeblairjstack -F 15236 >stack14:00
jeblairfor the record14:00
*** ryanpetrello has joined #openstack-infra14:02
*** datsun180b has joined #openstack-infra14:02
fungii've added that to a more prominent part of my notes. i went looking for it earlier but didn't want to leave things broken long enough for me to track down the jstack utility cli14:03
*** datsun180b has quit IRC14:03
*** datsun180b has joined #openstack-infra14:03
fungianyway, if you're ready for me to restart the jenkins process, let me know14:03
fungior feel free to restart it yourself when you're ready14:04
*** HenryG has joined #openstack-infra14:05
*** DennyZhang has joined #openstack-infra14:05
jeblairthis is weird; i want more time to look14:06
fungijeblair: absolutely14:06
yolandahi, i am trying to use git branches from zuul in jenkins. is this syntax correct, or am i missing something? http://host/p/path/to/project.git14:07
yolandai'm receiving an error like Not a git repository, Request not supported14:07
*** mjfork has left #openstack-infra14:07
yolandaif i try like http://host/p/ it also shows errors14:07
*** krtaylor has quit IRC14:08
*** changbl has quit IRC14:10
fungiyolanda: does the apache configuration for your zuul vhost have "ScriptAlias /p/ /usr/lib/git-core/git-http-backend/" and so on to provide access to those git repositories?14:11
yolandafungi, it has it14:11
fungiyolanda: for reference, https://git.openstack.org/cgit/openstack-infra/config/tree/modules/zuul/templates/zuul.vhost.erb#n1714:12
fungiis your apache error.log complaining about those requests?14:12
yolandafungi, is exactly like that14:13
yolandaerror.log complains if i try to fetch some repo, but not for the /p/ call14:13
yolandaalthough it shows error on website14:13
yolandaon browser14:13
fungidoes the path your access.log says it served for those requests actually exist on the filesystem?14:13
yolandafungi, for the /p/ it triggers a notfound14:14
yolanda"GET /p/ HTTP/1.1" 404 281 "-" "Wget/1.14 (linux-gnu)"14:14
fungiyolanda: does "/usr/lib/git-core/git-http-backend" exist on your server?14:14
yolandayes14:14
yolandai can call it from command line14:15
*** koobs has quit IRC14:16
fungitoddmorey's not in irc :(14:17
fungii've got someone asking about correcting a mistake in the talks list14:17
*** dprince has quit IRC14:18
openstackgerritMonty Taylor proposed a change to openstack-infra/nodepool: Change use of error numbers to errno  https://review.openstack.org/4235614:18
openstackgerritMonty Taylor proposed a change to openstack-infra/nodepool: Update style checking to hacking  https://review.openstack.org/4235714:18
*** DennyZhang has left #openstack-infra14:23
fungiyolanda: so when you try to fetch a change and are getting the request not supported error, are there corresponding entries in your apache access.log, and do they correspond to the actual git repositories on your zuul server's filesystem?14:23
yolandafungi, yes14:24
fungialso, are they readable by the user apache is running as?14:24
yolandabut even the simplest call, /p/ is triggering a 40414:24
yolandait has read access for everyhone14:24
yolandaeveryone14:24
openstackgerritA change was merged to openstack-infra/git-review: No longer check for new git-review releases  https://review.openstack.org/4221414:25
openstackgerritA change was merged to openstack-infra/git-review: Migrate to pbr.  https://review.openstack.org/3548614:26
*** dina_belova has joined #openstack-infra14:27
jeblairfungi: i think the last time i did this we were running jdk6; i think the thread dumps have changed in jdk7, and not for the better14:27
fungi:(14:27
*** cthulhup_ has joined #openstack-infra14:28
mordredYAY14:28
mordredI love it when things change and not for the better14:28
yolandafungi, i removed the final slash from ScriptAlias /p/ /usr/lib/git-core/git-http-backend14:29
yolandanow i have different error fatal: GIT_PROJECT_ROOT is set but PATH_INFO is not14:29
yolandabut it seems to be doing something now; it didn't like the final slash14:29
fungiinteresting. i wonder if an extra slash is making its way into the request14:30
*** zul has joined #openstack-infra14:31
*** dina_belova has quit IRC14:32
mroddengfdi... i was just ordered to put access controls on our internal replica of the openstack sources, "because of policy and compliance"14:32
mroddenwhat part of "open source" is difficult to understand...14:32
fungiyolanda: seems to get reported at https://github.com/git/git/blob/master/http-backend.c#L533 for what it's worth14:32
*** dprince has joined #openstack-infra14:32
mordredjeblair: btw - I'm +2ing all of your nodepool changes but leaving them unapproved as I think there are some dependency landing orders going on in there14:33
fungimrodden: you should agree to restrict read access to that source code to the same set of individuals we do14:33
mordredmrodden: ha. yeah. definitely access controls on a copy of readily available source code are necessary14:34
jeblairmordred: thx14:34
mroddenyeah they want me to put ACLs on it for read access and above14:34
jeblairmordred, fungi: so a lot of threads are waiting to lock the Jenkins object while in the process of performing a delete14:34
mroddenand then have a separate list for each one so its "compartmentalized"14:34
mroddenand approve each request for access to it14:34
mroddenjoy14:34
jeblairi can't find a way to determine what thread _has_ that lock14:34
mordredjeblair: AWESOME14:35
mordredjeblair: start deleting threads until the lock goes away - that's which one has it?14:35
yolandafungi, this shouldn't be empty, just provided from apache, right?14:35
jeblairmordred: heh14:35
fungijeblair: well, when i restarted jenkins on jenkins01 it seemed that lock got released, so presumably the thread holding that lock is still live14:36
mordredmaybe this is a thing to raise in #jenkins - tons of java dudes in there14:36
mordredI mean, I can't imagine that java doesn't have a way to discover what you'relooking for14:36
fungiyolanda: right, anything matching the alias patterns gets served directly from the filesystem and then everything else under /p/ gets sent to the git-http-backend cgi via the scriptalias there14:36
*** nayward has joined #openstack-infra14:37
*** dina_belova has joined #openstack-infra14:37
fungiyolanda: this is no different from how we serve git repositories from the filesystem via http on review.openstack.org or git.openstack.org14:38
yolandamm, problem i'm facing now is that PATH_INFO var, that is empty14:38
openstackgerritA change was merged to openstack-infra/config: Fix intermittent jenkins plugin build failure  https://review.openstack.org/4206214:39
*** boris-42 has quit IRC14:39
mordredbtw - I'm going to try to start working through review queue first thing in the morning, because I've been doing way more coding than reviewing lately, and that's not cool14:40
fungiyolanda: i believe $PATH_INFO gets passed to the cgi in the calling environment only if there's something after the cgi alias in the url. so http://host/p/ won't have anything in $PATH_INFO but http://host/p/foo will14:41
jeblairmordred: well, that got them quiet.14:41
*** dina_belova has quit IRC14:41
fungiyolanda: so that error might just be an artifact of you trying to browse /p/ directly14:42
jeblairmordred: (asking in #jenkins)14:42
*** _TheDodd_ has joined #openstack-infra14:42
jeblairoh, i just found the jbb command 'threadlocks'14:43
fungiyeah, everyone's suddenly afraid to talk in there, for fear you'll ask them directly ;)14:43
*** pentameter has quit IRC14:43
fungicrickets14:43
jeblairCommand 'threadlocks' is not supported on the target VM14:43
jeblairmaybe that has something to do with why the stacktraces have less lock info.14:44
mordredhow do we get them to be supported?14:45
jeblairmordred: i dunno, open a support contract with oracle?14:45
fungiinherit from something besides the notimplemented factory?14:46
jeblairha14:46
*** ruhe has quit IRC14:47
jeblairCommand 'lock' is not supported on a read-only VM connection14:47
*** rnirmal has joined #openstack-infra14:48
mordredwow14:48
mordredwhat a great design14:48
jeblairi mean, we could read the stacktraces of all 843 threads and figure it out.14:50
fungii'm going to take this lull as an opportunity to finish my breakfast and grab a quick shower, since i've been glued to irc from the moment i woke up. bbiab14:50
jeblairit's actually _less_ than 10,000 lines of stacktrace14:51
*** krtaylor has joined #openstack-infra14:51
jeblairbut not by much14:51
mordredjeblair: I'm writing an email to the internal java dev guys to see if they have any idea14:51
*** koobs has joined #openstack-infra14:51
yolandafungi, if i have my project for example in /var/lib/zuul/git/sunnyvale/openstack/ceilometer, a git clone like git clone git://91.189.93.35/p/sunnyvale/openstack/ceilometer.git should be ok?14:52
*** rcleere has joined #openstack-infra14:52
jeblairmordred: i have a read-only connection to the vm because i'm using jsadebugd (which i believe i need because we did not have the "foresight" to run all of our production servers with the debug server enabled all the time).14:53
yolandait triggers a Not a git repository error14:53
mordredjeblair: lovely14:53
mordredyolanda: did you git init the directory?14:54
yolandayes14:54
mordredyolanda: and/or do you know about manage-projects from jeepyb?14:54
mordredok14:54
yolandamordred, directory is a git repo, i can do git calls from it, and i don't know about manage-projects14:55
fungiyolanda: note that using git:// isn't likely going through apache. are you sure you're not using http://91.189.93.35/p/sunnyvale/openstack/ceilometer.git instead?14:55
mordredyolanda: https://git.openstack.org/cgit/openstack-infra/jeepyb is your friend14:56
yolandafungi, an http just triggers me the not found14:56
fungior https?14:56
mordredyolanda: it lets you manage the list of projects that are in gerrit from a yaml file14:56
openstackgerritafazekas proposed a change to openstack-infra/devstack-gate: Using the root user for tempest run  https://review.openstack.org/4210114:56
yolandamm, from command line http://91.189.93.35/p/sunnyvale/openstack/ceilometer.git/info/refs?service=git-upload-pack not found: did you run git update-server-info on the server?14:56
yolandamordred, what i'm trying is to setup zuul refs to work with jenkins14:56
mordredyolanda: it also has provisions/cron-job for following upstream14:57
mordredyup14:57
mordredyolanda: you have to init the repo differently for it to be a server directory...14:57
mordredyolanda: git --bare init14:57
*** fifieldt_ has quit IRC14:58
fungimordred: zuul needs non-bare repos though, right? at least the ones on zuul.o.o are non-bare in /var/lib/zuul/git14:58
*** woodspa_ has joined #openstack-infra15:01
yolandamordred, fungi, my repos just were automatically created in zuul, cloning from gerrit15:01
BobBallmordred: not sure we can fix it, but I've been hit a few times by the replacement sudo when using GLOBAL_VENV - e.g. being told to "pip install -U git-review", which when run with sudo installs it in the venv rather than the root, so git review doesn't pick it up15:02
fungiright, it will construct those clones if they don't already exist15:02
mordredoh. these are the zuul repos, not the gerrit mirror repos15:03
mordredgotcha15:03
* mordred shuts up15:03
*** yaguang has joined #openstack-infra15:03
mordredBobBall: ah. interesting15:03
mordredBobBall: I think ... I think we can - but I think we might need to be clever15:03
mordredOR15:03
*** ruhe has joined #openstack-infra15:03
*** ^d has joined #openstack-infra15:03
jeblairi love that the thread list has one kind of thread id, and the stack trace another15:03
*** ^d has quit IRC15:04
*** ^d has joined #openstack-infra15:04
mordredBobBall: we could put a source $dest/.venv/bin/activate into stackrc15:04
openstackgerritKhai Do proposed a change to openstack-infra/gearman-plugin: remove restriction on slave to run single job at a time  https://review.openstack.org/4222615:04
mordredBobBall: would that be a sensible workflow? or should we get cleverer15:04
BobBallstackrc?  not sure I understand why that'd help?15:04
*** woodspa has quit IRC15:04
mordredbecause that's output by devstack, yeah? with keys and stuff15:05
mordredso don't you normally source that before doing things on the box anyway?15:05
BobBallyou mean openrc?15:05
mordredyes. that is what I mean15:05
BobBallAh - yes - that'd help15:05
*** colinmcnamara has joined #openstack-infra15:06
BobBallor have our own sudo alias in devstack that sets it to use the venv?  might that work?15:06
*** pabelanger is now known as nubbie15:10
*** dina_belova has joined #openstack-infra15:10
*** dina_belova has quit IRC15:12
*** dina_belova has joined #openstack-infra15:12
*** nsaje1 has joined #openstack-infra15:12
*** changbl has joined #openstack-infra15:12
*** yaguang has quit IRC15:13
nsaje1hey guys, a question: is it possible to patch devstack to install MongoDB from a 3rd party repo? 10gen repo to be exact, since Ubuntu hasn't backported Mongo 2.2 yet15:14
nsaje1Ceilometer requires Mongodb >=2.2 so the API doesn't start on devstack gate now15:14
*** cthulhup_ has quit IRC15:15
mriedemnsaje1: ceilometer requires pymongo, not necessarily mongodb, right?15:15
mriedemi mean typically you'd use mongodb as the backing store15:16
mriedemnsaje1: but you can swap out the backend15:16
*** nayward has quit IRC15:17
*** mrodden has quit IRC15:17
nsaje1mriedem: yes, but mongodb is in use in devstack right now15:17
*** markmcclain has joined #openstack-infra15:17
nsaje1mriedem: I'd only install from a 3rd party until mongodb is backported15:18
nsaje1mriedem: well, I'm asking if it's possible :)15:18
*** seagulls has quit IRC15:20
*** beagles has joined #openstack-infra15:21
nsaje1mriedem: wouldn't want to rewrite the devstack ceilometer script using SQLAlchemy just because the right mongodb version isn't available :/15:21
jeblairmriedem: i believe jd__ and some folks in here discussed the issue yesterday15:21
*** nayward has joined #openstack-infra15:22
mriedemnsaje1: jeblair: ok, was just thinking out loud more or less, the ceilometer stuff is relatively new to me.15:22
jeblairmriedem: i'm a bit distracted right now with operational issues, and don't recall the outcome... jd__ would probably remember, or you could check eavesdrop15:22
mriedemi only bring it up because we're looking at it in ibm because of the db2 10.5 usage with pymongo15:22
mordrednsaje1: I believe jd__ said that 2.2 isn't actually required15:23
mordrednsaje1: and that there is a bug which is making it seem so15:23
mriedembecause we can't ship mongodb15:23
mordrednsaje1: but I could be wrong about that15:23
mordredbut - in general, we don't install packages from third party sources in devstack, because if we start doing that, we run the risk of becoming our own linux distro15:24
*** dprince has quit IRC15:24
ryanpetrellofor stackforge projects that have pypi uploads configured for release15:24
mordredI think we need to get ceilometer gating on devstack, so that if someone tries to add a feature that requires 2.2, that feature would not land15:24
ryanpetrellohow does authentication to pypi happen?15:24
nsaje1mordred: I thought as much15:24
ryanpetrellois this something that stackforge project maintainers actually do by hand?15:24
mordredryanpetrello: openstackci user15:25
mordredryanpetrello: you need to add that as an owner or manager to your pypi project by hand (usually also running python setup.py register by hand first)15:25
ryanpetrellookay, figured there had to be some way to give permission15:25
mordredryanpetrello: and at that point, openstack can upload to your pypi thing15:25
ryanpetrellois this documented somewhere that I just missed it?15:25
mordredpossibly not15:25
mordredif it's not on the stackforge document, then no15:25
ryanpetrellookay15:26
*** yaguang has joined #openstack-infra15:26
mordredryanpetrello: python setup.py register ; log in to pypi ; click the link of your project name on the right ; click "Role" ; then add openstackci15:26
mordredis the tl;dr process15:26
ryanpetrelloyup15:26
ryanpetrellothanks :)15:26
fungii think we originally tried not to get too into the weeds on all the available options within our automation, for fear of scaring projects away from stackforge because it looks complicated15:27
mordredjeblair: I have sent an email to a pile of java devs, but have not heard back yet15:27
mordredfungi: I want to add pypi registration to manage_projects at some point15:28
fungiso i think that howto mentions that we have the ability to upload projects to pypi, but doesn't get into the details15:28
*** mrodden has joined #openstack-infra15:28
mordredfungi: because it would be nice to just have openstackci create the darned thing itself15:28
*** colinmcnamara has quit IRC15:28
mordredfungi: or maybe to upload - if it sees the project doesn't exist, it does a register15:28
mordredfungi: if it does exist and it can't upload to it - that's a normal error and one that someone would need to correct15:29
*** colinmcnamara has joined #openstack-infra15:29
ryanpetrellofwiw, I didn't find it that complicated15:29
ryanpetrelloand I'm super impressed now that we have it up and running15:29
mordredwoot!15:29
mordredthat's what we like to hear15:29
ryanpetrellothe idea of being able to sign and tag a release, run tests, and then upload to pypi is *so nice*15:29
ryanpetrellothe pypi thing I think was my only confusion point15:29
ryanpetrellootherwise the stackforge doc was very clear15:30
fungimordred: would be neat, agreed15:30
mordredryanpetrello: one of these days, I'm going to convince someone to add the ability to review tags in gerrit15:30
mordredryanpetrello: so that you could propose a tag, have that get tested/reviewed as usual, and when it lands, then trigger the upload15:30
mordredbut - that requires java hacking15:30
mordredew15:30
fungimordred: along those lines, it would also be cool if we came up with a way to inject detached developer pgp signatures of the tarballs for upload to pypi, but that's going to be human-workflow-complicated i think15:31
*** yaguang has quit IRC15:31
fungiautomated signing by contrast is easy to implement, but less meaningful overall15:32
*** yolanda has quit IRC15:32
*** krtaylor has quit IRC15:32
*** mrodden has quit IRC15:32
*** spawnofbelliott has quit IRC15:33
*** nsaje1 has quit IRC15:33
*** belliott has joined #openstack-infra15:35
*** cthulhup_ has joined #openstack-infra15:35
mordredhrm15:36
*** mrodden has joined #openstack-infra15:37
mordredso - we do require that the git sha is signed15:37
mordredthat doesn't really help with the trail to the tarball15:37
fungihowever there's no way to map that to a signature of the tarball, no15:37
jeblairfungi, mordred: i'm running out of ideas of how to figure out what's wrong with jenkins15:37
jeblairfungi, mordred: i could use some more of them.15:38
jd__who invoked me about MongoDB?15:38
BobBallmordred: Would you expect to ./run_tests in nova's personal venv or in GLOBAL_VENV?15:38
*** zehicle has joined #openstack-infra15:38
fungijeblair: well, presumably it was a cascade failure because of something which happened when static.o.o filled up15:39
fungijeblair: otherwise the timing was too coincidental15:39
*** nsaje1 has joined #openstack-infra15:39
fungijeblair: also it didn't happen right away, since while stuff was still broken i could get the jenkins01 and 02 interfaces to load15:40
fungiand then after jobs started being able to upload artifacts successfully again, 01 crumbled but 02 was still responsive15:40
fungithen once i got 01 running again, 02 went out to lunch15:41
jeblairfungi: i believe that's because it worked as long as there were available web threads that were not waiting on the Jenkins lock15:41
jeblairfungi: as d-g or nodepool continued to use those threads to modify nodes, they were also dedicated to waiting on that lock and were consumed15:41
fungiso connection avalanche/thundering herd15:41
jeblairi don't think so15:42
jeblairi think it happened one at a time, which is why 01 and 02 stopped responding at different times15:42
fungiahh, so once the thread pool was 100% consumed by threads waiting for locks, there were no threads to process an unlock?15:42
jeblairthere were no threads to handle your http request15:43
* fungi grasps at straws15:43
jeblairsomething is holding a lock on the jenkins object and i can't figure out what15:43
* mordred is out of ideas for tracking it down15:43
jeblairand it's important, because we wrote some of the code that does the scp plugin, and we wrote the code that's doing all this modifying of jenkins nodes15:44
*** markmc has quit IRC15:44
jeblairso no one else is going to fix it for us15:44
*** yaguang has joined #openstack-infra15:44
jeblairand if we don't figure it out, then we are doomed to a life of restarting jvms at odd hours15:44
jeblairwhich i am not interested in15:44
mordreddo you think we could jam a server into this condition again?15:44
jeblairso i'd really like to try to figure this out15:44
fungiwell, we've got one in this condition currently15:45
mordredlike, start a new one that's not connected to these and hammer it in a particular way until it jams? (perhaps with a full disk to start with)15:45
mordredI say that15:45
mordredbecause potentially if we start that server with the debug things that would let us do the lock stuff15:45
fungiahh15:45
mordredthen we might be able to see the thing15:45
mordredbut that only works if we think we could reproduce15:45
jeblairmordred: i have no idea which of our complex systems are required for this; or all of them.15:46
mordredjeblair: that is a good point15:46
mordredhrm15:46
fungiwell, in this case a potentially contributing factor is the higher-than-normal commit volume, which would be hard to duplicate i suspect15:47
mordredjeblair: you were saying earlier that we could read the tracebacks by hand, all 10k lines - that obviously won't work - but perhaps we could script/parse them?15:47
jeblairi have scanned one of the thread dumps to try to find anomalies;  what i discovered is that the two dumps that i have (one from jstack, one from jdb) are different15:48
mordredjeblair: any idea what the performance impact of running with the debugging server enabled is?15:48
jeblairmordred: root@jenkins02:/~jdb/{dump,stack}15:48
jeblairmordred: no15:48
mordredjeblair: because we could call this one a miss, restart them with debugging and wait until the next random time it dies (which is not optimal, but either it never dies again, or when it does we have proper debugging)15:49
jeblairmordred: i don't know if that would help15:49
*** xchu has quit IRC15:49
jeblairmordred: we'd want to test that and make sure that actually gives us what we need15:50
mordredyah15:50
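
A rough Python sketch of the "script/parse the tracebacks" idea floated above; it is not the tooling that was actually used, and it assumes plain jstack-style output (thread blocks starting with a quoted name, lock lines like "- locked <0x...>" or "- waiting to lock <0x...>"):

    # Group a jstack-style dump by thread, then report which thread holds
    # each monitor and how many threads are blocked waiting on it.
    import re
    import sys
    from collections import defaultdict

    thread_re = re.compile(r'^"(?P<name>.+?)"')
    locked_re = re.compile(r'- locked <(?P<mon>0x[0-9a-f]+)>')
    waiting_re = re.compile(r'- waiting to lock <(?P<mon>0x[0-9a-f]+)>')

    holders = {}                    # monitor id -> thread holding it
    waiters = defaultdict(list)     # monitor id -> threads waiting on it
    current = None

    with open(sys.argv[1]) as dump:
        for line in dump:
            m = thread_re.match(line)
            if m:
                current = m.group('name')
                continue
            m = locked_re.search(line)
            if m and current:
                holders[m.group('mon')] = current
            m = waiting_re.search(line)
            if m and current:
                waiters[m.group('mon')].append(current)

    for mon, blocked in sorted(waiters.items(), key=lambda kv: -len(kv[1])):
        print('%s held by %s, %d thread(s) waiting' % (
            mon, holders.get(mon, 'unknown'), len(blocked)))
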
openstackgerritafazekas proposed a change to openstack-infra/devstack-gate: Skip devstack/exercises by default  https://review.openstack.org/4208215:50
jd__jeblair, mordred, fungi: I was reading backlog about https://review.openstack.org/#/c/39237/ ; we need MongoDB > 2.2 for that, so it'd be good indeed to be able to use 10gen to get it since Ubuntu's lagging15:50
jd__cc nsaje1: ^15:50
ryanpetrellois http://status.openstack.org/zuul/ open source somewhere?15:51
ryanpetrellofigured it might be a component of https://github.com/openstack-infra/zuul15:51
ryanpetrellobut didn't see it in there15:51
mordredjd__: that would be a big departure from our current support model15:51
mordredryanpetrello: yeah - it's in our puppet repo - one sec15:51
mordredryanpetrello: https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul15:52
jd__mordred: I can propose to ./configure && make if you prefer, nah ;) I don't see much better solution :/15:52
*** cwj has joined #openstack-infra15:52
jd__mordred: unless we can have a devstack on RHEL? like we do for py2615:52
mordredjd__: RHEL has mongo 2.2 and ubuntu doesn't?15:53
jd__mordred: how ironic, isn't it?15:53
fungijd__: what that would mostly require is getting devstack to work on rhel. you would be a hero to many people if you managed to make that work15:53
jd__mordred: we run our mongodb test only in py26 because of that15:53
pleia2fungi: when you have a chance... feedback on https://review.openstack.org/#/c/42168/4/modules/cgit/manifests/init.pp real quick would be appreciated (hopefully my last patch for this)15:53
cwjanyone know if it is possible to configure the global git user.email and user.name settings for jenkins using jenkins-job-builder?15:53
jd__fungi: no no no, I thought devstack worked on RHEL, forget me! :)15:54
*** ^d has quit IRC15:54
ryanpetrellomordred: is there a process for contributing to this? (the zuul status board)15:55
*** zul has quit IRC15:55
fungiryanpetrello: yes, submit a review to review.openstack.org for it, same as for any openstack or openstack-infra project15:55
ryanpetrello(duh)15:56
ryanpetrellothanks15:56
fungiryanpetrello: in this case, it would be a patch to openstack-infra/config for the theming bits hosted/proxied through status.openstack.org, or a review to openstack-infra/zuul for the backend status.json interface it serves15:56
ryanpetrelloright15:57
fungipleia2: i'll have a look in a bit15:57
*** jpeeler has quit IRC15:58
*** jpeeler has joined #openstack-infra15:59
*** nayward has quit IRC16:04
*** MarkAtwood has joined #openstack-infra16:04
*** ruhe has quit IRC16:06
*** woodspa__ has joined #openstack-infra16:08
*** cthulhup_ has quit IRC16:08
*** sarob has joined #openstack-infra16:11
xBsdfolks, is jenkins and zulu fully functional now?16:11
*** fbo is now known as fbo_away16:11
xBsds/zulu/zuul/16:12
*** woodspa_ has quit IRC16:12
*** yaguang has quit IRC16:12
xBsdI see a bunch of frozen check jobs.16:12
*** ftcjeff has joined #openstack-infra16:14
*** yolanda has joined #openstack-infra16:16
clarkbmorning16:16
clarkbxBsd: on a phone so cant check directly16:17
clarkbbut best guess is there are not enough resources so things are queuing16:17
*** dprince has joined #openstack-infra16:17
xBsdopenstack/nova39920,60 min16:18
xBsdopenstack/tempest42325,10 min16:18
*** krtaylor has joined #openstack-infra16:18
xBsdmeans  0 mins16:18
xBsdthey freeze in that state for about hour16:19
xBsdfungi: it's the same issue which was with gate jobs16:19
mordredxBsd: we're still working through some things16:20
xBsdmordred: thanks ) ok, wait for solving )16:20
*** ^d has joined #openstack-infra16:20
fungixBsd: yeah, we've left it in that state so we could diagnose it in more detail. it's not holding up gating, but there are a handful of changes that aren't returning checks because of jenkins02 (i think, need to look back at the status page to be sure)16:21
*** cthulhup_ has joined #openstack-infra16:21
clarkbfungi mordred this related to static/logs?16:23
fungiclarkb: yup, deadlock in jenkins. i think it's close to figured out now16:25
*** nicedice has joined #openstack-infra16:25
mordredjd__: honestly, then, I think the best path forward is make ceilometer mongo config profile for centos devstack, and run ceilometer in sqlalchemy mode for ubuntu devstack16:26
*** dstufft_ has joined #openstack-infra16:26
*** dstufft has quit IRC16:26
*** dstufft_ is now known as dstufft16:26
anteayaI'm looking forward to helping load test asterisk, which I believe is scheduled to happen in 34 minutes16:27
jd__mordred: ok :( we can't test everything with SQLalchemy but that's better than nothing16:27
mordredjd__: we want to get centos devstack in the gate - so consider it a timing issue16:27
*** zul has joined #openstack-infra16:30
*** odyssey4me3 has quit IRC16:30
*** datsun180b_ has joined #openstack-infra16:34
jeblairclarkb, zaro: with fungi and mordred's help, i believe i tracked down the problem to a race where nodepool deleted a node while gearman-plugin was trying to set it offline, and they deadlocked16:37
*** afazekas has quit IRC16:37
jeblairit's also possible that the timing around finishing jobs was different due to the scp issues, which is why this showed up then...16:37
clarkbjeblair: gotcha16:37
*** datsun180b has quit IRC16:37
*** datsun180b_ is now known as datsun180b16:37
clarkbjeblair: scp'ing the console log may happen after the onFinished16:37
fungijeblair: want to keep it in this state any longer, or shall i restart jenkins on 02 now?16:38
clarkbs/may/will/ because the console log copy is spun off in order to catch the end of the log16:38
jeblairclarkb: is that 'may' a "definitely is possible" or "i think it might be able to happen but we should test"?16:38
jeblairclarkb: ok16:38
clarkbjeblair: it is a "will happen"16:38
*** yolanda has quit IRC16:38
jeblairclarkb: and we have no indication when that's finished, yeah?16:38
jeblairfungi: i think we can restart it now16:39
jeblairfungi: please do16:39
fungiwill do16:39
clarkbwe do not. We could potentially add something to the scp plugin that notifies us of that16:39
clarkbthe big problem is that in order to catch the end of the log you must keep running after the test finishes16:39
*** mriedem1 has joined #openstack-infra16:40
*** mriedem has quit IRC16:40
*** ruhe has joined #openstack-infra16:40
clarkbjeblair: we could possibly update the Run object so that Jenkins knows internally that it is done. (can't just attach a new member object but a field in the env var set may work) but that is a big hack16:42
mordredclarkb, jeblair: what if the scp plugin emitted a zmq event16:42
mordredwhen _it_ was done16:42
clarkbmordred: it could do something like that too. It would potentially couple the two plugins fairly tightly16:43
mordredother plugins depend on plugins16:43
* mordred not sure it's a great idea16:43
*** radix has left #openstack-infra16:43
openstackgerritRyan Petrello proposed a change to openstack-infra/config: Add the ability to filter Zuul Status on multiple (comma-delimited) terms.  https://review.openstack.org/4238216:44
mordredor - is there a way to make the scp plugin consume the zmq plugin IFF the zmq plugin exists?16:44
mordredso like, if the plugin is there, use it, other wise, don't16:44
clarkbmordred: possibly. Not sure what the plugin registry looks like but if it is anything like entry points yes16:44
clarkbthe zmq plugin uses a simple thread safe queue, scp plugin could write directly to that16:45
clarkbjenkins internal event listeners should be based on a queue model rather than for loops at places they think should emit events16:46
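
A sketch of what a consumer of such an event could look like, assuming the scp plugin published something through the existing ZMQ event stream; the endpoint, port and event name below are made up for illustration, not the plugin's real API:

    import json
    import zmq

    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect('tcp://jenkins01.openstack.org:8888')  # assumed endpoint/port
    socket.setsockopt(zmq.SUBSCRIBE, b'')                 # listen to every topic

    while True:
        message = socket.recv()
        topic, _, payload = message.partition(b' ')
        if topic == b'onConsoleUploaded':                 # hypothetical event name
            build = json.loads(payload)
            print('console log finished uploading for %s' % build.get('name'))
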
jeblairi'm not positive about the degree to which this actually affected us...16:46
jeblairit could be a bit of a red herring16:46
*** nsaje1 has quit IRC16:46
clarkbthe netaddr thing is still biting us16:47
clarkbif no one else has had a chance to look at that I will try and sort out what is going on there16:48
mordredclarkb: what is the netaddr thing and how?16:48
mordredclarkb: can you point me at the issue?16:48
fungiclarkb: yeah, someone reported another incident around 0400 utc so i reopened the bug16:48
mordredI'm trying to learn more about what's going on with this16:48
clarkbmordred: https://jenkins01.openstack.org/job/gate-nova-docs/970/console16:48
clarkbits almost like we are bypassing our mirror somehow16:50
mordredclarkb: did we stop using PIP_BUILD_CACHE on jenkins slaves at some point?16:50
clarkbmordred: notice the list of candidates there does not match what we have on our mirror16:50
jeblairi'm going to try to get timing info from logs to determine how much of a factor the scp thing is16:50
mordredjeblair: cool16:50
fungimaybe some subset of slaves is missing the overrides to use our mirror?16:50
clarkbfungi: that particular gate-nova-docs job seems to have run the mirror selection script successfully16:51
clarkbmordred: I am not sure16:51
mordredpip conf file is there on that machine16:51
fungihuh. is there any correlation between particular slaves or particular jobs? i guess you're looking via logstash16:51
clarkbgit log -p says we never used PIP_BUILD_CACHE in the infra/config repo16:51
clarkbfungi: no I haven't gotten that far16:52
clarkbfungi: will try that now16:52
mordredit's fine - I mean, that would just let us see easier where it was pulling from due to the way it reports to stdout16:52
mordredwhile I'm in here - who are the packstack people?16:53
mordredis that harlowja ?16:53
clarkbmordred: its redhat16:53
clarkbdprince may know16:53
mordreddprince: packstack is leaving a bunch of files around in ~jenkins16:53
clarkbmordred: it is always nova tests16:53
mordreddprince: of the form packstack-answers-20130711-124138.txt16:53
clarkbover the last 12 hours anyways16:53
clarkbhmm really need to add more info to the events to figure out which slave, master, and project caused a job to be built...16:54
*** jpich has quit IRC16:54
jeblairfungi, mordred, clarkb: wow, there are full thread dumps (and nice ones too) in the jenkins logs.  i wonder how that happened.16:56
fungieek16:56
*** mkerrin has quit IRC16:56
fungijeblair: perhaps it adds them when killed ungracefully?16:56
clarkbjeblair: it is a miracle16:56
clarkbmordred: precise7 9 and 10 have all done it16:56
clarkbmordred: which means both masters16:56
clarkb8 too16:57
jeblairfungi, mordred, clarkb: http://paste.openstack.org/show/44339/16:57
clarkbmordred: so I don't think this is slave or master specific16:57
jeblairthat ^ is the thread dump i wanted -- notice how it says which things are locked, and says right there that it's holding a lock on hudson.model.Hudson.16:57
mordredjeblair: has it succeeded on precise7 ?16:57
fungiha16:57
clarkbmordred: looking16:58
jeblairfungi: there are several thread dumps, at different times16:58
mordredclarkb: oh - you konw what else?16:58
*** sdake_ has quit IRC16:58
clarkbmordred: yes, looks like gate-nova-pep8 has succeeded on precise7 a few times16:58
mordredclarkb:  we should look and see if puppet has fixed /home/jenkins/.pip/pip.conf any time16:58
*** sdake_ has joined #openstack-infra16:58
clarkbpep8, docs, python2X are all affected16:58
*** sdake_ has quit IRC16:59
*** sdake_ has joined #openstack-infra16:59
mordredclarkb: because it's possible that some other job is overwriting ~/.pip/pip.conf erroneously16:59
*** nati_ueno has joined #openstack-infra16:59
mordredclarkb: and we should set its perms in puppet to be non writable to prevent that16:59
mordredbecause it's writable by the jenkins users16:59
mordredclarkb: any way of knowing which jobs ran before the fail on precise7 ?17:00
clarkbmordred: https://jenkins01.openstack.org/computer/precise7/builds17:00
mordredoh - wait17:00
mordredmodules/jenkins/files/slave_scripts/select-mirror.sh17:00
clarkbmordred: several savanna jobs17:00
mordredduh. we don't set that in puppet17:00
clarkbmordred: and yes we do it dynamically as part of the job itself17:01
dprinceclarkb/mordred: I believe packstack creates an answers file each time it runs.17:01
dprinceclarkb/mordred: So that you can re-run it with the same settings as before...17:01
dprinceclarkb/mordred: so it sounds like just a cleanup issue.17:01
mordreddprince: yeah. not a big deal17:01
mordredI just happened to see it17:01
anteayais anyone in the asterisk conference room #6000?17:02
*** Ryan_Lane has quit IRC17:02
clarkbanteaya: no, thank you for reminding me17:02
*** yolanda has joined #openstack-infra17:03
anteayaclarkb: welcome17:04
anteayaI have dialed into the room #6000 twice now and the call drops17:04
*** derekh has quit IRC17:04
clarkbanteaya: just happened to me too17:04
anteayaI am coming in with Skype using the PSTN number17:04
anteayaokay17:04
clarkbI am hitting the PSTN from verizon17:05
anteayarustlebee: ping17:05
pleia2drops for me too17:05
jeblairme too, using sip from my asterisk box17:05
anteayais the asterisk server up?17:05
jeblairanteaya: you get the prompt, right?17:05
anteayaare we doing the asterisk load testing, rustlebee ?17:05
anteayajeblair: yes, I get the prompt17:05
jeblairanteaya: "please enter the conference number.."17:06
anteayaI enter 6000#17:06
anteayayes17:06
anteayaand then the call drops17:06
jeblairdid we merge the change that fixes the conf bridge?17:06
*** krtaylor has quit IRC17:06
openstackgerritMonty Taylor proposed a change to openstack-infra/nodepool: Update style checking to hacking  https://review.openstack.org/4235717:08
clarkbthe first occurrence of netaddr issues that I see is from 2013-08-15 15:45:00 in a gate-nova-python26 job17:09
clarkband the occurrences seem clustered together17:10
mordredclarkb: I agree that it looks lke the job is not running with our mirror17:10
jeblairmordred: i don't want to use hacking for nodepool.17:10
mordredjeblair: ok.17:10
jeblairmordred: also, myjenkins should not get style changes17:10
jeblairmordred: it is a set of methods that need to be upstreamed into the jenkins module17:10
jeblairand should follow its conventions17:10
*** cthulhup_ has quit IRC17:10
mordredjeblair: great! can I at least add ignore H into the tox.ini then?17:10
anteayalooks like rustlebee is afk and I don't see pablanger in channel17:11
jeblairmordred: zuul depends on hacking and then ignores H*; i figured why not cut out the middleman.17:11
mordredwell, I have hacking installed globally on my system, so it gets really noisy for me for things that don't at least ignore H17:11
jeblairmordred: ok, so add H to tox.ini but don't depend on hacking17:11
mordredk17:11
mordredjeblair: do you have opinions on python 3 support?17:11
jeblairheh, that was meant to be a question?17:11
jeblairgah17:11
mordred:)17:11
mordredbecause that was the real reason I touched it at all - the py3 exception and print things17:12
jeblairmordred: i'm not typing right.  i meant to ask you "you want to add H to tox.ini but not depend on hacking?"17:12
jeblairmordred: that seems harmless to me, and i'm cool with that17:12
mordredcool17:12
harlowjamordred i'm not packstack17:12
harlowjai'm just anvil :-P17:12
jeblairmordred: i like python3 support17:12
mordredharlowja: k17:12
jeblairanteaya: i'm looking at the pbx host now17:13
mordredjeblair: because we could also depend on hacking but ignore the things that are not checks for python things if that's interesting to you17:13
anteayajeblair: k17:13
Alex_Gaynorwhich repository controls which IRC channels a review on a repo goes to?17:14
mordredAlex_Gaynor: it's in openstack-infra/config17:14
Alex_Gaynormordred: thanks17:14
vipulhi all, question about oslo.cfg release - you guys know when 1.2 goes to pypi?17:14
mordredAlex_Gaynor: look for gerritbot17:14
jeblairmordred: i was sort of thinking that reducing dependency count would be nice.  plus, at this point, we have py3 runners17:14
mordredvipul: yup. when we release havana17:14
mordredjeblair: k. I'm fine with that17:14
clarkbmordred: really? thats annoying17:15
vipulmordred: thanks.. what do projects that want to use it in the current cycle do (if there are projects like that)17:15
clarkb1.1 is not python3 friendly so all of those jobs I added to test py3k with the clients bomb out on oslo.config install17:15
mordredvipul: you can use it directly from tarballs.o.o17:16
mordredvipul: look at nova's requirements.txt17:16
jeblairAlex_Gaynor: E123 and E125 contradict the pep8 specification17:16
vipulmordred: will that totally screw up devstack based tests?17:16
Alex_Gaynorjeblair: I understand, and agree, I can just never remember what they are :)17:16
Alex_Gaynormordred: does http://bpaste.net/show/123617/ look about right?17:16
mordredAlex_Gaynor: yup17:16
mordredvipul: nope17:16
jeblairAlex_Gaynor: oh, ok, i misparsed that comment then.  :)17:16
Alex_Gaynormordred: okey doke17:16
Alex_Gaynorjeblair: ah sorry if I was unclear17:17
clarkbit is really interesting that only nova is affected by the netaddr thing17:17
mordredvipul: we actually install trunk oslo.config in devstack17:17
clarkbdoes any other project depend on netaddr?17:17
vipuloh nice17:17
mordredclarkb: yes17:17
mordredclarkb: swift does17:17
fungiclarkb: combination of new pip and site-packages?17:17
clarkbfungi: oh17:17
clarkbdamnit site packages17:17
clarkbnetaddr==0.7.5 is installed globally on precise717:18
openstackgerritAlex Gaynor proposed a change to openstack-infra/config: Notify the marconi IRC channel on gerrit events  https://review.openstack.org/4238617:18
* fungi is trying to think of things which changed in the last couple days and nova special snowflakes17:18
fungiclarkb: or maybe new tox in the past couple days too17:18
clarkbfungi: yeah possible some combo of site packages being too old forcing the --upgrade to do work in the new tox17:18
openstackgerritMonty Taylor proposed a change to openstack-infra/nodepool: Ignore OpenStack hacking rules  https://review.openstack.org/4235717:18
clarkbmordred: did we break the -U possibly?17:18
jeblairanteaya: it's up now17:19
mordredaaaaahhhhhhhhhh17:19
jeblairrustlebee: I restarted asterisk and that caused it to pick up a working config with the confbridge17:19
fungiclarkb: apt says Version: 0.7.5-4build2 is installed on our precise slaves17:19
*** jerryz has joined #openstack-infra17:19
pleia2woo, no drop17:20
*** psedlak has quit IRC17:20
anteayajeblair: k, I'm onto something else and will try again once done17:20
mordredthe nova require is just netaddr17:20
mordredwhich means it will always try to upgrade17:20
mordredbut that should be fine17:20
mordredthe question is - why is it not using .pip/pip.conf17:20
*** dina_belova has quit IRC17:21
clarkbmordred: what if it isn't upgrading17:21
clarkbmordred: and since site packages is enabled we get stuck with the old verison and it bombs out17:21
*** SergeyLukjanov has quit IRC17:22
mordredno - it's looking for candidate versions17:22
mordredand finding ones not on our mirror17:22
clarkbfungi: jog my memory on how to properly run a command across all of our slaves with slat?17:22
clarkb*salt. is it `salt \* test.ping` ?17:22
fungii'll have to jog my memory17:23
clarkbjesusaurus: ^17:23
fungiclarkb: looking back at my .bash_history the most recent thing i did with salt was...17:23
fungisudo salt -E '.*' cmd.run 'dpkg -l *jdk* 2>/dev/null|grep ^i||rpm -qa 2>/dev/null|grep -i jdk||echo NO JDK' >paste17:24
fungialso...17:24
fungisudo salt '*' test.ping|grep slave|sort|view -17:24
jesusaurusclarkb: `salt \* test.ping` will test all hosts for a connection17:25
fungihowever i think salt gets itself into a bad state, and nobody's had time to troubleshoot it17:25
clarkbfungi: ya, it doesn't seem to be working17:25
fungiright now it's timing out for me17:25
jesusaurus`salt -v \* test.ping` will tell you which hosts dont return within $timeout17:26
mordredjesusaurus: only if salt hasn't crapped itself17:26
jesusaurusoh, is it the master thats flailing?17:27
mordredjesusaurus: our salt minions decide to stop talking to the master at some point17:27
mordredbut since we dont' really use salt for anything, we don't notice when it happen17:27
mordredhappens17:27
jeblairso there are 6 of us on the conference bridge; load average is 017:27
mordredonly when we try to do something like look at a file on all of the hosts17:27
* mordred dials in17:27
jeblair(well, i'm on there 3 times)17:27
mordredsorry - was distracted by nova gate thing17:27
* anteaya is still on hold on another call17:28
*** jerryz has quit IRC17:30
*** jfriedly has joined #openstack-infra17:31
fungiis it supposed to be 6000? every time i try i get asked again to please enter my conference number followed by the pound key17:31
jeblairfungi: yeah17:31
*** sarob has quit IRC17:31
anteayafungi how are you dialing in?17:32
*** sarob has joined #openstack-infra17:32
fungianteaya: pots17:32
anteayaI don't know what that is17:32
fungihung up and trying again17:32
*** nayward has joined #openstack-infra17:33
fungipots==plain old telephone system17:33
mordredhttps://twitter.com/e_monty/status/367393004224409601/photo/117:33
clarkbwe appear to have tox 1.6.0 just about everywhere17:34
clarkbso I think this is quite possibly the problem17:34
clarkbnow to try and replicate locally17:34
mordredclarkb: wait - what is the problem? tox 1.6.0 ?17:34
fungihung up and dialled in several times, tried multiple times each time to join the conference but the autoattendant pretends that conference doesn't exist17:34
clarkbmordred: we think that is a possibility17:35
mordredo m g17:35
clarkbmordred: because it is new as of not long ago17:35
jeblairfungi: can you try again?17:35
fungirustlebee: if it helps, i'll be coming from a line with cid ending in 434417:35
* fungi tries again17:35
*** cthulhup has joined #openstack-infra17:36
*** sarob has quit IRC17:36
*** thomasbiege has quit IRC17:36
*** sarob has joined #openstack-infra17:37
*** melwitt has joined #openstack-infra17:37
*** yolanda has quit IRC17:37
anteayafungi ah okay, I was picturing a sturdier version of dixie cups and a string17:38
*** UtahDave has joined #openstack-infra17:38
*** sarob has quit IRC17:38
*** sarob has joined #openstack-infra17:39
*** jerryz has joined #openstack-infra17:39
anteayamordred: your airplane is beautiful17:40
marunmordred: poing17:41
*** morganfainberg|a is now known as morganfainberg17:41
clarkbmordred: venv installdeps: -U, -r/home/boylancl/tmp/test-tox/nova/requirements.txt, -r/home/boylancl/tmp/test-tox/nova/test-requirements.txt17:41
*** yolanda has joined #openstack-infra17:41
*** burt has quit IRC17:41
*** afazekas has joined #openstack-infra17:41
morganfainbergmordred: pong17:42
morganfainbergmordred:  whoopse wrong channel.17:42
rustlebeejeblair: really sorry i wasn't around ... :(17:43
jeblairrustlebee: no prob17:43
*** sarob has quit IRC17:43
jeblairrustlebee: several of us are on the call now17:43
rustlebeeok17:43
jeblair8 (3 of which are me)17:43
rustlebeek17:44
jeblairall on pstn except for one of mine, which is sip17:44
rustlebeei'll spin up an asterisk server real quick17:44
jeblairrustlebee: the consensus seems to be that we're all hearing a bit of choppiness17:45
jeblairalso, this is exciting17:45
jeblairone of my channels has gone silent17:45
*** dkehn_ has joined #openstack-infra17:45
*** dkehn has quit IRC17:45
jeblairbut the other 2 are working17:45
rustlebeeif there are gaps in getting CPU time, it would cause that17:45
rustlebeeshould be roughly a 80 Kbps bidirectional stream per call17:46
rustlebeeUDP streams on random ports17:46
*** sarob has joined #openstack-infra17:49
Alex_GaynorIs the goal of nodepool to be more elastic with the number of CI workers we have? (And is there another place I should be listening to know things like this)17:51
*** ruhe has quit IRC17:52
clarkbAlex_Gaynor: I believe its first goal is to address the issues the old pool management had where it would get out of sync with jenkins17:52
SlickNikjeblair / clarkb: Do you know whom I can contact if there is a mistake with the details of one of the Summit Sessions I proposed?17:52
clarkbbecause it relied on jenkins jobs which sometimes don't work as expected17:52
Alex_GaynorAh17:52
clarkbAlex_Gaynor: by having a long running daemon you can deal with problems more flexibly17:52
clarkbSlickNik: I would check with reed17:52
anteayaanybody still in the asterisk conference call?17:53
anteayaor is that over?17:53
pleia2anteaya: yep17:53
pleia2we're on it17:53
anteayaokay17:53
SlickNikclarkb: does he usually hang out in #openstack-infra? Where can I find him?17:54
clarkbya he is often in here17:54
anteayayay, I'm in17:54
SlickNikCool, I'll ping him when he shows up.17:55
SlickNikThanks!17:55
*** cthulhup has quit IRC17:56
anteayaI just got dropped17:58
anteayaringing, ringing, ringing17:59
anteayanot connecting17:59
*** morganfainberg is now known as morganfainberg|a17:59
anteayatrying again18:00
anteayaringing, ringing, ringing18:00
anteayano connection18:00
anteayatrying again18:01
mordredAlex_Gaynor: yeah - it's mainly currently a refactor of the current CI pool code stuff18:01
anteayanope18:01
anteayaI can't seem to connect again18:01
openstackgerritRyan Petrello proposed a change to openstack-infra/config: Add the ability to perist the Zuul Status filter with a cookie.  https://review.openstack.org/4239318:02
jeblairanteaya: :(18:02
jeblairanteaya: i just tried over the pstn and connected18:02
anteaya:(18:02
anteayaokay I will try again18:02
*** dkehn_ has quit IRC18:02
*** dkehn has joined #openstack-infra18:03
fungifor those playing along, pbx system graphs at http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=3918:03
anteayaseems my issue might be related to my skype connection18:03
anteayasigh18:03
clarkbmordred: according to the build log for the job before gate-nova-docs 970 and the job after it we do not have overlapping tests18:05
mordredclarkb: what if we do like an inotify type thing to watch those files and log what happens to them?18:05
clarkbwe do have a ton of salt minion processes18:06
clarkb...18:06
clarkbmordred: ya we can try that18:06
clarkbthat seems very heavy weight for debugging this though18:07
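
A throwaway stand-in for the inotify idea, done with polling so it stays stdlib-only; the path, interval and output format are just assumptions for illustration:

    import hashlib
    import os
    import time

    PIP_CONF = os.path.expanduser('~/.pip/pip.conf')

    def fingerprint(path):
        # mtime plus content hash, so both touches and rewrites show up
        try:
            with open(path, 'rb') as f:
                data = f.read()
            return os.stat(path).st_mtime, hashlib.md5(data).hexdigest()
        except (IOError, OSError):
            return None

    last = fingerprint(PIP_CONF)
    while True:
        time.sleep(1)
        current = fingerprint(PIP_CONF)
        if current != last:
            print('%s %s changed: %r -> %r' % (
                time.strftime('%H:%M:%S'), PIP_CONF, last, current))
            last = current
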
*** pentameter has joined #openstack-infra18:08
mordredyeah18:09
mordredit does18:09
clarkb/home/jenkins/workspace/gate-nova-python27/.tox/py27/bin/python2.7 ../bin/pip install --pre -U -r/home/jenkins/workspace/gate-nova-python27/requirements.txt -r/home/jenkins/workspace/gate-nova-python27/test-requirements.txt is the command being run to install the deps18:10
clarkbwhat is --pre?18:11
mordred  --pre                       Include pre-release and development versions. By18:11
mordred                              default, pip only finds stable versions.18:11
mordredwhy is that in there18:11
anteayaam I connected?18:11
clarkbfrom tox18:11
anteayaI installed skype on this laptop (my newest one) and the sound settings are less than ideal18:12
clarkbmordred: they did close my tox PR to make the -U stuff more formal saying there is some other way of doing it now18:12
anteayaI might be connected though18:12
anteayaI can't hear anything though, which might be my sound settings18:12
*** _TheDodd_ has quit IRC18:12
mordredinstall non-stable releases. (tox defaults to install with "--pre" everywhere).18:13
*** jergerber has quit IRC18:13
mordredok. we're going to have to change that18:13
mordredclarkb: add new EXPERIMENTAL “install_command” testenv-option to configure the installation18:13
mordredso, I believe the default installer_command is 'pip --pre'18:14
clarkbcould --pre behavior be causing this?18:14
clarkb(I don't expect it to but maybe it tries to be smart and looks for pre packages where it shouldn't?)18:14
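
A quick illustration of what --pre changes, using pkg_resources version ordering; this is just something one could run by hand, not part of any job:

    from pkg_resources import parse_version

    # pre-releases sort below the corresponding final release...
    assert parse_version('1.2.0a3') < parse_version('1.2.0')
    # ...but above older final releases
    assert parse_version('1.2.0a3') > parse_version('1.1')

    # so a requirement like "oslo.config>=1.2.0a3" can only be satisfied by a
    # pre-release today, and without --pre pip skips pre-release candidates
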
*** morganfainberg|a is now known as morganfainberg18:15
anteayaokay, I don't have sound working properly on this laptop so I can't assess my asterisk experience18:15
*** dmakogon_ has joined #openstack-infra18:18
dmakogon_hey, guys, we're seeing a bug in devstack; it's failing the gerrit build of the Trove dashboard.18:19
dmakogon_log: http://logs.openstack.org/28/42228/2/check/gate-tempest-devstack-vm-cells-full/12d70e4/console.html18:19
mordredclarkb: https://review.openstack.org/#/c/42178/18:19
mordredclarkb: a18:19
mordredclarkb: B18:19
mordredclarkb: it IS TOX18:19
mordred    during installation of packages HOME is now set to a pseudo18:20
mordred    location (envtmpdir/pseudo-home).  Also, if an index url was18:20
mordred    specified a .pydistutils.cfg file will be written so that index_url18:20
mordred    is set if a package contains a ``setup_requires``.18:20
dmakogon_log of devstack bug - http://logs.openstack.org/28/42228/2/check/gate-tempest-devstack-vm-cells-full/12d70e4/console.html18:21
*** dina_belova has joined #openstack-infra18:21
mordreddmakogon_: hey! do you think it's a bug in how devstack is being run? or devstack itself? if it's devstack itself, #openstack-qa is where dtroyer usually hangs out18:21
anteayaokay back in18:22
mordredclarkb: we can apparently set TOX_INDEX_URL18:23
*** sarob has quit IRC18:23
mordredclarkb: which will set it for pip.conf and .pydistutils.cfg for us18:23
*** sarob has joined #openstack-infra18:23
*** dina_belova has quit IRC18:25
*** sarob has quit IRC18:28
marunmordred: i tried to cc you but my mail client seems to have screwed it up.  please see email on os-dev with subject 'Gate breakage process'18:30
openstackgerritRussell Bryant proposed a change to openstack-infra/config: Fix join/leave sounds in the conference  https://review.openstack.org/4239618:30
clarkbmordred: o_O catching up. I stepped away to get something to drink18:30
mordredhttps://bitbucket.org/hpk42/tox/issue/116/new-pypi-override-breaks-people-who18:30
mordredclarkb: ^^18:31
mordredclarkb: I've got a not-perfect workaround for now18:31
*** dina_belova has joined #openstack-infra18:31
clarkbmordred: can we tell tox to look at the files we write?18:32
mordredno. not right now18:32
mordredthat's the bug I just filed18:32
mordredclarkb: can you pass arguments to scripts you source?18:33
clarkbhelp source says yes18:34
clarkbsource: source filename [arguments]18:34
clarkbdmakogon_: the cells jobs are expected to fail. Those tests run non voting until all of the issues get sorted out18:34
clarkbdmakogon_: so failures of that job shouldn't affect your ability to merge code. Then when those tests are reliable we will make them vote18:34
Alex_GaynorIs there a plan to get more test workers?18:35
jeblairAlex_Gaynor: yes, we're close to being able to18:35
Alex_Gaynorawesome18:35
jeblairAlex_Gaynor: that's what i've been working on for a couple weeks18:35
*** MarkAtwood has quit IRC18:35
*** sdake_ has quit IRC18:35
*** dina_belova has quit IRC18:36
fungiudp: round-trip min/avg/max = 52.3/78.6/297.7 ms18:36
fungiicmp: rtt min/avg/max/mdev = 54.562/55.293/57.363/0.851 ms18:36
fungifrom "random broadband provider in nc"18:37
openstackgerritMonty Taylor proposed a change to openstack-infra/config: Workaround tox 1.6 pypi workaround  https://review.openstack.org/4239718:37
clarkbfungi: are you just using udp ping to get the udp numbers?18:37
fungiclarkb: yeah, hping3 --udp18:37
mordredclarkb:, jeblair, fungi ^^18:37
mordredthat should address the netaddr problem18:38
clarkbmordred: can you note the bug in your commit message? I am looking for the number now18:38
fungimordred: you mean as long as we don't end up needing to workaround the workaround for the workaround that is18:38
mordredfungi: yup18:38
mordredfungi: so thrilled18:38
clarkbmordred: 121275118:39
openstackgerritMonty Taylor proposed a change to openstack-infra/config: Workaround tox 1.6 pypi workaround  https://review.openstack.org/4239718:39
* fungi relocates to $lab with real computers and fan noise... brb18:39
mordredjeblair: you're going to be so thrilled about the details on that one18:39
clarkbmordred: does the tox thing only affect the pydistutilcsconfigblahmumblemumble file? and .pip/pip.conf is ok?18:40
*** SergeyLukjanov has joined #openstack-infra18:40
mordredyes18:40
mordredwell18:40
clarkbmordred: seems like netaddr is being installed by pip though18:40
mordredI mean18:40
mordredyes. it should be18:40
mordrednono18:40
mordredsorry18:40
mordredit's a hack to fix .pydistutils.cfg18:40
mordredwhich causes the pip commands to be run in a fake home dir18:40
mordredso anything you have in ~ isn't going to work right18:40
mordredduring the pip install command18:40
*** nubbie has quit IRC18:41
anteayafungi mmmm fan noise18:41
mordredbut only while it's installing your requirements18:41
jeblairmordred: it didn't take long for pypi breakage to show up once we started accidentally using it, did it?18:42
fungiquite amusing, that18:42
clarkbjeblair: nope, I have a timestamp in sb18:42
clarkbjeblair: but it was about 8:45am PDT yesterday18:43
jeblairit's nice to have a 'yep, still need to do this' check every now and then.18:43
clarkband new tox was released around then iirc18:43
jeblairwow18:43
* jeblair goes back to thinking about jenkins18:43
clarkbless than 24 hours at least18:44
bodepdmordred: you guys must be slow this week ;)18:44
bodepdmordred: if you're getting around to puppet refactor patches18:44
mordredbodepd: I have started a new thing18:44
bodepdmordred: what new thing?18:44
mordredbodepd: I'm walking my whole review queue when I wake up before I write patches18:45
clarkbmordred: your workaround lgtm. I will let you decide if you want to address jeblair's comment before approving18:45
jeblairif you do, don't make me vote again.  :)18:45
clarkbfood is here /me does that18:45
bodepdmordred: I try to do code review from 10-12 every day18:45
bodepdmordred: and not outside of that window ;)18:45
openstackgerritMonty Taylor proposed a change to openstack-infra/config: Workaround tox 1.6 pypi workaround  https://review.openstack.org/4239718:45
mordredclarkb: ^^ - feel free to +2+A that18:46
bodepdmordred: what is tuskar? I noticed it started off as ironic18:46
bodepdmordred: and became something else.18:46
bodepdmordred: is that the top level for triple-o?18:46
mordredbodepd: it's a thing the guys at redhat wrote to drive tripleo deployments18:46
bodepdmordred: I'm just asking you b/c I looked through the commit history and saw your name18:46
mordredbodepd: lifeless is working through it with them to see how well it fits in with things and whatnot18:47
bodepdmordred: it's really close to what I am working on atm...18:47
bodepdmordred: probably no big surprise :)18:47
mordredbodepd: then you should talk to both them and to lifeless...18:47
mordredbodepd: you know - for the reasons :)18:48
* mordred afks for a bit18:48
bodepdmordred: I had a conversation with lifeless before and his response about triple-o is that it was intended to kill my work18:48
bodepdmordred: I'm not really sure where a conversation goes from there ;)18:48
fungi90% chance no matter what lifeless says, he's at least somewhat trolling you18:49
bodepdfungi: that is helpful to know :)18:49
clarkbmordred: done18:50
afazekashttp://logs.openstack.org/28/41928/4/check/gate-grenade-devstack-vm/e641bfc/console.html.gz how to solve this ?18:50
*** wenlock has joined #openstack-infra18:51
fungibodepd: my take on tripleo, at least from a somewhat outside perspective, is that it should make your configuration management way easier, by abstracting away most of the nastiness and allowing you to just worry about parameters18:51
*** sarob has joined #openstack-infra18:51
bodepdfungi: we're starting to evaluate it piecemeal18:51
bodepdfungi: right now, we're looking at Heat vs. vagrant for deploying our CI tests against an openstack cluster18:52
openstackgerritA change was merged to openstack-infra/config: Workaround tox 1.6 pypi workaround  https://review.openstack.org/4239718:52
bodepdfungi: our main issue is that we have a team focusing on how, and one focusing on 3 months from now, but triple-o feels like 6+ months from now18:53
fungiit seems to me that heat intends to take care of the "how" and let you worry about the "what" at a higher level18:53
wenlocki was going to be trying to understand heat next week… as well.18:53
*** openstackstatus has quit IRC18:53
wenlockI think i have a good understanding of vagrant18:53
fungibut yes, chances are in the near-term tripleo/heat work and classic configuration management automation are parallel projects with a lot of overlap18:54
fungiso if there are ways for you to steer in a common direction, maybe the future situation will be a less frustrating prospect18:55
bodepdfungi: we're evaluating :)18:56
wenlockis there an infra project that contains work on heat?18:56
*** dmakogon_ has left #openstack-infra18:56
bodepdwenlock: no, I'm off topic :) sorry18:56
clarkbafazekas: I am not sure. At first glance it looks like a legit failure though. The indication is that nova failed to upgrade18:57
clarkbhmm maybe not. set_up_bash_completion command not found18:58
fungiwenlock: we were mostly held up from doing anything with it infra-wise because our donated providers didn't have heat available in production and heat couldn't run stand-alone. now it can, which opens up the possibility in places but we'd need to refactor the things where it fits what we're doing to take advantage of it18:58
clarkbafazekas: looks like your change is adding that to devstack?18:59
clarkbI wonder if it is trying to use the old devstack functions against newer devstack? my knowledge of what grenade actually does under the covers is pretty basic18:59
*** thomasbiege has joined #openstack-infra19:00
*** ^d has quit IRC19:00
afazekasclarkb: may be19:00
clarkbafazekas: but it is breaking on the new line here https://review.openstack.org/#/c/41928/4/lib/neutron19:01
afazekashttps://github.com/openstack-dev/grenade/blob/master/functions   or it uses this19:01
*** ^d has joined #openstack-infra19:01
*** ^d has joined #openstack-infra19:01
*** dina_belova has joined #openstack-infra19:01
clarkbafazekas: looks like there are function updates from devstack in the grenade history. I think that may be it19:01
clarkbafazekas: so you will probably need to add the function to grenade first then to devstack19:01
afazekasclarkb: A simple six-line change is going to become a little more :)19:02
*** thomasbiege has quit IRC19:03
*** ^d has quit IRC19:05
*** yolanda has quit IRC19:06
*** cthulhup has joined #openstack-infra19:07
afazekasclarkb: looks like it just sources the old functions; do I need to split the patch into two parts?19:08
*** mgagne has quit IRC19:09
clarkboh ya. define the function then use it19:11
*** cthulhup has quit IRC19:12
openstackgerritRyan Petrello proposed a change to openstack-infra/config: Add the ability to perist the Zuul Status filter with a cookie.  https://review.openstack.org/4239319:12
*** sandywalsh has quit IRC19:14
*** mriedem1 has quit IRC19:22
*** vipul is now known as vipul-away19:24
*** dprince has quit IRC19:25
*** sandywalsh has joined #openstack-infra19:26
*** vipul-away is now known as vipul19:26
* fungi needs to run some pre-travel errands and get dinner, but will bbl19:28
*** colinmcnamara has quit IRC19:28
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add more details to git server documentation  https://review.openstack.org/4240519:30
jeblairmordred, fungi, clarkb, zaro: what if we didn't wait for the thread to join?19:33
mordredjeblair: say more words19:33
jeblairin the stop method in abstractworkerthread19:33
mordredlooking19:33
mordredhrm19:34
jeblairfrom the looks of it, nothing happens after that19:34
jeblaireither there or in gearmanproxy19:35
mordredjeblair: I will not pretend to fully understand the implications - but yes, it doesn't seem to be strictly necessary19:35
mordredor even useful19:35
jeblairif it were written a little differently, there might be a pattern where it would prevent a new worker from being started on a node, but i don't think that's the case19:35
jeblairpartly because we're calling stop outside of the lock on the worker list19:36
clarkbworker is the thing running in thread?19:36
clarkblooks like it is a MyGearmanWorkerImpl so it is the thing talking to gearman within this thread19:37
jeblairsort of.  the proxy has-a list of ExecutorWorkerThreads, each EWT has-a GearmanWorkerImpl; the GWI is the EWT.thread19:37
jeblairclarkb: yeah19:37
clarkbI think it is safe to remove the join.19:38
jeblairso basically, i think the upshot is that the join call does nothing other than just cause the calling thread to wait until its done, and print nice log messages19:38
clarkbat that point either it goes away or it doesn't and we don't really handle either case19:38
jeblairthe calling method waiting till it's done of course was the problem today.19:38
jeblairclarkb: yep19:39
jeblairit is likely to increase concurrency of the stopAll call.19:39
jeblairin that stopAll will probably now stop all of the threads at once instead of in rapid succession.19:39
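
An illustration of the hazard in Python rather than the plugin's Java, since the pattern is the same: a stop path that holds a lock while joining a worker that needs that same lock can never finish.

    import threading
    import time

    lock = threading.Lock()
    stop_requested = threading.Event()

    def worker():
        while not stop_requested.is_set():
            time.sleep(0.1)
        with lock:              # the worker needs the lock to finish up
            print('worker cleaned up')

    t = threading.Thread(target=worker)
    t.start()

    with lock:                  # the stopping thread already holds the lock...
        stop_requested.set()
        t.join(timeout=2)       # ...so this join can never succeed
        print('worker stopped: %s' % (not t.is_alive()))   # prints False

    t.join()                    # once the lock is released the worker exits
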
*** mriedem has joined #openstack-infra19:39
jeblairclarkb: can you review https://review.openstack.org/#/c/4222619:41
jeblairand i'll base my change on that19:41
clarkbyup19:42
* mordred agrees with the scrollback19:43
clarkbjeblair: done19:44
mordredclarkb: https://jenkins01.openstack.org/job/gate-nova-python26/1003/console19:44
mordredclarkb: worked19:44
mordredclarkb: it is appropriately setting and using the mirror19:45
jeblairmordred: yay!19:46
mordredjeblair: sigh :)19:46
*** vipul is now known as vipul-away19:46
mordredclarkb: also, can you look at this https://review.openstack.org/#/c/42337/ and tell me what I'm doing wrong on py26 and py3319:46
*** zul has quit IRC19:47
clarkbmordred: ya I will take a look19:47
*** shardy is now known as shardy_afk19:50
*** psedlak has joined #openstack-infra19:52
*** ^d has joined #openstack-infra19:53
*** weshay has quit IRC19:54
*** MarkAtwood has joined #openstack-infra19:57
*** vipul-away is now known as vipul19:57
xBsdfolks, could I start recheck job while the error job is not finished?19:59
xBsddoes it canceled the running job?19:59
clarkbjeblair: mordred fungi The gearman plugin jobs that should be converted to freestyle projects are still maven projects in jenkins. I am going to manually delete those jobs and see if that is what JJB needs to create them properly20:00
clarkbxBsd: it will not cancel the running job20:00
mordredclarkb: I believe that is the case20:00
mordredclarkb: I believe we have learned before that jjb cannot handle transition from maven to freestyle20:01
xBsdclarkb:  thanks20:01
clarkbxBsd: it is effectively a noop20:01
xBsdbtw, someone've just restarted the gate?20:02
clarkbxBsd: I think zuul did that automagically. There was a change that failed a test so as soon as it was kicked out everything behind it restarted20:04
lifelessbodepd: I did? Sorry!20:04
openstackgerritJames E. Blair proposed a change to openstack-infra/gearman-plugin: Don't wait for the worker thread to join  https://review.openstack.org/4241020:07
jeblairokay.  that seems to work; i think there may be some bad interactions with having multiple executors and disabling on complete, possibly while also stopping the plugin.20:08
jeblairbut it seems to work for our typical case of one executor20:08
*** gyee has joined #openstack-infra20:08
jeblairi think on balance, it's probably a good idea for us to deploy now, and maybe test a bit further before actually making a release20:08
*** psedlak has quit IRC20:08
jeblairjenkins01 is in shutdown mode20:09
clarkbI am updating jobs on jenkins01. should be done shortly20:10
clarkb(to fix the gearman plugin jobs)20:10
clarkbjeblair: nevermind carry on. Looks like puppet is disabled on that server so it didn't get the new configs20:11
jeblairclarkb: yes, that's true; i did that to apply the new firewall rules20:11
clarkbjeblair: I see. Has that change merged yet?20:11
jeblairi'm about to start merging those changes to restore normalcy20:11
clarkbjeblair: ok, just let me know when you are ready for puppet again and I can update those jobs20:12
mordredjeblair: yay20:14
*** xBsd has quit IRC20:14
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Add nodepool host  https://review.openstack.org/4223220:15
jeblairmordred, clarkb: ^ review/aprv that at will20:16
*** sarob has quit IRC20:17
*** nayward has quit IRC20:17
*** sarob has joined #openstack-infra20:17
vipulclarkb, mordred: why is this failing the requirements check? https://review.openstack.org/#/c/42392/120:17
vipulI seem to have the required oslo.config>=1.2.0a320:18
vipulRequirement oslo.config>=1.2.0a does not match openstack/requirements value oslo.config>=1.2.0a320:18
mordredjeblair: what if I don't wanna?20:18
clarkbvipul: right they are different20:18
*** krtaylor has joined #openstack-infra20:18
mordredvipul: you need a 320:18
vipulhttps://review.openstack.org/#/c/42392/1/requirements.txt20:19
mordredoh!20:19
vipuli do20:19
mordredhe has one20:19
mordredwtf20:19
mordredDAMMIT20:19
mordredI hate this particular system20:19
mordredit's so fragile20:19
clarkbmordred: do you know what the problem is? my current hunch is that the test can't split the version off of that url properly20:26
clarkboh and this is dev-requirements involved20:28
mordredyeah20:29
mordredI'm needing to not think about it for a second, because I'm just going to get angry20:29
mordredmeh, that's the wrong word20:30
mordredfrustrated20:30
mordredesp because we have a new and better system SO CLOSE20:30
jeblairmordred: to what new and better system are you referring?20:30
clarkbmordred: can you review and possibly approve jeblair's change to add the nodepool node?20:30
clarkbthat is something useful that doesn't involve requirements20:31
jeblairmordred: i'll look into the requirements script (i'm waiting for jenkins to get to my changes)20:31
*** jpmelos has left #openstack-infra20:32
clarkbjeblair: I think I see one potential problem20:32
mordredjeblair: thanks!20:32
mordredclarkb: and yes20:32
mordredjeblair: the "upload pre-releases to pypi as wheels only and require pip 1.4" system20:33
mordredjeblair: because, zomg, grokking what's going on with the tarball url is a constant struggle20:33
jeblairmordred: gotcha20:33
clarkbjeblair: I think the version for the tarball is parsed by us, but the version without the tarball url is processed by pkg_resources20:33
*** markmcclain has left #openstack-infra20:34
clarkbso we may end up with different results. I am trying to find where that actually happens so that I can confirm20:34
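For comparison, this is roughly how pkg_resources sees a plain requirement line; the tarball-URL form (with an egg= fragment) goes through the check script's own string handling, which is where the two results can drift apart:

    # How pkg_resources interprets a plain requirement line.  The egg=
    # fragment on a tarball url is parsed by the requirements check itself,
    # not by pkg_resources, so the two code paths can disagree.
    import pkg_resources

    req = pkg_resources.Requirement.parse('oslo.config>=1.2.0a3')
    print(req.project_name)  # oslo.config
    print(req.specs)         # [('>=', '1.2.0a3')]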
*** markmcclain has joined #openstack-infra20:34
jeblairvipul: add a newline to the end of the requirements.txt file20:34
*** woodspa__ is now known as woodspa20:34
jeblairclarkb:, mordred ^20:34
vipulwoah really?20:34
openstackgerritA change was merged to openstack-infra/nodepool: Make the local script directory configurable  https://review.openstack.org/4223320:35
mordredjeblair: thank you20:35
openstackgerritA change was merged to openstack-infra/nodepool: Use MySQL  https://review.openstack.org/4223420:35
jeblairvipul: yep.  obviously the error message should be corrected or we should change the parsing, but that's the immediate cause; should get you going.20:35
clarkbjeblair: can you point that out to me in the code?20:35
openstackgerritA change was merged to openstack-infra/nodepool: Require a target name when instantiating a node  https://review.openstack.org/4224620:35
openstackgerritA change was merged to openstack-infra/nodepool: Make the target name required in the schema  https://review.openstack.org/4225120:35
*** sdake_ has joined #openstack-infra20:36
*** sdake_ has quit IRC20:36
jeblairclarkb:             line = line[:line.find('#')]20:36
vipuljeblair: thanks for digging that up.. let's see how it goes20:36
*** sdake_ has joined #openstack-infra20:36
*** sdake_ has quit IRC20:36
*** sdake_ has joined #openstack-infra20:36
jeblairthat's a very silly error20:37
clarkbjeblair: that should make it egg=oslo.config>=1.2.something right?20:37
jeblairit says "strip off the # and anything after, or if it's not found, just strip off the rightmost char"20:37
jeblairso that means most lines were getting their newline removed there (instead of in the next line which actually calls a strip())20:38
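A minimal reproduction of the bug being described, assuming the same one-line comment stripper; str.find returns -1 when there is no '#', so the slice silently drops the final character of any line without a trailing newline:

    # find() returns -1 when '#' is absent, and line[:-1] drops the last
    # character: usually the newline, but the last real character when the
    # file lacks a trailing newline (hence 1.2.0a3 becoming 1.2.0a).
    def strip_comment_buggy(line):
        return line[:line.find('#')]

    def strip_comment_fixed(line):
        if '#' in line:
            line = line[:line.find('#')]
        return line.strip()

    print(repr(strip_comment_buggy('oslo.config>=1.2.0a3\n')))  # newline dropped, looks fine
    print(repr(strip_comment_buggy('oslo.config>=1.2.0a3')))    # no newline: the '3' is lost
    print(repr(strip_comment_fixed('oslo.config>=1.2.0a3')))    # version left intact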
jeblairso, having said that, should we also enforce newlines at the end of every line?20:38
jeblairwith a correct error message?20:38
clarkbI think we should if this is the alternative20:39
clarkbor maybe we can just be smarter about that check20:39
jeblairclarkb: it's not, it's quite a simple fix.  i've already written it.20:39
*** xBsd has joined #openstack-infra20:39
jeblairso, consider the problem fixed.  but now, as a matter of principle, should we have this script also validate that there's a newline at the end of the file?20:40
clarkbI'm on the fence. My editor does the right thing. But I know many other editors do not20:40
clarkbcould become frustrating for people with silly editors20:41
*** sarob has quit IRC20:42
mordredI think that if we were going to validate the newline20:44
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Fix line parsing in requirements check  https://review.openstack.org/4241520:44
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Add final newline check for requirements  https://review.openstack.org/4241620:44
mordredwe should have a message that says that's what we're doing20:44
anteayamordred I sent you a pm20:44
jeblairmordred: wish granted :)20:44
jeblairclarkb: there are the two changes -- fix (and be loose), followed by be strict20:45
jeblairback to the other thing for me now20:45
clarkbjeblair: ty20:46
openstackgerritA change was merged to openstack-infra/config: Add nodepool host  https://review.openstack.org/4223220:47
*** pabelanger has joined #openstack-infra20:49
jeblairclarkb: you're clear to start puppet on jenkins* and run jjb20:51
jeblairclarkb: ci-puppetmaster has been updated20:51
clarkbjeblair: ok will do20:51
clarkbjeblair: I noticed that on jenkins.o.o it only updated one of the four jobs. Should I flush all of the cache or just ignore cache for this particular set of jobs?20:52
jeblairclarkb: it takes a loooong time to do all jobs; i'd try to narrow it20:52
*** rfolco has quit IRC20:53
jeblairlike 30 mins to an hour20:53
clarkbok, my only concern is that I think ignore cache won't remove the offending jobs from the cache20:53
clarkbis it possible to remove them by hand? I haven't gone digging into that cache before20:53
jeblairclarkb: yes, it's just a json file20:53
clarkbI will let puppet run JJB then swing back around and cleanup what got missed20:54
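Since the cache is just JSON, something like this sketch could drop the affected jobs from it so the next JJB run pushes fresh configs; the cache path and the flat name-to-hash layout are assumptions and vary by JJB version:

    # Hedged sketch: remove specific jobs from the JJB cache so the next run
    # re-uploads them.  Path and layout are assumptions, not the real values.
    import json

    CACHE = '/var/lib/jenkins/.cache/jenkins_jobs/cache.json'  # assumed path
    STALE = {'gate-example-maven-job'}  # hypothetical job names

    with open(CACHE) as f:
        cache = json.load(f)

    for name in STALE:
        cache.pop(name, None)

    with open(CACHE, 'w') as f:
        json.dump(cache, f)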
jeblairclarkb, mordred, zaro: can you review this? https://review.openstack.org/#/c/42410/20:55
jeblairclarkb: when you're done fixing the jobs (since it would be nice for them to work when we aprv that)20:56
clarkbjenkins01 and jenkins02 got the jobs applied just fine20:57
clarkblooking more closely at jenkins.o.o now20:57
clarkbpuppet started on 01 and 02 as well20:58
pabelangerjeblair, apologies for missing out on testing, got called away from my computer20:59
rustlebeejeblair: was just thinking about testing ... can spin up other servers, but the DID is only going to work from one of them at a time21:00
rustlebeeso probably best to only have one set up at a time21:01
jeblairrustlebee: yeah, we can manually stop/start asterisk on them as needed21:01
rustlebeetrue21:01
rustlebeethat works21:01
rustlebeethey'll fight over the DID otherwise :-)21:01
jeblairi'm sure the provider would love that21:01
rustlebeeit's *possible* they could allow multiple registrations, but even if they did, the servers would fight over answering the call21:02
clarkbjenkins.o.o should be good now21:03
*** lcestari has quit IRC21:03
*** mrodden has quit IRC21:04
clarkbjeblair: I approved your gearman plugin change21:05
clarkbjeblair: why is jenkins01 in shutdown-only mode? there are no jobs running on it now so you can do what you need there possibly21:07
*** sdake has quit IRC21:07
*** sdake has joined #openstack-infra21:07
*** sdake has quit IRC21:07
*** sdake has joined #openstack-infra21:07
jeblairclarkb: to get the new version of the gearman plugin21:08
clarkbthat causes a bit of a chicken-and-egg problem... a half-capacity jenkins is slow to merge things21:09
jeblairclarkb: the irony is terrible21:09
mordredoh no! I just got a message from hpcloud21:09
mordredthey lost the node that housed  Instance Name: devstack-precise-1362719079.template.openstack.org21:09
mordredwhatever will we do?21:09
*** wenlock has quit IRC21:11
*** changbl has quit IRC21:11
*** adalbas has quit IRC21:14
clarkbmordred: by the way the source stuff for TOX_THING got applied when I started puppet on jenkins0[12]21:15
bodepdlifeless: no worries. I'm still extremely interested in the work you guys are doing :)21:15
*** ryanpetrello has quit IRC21:15
*** pentameter has quit IRC21:16
bodepdlifeless: the pace of innovation is staggering21:16
*** colinmcnamara has joined #openstack-infra21:17
*** mrodden has joined #openstack-infra21:18
*** thomasbiege has joined #openstack-infra21:18
*** thomasbiege has quit IRC21:19
mordredclarkb: the source stuff for ... oh, that's fine21:21
clarkbwoo gate reset again... I think there is flakiness in grenade and neutron devstack tests21:21
openstackgerritA change was merged to openstack-infra/gearman-plugin: remove restriction on slave to run single job at a time  https://review.openstack.org/4222621:22
openstackgerritA change was merged to openstack-infra/gearman-plugin: Don't wait for the worker thread to join  https://review.openstack.org/4241021:22
clarkbjgriffith: https://jenkins02.openstack.org/job/gate-grenade-devstack-vm/1931/console21:22
clarkbjeblair: ^ gearman-plugin changes merged21:22
clarkbnow we wait for hpi artifact upload21:22
jeblairi'll just build it myself21:23
jgriffithclarkb: looking21:23
jgriffithclarkb: any way to get the cinder logs back?21:25
*** dina_belova has quit IRC21:25
clarkbjgriffith: let me see if I can get them21:25
jgriffithclarkb: and is this a one-off or have you seen multiple failed to creat vol?21:25
clarkbjgriffith: http://logs.openstack.org/31/42331/1/gate/gate-grenade-devstack-vm/3d78ceb/logs/new/21:25
clarkbjgriffith: not sure yet that was my next thing to check21:26
jgriffithclarkb: cinder-volume never started?21:26
jgriffithclarkb: so the creates fail due to inability to schedule21:27
clarkbjgriffith: interesting. Looks like it started the first time around http://logs.openstack.org/31/42331/1/gate/gate-grenade-devstack-vm/3d78ceb/logs/old/21:27
jeblairrestarting jenkins0121:27
clarkbbut after the upgrade it didn't?21:27
jgriffithclarkb: don't have any insight as to why the startup failed21:27
jgriffithclarkb: indeed, it's not starting up after the upgrade for some reason21:28
jeblairclarkb, zaro: jenkins complained about postbuilders in a freestyle project; i think at least some jobs still need some work21:29
jeblairjenkins01 is running gearman-plugin 0.0.3-7-g8e6201221:29
*** adalbas has joined #openstack-infra21:29
jeblairlet's give the queue some time to recover before shutting jenkins02 down21:30
clarkbjgriffith: I am not seeing more incidences of this over the last 4 hours. /me extends search range21:31
clarkb12 hours is the same story. So this may be a fluke21:32
jgriffithclarkb: no changes in grenade recently... /me looking at grenade logs21:32
jgriffithclarkb: concerning because we saw service start issues several months back21:35
jgriffithclarkb: think you were involved, when we reworked the startup scripts21:35
zarojeblair: looking into build failure.21:36
jgriffithclarkb: Keep me posted if you see more, I'm concerned that the "touch" of the log file apparently didn't even work, indicating we never really tried to start the service?21:37
clarkbjgriffith: http://paste.openstack.org/show/44352/21:37
*** toddmorey has joined #openstack-infra21:37
*** adalbas has quit IRC21:38
jgriffithclarkb: interesting, it says it started it21:39
clarkbya21:41
clarkbjgriffith: + screen -S stack -p c-vol -X stuff 'cd /opt/stack/new/cinder && /opt/stack/new/cinder/bin/cinder-volume --config-file /etc/cinder/cinder.conf || touch "/opt/stack/new/status/stack/c-vol.failure" I wonder if we should be trying to get the .failure file?21:42
clarkbseems like || echo "c-vol failed to start" would be more useful21:43
jgriffithclarkb: +121:43
clarkbdtroyer: ^21:43
openstackgerritKhai Do proposed a change to openstack-infra/config: move copy step to builder  https://review.openstack.org/4242221:44
dtroyerjgriffith: that's a good point…we could do both actually...21:46
jgriffithclarkb: deserves credit there :)21:46
jgriffithdtroyer: I like the idea of both21:47
dtroyerjgriffith: a little tee magic would do it I think21:47
jgriffithdtroyer: didn't we go down this road a while back?21:47
clarkbboth works. I just find the logs to be easier to deal with than the presence of specific files21:47
clarkbI can write the patch if I can find where it should go21:48
openstackgerritPetr Blaho proposed a change to openstack-infra/config: Adds Jenkins jobs for python-tuskarclient  https://review.openstack.org/4188721:49
dtroyerit would be in the command line of every screen_it command21:49
clarkbdtroyer: if I update devstack will grenade pick that up?21:49
clarkblooks like it is a function so I can just update that function21:49
dtroyerchange "|| touch …." to something like " || echo "message" | tee filename"21:50
dtroyeroh, wait, yeah, it is in the wrapper….duh21:50
dtroyerclarkb: yup, grenade trunk checks out devstack trunk so they stay in sync.21:51
clarkbdtroyer: http://paste.openstack.org/show/44353/ that look correct?21:51
clarkbnot sure what the newline $NL thing is for21:52
dtroyerclarkb: close, there's some quote nastiness in there21:52
dtroyerthe $NL adds a newline for screen's benefit21:52
clarkbI need to escape the quotes I added21:52
dtroyeryou need to escape the " in the echo21:52
clarkbhttp://paste.openstack.org/show/44354/ better?21:53
dtroyerI think so.  what I'm unsure about is the precedence between '||' (or) and '|' (pipe).  you may need to wrap the echo | tee bit in a () subshell…21:55
clarkblocal testing seems to indicate it works as expected21:55
dtroyeryeah, my simple test worked too21:56
clarkbtrue || echo "foo" | tee foo.bar then false || echo "foo" | tee foo.bar21:57
clarkbso I think that is fine. /me pushes21:57
jgriffithdtroyer: clarkb worked for me21:57
dtroyerquit copying my shell commands!21:57
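A small harness mirroring the shell test above (the file paths are just scratch locations): '|' binds tighter than '||', so the whole echo-into-tee pipeline is the right-hand side of '||' and only runs when the service command fails:

    # '|' binds tighter than '||', so the 'echo ... | tee file' pipeline only
    # runs when the left-hand command fails, which is what the change relies on.
    import os
    import subprocess
    import tempfile

    def run(cmd, marker):
        subprocess.run(['bash', '-c', cmd % marker], check=False)
        return os.path.exists(marker)

    with tempfile.TemporaryDirectory() as d:
        ok = os.path.join(d, 'ok.failure')
        bad = os.path.join(d, 'bad.failure')
        print(run('true  || echo "c-vol failed to start" | tee %s', ok))   # False: no failure file
        print(run('false || echo "c-vol failed to start" | tee %s', bad))  # True: message logged, file created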
clarkbdtroyer: jgriffith https://review.openstack.org/4242721:59
clarkbit will probably get tested some time far in the future21:59
clarkbthis push to the feature freeze has been insane22:00
dtroyerthe review before it was < 1 hour… there are advantages to working late on Friday22:01
* jgriffith is changing his work week to Wed-Sunday22:01
clarkboh no the secret is out22:01
jgriffithor just straight friday night to monday am :)22:02
clarkbthough us infra people tend to do more of the Sunday-to-Sunday22:02
clarkbThough I think just about all of us are doing normal people stuff this weekend22:02
*** ftcjeff has quit IRC22:04
jgriffithclarkb: can you clarify *normal-people*22:05
fungiclarkb: dtroyer the rule of thumb is that &&, || and ; don't continue pipelines; | binds tighter than they do22:05
dtroyerexcept on odd Thursdays and full moons22:06
clarkbjgriffith: I think of my brother when I think of normal people22:07
fungithe test load from this morning is just not catching up22:07
clarkbfungi: yeah it is insane. Turning a jenkins off for an extended period of time didn't help either though22:07
clarkbthankfully the weekend is around the corner22:08
jgriffithclarkb: interesting... now that you mention it22:08
clarkbjgriffith: he has a job that my mother understands22:09
*** mriedem has quit IRC22:09
clarkband doesn't work on his days off22:09
clarkbfungi: when will you be joining the not so sunny anymore northwest?22:09
anteayaclarkb: your parents don't understand your work either?22:10
clarkbanteaya: nope. They get really confused when I try to explain that OpenStack is open and free as in beer22:10
anteayaI keep getting advice, and when I try to tell them, they just keep giving me the same advice22:10
anteayaha ha ha22:10
anteayamy father keeps wanting me to track my hours22:10
anteaya"You have to keep track of your hours."22:10
clarkb"Why would anyone want to pay you to work on something that is free?"22:10
anteayaI have given up trying to explain, so I just nod and try to curb my tongue22:11
anteayaI am bad at the tongue curbing though22:11
anteayaha ha ha22:11
clarkbjeblair: fungi mordred https://review.openstack.org/#/c/42422/ I will approve that one in a bit if no one beats me to it22:11
jeblairfungi: i don't think it's the test load from this morning; it's the test load from only having one jenkins22:11
*** woodspa has quit IRC22:12
clarkbfungi: and I think that if you are comfortable with potentially flaky tests we should merge https://review.openstack.org/#/c/35104/2222:13
clarkbfungi: I will update my vote22:13
clarkb(because debugging things will only be harder if we keep hitting that change with a hammer)22:13
jeblairi just got really freaked out because i saw idle devstack nodes22:14
jeblairbut i think that's correct -- all the devstack jobs are actually currently being run22:14
jeblairwe're waiting on unit test runners22:15
clarkbjeblair: yeah we fell behind on precise workers pretty badly22:15
jeblairbecause of the way d-g works, more load shifts to the other jenkins for devstack jobs if we shut one down22:15
jeblairbut not unit test runners22:16
jeblairwe actually do need to add more, i think; it probably wouldn't hurt to go ahead and add, say 4 precise and 2 centos nodes22:16
*** blamar has joined #openstack-infra22:16
clarkbwe basically go into a tighter create, destroy loop in d-g when that happens22:16
jeblairso i think there's a problem with nodepool and mysql connections; i'm going to be spending some time working deeply on that22:17
clarkbhttp://tinyurl.com/mc2k5oj is pretty impressive22:18
clarkbI should change that to per day22:18
jeblairwow22:19
clarkbit is claiming almost 450 changes merged on 8/1222:19
clarkbnot sure if that is correct but wow22:19
fungiclarkb: we're due in mid-evening tomorrow22:25
*** SergeyLukjanov has quit IRC22:25
*** dina_belova has joined #openstack-infra22:25
*** blamar has quit IRC22:25
clarkbhttp://tinyurl.com/mz8qgpl there is the daily counter22:26
clarkbwhich is uhm a little impressive if correct22:26
*** sarob has joined #openstack-infra22:28
*** bookmage has quit IRC22:28
*** sarob has quit IRC22:28
*** sarob has joined #openstack-infra22:29
*** dina_belova has quit IRC22:30
fungii would believe that22:31
NobodyCamFilter projects <- TY :)22:32
*** wu_wenxiang has joined #openstack-infra22:33
*** rcleere has quit IRC22:35
*** wu_wenxiang has quit IRC22:35
*** mriedem has joined #openstack-infra22:35
clarkbNobodyCam: that was a community contribution.22:35
*** dina_belova has joined #openstack-infra22:36
jeblairclarkb: every contribution is a community contribution22:36
openstackgerritA change was merged to openstack-infra/config: move copy step to builder  https://review.openstack.org/4242222:36
clarkbjeblair: fair enough. I meant to say it came from someone other than the coremudgeons22:36
clarkbit is a handy little feature. I already want it to filter on change number as well :)22:37
jeblairthat was from sergey lukjanov22:37
jeblairand ryan petrello has been enhancing it, it looks like22:38
*** moted has quit IRC22:38
*** moted has joined #openstack-infra22:39
*** dina_belova has quit IRC22:40
NobodyCamawesome and I just made use use of it22:41
NobodyCams/use use/use/22:42
openstackgerritA change was merged to openstack-infra/config: Add more details to git server documentation  https://review.openstack.org/4240522:47
clarkbthat was fast. I assume that means jenkins is much happier now22:47
fungiclarkb: well, it prioritizes gate jobs22:48
fungii was waiting for that to come back with check results, but that was probably just lazy of me22:48
pleia2I do love the spiffy new progress bars on the zuul status page22:48
fungiespecially since i doubt any of our check jobs would have had anything to say about it anyway, being pure documentation updates22:49
clarkbfungi: I approved it after the check jobs reported22:49
fungiyou are quick like ninja22:49
fungioh, and i stand corrected. we actually arrive around 2pm pdt22:50
clarkbjeblair: dtroyer: https://review.openstack.org/#/c/42427/ has passed tests \o/22:50
fungithe marvels of travelling against the earth's rotation22:50
clarkbfungi: I will be driving to portland saturday then back again on sunday22:51
fungihave fun!22:51
clarkbfungi: I am going to try and bring back a growler or two of the homebrew22:51
fungisounds like a lot of hassle for an overnight, but beer definitely makes it worthwhile22:51
clarkbit can be a pain, but I prefer the short stays22:52
fungii am pleased because the bar a block from my townhouse started a brewery in their back room (a sizeable one at that) and started serving their first four varieties a couple weeks ago. good stuff22:54
*** xBsd has quit IRC22:54
clarkbfungi: https://review.openstack.org/#/c/42415/ can I request that that change get reviewed?22:54
fungitaking a look22:54
fungiand then i need to pack two weeks of provisions22:54
pleia2ooh, is it bug fungi for reviews time?22:54
clarkbpleia2: I guess. Do you have changes I should review but haven't yet?22:55
pleia2I swear it's quick! https://review.openstack.org/#/c/42168/4/modules/cgit/manifests/init.pp just a question about /var/www directory22:55
clarkbI actually did a reasonable job yesterday of catching up on things22:55
fungipleia2: sure22:55
*** rnirmal has quit IRC22:56
*** nati_uen_ has joined #openstack-infra22:56
fungioh, yeah i already looked at 42415 and was waiting on that to come back with a jenkins +1. lgtm. also a very fun python slice mistake ;)22:56
fungipleia2: oh, and i missed that mordred -1'd 42168 because you were waiting on answers from me! i just saw he had set it wip and figured you were still working on it :/22:58
pleia2fungi: hah, yeah22:58
openstackgerritA change was merged to openstack-infra/config: Fix line parsing in requirements check  https://review.openstack.org/4241522:58
*** nati_ueno has quit IRC22:59
*** ^d has quit IRC23:00
*** ^d has joined #openstack-infra23:00
*** ^d has joined #openstack-infra23:00
vipulis jenkins slow today or what?23:00
clarkbyes23:02
clarkbvipul: you guys are running a lot of jobs23:02
vipulhardly :P23:02
clarkband there were some problems earlier today that amplified the effects of running lots of jobs23:02
vipulok, just checking23:03
*** ^d has quit IRC23:05
clarkbvipul: also, a little while back we made the gate higher priority than everything else which can starve the check queues23:05
*** datsun180b has quit IRC23:06
*** zul has joined #openstack-infra23:16
*** gyee has quit IRC23:16
*** colinmcnamara has quit IRC23:21
*** colinmcnamara has joined #openstack-infra23:26
*** pcm_ has quit IRC23:30
*** UtahDave has quit IRC23:30
*** pcm_ has joined #openstack-infra23:34
*** pcm_ has quit IRC23:34
*** dina_belova has joined #openstack-infra23:36
*** vipul is now known as vipul-away23:38
openstackgerritA change was merged to openstack/requirements: Allow use of oslo.messaging 1.2.0a5  https://review.openstack.org/4222923:38
*** dina_belova has quit IRC23:41
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add static web directory for cgit & initial files  https://review.openstack.org/4216823:41
pleia2figured I might as well do the final relative path cleanup for the CSS23:41
*** vipul-away is now known as vipul23:53
mordredwoot23:55
