Wednesday, 2014-02-26

fungimspreitz: that sounds like evil gubments00:00
*** thuc has quit IRC00:00
fungi(the kill ssh connections with long key exchanges issue i mean)00:01
*** thuc has joined #openstack-infra00:01
mspreitzOr incompetent security vendors00:01
mspreitzI am going to try it from precise, see if that makes the difference00:02
fungithat would be most network security device vendors, yes00:02
ArxCruzdansmith: hey https://review.openstack.org/#/c/76388/ ;D00:02
dansmithArxCruz: I just saw that!00:02
ArxCruzyou passed on IBM PowerKVM Testing ;D00:02
ArxCruzdansmith: I was talking with krtaylor to configure our ci to always send a "No" when the patch is from you :)00:03
krtaylorhehheh00:03
*** moted has quit IRC00:03
dansmithArxCruz: +1 to that00:03
*** bhuvan_ has joined #openstack-infra00:03
fungiArxCruz: krtaylor: what we really need is a gerrit filter to keep dansmith's dependency chains at <=500:03
cyeohArxCruz: oooh nice to see it reporting!00:03
ArxCruz:)00:04
ArxCruzfungi: agreed :)00:04
krtaylorYES00:04
fungithough then we'd miss those 50-change rebase bombs00:04
ArxCruzlol00:05
*** bhuvan has quit IRC00:05
krtaylorI thought it was fitting that our first patch review was for dansmith00:05
openstackgerritMatt Riedemann proposed a change to openstack-infra/elastic-recheck: Remove resolved_at mention in readme  https://review.openstack.org/7639300:05
dansmithoh was I really the first one?00:05
*** thuc has quit IRC00:05
fungiclarkb: i'm enabling puppet on es02 now00:06
clarkbfungi: ok00:07
*** bhuvan has joined #openstack-infra00:08
*** bhuvan_ has quit IRC00:09
*** michchap has quit IRC00:10
*** bhuvan_ has joined #openstack-infra00:13
clarkbfungi: how is it? I don't see es02 in bigdesk yet00:13
*** lcostantino has joined #openstack-infra00:13
fungiclarkb: waiting for the agent to take its natural course, but i'll double-check logs00:14
*** bhuvan has quit IRC00:14
openstackgerritClark Boylan proposed a change to openstack-infra/zuul: Add a remote url override location  https://review.openstack.org/7605700:14
*** yamahata has quit IRC00:14
clarkbjeblair: jhesketh mordred fungi ^ that addresses the existing comments and adds docs for the feature.00:14
clarkbmordred: AaronGr ^ you should probably pull that down and test it00:14
clarkber I should've run tox locally too /me does that now00:15
dansmithArxCruz: plans for extending that to be a full run of tempest?00:15
fungiclarkb: my bad :(00:15
fungiFeb 26 00:13:33 elasticsearch02 puppet-agent[24532]: (/Stage[main]/Elasticsearch/File[/etc/elasticsearch/templates]/ensure) change from absent to directory failed: Cannot create /etc/elasticsearch/templates; parent directory /etc/elasticsearch does not exist00:15
ArxCruzkrtaylor: can answer that for you :D00:15
fungithe package probably created that before00:15
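The fix fungi pushes shortly after this (76399, "Puppet the /etc/elasticsearch directory") amounts to having puppet manage the parent directory itself instead of relying on the package to create it. A minimal sketch of that shape, not the actual openstack-infra/config manifest:

    file { '/etc/elasticsearch':
      ensure => directory,
      owner  => 'root',
      group  => 'root',
      mode   => '0755',
    }

    file { '/etc/elasticsearch/templates':
      ensure  => directory,
      require => File['/etc/elasticsearch'],
    }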
ArxCruzdansmith: ^00:15
clarkbfungi: oh right00:15
* fungi does a quick scrub for similar problems00:15
sdaguejeblair: cool on logstash queue!00:16
clarkboh good my change is tox clean00:16
*** pmathews has quit IRC00:16
krtaylordansmith, it is really a matter of increasing hardware, all in good time00:17
dansmithkrtaylor: ah, makes sense00:17
sdaguejeblair: any chance we could figure out how far behind we are in logstash instead of just queue depth?00:18
*** michchap has joined #openstack-infra00:18
*** lcostantino has quit IRC00:19
jeblairsdague: translate depth -> time?00:19
sdaguejeblair: yeh, basically00:20
clarkbsdague: you can figure that out with a simple es query00:20
clarkbjust look for the last console.html timestamp essentially00:20
clarkbwhich is probably more accurate than depth -> time00:20
jeblairsdague: oh, yeah, i guess there's two things related to time...00:20
sdagueclarkb: true00:20
jeblairsdague: clarkb described "what's the most recent entry", and now-then is how far behind we are...00:21
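The "simple es query" clarkb has in mind is just: fetch the newest indexed document and compare its timestamp with now. A hedged sketch, assuming the usual logstash @timestamp field and an illustrative host name:

    # newest document in the cluster; the gap between its @timestamp and "now"
    # is roughly how far behind indexing is
    curl -s 'http://elasticsearch.example.org:9200/_search' -d '{
      "size": 1,
      "sort": [ { "@timestamp": { "order": "desc" } } ]
    }'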
sdaguethat would be a nice thing to have a running graph for, as that would give us an idea on the impact00:21
jeblairsdague: but there's also "how long do we have to catch up?"00:21
jeblairsdague: which is a function of queue depth and a variable00:21
*** michchap has quit IRC00:21
sdaguewell, to an approximation, that's first derivative of delay00:22
openstackgerritMarton Kiss proposed a change to openstack-infra/config: Fix openstackid vhost override  https://review.openstack.org/7639700:22
*** michchap has joined #openstack-infra00:22
mrmartinfungi: I made a patch that fixes a symlink file removal in openstackid config, may I get an approval for that? https://review.openstack.org/#/c/76397/00:23
jeblairsdague: yes, also the 1st derivative of depth00:23
jeblair(btw, graphite does derivatives too i think)00:24
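Graphite's derivative functions can turn a queue-depth gauge into the catch-up rate jeblair and sdague are talking about. A sketch of such a target, with the metric path purely hypothetical:

    # first derivative of queue depth: positive while falling behind,
    # negative while the workers are draining the backlog
    derivative(stats.gauges.logstash.gearman.queue_depth)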
jeblairsdague: at any rate, queue depth should at least shed some visibility on it -- i mean, when things are working as planned, the queue shouldn't be more than a few hundred or maybe thousand.00:24
sdagueok, so 90k... bad?00:25
jeblairsdague: yes, very bad00:25
*** dolphm is now known as dolphm_50300:25
jeblairsdague: if memory serves, that translates to "many hours" for most of those time values00:25
fungimrmartin: it looks fine, passes checks and only touches openstackid, so i'm fine approving it to speed things along00:25
mrmartinfungi: if it passes through well, may I ask you for another rm -rf /srv/openstackid? I hope it will work now, it passed the local vagrant tests00:26
sdagueok, maybe I'll just get intuition on it. The units just don't mean anything to me yet00:26
mrmartinif it won't I'll go to sleep :)00:26
fungimrmartin: sure, i'll do that once the change is in place on the puppet master in a few minutes, though it's also very late where you are, so sleep is a good idea regardless00:27
openstackgerritA change was merged to openstack-infra/config: Fix openstackid vhost override  https://review.openstack.org/7639700:27
fungithat ^ should appear on the master in about 2 minutes, just after 00:30 utc00:28
mrmartinfungi: I want to see something that works00:28
fungimrmartin: i know how you feel00:28
mrmartin:)00:28
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Puppet the /etc/elasticsearch directory  https://review.openstack.org/7639900:29
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Remove old elasticsearch workers  https://review.openstack.org/7605100:29
fungiclarkb: i think https://review.openstack.org/76399 is all es02 and later need. i'll try that in a dev env00:30
*** matsuhashi has joined #openstack-infra00:30
clarkbk00:30
clarkblooks good from here00:31
fungimrmartin: deleted it again just now. looks like the next puppet run should be in a minute or two, around 00:34 utc00:31
mrmartinoh great00:32
clarkbjhesketh: https://review.openstack.org/#/c/68828/10/zuul/cmd/client.py has a neat use of babel I will need to remember that rtick00:32
*** CaptTofu has joined #openstack-infra00:34
jheskethclarkb: :-), that was me googling to not reinvent the wheel00:36
fungiclarkb: elasticsearch service didn't start on es02... if you have a moment, ssh in and gawk at dmesg with me whydontcha00:36
clarkbsure00:37
fungisome serious oom action up in here00:37
fungijava invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=000:37
clarkbhuh00:38
fungii see one java process already running... did it try to start twice?00:38
clarkboh maybe00:38
clarkbif it did that it would explain ooming since it mlockalls half the ram each time00:38
fungiyep00:38
fungioh, and we have no swap on these things00:39
clarkbwhcih is desirable00:39
fungithe launch script didn't do anything with our ephemeral disk00:39
clarkband one reason for taking half the memory up front00:39
clarkboh does launch script do the swap thing now?00:39
fungimaybe... checking00:40
clarkbthere are a lot of no routes to host in the elasticsearch log, I don't think it can talk to the cluster right now00:40
fungiwhat size swap do we want on these. just mkswap the entire ephemeral disk or want partitions00:41
fungi?00:41
clarkbI don't think we want swap00:41
fungino? okay, no worrying about it in that case00:42
*** chenxu_ has joined #openstack-infra00:42
clarkbby mlocking all of the memory upfront we keep es under control and have lots of memory for everything else00:42
clarkband swap is pretty fatal to es00:42
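The setup clarkb is describing (lock roughly half the box's RAM for the JVM up front so elasticsearch never touches swap) corresponds to settings like these on an elasticsearch 1.x-era install; the values are illustrative for a 30GB node:

    # /etc/default/elasticsearch
    ES_HEAP_SIZE=16g            # about half of RAM, reserved for the JVM heap
    MAX_LOCKED_MEMORY=unlimited # let the process mlock that heap

    # /etc/elasticsearch/elasticsearch.yml
    bootstrap.mlockall: true    # lock the heap in RAM so it is never swapped out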
*** mriedem has joined #openstack-infra00:42
mspreitzAh, I was using the command line for cherry-picking via "Anonymous HTTP" rather than SSH or HTTPS00:42
*** shashank_ has quit IRC00:42
fungiclarkb: worth noting the memory utilization on es0100:42
mrmartinfungi: thanks, it works now, I really going sleep :)00:43
mrmartinhave a nice day00:43
clarkbfungi: that looks about right, lots of cached data00:43
fungimrmartin: thanks again for working on it!00:43
clarkbfungi: on 02 I think we should stop the service and try starting it again cleanly00:43
fungiclarkb: oh, right, half of that is in cache. good00:43
clarkbI don't understand these no route to host errors00:43
fungidoing00:43
mspreitzWhen I try the command line for SSH, experimenting with .ssh/config shows that git fetch will succeed or fail depending on length of algorithm negotiation packet00:43
clarkboh I know00:44
clarkbpuppet isn't running on logstash worker1600:44
* clarkb fixes that00:44
*** mrmartin has quit IRC00:44
fungiahh, firewall update missing there ;)00:44
mspreitzLooks like https://github.com/net-ssh/net-ssh/issues/9300:44
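If this really is a middlebox choking on long key-exchange packets, one common workaround is to offer fewer algorithms so the client's KEXINIT packet stays short. A hypothetical ~/.ssh/config sketch; the host and the algorithm choices are purely illustrative and have to match what the server actually supports:

    Host review.openstack.org
        # shorter algorithm lists mean a smaller SSH_MSG_KEXINIT packet
        KexAlgorithms diffie-hellman-group14-sha1
        Ciphers aes128-ctr
        MACs hmac-sha1
        HostKeyAlgorithms ssh-rsa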
*** bhuvan_ has quit IRC00:44
clarkbfungi: and 02 has joined the cluster00:45
clarkbfungi: did you restart the service too?00:45
fungiclarkb: i did00:45
fungimspreitz: where is ruby net-ssh involved in the situation?00:46
*** dolphm_503 is now known as dolphm00:47
clarkbfungi: 02 looks happy to me00:48
fungiclarkb: any ideas on the double-start and oom there? should we merge 76399 and see if es03 hits that on the first go?00:49
clarkbfungi: best guess is it might have been leftovers from the previous run?00:49
clarkbthe puppet run that failed00:49
fungithat's all i've got :/00:49
fungii mean, the initscript really ought to be smart enough not to let that happen in the first place, right?00:50
fungiso weird00:50
clarkbI would hope so00:50
*** UtahDave has quit IRC00:51
fungiit makes a pidfile and does what looks like some fairly standard handling/checking for it00:51
clarkbis it possibly pushing that into es itself so it had to fork and do the mlock before it could notice there was another process going?00:52
*** amcrn has joined #openstack-infra00:52
fungiunless the initscript returns control before the pidfile exists, and the maintscripts and puppet service ensure raced one another00:52
fungiyeah, similar to what i'm thinking00:52
fungiif it's package install related, at least it's only a concern for first bootstrap00:53
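The race fungi suspects (the initscript handing control back before the pidfile exists, so the package's own start and puppet's "ensure running" both get through) has this general shape. A simplified sh sketch, not the actual elasticsearch initscript, with $PIDFILE/$DAEMON/$DAEMON_OPTS assumed to be set elsewhere:

    # the guard only helps if the pidfile already exists when the second
    # "start" arrives; if it is written late (e.g. after a slow JVM startup)
    # two starts can both get past this check
    start() {
        if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
            echo "elasticsearch already running"
            return 0
        fi
        start-stop-daemon --start --background \
            --pidfile "$PIDFILE" --make-pidfile \
            --exec "$DAEMON" -- $DAEMON_OPTS
    }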
*** bhuvan has joined #openstack-infra00:53
*** shashank_ has joined #openstack-infra00:53
fungioh, though i guess on puppeted es package upgrades we might also run into it00:53
clarkbI have always done those by hand because you don't really want version mismatches00:54
clarkbso coordinated upgrades all at once is a good thing after having stopped all the nodes00:55
fungiapproved 76399 which should merge prior to 01:00, at which point i'll enable puppet on es0300:55
clarkbthough I was thinking it wouldn't be terrible to orchestrate that if we could stop all the gearman workers, then do the massive upgrade00:55
clarkbbut salt and all that need to happen first00:55
fungii guess there's not enough redundancy to upgrade half the cluster offline and pivot00:56
clarkbright, we could do that by doubling the data we keep00:56
openstackgerritA change was merged to openstack-infra/config: Puppet the /etc/elasticsearch directory  https://review.openstack.org/7639900:56
clarkbwhich doesn't seem worth it00:56
clarkbat least not yet00:56
fungiyeah, it's already a metric crapton of resources just to run what we're running00:57
jeblairPIVOT! http://www.youtube.com/watch?v=n67RYI_0sc000:57
*** thuc has joined #openstack-infra00:57
*** Ryan_Lane has joined #openstack-infra00:57
Ryan_Lanehowdy00:57
fungiit's a wandering Ryan_Lane!00:57
jeblairRyan_Lane: long time no see!00:58
fungilong time no irc ;)00:58
Ryan_Laneheh00:58
*** atiwari has quit IRC00:58
Ryan_LaneI've been avoiding a lot of channels since at the new job00:58
*** lcheng has quit IRC00:58
*** bhuvan_ has joined #openstack-infra00:58
clarkbjeblair: ha00:58
fungiclearly you forgot you should not avoid *this* channel00:58
Ryan_LaneI like to align communities though, so I'd like to pick some brains :)00:58
Ryan_Lanehave you guys considered phabricator over gerrit?00:58
fungioh boy yes00:59
fungiwe did look it over at least00:59
Ryan_Laneit's come a *long* way since last time wikimedia looked at it00:59
Ryan_Laneto a point we're considering it again00:59
clarkbRyan_Lane: there seemed to be a philosophical difference when we looked at it00:59
jeblairRyan_Lane: the fact that it didn't actually do anything with git repos was kind of a big minus00:59
clarkbthere is no enforcement of when things can merge and so on, it's all manual behind the scenes00:59
clarkbthat00:59
Ryan_LaneI think it does now00:59
jeblairRyan_Lane: neato, maybe we should look again.01:00
fungithey've replaced hand-waving with mechanical hands?01:00
Ryan_Lanethat was my major hesitation too01:00
Ryan_Laneand lyft is considering it and I'd really like to use something that has zuul integration :D01:00
*** wenlock has quit IRC01:01
Ryan_Lanewikimedia will obviously need zuul integration too01:01
*** bhuvan has quit IRC01:01
reedRyan_Lane, good to see you back here01:02
clarkbooh I would definitely be willing to look at it again01:02
jeblairRyan_Lane: we'll definitely accept zuul patches for it (supporting non-gerrit is considered in-scope for zuul) even if we don't switch (ever or for a while)01:02
Ryan_Lanecool01:02
funginon-gerrit, much like non-jenkins, means non-java potentially01:02
Ryan_Lanereed: good to be back :)01:03
Ryan_LaneI'd really like to be able to share some CI effort at lyft as well, so I'll likely be around01:03
*** bhuvan has joined #openstack-infra01:03
jeblairRyan_Lane: so are you at the "gap analysis" phase or more like the "let's poke at this" phase?01:03
Ryan_Laneit's installed in our labs environment and people are poking around at it01:03
fungiphabricator is written in php... guess i'll still have that to hold against it01:04
Ryan_Laneat lyft we're at the gap analysis phase01:04
mspreitzfungi: read all the way down, the problem is not the software it is something in the network and many kinds of software can trigger it01:04
Ryan_Laneit's easier to deal with php than java :)01:04
jeblairfungi: yeah, we can still gripe about the language.  :)01:04
Ryan_LaneI wish it was python01:04
*** bhuvan_ has quit IRC01:04
clarkbwell it comes from facebook, we should be happy it speaks git01:04
fungijeblair: works for me. i'd actually get work done if i didn't have language preferences to complain about01:04
Ryan_Lanehahaha. indeed01:04
Ryan_Laneit's not maintained by facebook anymore, though01:04
Ryan_Lanehasn't been for a couple years I think01:05
reeduh, my danger alert went off01:05
lifelessRyan_Lane: what do they use now?01:05
reedfacebook, php, unmaintained ... sirens off01:06
clarkbRyan_Lane: right the maintainer left to do phabricator full time01:06
clarkblifeless: they use hg01:06
lifelessclarkb: I meant other than phabrifcator01:06
jeblairRyan_Lane: i'd love to hear about what you find01:07
clarkbjhesketh: https://review.openstack.org/#/c/68828/10/zuul/cmd/client.py I am doing proper review of that now. lines 165 to 167, does that mean last worker suffix value wins?01:07
clarkbjhesketh: is that intentional?01:07
clarkbjhesketh: oh wait I grok now nevermind01:07
clarkblifeless: that I do not know01:07
fungimspreitz: oh, interesting. this takes us back to the misbehaving (or malicious!) middleboxes theory01:08
jheskethclarkb: it's to access sub-dicts using a period in the column name01:08
*** sarob has quit IRC01:08
*** bhuvan_ has joined #openstack-infra01:08
clarkbRyan_Lane: there was a thread active today in the gerrit ml about gerrit critics01:08
clarkbGreg kh is one of them01:08
clarkbwas interesting to read01:08
*** sarob has joined #openstack-infra01:08
Ryan_Laneyeah, but phabricator felll into the same issues01:09
Ryan_Lanein the article01:09
reedthe website has some funny text :) http://phabricator.org/01:09
*** bhuvan has quit IRC01:09
Ryan_Lanesince it uses the same workflow01:09
Ryan_Lanejeblair: I'll let you know.01:09
Ryan_LaneI brought this up because I like to keep toolchains close :)01:10
reedWritten in PHP so literally anyone can contribute, even if they have no idea how to program.01:10
Ryan_Lanehahaha01:10
reedtheir words not mine :)01:10
fungireed: that's always been how i characterize php anyway01:10
reedYou can make text bold. Bold text wins arguments!01:10
*** oubiwann_ has joined #openstack-infra01:10
reedthey're funny :)01:10
*** gokrokve_ has quit IRC01:10
anteayaand all caps gets you escorted to the door01:11
fungianteaya: and don't get me started on ansi blink escapes01:11
anteayano01:11
clarkbjhesketh: I am feeling dense. how does worker_name which is in the default list match the worker.name key?01:11
anteayano don't get started on anything blinky01:11
clarkbjhesketh: wouldn't that fail to match removing worker_name from fields?01:12
jheskethclarkb: ah, it wouldn't.. that'd be an error from a previous patchset01:12
clarkbjhesketh: so I should leave a legit comment on that?01:12
fungiclarkb: so with the fix in place, es03 tried to double-start and oom'd too01:12
jheskethyep01:12
jheskethclarkb: so it was worker_name, and the job had 'worker_name', 'worker_ips' etc in the dict, but now it goes job['worker']['names']01:13
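The pattern jhesketh is describing (a column name like "worker.name" split on the period and walked into the nested job dict) looks roughly like this as a standalone sketch; this is an illustration, not the zuul client code itself:

    def dotted_lookup(job, column):
        """Resolve a dotted column name such as 'worker.name' against nested dicts."""
        value = job
        for key in column.split('.'):
            value = value.get(key, '')
            if not isinstance(value, dict):
                break
        return value

    job = {'name': 'gate-tempest-dsvm-full', 'worker': {'name': 'worker-01'}}
    print(dotted_lookup(job, 'worker.name'))  # -> worker-01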
clarkbfungi: weird I have no idea why that would happen01:13
clarkbfungi: oh you know what01:13
*** sarob has quit IRC01:13
clarkbfungi: is it double forking?01:13
clarkbfungi: we might want to set HEAP_SIZE to 29g if so01:13
fungiit... might be? i dunno, java and all01:13
fungias long as it stays inside its jvm, it can fork itself all it likes01:14
clarkbjhesketh: I will finish reviewing this and leave comments01:14
jheskethcheers01:14
fungibut yeah, i suppose leaving it a little room for briefly doubling its memory footprint, while somewhat insane from a memory utilization perspective, is certainly doable01:15
fungiclarkb: though if that were really the case, i'd expect to hit it any time we started the service, not just the first time around01:15
*** zhiyan_ is now known as zhiyan01:16
clarkbfungi: right and on the old nodes we set heap size to 16GB on 30GB nodes so should hit it every time01:16
clarkbso I really don't know what is going on01:16
* fungi blames puppet some more, for good measure01:16
clarkbjesusaurus: ^ have you seen that with es?01:16
clarkbI will be going to an es + logstash thing early march in seattle maybe they will know there if we don't get it sorted before then01:16
sdagueRyan_Lane: so I'm looking at the phabricator site. I wonder how it would deal with openstack review load. They look like they are at 8k reviews total in their system, and I think that's currently about 5 weeks for us.01:17
fungiinterestingly, at start it immediately gobbles as much cache in ram as it uses resident01:18
fungiis that just a javaism, or elasticsearch-specific behavior?01:18
clarkbI thin es specific01:18
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Add storyboard and openstackid-dev ssl certs  https://review.openstack.org/7640701:18
Ryan_Lanesdague: it's used at facebook still01:19
Ryan_Laneand I'd imagine they have a lot more01:19
*** reed has quit IRC01:19
sdaguesure, I wonder what their workflows look like01:19
Ryan_Laneyeah, me too01:20
fungisdague: they post reviews on their wall01:20
sdaguehah01:20
* jeblair likes this01:20
* fungi has learnt some faceplant lingo01:20
clarkbfungi: and just like our gerrit everyone can see them. there is no privacy :)01:20
Ryan_LaneI've heard their deployer actually has a dislike button01:20
jeblairactually i don't like it but there's no button for that01:20
sdaguejeblair: you really think the internet needs more drive by hate?01:20
Ryan_Lane(and I'm not kidding ;) )01:21
jeblairsdague: only being able to like things is double plus good01:21
*** fifieldt has joined #openstack-infra01:21
fungisome source code needs more drive-by hate01:21
clarkbjhesketh: and the formatJSON() move was just copy pasta for the most part?01:21
*** ryanpetrello has joined #openstack-infra01:21
clarkbI wish gerrit would let me align diffs manually sometimes01:21
* anteaya thinks sdague makes a good point01:21
clarkbsdague: downvotes everywhere01:22
sdagueI save that for gerrit :)01:22
fungiclarkb is cmdrtaco in disguise01:22
sdagueRyan_Lane: it would be really interesting to see if you can make a sane integration here. Because as much as I have issues with gerrit, I also look at what01:22
jheskethclarkb: it's similar but a lot more information has been added into the jobs list to better reflect the data model01:22
sdague's in the public phabricator, and my gerrit workflow, and can't imagine getting that interface to handle my workflow01:23
Ryan_LaneI think pre-merge workflow is similar01:23
Ryan_Lanewhat's different about it?01:23
sdaguethe review view01:23
Ryan_Laneah01:24
Ryan_Laneanything specific missing?01:24
mspreitzNew question: how do I make git tell me where my FETCH_HEAD is pointing?01:25
fungimspreitz: git show01:25
jeblairgit show FETCH_HEAD01:25
fungithat ^01:25
mspreitz$ git show FETCH_HEAD fatal: ambiguous argument 'FETCH_HEAD': unknown revision or path not in the working tree.01:26
fungior you can git checkout FETCH_HEAD and then just git show01:26
fungimspreitz: is that before or after a fetch?01:26
mspreitzafter a fetch01:26
fungistrange. works for me when i try it01:27
sdagueRyan_Lane: so views like this I use a lot - http://i.imgur.com/aRERaXd.png01:27
*** zhiwei has joined #openstack-infra01:27
jeblaircat .git/FETCH_HEAD01:27
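Putting the two suggestions together: FETCH_HEAD is only (re)written by a successful fetch, which turns out to be exactly what bit mspreitz below. A quick sketch:

    git fetch origin           # must succeed; a failed fetch leaves FETCH_HEAD stale or missing
    git show -s FETCH_HEAD     # the commit the last fetch left behind (-s: suppress the diff)
    cat .git/FETCH_HEAD        # raw record: sha, ref and remote URL for each fetched ref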
sdagueit being screen dense, having some idea on current votes, and who voted for stuff01:28
clarkbjhesketh: the worker_name thing was the only thing I found. looks good otherwise01:28
*** oubiwann_ has quit IRC01:28
Ryan_Lanesdague: ahhh, ok01:28
jheskeththanks clarkb, I'll reroll01:28
clarkbI need to eat dinner shortly but can rereview if you push up a fix01:28
mspreitzI have git version 1.8.3.2 on MacOSX01:28
Ryan_Laneinteresting01:28
Ryan_Lanewikimedia doesn't really have that use case as much01:29
fungiclarkb: should we proceed puppeting and restarting services on the other half of the new cluster members, or try more options i can't seem to think of?01:29
mspreitzsorry, I was reading wrong terminal01:30
clarkbfungi: can you try doing a puppet agent --test in order to get verbose output and see if puppet is kicking it more than once?01:30
mspreitzI have git version 1.8.3.2 on Ubuntu saucy01:30
clarkbfungi: I just realized there may be a subscribe somewhere that does something that triggers it twice01:30
*** tomhe has quit IRC01:30
*** mrodden has quit IRC01:30
sdagueRyan_Lane: how many reviews does a person tend to do over some unit time at wikimedia?01:30
fungiclarkb: sure, i'll do that and record the output into a (redacted if necessary) paste01:30
clarkbfungi: and puppet being at fault would explain why it didn't happen before, considering puppet order isn't guaranteed and is determined by the catalog, which we just changed01:30
Ryan_Laneno clue. they track that poorly01:30
*** tjones has quit IRC01:30
mspreitzgit checkout FETCH_HEAD also craps out for me01:30
mspreitzlike this: error: pathspec 'FETCH_HEAD' did not match any file(s) known to git01:31
*** zul has quit IRC01:31
sdagueso http://stackalytics.com/?release=icehouse&metric=marks&project_type=openstack&module=&company=&user_id=01:31
sdaguewe'll have a dozen people in the openstack community with > 1000 reviews during icehouse on current course and speed01:31
* Ryan_Lane nods01:31
fungiyarg01:32
*** apevec has quit IRC01:32
*** bhuvan has joined #openstack-infra01:32
openstackgerritJoshua Hesketh proposed a change to openstack-infra/zuul: Add support to list running jobs to zuul client  https://review.openstack.org/6882801:32
fungihow am i in the top 10 contributions by engineers?01:32
fungithough kudos to ajaeger!01:32
anteayais jenkins in there?01:32
jheskethclarkb: ^ (whenever :-))01:32
*** lnxnut has quit IRC01:33
sdagueyeh, looks like 13 in havana, so realistically I bet we end up with 15 in icehouse (I bet anyone with > 750 now will hit 1000 by end of cycle)01:33
sdagueanteaya: no, this is reviews :)01:33
anteayasdague: ah01:33
Ryan_LaneI wonder if it's a matter of having reasonable queries01:33
fungii need to find another 75 patches to review asap ;)01:34
clarkbjeblair: if jhesketh's change still looks good to you we can probably double-tap it and get it merged01:34
anteayafungi: like you have to look01:34
sdagueRyan_Lane: reasonable queries is definitely key, honestly gerrit only partially works that way. Having the queries as get strings so you can bookmark them is goodness as well.01:34
fungianteaya: good point. my present watched backlog is at least triple that i think01:34
Ryan_Lanelooks like phabricator can save queries01:35
clarkbfungi: you jeblair and I are01:35
anteayano doubt01:35
mspreitzOh, I see.  I was discounting the "git fetch" that failed.  If I try again right after a "git fetch" that succeeds then "git show FETCH_HEAD" works01:35
sdaguemore fun stat01:35
fungimspreitz: that does make sense01:35
*** jergerber has quit IRC01:35
sdaguehit the last page on stackalytics01:35
*** Guest63393 has quit IRC01:35
*** Guest63393 has joined #openstack-infra01:35
fungiclarkb: you're beating me by at least several reviews though01:35
*** Guest63393 is now known as persia01:35
sdague1011 - the number of individuals that have reviewed code in icehouse01:35
*** thuc has quit IRC01:35
*** krotscheck has quit IRC01:35
*** bhuvan_ has quit IRC01:35
jesusaurusclarkb: fungi: im confused, es03 tried to start two separate nodes on the same host?01:36
*** thuc has joined #openstack-infra01:36
Ryan_Lanesdague: yep. lots of reviewers01:36
fungijesusaurus: it did indeed. still digging into it01:36
fungijesusaurus: it's also repeatable01:36
*** bhuvan has quit IRC01:36
jesusaurusfungi: what are the steps to reproduce?01:36
clarkbjesusaurus: ya, not sure if it's an init script/es fault or puppet double-tapping; weird01:37
fungijesusaurus: use our puppet, and build a new cluster member?01:37
Ryan_Laneoh, wow. phabricator's task manager works across projects: https://secure.phabricator.com/T308901:37
Ryan_Laneisn't that the reason launchpad is still being used for issues?01:37
anteayawe are looking for the twelfth reviewer01:38
* Ryan_Lane hates launchpad01:38
fungiRyan_Lane: that was one of our major requirements for storyboard, so good to know we're not insane for wanting it01:38
Ryan_Laneit's a feature of jira too01:38
Ryan_Laneso it's not insane at all01:38
jeblairRyan_Lane: mordred and ttx can probably provide more details, but there was still some fundamental misalignments between phabricator and what we need01:39
jeblairfor bugs01:39
jeblairand blueprints01:39
Ryan_Laneyep, not surprising01:39
*** sabari has quit IRC01:39
Ryan_LaneI'm just going through the features now and seeing some nice things :)01:39
jeblairRyan_Lane: yeah, we looked hard at it for bugs01:39
*** gokrokve has joined #openstack-infra01:39
*** thuc has quit IRC01:40
jesusaurusclarkb: to answer your question, there was one lone incident where i found a second node running on 9201/9301, but i simply killed the second process and never saw any other strange behaviour01:40
fungiclarkb: jesusaurus: http://paste.openstack.org/show/6963301:41
*** yamahata has joined #openstack-infra01:41
*** yamahata has quit IRC01:41
*** nosnos has joined #openstack-infra01:41
*** yamahata has joined #openstack-infra01:42
fungiclarkb: jesusaurus: and that puppet agent --test did indeed wind up trying to start a second jvm, which then got oom-killed01:42
fungiand following a service stop/start, it joined the cluster normally and successfully01:42
clarkbso new theory. package install starts it then service double taps01:44
jesusaurusfungi: before that --test run es04 already had elasticsearch installed and running?01:44
fungiclarkb: jesusaurus: staring at that log, i'm back to my original supposition, which is that the initscript returns control to the calling process before the pidfile is created01:44
jesusaurusclarkb: oh, thats a possibility01:44
clarkbfungi: can you try a test change that doesnt manage the service?01:44
fungijesusaurus: before that the elasticsearch puppet module was not applied at all01:44
fungiclarkb: doing now01:45
clarkbfungi: that seems to be a general direction we are headed in anyways01:45
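The change being tested here (and merged later as 76412, "Don't ensure elasticsearch service is running") keeps the package managed but drops the running state from the service, so puppet can no longer race the package's own start. A rough sketch of that shape, not the literal diff:

    package { 'elasticsearch':
      ensure => present,   # the debian package starts the daemon itself on install
    }

    service { 'elasticsearch':
      enable  => true,     # still enabled at boot
      require => Package['elasticsearch'],
      # deliberately no "ensure => running" here
    }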
*** esker has joined #openstack-infra01:45
*** apevec has joined #openstack-infra01:46
sdagueI do like the fact that phabricator does all the diffs in one page01:46
*** apevec has quit IRC01:46
*** apevec has joined #openstack-infra01:46
Ryan_Lanesdague: same. I've wanted that in gerrit since forever01:47
Ryan_Lanewell, phabricator has stolen a couple hours of my day now. back to the grind for me ;)01:48
*** amcrn has quit IRC01:49
clarkbwoot /me has hibiki 12 and code reviews to do now01:49
jeblairsdague: you know someone wrote a patch to gerrit for that, but didn't sign the cla, so it was never merged.01:49
jeblair(CLAs are evil)01:49
clarkbjeblair: did you catch jhesketh's updated patch? the delta between ps 10 and 11 is small01:49
jeblairclarkb: is it urgent?01:50
jheskethnope01:50
jhesketh(the real delta is one character, just rebased while I had the chance)01:50
clarkbjhesketh: no01:50
clarkbgah01:50
clarkbjeblair: no not urgent at all01:50
*** ryanpetrello has quit IRC01:52
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Don't ensure elasticsearch service is running  https://review.openstack.org/7641201:53
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Remove old elasticsearch workers  https://review.openstack.org/7605101:53
fungiclarkb: as suspected, https://review.openstack.org/76412 there seems to solve it01:53
fungiso my first theory was probably correct01:53
clarkbya I think so01:53
sdaguejeblair: sigh01:53
fungiclarkb: if we merge that, i'll puppet es06 from master with it, make sure that is sane, and call it a day01:54
sdagueI wonder if we are ever going to get rid of our CLA, or if that will be the board fight to end all board fights01:54
fungisdague: the tc should fire the board01:54
fungi;)01:54
clarkbfungi: I +2'd it. Looks relatively safe. I did not however check to see if there were subscriptions to the service or notifies to it01:54
clarkbsdague: you could fork openstack and have the fork be CLA free01:55
fungiclarkb: well, i did apply the same change from a puppet dev env on es05 and it didn't error. i think a subscribe/refresh would have counted as a reference to a nonexistent service and errored in that case01:55
clarkbfungi: maybe01:56
* clarkb looks closer01:56
*** ryanpetrello has joined #openstack-infra01:56
fungiclarkb: if there's any mention of a service, it's not in the init.pp anyway (which is the only puppetfile in the module)01:57
clarkbfungi: git grep "'elasticsearch'" doesn't show any incidences either01:57
clarkbso I think your change is fine as is01:57
*** khyati has quit IRC01:57
fungiinto the drink with it, i say01:57
openstackgerritA change was merged to openstack-infra/config: Don't ensure elasticsearch service is running  https://review.openstack.org/7641201:59
*** gyee has quit IRC02:00
openstackgerritKhai Do proposed a change to openstack-infra/gerritlib: add getVersion and listPlugin commands and update replicate command  https://review.openstack.org/6976802:02
*** ryanpetrello has quit IRC02:02
*** gyee has joined #openstack-infra02:03
*** gyee has quit IRC02:04
fungitomorrow i'll try to remember to get the new elasticsearch cluster members and the puppetdb server into cacti too02:04
clarkbzaro: ^ failed pep802:05
*** gyee has joined #openstack-infra02:05
openstackgerritKhai Do proposed a change to openstack-infra/gerritlib: add getVersion and listPlugin commands and update replicate command  https://review.openstack.org/6976802:05
clarkbzaro: whats with ascii_names?02:06
*** mgagne has quit IRC02:08
*** krotscheck has joined #openstack-infra02:08
*** chenxu_ has quit IRC02:09
fungiclarkb: with that last patch merged, enabling puppet agent on es06 caused it to install, start and join the cluster successfully02:10
fungii'll check back in on it in the morning to see how shard replication is coming along02:10
clarkbperfect thanks again02:11
clarkbI am watching a movie now02:11
fungies01 has a dozen shards at this point, and es02 has a couple as well02:11
fungiclarkb: great--i'm going to do the same. have a good evening02:12
clarkbyou too02:12
*** arborism has joined #openstack-infra02:12
*** arborism is now known as amcrn02:12
*** Sukhdev has joined #openstack-infra02:15
*** changbl has joined #openstack-infra02:19
*** mspreitz has left #openstack-infra02:25
*** yaguang has joined #openstack-infra02:25
*** esker has quit IRC02:26
*** SumitNaiksatam has quit IRC02:28
*** thuc has joined #openstack-infra02:29
openstackgerritA change was merged to openstack/requirements: Bump python-savannaclient to 0.5.0  https://review.openstack.org/7635702:29
*** thuc_ has joined #openstack-infra02:29
*** thomasem has joined #openstack-infra02:29
openstackgerritKhai Do proposed a change to openstack-infra/gerritlib: add getVersion and listPlugin commands and update replicate command  https://review.openstack.org/6976802:31
*** amcrn has quit IRC02:32
*** thuc has quit IRC02:33
*** mgagne has joined #openstack-infra02:35
*** apevec has quit IRC02:36
zaroclarkb: ^02:36
*** mgagne has quit IRC02:38
zaroclarkb: caught another 3 episodes of game of thrones on plane.  pretty good.02:38
*** shashank_ has quit IRC02:40
*** Ryan_Lane has quit IRC02:48
*** thomasem has quit IRC02:50
*** david-lyle has joined #openstack-infra02:52
*** yaguang has quit IRC02:52
*** Ryan_Lane has joined #openstack-infra02:55
*** yaguang has joined #openstack-infra02:56
*** sabari has joined #openstack-infra02:57
*** mspreitz has joined #openstack-infra03:00
mspreitzDoes anybody here test with Python 2.6 on ubuntu?03:00
mspreitzIs anybody here?03:01
fifieldtprobably03:01
pleia2it's tough because none of the versions of Ubuntu that can a) run modern openstack can b) easily run python 2.6, the last release that met this criteria was 11.10/oneiric (now EOL)03:04
pleia2which is why we use centos to test 2.6 in the infra now03:04
mspreitzAh, thanks03:05
mspreitzSo developers normally use Ubuntu 12.04 and test with only Python 2.7, right?03:06
pleia2we also have 3.3 packages for 12.04 that we test some things on, but yeah, 2.7 is the status quo03:06
mspreitzthanks03:07
pleia2sure03:07
*** zhiwei has quit IRC03:11
*** Sukhdev has quit IRC03:12
*** amotoki_ has quit IRC03:13
*** morganfainberg is now known as morganfainberg_Z03:19
*** krotscheck has quit IRC03:19
*** Ryan_Lane has quit IRC03:23
*** jcooley_ has quit IRC03:23
ianwmspreitz: python2.6 is used on RHEL603:24
mspreitzBut most developers just use Ubuntu 12.04, right?03:25
ianwmspreitz: hard to say, fedora is popular too03:26
*** jcooley_ has joined #openstack-infra03:27
*** sabari has quit IRC03:27
mspreitzDoes Fedora have Python 2.6?03:27
*** cody-somerville has joined #openstack-infra03:29
*** NikitaKonovalov_ is now known as NikitaKonovalov03:29
ianwno, RHEL/Centos 6 is the only python2.6 environment03:30
*** CaptTofu has quit IRC03:35
*** pcrews has quit IRC03:35
*** nati_ueno has quit IRC03:37
*** matsuhashi has quit IRC03:37
*** NikitaKonovalov is now known as NikitaKonovalov_03:39
*** rcleere has joined #openstack-infra03:40
*** rfolco has quit IRC03:41
*** thuc_ has quit IRC03:41
*** thuc has joined #openstack-infra03:42
*** khyati has joined #openstack-infra03:51
*** gyee has quit IRC03:52
*** mriedem has quit IRC03:55
*** markwash has quit IRC03:56
*** harlowja is now known as harlowja_away03:57
*** sarob has joined #openstack-infra04:00
*** sarob has quit IRC04:08
*** ArxCruz has quit IRC04:12
*** ArxCruz has joined #openstack-infra04:19
*** ArxCruz has quit IRC04:20
*** sabari has joined #openstack-infra04:23
*** Ryan_Lane has joined #openstack-infra04:24
*** wenlock has joined #openstack-infra04:27
*** jcooley_ has quit IRC04:27
*** ArxCruz has joined #openstack-infra04:31
*** ArxCruz has quit IRC04:31
*** matsuhashi has joined #openstack-infra04:32
jheskethjeblair, clarkb: when you guys get a chance, could you please respond to my comments on this: https://review.openstack.org/#/c/73461/04:33
*** ArxCruz has joined #openstack-infra04:34
*** ArxCruz has quit IRC04:35
*** lcheng has joined #openstack-infra04:37
clarkbdone04:38
*** sarob has joined #openstack-infra04:40
*** sarob has quit IRC04:43
*** masayukig has joined #openstack-infra04:45
*** ArxCruz has joined #openstack-infra04:50
*** ArxCruz has quit IRC04:50
*** ryanpetrello has joined #openstack-infra04:52
*** jcooley_ has joined #openstack-infra04:55
*** boris-42 has quit IRC04:57
*** boris-42 has joined #openstack-infra04:59
*** jcooley_ has quit IRC05:00
*** NikitaKonovalov_ is now known as NikitaKonovalov05:00
*** masayukig has quit IRC05:01
*** jcooley_ has joined #openstack-infra05:01
*** ArxCruz has joined #openstack-infra05:04
*** ArxCruz has quit IRC05:05
*** NikitaKonovalov is now known as NikitaKonovalov_05:06
jheskeththanks clarkb05:06
*** chandan_kumar has joined #openstack-infra05:07
openstackgerritJoshua Hesketh proposed a change to openstack-infra/config: Set up opportunistic bare-metal postgres db  https://review.openstack.org/7346105:09
*** coolsvap has joined #openstack-infra05:11
*** jcooley_ has quit IRC05:12
*** vogxn has joined #openstack-infra05:12
*** ryanpetrello has quit IRC05:18
*** jcooley_ has joined #openstack-infra05:18
*** sarob has joined #openstack-infra05:18
*** ArxCruz has joined #openstack-infra05:19
*** ArxCruz has quit IRC05:20
*** jcooley_ has quit IRC05:21
*** vogxn has quit IRC05:24
*** khyati has quit IRC05:28
*** gokrokve has quit IRC05:31
*** gokrokve has joined #openstack-infra05:31
*** talluri has joined #openstack-infra05:33
*** ArxCruz has joined #openstack-infra05:34
*** ArxCruz has quit IRC05:35
*** gokrokve has quit IRC05:36
*** CaptTofu has joined #openstack-infra05:36
*** nicedice has quit IRC05:36
*** gokrokve has joined #openstack-infra05:38
*** CaptTofu has quit IRC05:41
*** dolphm is now known as dolphm_50305:41
*** dkliban has quit IRC05:43
*** sarob has quit IRC05:47
*** sdake_ has quit IRC05:47
*** ArxCruz has joined #openstack-infra05:49
*** ArxCruz has quit IRC05:50
*** nati_ueno has joined #openstack-infra05:50
*** talluri has quit IRC05:53
*** vogxn has joined #openstack-infra05:54
*** zhiyan is now known as zhiyan_05:56
*** vogxn has quit IRC06:00
*** talluri has joined #openstack-infra06:03
*** yolanda_ has joined #openstack-infra06:07
*** talluri has quit IRC06:09
*** vkozhukalov_ has joined #openstack-infra06:10
*** dolphm_503 is now known as dolphm06:11
openstackgerritA change was merged to openstack-infra/devstack-gate: Add oslo.vmware  https://review.openstack.org/7555506:12
*** vkozhukalov_ has quit IRC06:16
*** jcooley_ has joined #openstack-infra06:19
*** gokrokve has quit IRC06:20
*** gokrokve has joined #openstack-infra06:20
*** wenlock has quit IRC06:20
*** gokrokve has quit IRC06:21
*** dolphm is now known as dolphm_50306:23
*** talluri has joined #openstack-infra06:23
*** rwsu has quit IRC06:27
*** rlandy has joined #openstack-infra06:32
*** jcooley_ has quit IRC06:37
*** wchrisj has quit IRC06:41
jheskethany grenade devs in here able to help me with a problem?06:42
*** lcheng has quit IRC06:42
mordredRyan_Lane: yeah - it doesn't do bug tasks though (which is why we're still on launchpad and writing storyboard) - and its integrated code review makes it hard for us to use it integrated with gerrit06:46
mordredRyan_Lane: but yea - I found it very tempting when looking at it and I tried to make it make sense to use it06:46
*** pblaho has joined #openstack-infra06:47
*** denis_makogon has joined #openstack-infra06:47
Ryan_Lanemordred: well, the idea would be to move away from gerrit completely06:47
Ryan_Lanethe code review in phabricator looks quite a bit better06:47
*** talluri_ has joined #openstack-infra06:47
*** rcarrillocruz has joined #openstack-infra06:48
*** flaper87|afk is now known as flaper8706:48
*** sarob has joined #openstack-infra06:48
*** rcarrillocruz1 has quit IRC06:50
*** saju_m has joined #openstack-infra06:50
clarkbphabricator doesn't host repositories, what does that mean?06:50
clarkber I guess it is beta. Does that mean the merge in phabricator is beta?06:50
*** talluri has quit IRC06:50
*** ildikov_ has quit IRC06:51
*** valentinbud has joined #openstack-infra06:51
*** sarob has quit IRC06:53
*** thuc has quit IRC06:55
*** thuc has joined #openstack-infra06:55
*** dolphm_503 is now known as dolphm06:56
*** mspreitz has quit IRC06:58
clarkbfinally figured out where phabricator self hosts, there is a land revision to hosted repository button06:59
clarkbso it does seem to have grown that possibly as a beta feature06:59
*** thuc has quit IRC07:00
Ryan_Laneclarkb: yep07:05
Ryan_LaneI found that out recently07:05
Ryan_Lanewhich is what made me reconsider it07:05
*** dolphm is now known as dolphm_50307:05
*** rcarrillocruz1 has joined #openstack-infra07:06
*** sdake_ has joined #openstack-infra07:06
*** rcarrillocruz has quit IRC07:07
*** rcarrillocruz has joined #openstack-infra07:10
*** rcarrillocruz1 has quit IRC07:12
*** jcooley_ has joined #openstack-infra07:15
*** sld has joined #openstack-infra07:20
*** gokrokve has joined #openstack-infra07:21
*** jcooley_ has quit IRC07:21
*** nati_ueno has quit IRC07:24
*** jlibosva has joined #openstack-infra07:25
*** yolanda_ has quit IRC07:26
*** gokrokve has quit IRC07:26
*** NikitaKonovalov_ is now known as NikitaKonovalov07:27
*** skraynev_afk is now known as skraynev07:32
*** sabari has quit IRC07:35
*** CaptTofu has joined #openstack-infra07:36
*** jcoufal has joined #openstack-infra07:38
*** CaptTofu has quit IRC07:41
*** sld has left #openstack-infra07:42
*** vkozhukalov_ has joined #openstack-infra07:48
*** sarob has joined #openstack-infra07:50
*** sarob has quit IRC07:54
*** luqas has joined #openstack-infra07:56
*** dolphm_503 is now known as dolphm07:56
*** shashank_ has joined #openstack-infra07:57
*** e0ne has joined #openstack-infra07:58
*** afazekas has joined #openstack-infra08:05
*** luqas has quit IRC08:06
*** dolphm is now known as dolphm_50308:06
*** luqas has joined #openstack-infra08:08
*** ildikov_ has joined #openstack-infra08:16
mordredRyan_Lane: sure. but that would be a bit of a large undertaking ...08:18
Ryan_Lanetotally agree :)08:19
mordredRyan_Lane: we'd need to add an event stream so that zuul could do its thing08:19
Ryan_Laneyep. wikimedia is considering phabricator again08:19
Ryan_Laneso some of this would be needed by them too08:19
mordredah. they don't want to use storyboard from us?08:19
Ryan_Lanethey use bugzilla08:19
Ryan_Lanethey don't really need a bug tracker08:19
mordredoh08:19
Ryan_Lanethey want to replace gerrit08:19
mordredso why would they want to move to phabricator?08:20
mordredah08:20
mordredwell, that would make me sad08:20
*** jgallard has joined #openstack-infra08:20
Ryan_Lanegerrit has a pretty unresponsive upstream08:20
Ryan_Laneits UX is crap08:20
mordredwe have some thoughts on UX for gerrit, fwiw08:20
Ryan_Laneits UX is consistently getting worse08:20
mordredwhich don't involve upstream patches08:20
Ryan_Lanehave you seen the new change page?08:20
mordredyes08:20
Ryan_Lanethe only thing gerrit does really well is handle the repos08:21
mordredright. but that's the hard part08:21
Ryan_Lanewell, and its gating model08:21
mordreda UI can be added08:21
Ryan_Lanephabricator now handles repos08:21
mordredon top of its APIs08:21
Ryan_Lanegerrit's public APIs kind of suck :(08:21
mordredthe ssh api is actually quite amazing08:22
Ryan_Laneyeah, the ssh api isn't bad, but I wouldn't build a UI on it08:22
mordredin that it allows the effects of event/callback without the receiving end needing to be public08:22
*** gokrokve has joined #openstack-infra08:22
mordredlike, from the event stream perspective - this makes the "run a 3rd party testing" thing work really well08:22
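The ssh API mordred is praising is what zuul and third-party CI systems sit on top of; both of the commands below are standard gerrit ssh commands (the host, account and query are just examples):

    # long-lived JSON event stream: patchset-created, comment-added, change-merged, ...
    ssh -p 29418 username@review.openstack.org gerrit stream-events

    # one-shot JSON query of changes; an external test system can poll or react to this
    ssh -p 29418 username@review.openstack.org gerrit query --format=JSON status:open project:openstack-infra/zuul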
Ryan_LaneI honestly think moving away from gerrit is such a major undertaking that wikimedia may not do it08:22
Ryan_Lanebut at some point something's got to change08:23
mordredyeah. it's a BIG amount of work08:23
mordredyeah08:23
mordredwell, we're with you on that :)08:23
Ryan_Lanegerrit either needs to improve, or it's got to go08:23
mordredI'm basically being me right now because we agree and would love to work together on that08:23
Ryan_Laneheh08:23
mordredfrom my perspective, I'd like to replace the UI first, which would mean that gerrit would be down to being the repo engine08:24
Ryan_LaneI think wikimedia is likely to investigate it for a while08:24
*** denis_makogon has quit IRC08:24
mordredthen replacing the repo engine would be much easier08:24
Ryan_Laneyeah, but at that point, why not use phabricator?08:24
mordredbecause we can do it piecemeal08:24
mordredlike we've been working towards replacing jenkins08:24
mordredeven while adding jenkins features08:24
mordredbecause we've got a large production deploy08:24
Ryan_Laneah, so you'd switch out the ui, then you'd switch to phabricator?08:25
mordredwe can't just replace a large portion all in one go08:25
mordredno08:25
Ryan_Lanethen you'd switch the ui again?08:25
mordredI wouldn't switch to phabricator at that point08:25
Ryan_Lanewhat would manage the repos?08:25
Ryan_LaneI think phabricator is doing an amazing job with the UI work08:25
mordredthat's what I'm saying - at that point, what we'd need is an api service that can manage repos08:25
mordredsure they are - but they're also a big thing adn I don't need most of what they're doing08:26
mordredit's too hard08:26
mordredif we were starting from scratch today, sure08:26
* Ryan_Lane nods08:26
mordredbut they're pitching a whole worldview08:26
mordredwhich is great08:26
Ryan_Laneso basically write something new from scratch?08:26
mordredit's what we're doing too08:26
mordredyeah08:26
mordredall we'd need is an api server08:26
Ryan_Lanethat scares me ;)08:26
mordredwe're good at writing those around here08:26
Ryan_Lanean api server that would manage repos?08:27
mordredyeah08:27
*** gokrokve has quit IRC08:27
Ryan_Lanewell, it would surely be nice to have a python alternative08:27
mordredor, a scale-out system that can manage repos across multiple hosts - gerrit is too monolithic08:27
Ryan_LaneI just worry that it would take ages to reach feature parity with gerrit or phabricator08:27
mordredI agree08:27
mordredthe question is - which features do we actually need08:27
Ryan_Laneah. so... I've been thinking about scaling git across systems...08:28
mordredand/or which ones can we do away with if we're not working on a generalized solution that needs a bunch of customizability08:28
mordredyeah08:28
mordred?08:28
Ryan_Lanebut I'll just wait till I have something implemented till I discuss it :)08:28
mordredok. I'm in favor of thoughts on scaling git08:28
mordredRyan_Lane: have you been following the recent zuul merger work?08:29
Ryan_Lanenope08:29
Ryan_Lanebut I have a deployment system using salt that can easily be extended to scale out git08:29
Ryan_Laneand already have a game plan for making it work for that08:29
Ryan_Lanewhat does zuul merger do?08:29
* Ryan_Lane is very interested in zuul, but can't use it08:29
openstackgerritFlavio Percoco proposed a change to openstack-infra/config: Don't enable oslo-incubator py33 for stable/havana  https://review.openstack.org/7646408:33
*** sergmelikyan has quit IRC08:33
mordredRyan_Lane: it offloads the merging workload from zuul itself into a set of gearman workers08:36
avishayanyone know why jenkins has checked this patch like 20 times in a row?  https://review.openstack.org/#/c/49755/08:36
Ryan_Laneah. neat.08:37
mordredRyan_Lane: we reached the point where the work of performing merges was a scaling bottleneck08:37
Ryan_Lanethat's a good bottleneck to have ;)08:37
mordrednot around feature freeze time it's not :)08:37
*** protux has joined #openstack-infra08:37
Ryan_Lane:D08:37
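Roughly what the zuul-merger split looks like in a zuul v2-era zuul.conf: dedicated merger daemons register with the same gearman server the scheduler uses, and each extra box running zuul-merger adds merge capacity. A hedged sketch; hostnames and paths are illustrative and the option names should be checked against the zuul docs for the version in use:

    [gearman]
    server=zuul.example.org

    [merger]
    git_dir=/var/lib/zuul/git
    zuul_url=http://zuul-merger01.example.org/p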
*** andreaf has joined #openstack-infra08:39
*** talluri_ has quit IRC08:47
*** talluri has joined #openstack-infra08:48
*** ociuhandu has joined #openstack-infra08:50
*** ociuhandu has quit IRC08:52
*** talluri has quit IRC08:52
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard-webclient: Fix API launch  https://review.openstack.org/7646708:53
*** thuc has joined #openstack-infra08:56
*** dolphm_503 is now known as dolphm08:57
*** protux has quit IRC08:58
openstackgerritsahid proposed a change to openstack-infra/config: Adds new project Warm  https://review.openstack.org/7624708:59
*** vogxn has joined #openstack-infra09:01
*** jcooley_ has joined #openstack-infra09:03
*** jpich has joined #openstack-infra09:04
*** rossella_s has joined #openstack-infra09:07
*** shashank_ has quit IRC09:09
*** dolphm is now known as dolphm_50309:09
*** jcooley_ has quit IRC09:09
*** katyafervent_awa is now known as katyafervent09:11
lifelesshello infra! https://bugs.launchpad.net/tripleo/+bug/1284054 - the ticket was updated but not set to fixcommitted.09:12
lifelessclarkb: ^ food for ? tomorrow morning:)09:12
ttxlifeless: Partial-Bug: does comment but not change status09:13
lifelessoh nvm, 'partial bug'09:13
lifelessttx: hah, junix - thanks09:13
*** yassine has joined #openstack-infra09:14
*** derekh has joined #openstack-infra09:14
*** marun has quit IRC09:15
lifelessbah, jinx09:16
SpamapShrm...09:16
SpamapSdoes tox 1.7.0 require significant changes to tox.ini?09:17
SpamapStox.ConfigError: ConfigError: substitution key 'posargs' not found09:17
SpamapSappears so09:17
*** talluri has joined #openstack-infra09:18
*** thuc has quit IRC09:18
*** jcooley_ has joined #openstack-infra09:20
*** luqas has quit IRC09:22
*** locke105 has quit IRC09:22
*** hashar has joined #openstack-infra09:22
*** hashar has quit IRC09:22
*** gokrokve has joined #openstack-infra09:22
*** talluri has quit IRC09:23
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard-webclient: Fix API launch  https://review.openstack.org/7646709:24
*** jcooley_ has quit IRC09:25
*** dmakogon_ is now known as denis_makogon09:25
*** rcarrillocruz1 has joined #openstack-infra09:25
*** hashar has joined #openstack-infra09:25
*** fbo_away is now known as fbo09:26
*** gokrokve has quit IRC09:27
*** rcarrillocruz has quit IRC09:27
lifelessSpamapS: 1.7.0 is broken09:30
lifelessSpamapS: AIUI09:30
*** talluri has joined #openstack-infra09:31
SpamapSof course09:32
SpamapSpycon is coming09:32
SpamapShave to release new features09:32
rcarrillocruz1heh09:32
*** valentinbud has quit IRC09:32
*** talluri has quit IRC09:33
*** talluri has joined #openstack-infra09:33
openstackgerritMartin Mágr proposed a change to openstack-infra/config: Watch also havana branch for packstack  https://review.openstack.org/7620609:34
*** alexpilotti has joined #openstack-infra09:34
*** luqas has joined #openstack-infra09:35
*** jamielennox has quit IRC09:36
*** CaptTofu has joined #openstack-infra09:37
*** talluri has quit IRC09:37
*** yaguang has quit IRC09:38
*** jamielennox has joined #openstack-infra09:39
*** CaptTofu has quit IRC09:42
*** ianw has quit IRC09:43
*** NikitaKonovalov is now known as NikitaKonovalov_09:45
*** johnthetubaguy has joined #openstack-infra09:47
*** ianw has joined #openstack-infra09:47
*** bogdando has quit IRC09:50
*** valentinbud has joined #openstack-infra09:50
*** rcarrillocruz has joined #openstack-infra09:51
*** bogdando has joined #openstack-infra09:52
*** rcarrillocruz1 has quit IRC09:52
*** jp_at_hp has joined #openstack-infra09:52
*** jcooley_ has joined #openstack-infra09:59
*** dolphm_503 is now known as dolphm10:00
*** rcarrillocruz1 has joined #openstack-infra10:00
*** rcarrillocruz has quit IRC10:03
*** shardy is now known as shardy_afk10:03
*** SergeyLukjanov has quit IRC10:04
*** SergeyLukjanov has joined #openstack-infra10:04
*** davidhadas has quit IRC10:06
*** ociuhandu has joined #openstack-infra10:06
*** jcooley_ has quit IRC10:06
*** jlibosva has quit IRC10:08
*** jlibosva has joined #openstack-infra10:08
*** dolphm is now known as dolphm_50310:09
*** che-arne has joined #openstack-infra10:12
*** johnthetubaguy has quit IRC10:12
*** johnthetubaguy has joined #openstack-infra10:12
*** luqas has quit IRC10:15
*** mkoderer has joined #openstack-infra10:18
*** david-lyle has quit IRC10:19
*** mrda is now known as mrda_away10:21
*** dizquierdo has joined #openstack-infra10:25
*** NikitaKonovalov_ is now known as NikitaKonovalov10:30
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard-webclient: Add token header to requests  https://review.openstack.org/7596110:32
*** rcarrillocruz has joined #openstack-infra10:39
*** davidhadas has joined #openstack-infra10:41
*** rcarrillocruz1 has quit IRC10:42
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard-webclient: Auth support  https://review.openstack.org/7321910:47
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard-webclient: Add token header to requests  https://review.openstack.org/7596110:47
openstackgerritFlavio Percoco proposed a change to openstack-infra/devstack-gate: Archive config files along with logs  https://review.openstack.org/6934410:51
*** luqas has joined #openstack-infra10:52
*** thuc has joined #openstack-infra10:55
*** dolphm_503 is now known as dolphm11:00
*** thuc has quit IRC11:00
*** jaypipes has quit IRC11:03
*** ianw has quit IRC11:03
*** jamielennox has quit IRC11:04
*** jamielennox has joined #openstack-infra11:05
*** ianw has joined #openstack-infra11:05
*** rossella_s has quit IRC11:06
*** dolphm is now known as dolphm_50311:10
*** Xurong has quit IRC11:12
*** mfer has joined #openstack-infra11:12
*** ociuhandu has quit IRC11:19
*** jhesketh_ has quit IRC11:22
*** gokrokve has joined #openstack-infra11:22
*** jerryz has quit IRC11:22
*** rcarrillocruz1 has joined #openstack-infra11:22
katyaferventHi everybody!11:25
*** rcarrillocruz has quit IRC11:25
*** gokrokve has quit IRC11:27
katyaferventDoes anyone have experience in renaming repository? Will history be saved?11:27
SergeyLukjanovkatyafervent, hey11:30
SergeyLukjanovkatyafervent, are you talking about your murano repos at stackforge?11:30
katyaferventSergeyLukjanov, yes)11:30
katyaferventwe want to start repository reorganization but want history to be kept11:31
SergeyLukjanovkatyafervent, review history will be saved, but renaming requires gerrit shutdown, so, it'll be postponed and batched with other renaming requests11:31
SergeyLukjanovI think the next renaming batch will be for renaming savanna11:32
SergeyLukjanovkatyafervent, if you'd like to merge some repos and keep their review history then I think that it's impossible11:32
*** jhesketh has quit IRC11:32
ruhekatyafervent: another option would be to create new repo named "murano". merge all the other repos to the new repo, then remove/deprecate old repos11:33
SergeyLukjanovruhe, yup11:33
*** rcarrillocruz has joined #openstack-infra11:34
SergeyLukjanovkatyafervent, if you'd like to keep git history the only way is to manually merge git trees with the help of git-filter-branch11:34
katyaferventand what about commit history?11:35
katyaferventOh, ok11:35
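The manual route SergeyLukjanov mentions is: rewrite the history of the repo being absorbed so its files live under a subdirectory, then fetch and merge that rewritten history into the combined repo. A sketch based on the recipe in the git-filter-branch man page; the repo and directory names are illustrative, not the actual murano layout:

    # inside a clone of the repo being absorbed: move every path under old-repo/
    git filter-branch --index-filter \
        'git ls-files -s | sed "s-\t\"*-&old-repo/-" |
            GIT_INDEX_FILE=$GIT_INDEX_FILE.new \
            git update-index --index-info &&
         mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"' HEAD

    # inside a clone of the combined repo: pull the rewritten history in
    git remote add old-repo ../old-repo
    git fetch old-repo
    git merge old-repo/master   # git of this era merges unrelated histories without extra flags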
*** rcarrillocruz1 has quit IRC11:36
*** BobBallAway is now known as BobBall11:38
*** CaptTofu has joined #openstack-infra11:38
openstackgerritsahid proposed a change to openstack-infra/config: new-project: warm  https://review.openstack.org/7624711:40
*** sergmelikyan has joined #openstack-infra11:40
*** CaptTofu has quit IRC11:43
*** matsuhashi has quit IRC11:46
*** jcooley_ has joined #openstack-infra11:48
*** yamahata has quit IRC11:50
*** salv-orlando has quit IRC11:52
*** salv-orlando has joined #openstack-infra11:52
*** jgallard has quit IRC11:53
*** sergmelikyan has quit IRC11:54
*** jcooley_ has quit IRC11:54
*** sergmelikyan has joined #openstack-infra11:57
*** mfer has quit IRC11:57
openstackgerritIlya Sviridov proposed a change to openstack-infra/config: Added new MagnetoDB project to Stackforge  https://review.openstack.org/7130811:59
*** rfolco has joined #openstack-infra12:01
*** dolphm_503 is now known as dolphm12:01
*** CaptTofu has joined #openstack-infra12:02
sdagueSergeyLukjanov: so the rename thing, why do we need to shut down gerrit for that again? Why don't we just import it as a new repo?12:02
SergeyLukjanovsdague, in case of renaming we update gerrit's db to keep all reviews12:02
SergeyLukjanovsdague, morning12:02
*** coolsvap has quit IRC12:02
sdagueSergeyLukjanov: ah, right.12:03
*** luqas has quit IRC12:05
*** luqas has joined #openstack-infra12:07
*** sergmelikyan has quit IRC12:08
*** dolphm is now known as dolphm_50312:11
*** rcarrillocruz1 has joined #openstack-infra12:12
*** rcarrillocruz has quit IRC12:14
dimssdague, Good morning. I have 2 +2's on https://review.openstack.org/#/c/75539/ but had to rebase, can you please approve when you get a chance?12:14
sdaguedone12:15
dimsthanks a ton12:17
lifelesserm12:17
lifelessI'm clearly up way too late12:17
dimslifeless, good night :)12:18
*** gokrokve has joined #openstack-infra12:22
*** cody-somerville has quit IRC12:24
*** hashar has quit IRC12:24
*** gokrokve has quit IRC12:27
*** tsufiev___ has joined #openstack-infra12:30
*** zhiyan_ is now known as zhiyan12:32
*** zul has joined #openstack-infra12:32
*** rossella_s has joined #openstack-infra12:36
*** luqas has quit IRC12:36
*** valentinbud has quit IRC12:42
*** CaptTofu has quit IRC12:43
*** CaptTofu has joined #openstack-infra12:43
*** mbacchi has joined #openstack-infra12:54
*** dolphm_503 is now known as dolphm12:57
*** lcostantino has joined #openstack-infra12:58
*** beagles has quit IRC12:58
*** dstanek has joined #openstack-infra13:00
*** smarcet has joined #openstack-infra13:02
*** vogxn has quit IRC13:03
*** johnthetubaguy has quit IRC13:05
*** johnthetubaguy1 has joined #openstack-infra13:05
*** marun has joined #openstack-infra13:08
*** yamahata has joined #openstack-infra13:09
*** rfolco has quit IRC13:13
*** valentinbud has joined #openstack-infra13:19
*** NikitaKonovalov is now known as NikitaKonovalov_13:20
anteayaavishay: thank you for asking about https://review.openstack.org/#/c/49755/ I am also curious as to why Jenkins was caught in a perpetual testing loop there.13:21
*** e0ne has quit IRC13:21
avishayanteaya: i hope it's over :)13:21
*** e0ne has joined #openstack-infra13:21
*** gokrokve has joined #openstack-infra13:22
anteayaI was just checking, I don't see it in the check queue now.13:22
anteayaAbandoning it might have stopped the cycle.13:22
anteayaThe -2 after 2 months of inactivity appears to have been the trigger.13:24
*** johnthetubaguy1 is now known as johnthetubaguy13:25
*** mrmartin has joined #openstack-infra13:26
openstackgerritJulien Vey proposed a change to openstack/requirements: Add Docker requirement  https://review.openstack.org/7653513:26
*** gokrokve has quit IRC13:27
*** sdake has quit IRC13:28
*** sdake has joined #openstack-infra13:28
*** rfolco has joined #openstack-infra13:29
*** ociuhandu has joined #openstack-infra13:29
*** hashar has joined #openstack-infra13:31
*** rfolco has quit IRC13:34
*** ArxCruz has joined #openstack-infra13:35
*** yamahata has quit IRC13:36
*** jcooley_ has joined #openstack-infra13:36
*** yamahata has joined #openstack-infra13:37
*** dprince has joined #openstack-infra13:37
*** NikitaKonovalov_ is now known as NikitaKonovalov13:38
*** jgallard has joined #openstack-infra13:40
*** sandywalsh has quit IRC13:40
*** thuc has joined #openstack-infra13:40
*** nosnos has quit IRC13:41
*** zul has quit IRC13:42
*** lcheng has joined #openstack-infra13:43
*** zul has joined #openstack-infra13:44
*** dkranz has joined #openstack-infra13:44
avishayanteaya: strange, but ok :)13:45
*** thuc has quit IRC13:45
*** ArxCruz has quit IRC13:46
anteayaavishay: when fungi is around he may have more13:47
anteayaat least I hope so13:47
*** dhellman_ has joined #openstack-infra13:49
*** dhellman_ has quit IRC13:49
avishayanteaya: ok cool, just thought i'd let you guys know13:49
*** rfolco has joined #openstack-infra13:49
anteayaglad you did thank you13:50
avishaysure13:51
*** shardy_afk is now known as shardy13:52
*** sandywalsh has joined #openstack-infra13:53
*** mfer has joined #openstack-infra13:53
*** rfolco has quit IRC13:54
*** mestery has quit IRC13:54
*** thuc_ has joined #openstack-infra13:55
fungisomething critical?13:56
*** weshay has joined #openstack-infra13:59
*** ArxCruz has joined #openstack-infra13:59
anteayano14:00
anteayanothing critical14:00
anteayatake your time, it can wait14:00
*** bknudson has quit IRC14:00
*** zehicle_at_dell has quit IRC14:03
fungiSpamapS: lifeless: https://bitbucket.org/hpk42/tox/pull-request/85/fix-command-expansion-and-parsing/activity14:03
*** ryanpetrello has joined #openstack-infra14:08
ArxCruzfungi: around ?14:10
fungiArxCruz: yeah, just checking some logs and coming up to speed for the morning... what's up?14:11
ArxCruzfungi: http://paste.openstack.org/show/69859/ I have this in my layout.yaml, I though it would report only success jobs, but it's reporting failures too14:11
ArxCruzDid I something wrong ?14:11
fungiArxCruz: do you have an example of a change it commented on?14:12
*** jeckersb_gone is now known as jeckersb14:12
ArxCruzfungi: https://review.openstack.org/#/c/7641114:12
*** lcheng has quit IRC14:14
fungiArxCruz: i think the "14:14
*** mestery has joined #openstack-infra14:14
fungier14:14
fungithe "Build succeeded." there implies that zuul reported it as a success14:15
fungibecause all the jobs were non-voting\14:15
*** dolphm is now known as dolphm_50314:15
ArxCruzfungi: so, there's no way to not report?14:15
fungiArxCruz: i think if you changed one or more of those jobs to be a voting job, the way your layout.yaml is configured zuul will add a comment (but not a vote) on success and will do nothing on failure14:16
*** jcooley_ has quit IRC14:16
ArxCruzfungi: let me save this conversation :P14:16
ArxCruzI will do that :)14:17
fungiright now it's commenting because you've configured it to leave a comment on success, and with all jobs non-voting, any run will be interpreted by zuul as a success even if the jobs themselves failed (because they're non-voting jobs)14:17
ArxCruzgot it14:17
fungi"non-voting" for a job merely indicates whether it contributes to the completion result, but the actual verified: 1 or -1 is what causes zuul itself to leave a vote14:18
ArxCruzok, so, if I mark one job as voting, will it report only if both jobs succeed, or will it report if the voting one succeeds?14:18
*** julim has joined #openstack-infra14:19
fungithe way you have it configured right now, any one voting job which fails will cause your zuul not to comment14:19
*** vponomaryov has joined #openstack-infra14:19
fungieven if one or more other voting jobs succeed14:19
fungibasically as long as at least one voting job fails, zuul considers the overall result to be a failure14:20
fungiif all voting jobs succeed, zuul considers that to be a success14:20
*** bknudson has joined #openstack-infra14:20
fungiif no jobs are voting, zuul interprets that as a success as well14:20
*** mriedem has joined #openstack-infra14:20
fungihopefully that makes sense14:21
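A toy Python illustration (not zuul's actual code) of the aggregation fungi just described: the overall result is a success unless some voting job failed, and a run with no voting jobs at all therefore counts as a success even when every job failed.

    def overall_result(jobs):
        """jobs is an iterable of (voting, succeeded) pairs."""
        return all(succeeded for voting, succeeded in jobs if voting)

    # Both jobs non-voting and both failed -> still "Build succeeded." overall.
    print(overall_result([(False, False), (False, False)]))  # True
    # One voting job fails -> overall failure, so a success-only reporter stays silent.
    print(overall_result([(True, False), (True, True)]))     # False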
fungiArxCruz: obligatory documentation link... http://ci.openstack.org/zuul/zuul.html#pipelines14:22
*** gokrokve has joined #openstack-infra14:22
*** dims has quit IRC14:22
ArxCruzfungi: okay, so I need to turn on voting for both jobs :)14:22
ArxCruzfungi: thanks, I read that before, didn't pay attention to the voting/non-voting part14:23
fungithat sounds like what you want, yes14:23
ArxCruzfungi: thanks!14:24
*** dolphm_503 is now known as dolphm14:25
fungiArxCruz: reviewing the documentation there, the "voting" parameter for jobs could definitely stand to have a more detailed description. feel free to contribute a patch to the documentation in the openstack-infra/zuul project or open a low-hanging-fruit bug against the zuul project in launchpad14:25
ArxCruzfungi: ok, cool14:25
anteayafungi: as an fyi I need to go to town sometime this week for a few hours and since jim will be away Thurs and Fri, I thought I would go today14:25
anteayaI'll probably head out around noon14:26
fungianteaya: okay, sounds good14:26
anteayathanks14:26
fungiArxCruz: we always love documentation improvements, and the people best positioned to point out where documentation is lacking are those who are encountering these systems for the first time14:26
*** gokrokve has quit IRC14:27
ArxCruzfungi: ;) okay I will work on that14:27
anteayaand my brain is starting to feel like my brain again, I'm so happy14:27
openstackgerritZiad Sawalha proposed a change to openstack-infra/config: Add satori as a new project  https://review.openstack.org/7366714:29
fungiArxCruz: so yeah, if you want to propose a patch for that, we can easily help correct any misstatements or improve the wording14:30
fungionce it's in review14:30
*** dolphm is now known as dolphm_50314:30
*** wchrisj has joined #openstack-infra14:30
*** rfolco has joined #openstack-infra14:31
*** e0ne_ has joined #openstack-infra14:32
*** thuc_ has quit IRC14:32
*** thuc has joined #openstack-infra14:33
*** dims has joined #openstack-infra14:33
openstackgerritSahdev Zala proposed a change to openstack-infra/config: New StackForge project heat-translator  https://review.openstack.org/7598814:34
*** esker has joined #openstack-infra14:34
vponomaryovjeblair, fungi, mordred: hello, I have little bugfix - https://review.openstack.org/#/c/76115/ , could you review it please?14:36
*** e0ne has quit IRC14:36
*** thuc has quit IRC14:37
*** dkliban has joined #openstack-infra14:37
*** prad_ has joined #openstack-infra14:37
anteayafungi: so the item I had was Jenkins seemed to be caught in a check testing loop on this patch: https://review.openstack.org/#/c/49755/14:37
anteayawas wondering what you thought would be the fix for this situation14:37
fungianteaya: yeah, i saw--i've been looking at the zuul logs trying to figure out why it thought it should recheck that14:38
anteayak thanks14:38
*** jpeeler has quit IRC14:40
fungii think we need the logs to start including more of the parameters zuul considers when deciding whether to enqueue a change14:41
anteayahmmmm14:43
*** alexpilotti_ has joined #openstack-infra14:44
anteayaso nothing useful14:44
anteayashould i file a bug?14:45
*** yamahata has quit IRC14:46
*** yamahata has joined #openstack-infra14:46
*** alexpilotti has quit IRC14:47
*** alexpilotti_ is now known as alexpilotti14:47
*** freyes has joined #openstack-infra14:47
fungii'm still digging14:47
anteayak14:47
*** alexpilotti has quit IRC14:51
*** krotscheck has joined #openstack-infra14:52
*** jcoufal has quit IRC14:52
fungiit definitely seems to have been responding to its own comment events... http://paste.openstack.org/show/69865/14:55
fungiwhen it gets gerrit information on the change, we should probably have debug log entries mentioning some of the things it looks for like age of its own most recent previous comment, existing votes on the change and so on14:57
*** wenlock has joined #openstack-infra14:59
anteayahmmmm14:59
anteayaI wonder if the fact that the check tests returned VRIF:1 sent it back to check15:00
anteayabut why would it send it back to check if that is the case?15:00
anteayayes, more logs would help15:01
*** e0ne has joined #openstack-infra15:01
*** saju_m has quit IRC15:02
*** gokrokve has joined #openstack-infra15:02
fungibecause while this could be caused by gerrit misreporting or zuul misinterpreting the results15:02
fungiwe don't really know what they were15:02
fungianteaya: if you want to open a bug against zuul on launchpad and paste that bit of logs in along with the observed behavior and link to the change, that would be a wonderful help15:03
anteayaI can do that15:04
*** e0ne_ has quit IRC15:05
*** jeckersb is now known as jeckersb_gone15:05
*** wenlock has quit IRC15:06
*** rossella_s has quit IRC15:09
*** jgrimm has joined #openstack-infra15:10
*** jeckersb_gone is now known as jeckersb15:11
SergeyLukjanovfungi, anteaya, could you, please, take a look on https://review.openstack.org/#/c/76312 when have some time15:12
*** pmathews has joined #openstack-infra15:13
*** luqas has joined #openstack-infra15:15
*** pblaho` has joined #openstack-infra15:15
*** pblaho has quit IRC15:16
anteayafungi: https://bugs.launchpad.net/zuul/+bug/128521015:16
anteayahi SergeyLukjanov15:16
*** Sukhdev has joined #openstack-infra15:16
fungianteaya: thanks!15:16
* anteaya clicks on 7631215:16
anteayanp15:16
SergeyLukjanovanteaya, hey, morning15:16
SergeyLukjanovanteaya, thx15:17
anteayafungi: thanks for teeing it up15:17
fungijeblair: you also may be interested in zuul bug https://launchpad.net/bugs/1285210 when you're up and about15:17
SergeyLukjanovI've shared my idea one (several?) days ago in the channel, so, that's the first patch15:17
*** rossella_s has joined #openstack-infra15:17
SergeyLukjanovand there are three more on the road15:17
*** rwsu has joined #openstack-infra15:18
*** dkliban has quit IRC15:18
*** dkliban has joined #openstack-infra15:18
anteayaSergeyLukjanov: silly question time15:19
*** jpeeler has joined #openstack-infra15:19
*** jpeeler has quit IRC15:19
*** jpeeler has joined #openstack-infra15:19
anteayathis job is in the check queue and the post queue15:20
anteayawhat happens if it fails in post?15:20
SergeyLukjanovanteaya, in check and post?15:20
SergeyLukjanovanteaya, sounds strange15:20
anteayanext silly question: it is in check and post and called gate-project-requirements15:20
mordredanteaya: nice catch15:20
anteayahttps://review.openstack.org/#/c/76312/2/modules/openstack_project/files/zuul/layout.yaml15:20
anteayaline 25615:21
SergeyLukjanovshame on me, it should be gate15:21
SergeyLukjanov:(15:21
anteayathank you15:21
anteayaI was wondering what the purpose was if we skipped the gate15:21
anteayamordred: thanks, taking my new brain for a test drive15:22
anteayaso far it seems to be working15:22
SergeyLukjanovanteaya, thanks for catching this15:23
*** mrodden has joined #openstack-infra15:23
openstackgerritSergey Lukjanov proposed a change to openstack-infra/config: Extract check/gate req jobs to check-requirements  https://review.openstack.org/7631215:23
anteayaSergeyLukjanov: np15:24
*** chuck__ has joined #openstack-infra15:24
SergeyLukjanovbtw the next step is to extract tarballs + pypi-release and then to extract common integrated jobs15:25
*** johnthetubaguy has quit IRC15:26
openstackgerritDavid Ripton proposed a change to openstack-dev/hacking: Add H308 rule.  https://review.openstack.org/7655915:26
*** pblaho` is now known as pblaho15:27
*** dcramer__ has joined #openstack-infra15:30
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard-webclient: Add token header to requests  https://review.openstack.org/7596115:30
*** changbl has quit IRC15:30
*** dolphm_503 is now known as dolphm15:31
*** VINOD has joined #openstack-infra15:32
*** pdmars has joined #openstack-infra15:33
*** rossella_s has quit IRC15:34
openstackgerritDavid Ripton proposed a change to openstack-dev/hacking: Add H308 rule  https://review.openstack.org/7655915:34
mordredSergeyLukjanov: +215:34
SergeyLukjanovmordred, thx15:35
fungiwow--yeah, i totally missed that it said "post" and not "gate" (my brain simply corrected what my eyeballs read i guess)15:35
fungigreat catch, anteaya15:35
*** mgagne has joined #openstack-infra15:36
anteayafungi: thanks15:36
anteayaSergeyLukjanov: no requirements for oslo.vmware?15:36
SergeyLukjanovanteaya, they have only non-voting job atm15:37
SergeyLukjanovand we prefer to not add non-voting jobs to the gate pipeline - so15:38
openstackgerritA change was merged to openstack-infra/config: Extract check/gate req jobs to check-requirements  https://review.openstack.org/7631215:38
anteayaokay15:38
anteayaSergeyLukjanov: what about pbr?15:41
*** dcramer__ has quit IRC15:41
anteayait is a different requirements job15:41
anteayarequirements integration15:42
*** zhiyan is now known as zhiyan_15:42
*** mgagne has quit IRC15:43
SergeyLukjanovanteaya, it's not synced now with requirements15:43
SergeyLukjanovAFAIR there was an intention to sync it and set up requirements jobs15:43
anteayaah okay15:43
SergeyLukjanovI've already fixed the sync script in the os/req repo to not break pbr15:43
anteayayeah the job is named differently15:43
anteayayou'd be the only thing that doesn't15:44
SergeyLukjanovbecause previously it was breaking pbr by overriding setup.py15:44
anteayaah15:44
VINODHi anteaya...15:44
*** lnxnut has joined #openstack-infra15:44
SergeyLukjanovanteaya, I mean https://review.openstack.org/#/c/73431/15:45
VINODCould you able to check the logs of https://review.openstack.org/#/c/75967/15:45
*** chuck__ has quit IRC15:45
anteayaSergeyLukjanov: climate is the only stackforge project with a requirements job?15:45
*** zhiyan_ is now known as zhiyan15:45
anteayaVINOD: hi, are you also in #openstack-nova channel?15:46
VINODyes15:46
SergeyLukjanovanteaya, savanna was too, when it was on stackforge ;)15:46
anteayaVINOD: if not can you join please15:46
anteayaVINOD: great, stand by I will be with you in a minute15:46
anteayaSergeyLukjanov: okay15:46
fungiVINOD: that's an extremely large change (over 3000 lines). it's unlikely you'll find anyone actually reviewing it unless you break it up into smaller, more manageable patches15:46
VINODthanks15:46
VINODfungi: ok15:46
*** wenlock has joined #openstack-infra15:47
fungigenerally once a change gets past about 200-300 lines, people tend not to review it because it's difficult to reason about its implications15:47
VINODok15:47
anteayaoh and rally and solum and openstacksdk15:47
*** prad_ has quit IRC15:48
anteayaSergeyLukjanov: so much for my review, by the time I was done it was merged15:48
anteayabut it was fun to look at it15:48
*** zhiyan is now known as zhiyan_15:48
fungianteaya: if you discover something's wrong with a change after it's merged, it can always be solved with a new change, so no review is really a waste15:49
SergeyLukjanovyup, agreed15:49
SergeyLukjanovconfig-compare-xml Jenkins XML output is unchanged. in 1m 35s (non-voting)15:49
SergeyLukjanovfungi, mordred, does it mean that such review is correct?15:49
openstackgerritA change was merged to openstack-infra/config: Fix manila-tempest-job to use exported env var  https://review.openstack.org/7611515:49
*** e0ne has quit IRC15:50
SergeyLukjanovthat nothing changed15:50
*** e0ne has joined #openstack-infra15:50
jeblairSergeyLukjanov: sometimes we want the xml to change, and sometimes we don't...15:50
*** mrmartin has quit IRC15:50
fungiSergeyLukjanov: if the change didn't modify any jjb configuration for existing job definitions, then there should be no difference reported by that test15:50
SergeyLukjanovI mean for the change like mine15:51
*** yolanda_ has joined #openstack-infra15:51
fungiSergeyLukjanov: you only modified zuul configuration15:51
SergeyLukjanovwhen we expect only a jobs-to-templates extraction w/o adding new ones15:51
SergeyLukjanovoh, yup15:51
SergeyLukjanovit's a zuul layout change15:51
SergeyLukjanovneed to have some coffee15:51
anteayafungi: true15:51
jeblairSergeyLukjanov: oh, for the zuul change -- here's something you might want to do....15:51
fungiSergeyLukjanov: the zuul layout test will, however, do some sanity checks on changes like yours15:51
jeblairSergeyLukjanov: if you look at the output of the layout test, it prints zuul's final configuration15:52
*** chandan_kumar has quit IRC15:52
jeblairSergeyLukjanov: you might want to compare old and new output to make sure that it's the same15:52
SergeyLukjanovjeblair, awesome tip15:52
SergeyLukjanovI'll investigate how to make the same job for checking layout unchanged15:52
SergeyLukjanovto be sure that my job2tmpl extractions will not miss jobs15:52
jeblairSergeyLukjanov: probably not a bad idea15:52
*** dizquierdo has quit IRC15:53
*** dprince has quit IRC15:53
fungiSergeyLukjanov: that's also an easy one to run locally with tox on master and your topic branch15:53
SergeyLukjanovfungi, yup15:53
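A rough sketch of the workflow jeblair and fungi suggest, assuming a hypothetical tox environment name and topic branch (check what the config repo actually uses): capture the layout test's printed configuration on master and on the topic branch, then diff the two.

    import difflib
    import subprocess

    def layout_output(ref):
        """Check out ref and capture the layout test's stdout."""
        subprocess.check_call(["git", "checkout", ref])
        result = subprocess.run(["tox", "-e", "zuul"],   # hypothetical env name
                                capture_output=True, text=True)
        return result.stdout.splitlines()

    before = layout_output("master")
    after = layout_output("extract-req-jobs")            # hypothetical topic branch
    print("\n".join(difflib.unified_diff(before, after,
                                         fromfile="master", tofile="topic",
                                         lineterm="")))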
*** dcramer__ has joined #openstack-infra15:53
*** dizquierdo has joined #openstack-infra15:53
*** dprince has joined #openstack-infra15:53
SergeyLukjanovsorry, gtg, time to go home ;)15:54
fungiSergeyLukjanov: have a great evening!15:54
SergeyLukjanovthx!15:54
anteayathanks SergeyLukjanov15:56
*** chandan_kumar has joined #openstack-infra15:56
*** mgagne has joined #openstack-infra15:57
*** david-lyle has joined #openstack-infra15:57
*** rlandy has quit IRC15:58
*** rcleere has quit IRC15:58
*** thedodd has joined #openstack-infra15:59
dansmithArxCruz: https://review.openstack.org/#/c/70533/15:59
dansmithArxCruz: build failed (others succeeded) and the link to the test report is 40415:59
ArxCruzdansmith: we are working on that :)16:00
*** jlibosva has quit IRC16:00
dansmithArxCruz: okay, just noticed, wanted to make sure16:00
ArxCruzdansmith: yeah, if possible, just ignore :)16:00
dansmithArxCruz: I cannot ignore. I will be unable to do any more work until you fix it.16:01
*** amotoki_ has joined #openstack-infra16:01
ArxCruzdansmith: seriously ? O.o16:02
*** david_lyle_ has joined #openstack-infra16:02
dansmithArxCruz: no, of course not :)16:02
* ArxCruz didn't know dansmith has a sense of humor16:03
dansmithpfft16:03
ArxCruzmaurosr: did you see this? ^16:03
maurosrhahah16:03
krtaylorhehheh16:04
*** julim_ has joined #openstack-infra16:05
*** julim has quit IRC16:05
*** dcramer__ has quit IRC16:05
anteayaArxCruz: well actually asking someone to ignore your output is not the best approach16:05
*** david-lyle has quit IRC16:06
ArxCruzanteaya: I mean, the logs being 404 was an error in our swift script16:06
anteayaArxCruz: can you make an internal change at least identifying you are aware of this issue while you fix it16:06
ArxCruzwe already fixed it, but I'm not sure if we will be able to upload the logs, since we don't have them16:06
anteayaArxCruz: can you add that to your comment message?16:06
ArxCruzanteaya: for the next?16:07
dansmithanteaya: it was their first day of reporting, nobody expects it to be perfect yet16:07
ArxCruzanteaya: it's already fixed :)16:07
dansmithanteaya: I was just making sure they had noticed16:07
anteayadansmith: okay16:07
anteayaArxCruz: okay16:07
anteayawe are just asking CI accounts to address issues for devs rather than continue to send out incorrect info while they are fixing things16:08
*** prad_ has joined #openstack-infra16:08
*** david-lyle has joined #openstack-infra16:08
dansmithgiven that a simple error could report incorrectly on a hundred patches before it gets noticed, I don't think making them comment on each one is very reasonable16:09
dansmithespecially on day 116:09
*** oubiwann_ has joined #openstack-infra16:09
anteayadansmith: glad to have your input on this16:10
anteayathanks16:10
ArxCruzdansmith: anteaya we are monitoring everything to ensure we will fill all the gaps, don't worry :)16:11
openstackgerritTravis Plummer proposed a change to openstack-infra/config: new-project  https://review.openstack.org/7195616:11
*** david_lyle_ has quit IRC16:12
anteayaArxCruz: I know you have been great at it16:12
ArxCruzAlso, I will stay here practically 24x7 these days, please feel free to contact me if something goes wrong and our team hasn't caught it16:12
*** NikitaKonovalov is now known as NikitaKonovalov_16:12
anteayabut I gave salv-orlando a hard time recently for something very similar, so I have to be consistent16:12
*** e0ne_ has joined #openstack-infra16:12
*** pcrews has joined #openstack-infra16:12
*** david_lyle_ has joined #openstack-infra16:13
anteayaand to salv-orlando's credit, he responded promptly16:13
anteayait isn't your system that concerns me, it is the 40 other accounts who look to you as a role model16:13
anteayaand who will do everything you do, except be as vigilant16:13
*** KurtMartin has joined #openstack-infra16:14
anteayaArxCruz: thanks16:14
openstackgerritAndreas Rehn proposed a change to openstack-infra/jenkins-job-builder: Added support for Delivery Pipeline Plugin  https://review.openstack.org/7165816:15
*** dcramer__ has joined #openstack-infra16:16
*** david-lyle has quit IRC16:16
*** e0ne has quit IRC16:16
krtayloranteaya, point taken, we are trying to make sure everything is working very well, as ArxCruz said, we are all watching it very closely right now16:17
anteayakrtaylor: thanks, yes you two are doing a wonderful job16:17
*** kmartin has quit IRC16:17
*** changbl has joined #openstack-infra16:17
anteayaand as I told salvatore, if it was just you, I wouldn't even worry about it16:17
anteayasince the work you are doing is outstanding16:18
*** kmartin has joined #openstack-infra16:18
*** stevebaker has quit IRC16:18
anteayait is unfortunate that the people who are doing the best work are limited by the people who do not16:19
krtayloranteaya, this will all get smooth as more and more 3rd party comes online, more things get documented and fixed16:19
krtayloragreed16:19
*** david-lyle has joined #openstack-infra16:19
anteayabut after not doing that we realized the mess we were creating16:19
anteayayes, fortunately there is more documentation happening and a wider knowledge base16:19
*** andre__ has quit IRC16:20
krtaylorwhich, reminds me, I want to help organize a self-help group for 3rd party CI testing, maybe a summit BOF16:20
*** rcleere has joined #openstack-infra16:20
anteayaeveryone I have talked to wants to do a good job they just lack knowledge in how to do that16:20
anteayakrtaylor: I think that would be well attended16:20
krtaylorwe can help each other, share best practices, maybe even weekly irc meetings16:21
anteayakrtaylor: did you know that jaypipes is offering a meeting/workshop for 3rd party ci on Monday?16:21
krtayloryes, I'll be there for sure16:21
krtayloralthough I was looking forward to the hangout :)16:21
*** VINOD has quit IRC16:21
*** david_lyle_ has quit IRC16:21
*** KurtMartin has quit IRC16:22
anteayakrtaylor: http://lists.openstack.org/pipermail/openstack-dev/2014-February/028124.html16:22
anteayakrtaylor: easier to log in irc16:22
*** david_lyle_ has joined #openstack-infra16:23
anteayaone of the really big problems is that a large part of this group has no experience with opensource16:23
*** andre__ has joined #openstack-infra16:23
anteayaand is unfamiliar with the tool set16:23
krtayloryeah, I saw that, and it is true, scales better, translates better too16:23
fungiand helps get them into the flow of how the rest of the project contributors participate in discussion16:24
anteayasince opensource is a philosophy as much as code, consistent action helps to inform the philosophy16:24
*** cody-somerville has joined #openstack-infra16:24
*** cody-somerville has quit IRC16:24
*** cody-somerville has joined #openstack-infra16:24
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Add new elasticsearch cluster members to cacti  https://review.openstack.org/7657316:24
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Update logstash doc for an elasticsearch cluster  https://review.openstack.org/7657416:24
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Move primary elasticsearch discover node  https://review.openstack.org/7657516:24
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Remove old elasticsearch cluster members  https://review.openstack.org/7605116:24
* anteaya ducks16:24
anteayakrtaylor: and I am so glad you would like to organize something at the summit16:25
anteayaI must admit I am all organized out currently16:25
anteayabut I can certainly inform people16:25
krtaylorsure, I'll be happy to help, I was thinking about starting an etherpad to help get organized16:25
anteayakrtaylor: great idea16:26
krtaylorthen maillist, etc16:26
anteayado post the link to the etherpad once you have it16:26
anteayakrtaylor: on the -dev ml right?16:26
krtaylorok, I'll get that moving16:26
krtayloryes16:26
anteayagreat16:26
anteayathank you16:26
krtayloralthough, my mail filtering is horrible16:26
anteayamine too16:27
krtaylorI'd like to see us standardize a tag16:27
*** david-lyle has quit IRC16:27
anteayaI have such respect for those who can figure it out16:27
*** dcramer___ has joined #openstack-infra16:27
krtaylorlike 3rd party or third-party16:27
*** prad__ has joined #openstack-infra16:27
anteayakrtaylor: yes, I like 3rd party16:27
anteayafewer characters16:27
*** david_lyle_ is now known as david-lyle16:27
krtaylortrue, anyway, good etherpad fodder16:27
openstackgerritA change was merged to openstack-infra/elastic-recheck: Remove resolved_at mention in readme  https://review.openstack.org/7639316:27
anteayabetter chance of reading the subject line16:27
anteayayes16:28
*** david_lyle_ has joined #openstack-infra16:28
*** prad_ has quit IRC16:28
*** dizquierdo has quit IRC16:28
*** cody-somerville has quit IRC16:29
*** johnthetubaguy has joined #openstack-infra16:29
*** alexpilotti has joined #openstack-infra16:30
jeblairkrtaylor: i expect we'll devote some of the infra track to 3rd party testing16:30
dansmithcan someone kick the meeting bot in openstack-meeting-3 ?16:30
*** dcramer__ has quit IRC16:30
jeblairkrtaylor: and i believe there is going to be a collaboration space with "project pods"16:30
krtaylorjeblair, project pods?16:31
jeblairkrtaylor: so that might be a natural place to organize something; if there's an infra "pod", 3rd party ci would be a welcome topic there16:31
*** jcooley_ has joined #openstack-infra16:31
jeblairkrtaylor: i'm not entirely sure what it will look like, but i'm imagining a big area with discrete locations (tables?  groups of chairs?) designated with different projects (nova, etc)16:31
*** david-lyle has quit IRC16:31
krtaylorah, like a developers lounge area by project? cool16:32
jeblairkrtaylor: so we could potentially organize a formal session to cover some things and then use the collaboration area for the kind of self-help, get-different-groups-together activity you were suggesting16:32
jeblairdansmith: ack16:32
dansmithjeblair: did you do it? seems to be working now16:33
jeblairdansmith: nope.  that was easy.  :)16:33
dansmithjeblair: okay, weird.. anyway, thanks :)16:33
*** amotoki_ has quit IRC16:33
*** thuc has joined #openstack-infra16:34
russellbfyi, just pushed a novaclient release, ping me if it breaks the world16:34
krtaylorjeblair, sounds good, I was just thinking that there is a lot of experience growing here that we can share between ourselves and offload some of the load from the cores16:34
jeblairdansmith: if it was just the previous meeting running over, we added a feature to meetbot that lets anyone end the meeting after 1 hour16:34
dansmithjeblair: right, we were trying that and it wasn't working, hence my concern16:34
jeblairkrtaylor: yay!16:34
*** thuc has quit IRC16:35
jeblairdansmith: maybe it wasn't quite one hour to the second.  :)  because of course, being programmers, i'm pretty sure that's what we did.  :)16:35
dansmithjeblair: heh16:35
*** thuc has joined #openstack-infra16:35
*** yolanda_ has quit IRC16:35
*** thuc has quit IRC16:36
anteayarussellb: thanks for the heads up16:36
*** thuc has joined #openstack-infra16:36
*** david_lyle_ has quit IRC16:36
*** thuc has quit IRC16:37
*** thuc_ has joined #openstack-infra16:38
* mordred wants a pod of ice cream16:38
jeblairttx: ^ excellent suggestion for design summit pods16:38
krtaylormordred, +116:39
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Remove old elasticsearch cluster members  https://review.openstack.org/7605116:39
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Update logstash doc for an elasticsearch cluster  https://review.openstack.org/7657416:39
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Move primary elasticsearch discover node  https://review.openstack.org/7657516:39
*** geekinutah1 is now known as geekinutah16:40
*** atiwari has joined #openstack-infra16:40
russellbmordred: +116:41
russellbbrilliant16:41
*** rossella_s has joined #openstack-infra16:41
*** chuck__ has joined #openstack-infra16:43
*** shardy is now known as shardy_afk16:43
*** chuck__ has quit IRC16:44
*** CaptTofu has quit IRC16:45
*** david-lyle has joined #openstack-infra16:46
*** CaptTofu has joined #openstack-infra16:51
jeblairfungi: you filed a zuul bug yesterday didn't you?  i'm having trouble finding it16:52
jeblairfungi: (i see the one from anteaya you pointed me to this morning)16:52
fungijeblair: i think someone else filed it against openstack-ci, then i triaged, added log details and moved it to zuul... finding16:53
fungijeblair: https://launchpad.net/bugs/128484216:54
jeblairfungi: thanks16:54
jeblairfungi: cool, i think they are related.16:55
*** hashar has quit IRC16:56
*** cadenzajon has joined #openstack-infra16:56
*** coolsvap has joined #openstack-infra16:56
fungijeblair: all's the better!16:57
*** andre__ has quit IRC16:57
fungiin completely unrelated news, any bets on how long before someone proposes a new stackforge project for an openstack haxe sdk? http://haxe.org/16:58
fungiit looks surprisingly organized for a language i've never heard of16:59
jeblairfungi: it's a trap17:00
davidlenwellSo here is a question .. how do I change the topic on an already committed review without changing something? to satisfy this? http://lists.openstack.org/pipermail/openstack-dev/2014-February/028130.html17:00
davidlenwelltrying to avoid the annoyance that i submitted that patch 5 days before monty wrote that.. and just get things the way you need them so we can move on.17:01
fungidavidlenwell: i think we can probably grandfather in any reviews to add new projects which are already in good shape this week, but if it ends up needing a new patchset for any reason then please try to remember to update the topic when you do that17:02
*** gokrokve has quit IRC17:02
davidlenwellfungi: will do17:03
*** gokrokve has joined #openstack-infra17:03
davidlenwellSergeyLukjanov -1'd it already17:03
davidlenwellwhat do I need to do from this point?17:03
*** NikitaKonovalov_ is now known as NikitaKonovalov17:04
davidlenwellI fully respect the backlog and the load you guys are under.. that is why I am working to hire you another hand.. but I do need to tell my team something17:04
*** gyee has joined #openstack-infra17:04
*** e0ne_ has quit IRC17:05
jeblairdavidlenwell: what's the review?17:06
*** saju_m has joined #openstack-infra17:06
davidlenwellhttps://review.openstack.org/#/c/75226/17:06
davidlenwelljeblair: this is the one with the irc problems last week17:07
anteayaokay time for me to run away for several hours17:07
*** gokrokve has quit IRC17:08
anteayatwo failures in the gate, both on grenade both for different reasons, not sure exactly what is taking down the cinder patch but horizon has a cliff dependency issue17:08
anteayaso for my eyes, no explosions from novaclient17:08
*** freyes has quit IRC17:08
*** sabari_ has joined #openstack-infra17:09
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Add puppetdb server to cacti  https://review.openstack.org/7658317:09
jeblairdavidlenwell: +2 from me17:09
davidlenwellthanks jeblair..17:09
davidlenwellwho can +a it? or do I have to wait till new project friday ?17:10
fungidavidlenwell: waits until friday17:10
davidlenwellokay .. putting my patience hat on17:10
*** markmcclain has joined #openstack-infra17:10
*** changbl has quit IRC17:10
fungidavidlenwell: i suspect SergeyLukjanov will rescind his -1 from 75226 the next time he's around, so i wouldn't worry about that17:11
openstackgerritDavanum Srinivas (dims) proposed a change to openstack-infra/config: Add Eavesdrop bot to #openstack-oslo  https://review.openstack.org/7658417:11
davidlenwellwe should really put something on the wiki .. explaining the irc stuff and now this stuff..17:12
*** oubiwann_ has quit IRC17:12
davidlenwellwould save review cycles17:12
fungiwe might also want to start batching up changes like https://review.openstack.org/76584 there so as to minimize impact of meetbot restarts during heavy meeting periods17:12
*** sabari_ has quit IRC17:13
fungiat least until someone manages to tackle the open bug about implementing graceful restarts for it17:13
jeblairdavidlenwell: it should go in the stackforge documentation17:14
jeblairthe stackforge HOWTO in the ci docs17:14
fungidavidlenwell: there is already a mention in openstack-infra/config:docs/source/irc.rst but i agree copying that bit of detail to stackforge.rst would be helpful17:15
*** Sukhdev has quit IRC17:16
*** moted has joined #openstack-infra17:19
jeblairjgriffith: https://review.openstack.org/#/c/49755/ exposed a zuul bug; do you mind if i unabandon it for about 10 minutes for some testing?17:20
openstackgerritDerek Higgins proposed a change to openstack-infra/gear: Close poll loop pipes on cleanup  https://review.openstack.org/7658817:22
openstackgerritDerek Higgins proposed a change to openstack-infra/gear: Throttle geard connect loop  https://review.openstack.org/7658917:22
*** afazekas has quit IRC17:24
sdaguewhat's up with the whole queue hanging on an unallocated pep8 node?17:24
*** coolsvap has quit IRC17:25
jeblairsdague: my guess would be it's a re-enqueued job17:25
sdaguejeblair: so we have 18 changes that are ready to merge17:25
jeblairsdague: i see that17:25
*** coolsvap has joined #openstack-infra17:25
sdagueok17:25
*** alexpilotti has quit IRC17:25
jeblairi will look further into it.  just throwing out a guess because that's the usual answer in these situations17:26
sdagueyeh, seems odd that other nodes are getting pep8 allocations17:26
sdaguebut not that job17:26
*** NikitaKonovalov is now known as NikitaKonovalov_17:27
*** NikitaKonovalov_ is now known as NikitaKonovalov17:29
openstackgerritMichael Krotscheck proposed a change to openstack-infra/config: Add NPM mirror  https://review.openstack.org/6881817:30
sdagueand it's a grizzly change at that :)17:31
*** cody-somerville has joined #openstack-infra17:31
*** dcramer___ has quit IRC17:31
*** vkozhukalov_ has quit IRC17:32
*** SumitNaiksatam has joined #openstack-infra17:33
wendaranyone: I found a pattern yesterday in gate issues, and wondered if it had already been found? A specific recurring case of failure to connect to nova's git repo.17:34
*** coolsvap has quit IRC17:34
*** coolsvap has joined #openstack-infra17:35
jeblairsdague: i think gearman lost track of that job, and unfortunately i don't think the log level is low enough for us to see why17:37
jeblairwendar: do you have a link to a specific failure?17:37
sdaguejeblair: is there anyway to manually inject the job back in?17:37
sdagueor do we just have to reset the queue manually to get anything moving17:37
*** mgagne has quit IRC17:38
jeblairsdague: not in a way that will cause zuul to recognize it, no.  i think we have to dequeue that patch.17:38
openstackgerritTravis Plummer proposed a change to openstack-infra/config: New project request: OpenStack Powershell CLI and SDK  https://review.openstack.org/7195617:38
wendarjeblair: http://logs.openstack.org/62/74662/1/gate/gate-devstack-dsvm-cells/b7878e9/logs/devstack-gate-setup-workspace-new.txt.gz17:38
sdaguejeblair: ok, so before we do that, what do we need to change on gearman to catch this the next time?17:38
jeblairsdague: working on that17:39
sdaguebecause I feel like we've had much more inconsistent allocations recently, so it would be good to nail that down17:39
jeblairwhat do you mean?17:39
sdaguelike head of gate queue not getting resources until most of the rest of it did17:40
*** krotscheck has quit IRC17:40
sdaguebasically I'd assume allocation should be top down in the gate, and I don't always see that happening17:40
*** pmathews has quit IRC17:41
jeblairsdague: so remember when i threw out my incorrect guess about why this job was stuck?  i said i thought it was a re-enqueued job17:41
*** pmathews has joined #openstack-infra17:41
sdagueyeh17:41
openstackgerritChris Johnson proposed a change to openstack-infra/config: add sdks irc room to eavesdrop bot config  https://review.openstack.org/7659417:42
sdagueI'm sniping out the top change now17:42
jeblairsdague: as you know, jenkins fails to complete builds for reasons of its own quite a lot17:42
sdagueyeh17:42
jeblairsdague: some of those are detectable by zuul.  in those cases, zuul will re-enqueue the job to give jenkins a chance at another go.17:43
jeblairsdague: unfortunately, that means the jobs go to the back of the queue.  we can probably fix that, but it's non-trivial17:43
jeblairsdague: (whereas just having it retry at the back of the queue was trivial, so we did it)17:43
*** luqas has quit IRC17:43
jeblairsdague: at any rate, that has happened 224 times today (since 00:00 UTC)17:44
jeblairsdague: so it's not at all surprising that you would actually see it in action17:44
jeblairsdague: but look at the bright side -- that's 224 false failures that we didn't have to deal with.  :)17:44
sdaguesure17:45
*** luqas has joined #openstack-infra17:45
sdagueok, that at least explains that :)17:45
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Set Zuul gear server logs to debug  https://review.openstack.org/7659617:45
jeblairand that ^ will hopefully explain the other if it happens again17:45
sdagueany chance we could go the other direction and pop them on top of the queue?17:45
sdagueor does the datastructure not allow us to manip it that way17:46
*** amcrn has joined #openstack-infra17:46
jeblairsdague: i think the easiest thing is to submit it with a high priority, but that's still a nontrivial zuul change.  it's not too hard, it's just more than a one-liner.17:47
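A toy model of the trade-off being discussed, not zuul's data structures: re-enqueueing a lost build normally appends it behind everything else, whereas resubmitting it with a higher precedence would let the retry jump the line.

    import heapq
    import itertools

    class BuildQueue:
        HIGH, NORMAL = 0, 1

        def __init__(self):
            self._heap = []
            self._seq = itertools.count()   # keeps FIFO order within a priority band

        def put(self, build, priority=NORMAL):
            heapq.heappush(self._heap, (priority, next(self._seq), build))

        def get(self):
            return heapq.heappop(self._heap)[2]

    q = BuildQueue()
    q.put("change-1 gate-pep8")
    q.put("change-2 gate-pep8")
    # Jenkins lost the first build; a high-priority resubmit means it does not
    # wait behind everything enqueued since.
    q.put("change-1 gate-pep8 (retry)", priority=BuildQueue.HIGH)
    print(q.get())   # change-1 gate-pep8 (retry)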
*** ildikov_ has quit IRC17:48
*** dcramer___ has joined #openstack-infra17:48
*** oubiwann_ has joined #openstack-infra17:49
sdaguejeblair: ok. Well that's beyond my plate at the moment17:49
jeblairsdague: me too.  i'll get to it, but i think we're in a pretty good position (we're way ahead of where we would be) so it's not at the top of my list17:50
sdagueyeh, though it will be interesting to see how it will impact us during the i3 rush17:51
*** bhuvan has joined #openstack-infra17:51
*** NikitaKonovalov is now known as NikitaKonovalov_17:51
jeblairfungi, anteaya: the loop was caused because jenkins had previously submitted the patchset.  the submit counts as an approval that sticks around (it isn't reset by the 0 vote)17:51
*** jcooley_ has quit IRC17:52
fungioh...17:52
jeblairso it always matches "any jenkins approval older than 72 hours"17:52
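A simplified reconstruction of the bug jeblair describes (the field names and record shape are assumptions, not gerrit's or zuul's actual data model): an age-based trigger that looks at any Jenkins approval keeps matching forever once the lingering submit record passes 72 hours, because that record is never reset.

    import time

    SEVENTY_TWO_HOURS = 72 * 3600

    def matches_stale_check(approvals, now=None):
        """True if any Jenkins approval is older than 72 hours."""
        now = now or time.time()
        # A fix would also skip the lingering submit ("SUBM") record, e.g.
        #   ... and a["type"] != "SUBM" ...
        return any(a["by"] == "jenkins" and now - a["granted"] > SEVENTY_TWO_HOURS
                   for a in approvals)

    # The submit approval left behind when Jenkins merged the patchset:
    approvals = [{"by": "jenkins", "type": "SUBM",
                  "granted": time.time() - 90 * 3600}]
    print(matches_stale_check(approvals))   # True -> change keeps getting re-enqueued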
sdaguewendar: so one of the problems with that issue you found, is it's in a log that we weren't indexing at the time17:53
*** apevec has joined #openstack-infra17:53
wendarsdague: ah-ha17:53
sdagueso we can't build a pattern out of it to see its likelihood17:53
sdagueit should be in the index now, though elastic search is still in a world of hurt17:53
*** NikitaKonovalov_ is now known as NikitaKonovalov17:53
wendarsdague: yeah, the console.html was exceedingly un-unique on those failures17:53
apevecsdague, https://review.openstack.org/#/c/76020/2..3//COMMIT_MSG,unified - "gearman lost the job here" what does that mean?17:53
sdagueapevec: read backscroll17:54
sdaguewendar: yeh17:54
sdagueso new jobs will both timestamp and index that log file17:54
sdagueso we can use it in elastic recheck17:54
wendarcool17:54
* apevec scrolls to eavesdrop17:54
openstackgerritDavanum Srinivas (dims) proposed a change to openstack-infra/config: Add notifications to #openstack-oslo channel  https://review.openstack.org/7659817:55
fungiapevec: in summary, for some reason as of yet unidentified (due to insufficient logging) zuul's gearman server lost track of the fact that it needed to request a worker for the pep8 test on that change, so it was sitting at the top of the gate not clearing, and this was preventing anything else from merging17:56
*** jcooley_ has joined #openstack-infra17:56
apevecfungi, thanks, so yet another gate mystery :(17:57
*** oubiwann_ has quit IRC17:58
fungiapevec: one which we'll have sufficient logging in place to diagnose when it reoccurs, in all probability17:58
fungieach time we add new ways to scale this system, we add complexity and expose (or sometimes introduce) new and exciting bugs17:59
*** jpich has quit IRC17:59
fungias with any software17:59
*** oubiwann_ has joined #openstack-infra18:00
*** jcooley_ has quit IRC18:01
*** oubiwann_ has quit IRC18:01
openstackgerritDavanum Srinivas (dims) proposed a change to openstack-infra/config: Add notifications to #openstack-oslo channel  https://review.openstack.org/7659818:01
*** khyati has joined #openstack-infra18:02
clarkbmorning18:02
fungithis month we've averaged 50 changes merged per day, including weekends (for projects reflected in the openstack/openstack meta-repo)18:03
*** rossella_s has quit IRC18:03
clarkbfungi: wow is it that high?18:03
fungigit log --pretty=fuller --date=iso | grep ^CommitDate: | cut -d' ' -f2 | cut -d- -f-2 | uniq -c18:04
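A Python take on the same counting, bucketed per day rather than per month so the merges-per-day figure fungi quotes can be read off directly; it assumes it is run from a checkout of the openstack/openstack meta-repo.

    import subprocess
    from collections import Counter

    log = subprocess.check_output(
        ["git", "log", "--pretty=fuller", "--date=iso"], text=True)

    per_day = Counter(
        line.split()[1]                       # YYYY-MM-DD from the CommitDate line
        for line in log.splitlines()
        if line.startswith("CommitDate:"))

    for day, merged in sorted(per_day.items()):
        print(day, merged)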
sdagueI still think our i3 capacity target is 200 merges / day.18:04
*** mgagne has joined #openstack-infra18:04
fungisdague: agreed, though the highest day this month was "only" 145 changes merged18:05
sdagueyeh, but we backlogged during i218:05
fungithough i2 was before some of our new scaling improvements, and in the face of additional bugs, so hopefully we can jump the hurdle this time18:06
*** shashank__ has joined #openstack-infra18:06
fungiclarkb: we have no purple shards now, and every cluster member has 11 or 12 assigned18:06
*** tkorochka has joined #openstack-infra18:06
clarkbfungi: great. I am pulling that stuff up now but I am going to guess we can go ahead and stop es on es618:07
fungiclarkb: current pending patch series is https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:elasticsearch,n,z18:07
clarkblooking now18:07
fungiclarkb: i'm pretty confident in the cacti one, less so in the discover node and doc update patches18:07
*** johnthetubaguy has quit IRC18:08
fungiand i think the delete change is comprehensive enough now, but i'm obviously leaving my -2 on it until we're all clear18:08
*** andreaf has quit IRC18:08
*** andreaf has joined #openstack-infra18:10
fungialso, xkcd.com/now is my new favorite tool for planning meetings ;)18:12
clarkbfungi: commented on the doc change responding to inline comments18:13
fungiclarkb: perfect. i'll update that asap18:13
*** lcheng_ has joined #openstack-infra18:14
*** alexpilotti has joined #openstack-infra18:14
fungithe docs had fallen a bit behind reality, but i was still a little unsure of my understanding there18:14
*** dcramer___ has quit IRC18:15
*** luqas has quit IRC18:16
*** e0ne has joined #openstack-infra18:17
*** skraynev is now known as skraynev_afk18:18
clarkband commented on the change primary node change. Whcih needs a small tweak18:18
*** gokrokve has joined #openstack-infra18:19
*** alexpilotti_ has joined #openstack-infra18:19
*** alexpilotti has quit IRC18:20
*** alexpilotti_ is now known as alexpilotti18:20
fungik18:20
*** Ryan_Lane1 has joined #openstack-infra18:20
*** nati_ueno has joined #openstack-infra18:20
*** cody-somerville has quit IRC18:20
*** jcooley_ has joined #openstack-infra18:20
*** pdmars has quit IRC18:20
*** changbl has joined #openstack-infra18:20
*** derekh has quit IRC18:21
*** Ryan_Lane1 has quit IRC18:21
*** reed has joined #openstack-infra18:22
*** andreaf has quit IRC18:22
*** e0ne has quit IRC18:22
*** e0ne has joined #openstack-infra18:22
*** reed has quit IRC18:23
*** prad__ has quit IRC18:24
*** khyati has quit IRC18:24
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Ignore approvals without descriptions  https://review.openstack.org/7661018:25
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Handle builds without gearman jobs  https://review.openstack.org/7661118:25
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Handle builds without gearman jobs  https://review.openstack.org/7661118:26
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Ignore approvals without descriptions  https://review.openstack.org/7661018:26
*** jgallard has quit IRC18:26
fungiclarkb: so while i'm tweaking those, should i go ahead and stop elasticsearch on elasticsearch6 and wait for replication to settle again?18:27
jeblairfungi, clarkb: ^ can you speedy-review those?  I'd like to go aheand and put into production since they (hopefully) fix two production problems we're seeing18:27
clarkbsure18:27
fungijeblair: was already in the middle of doing so ;)18:27
clarkbfungi: I think that sounds like a great idea18:27
*** oubiwann_ has joined #openstack-infra18:27
clarkbjust stopping the service should be fine, but if you are really lazy you can do it via elasticsearch-head18:28
clarkbI typically am on the host anyways so don't use the web gui thing18:28
jeblairstupid flake818:28
*** oubiwann_ has quit IRC18:28
*** Ryan_Lane1 has joined #openstack-infra18:29
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Handle builds without gearman jobs  https://review.openstack.org/7661118:29
*** pdmars has joined #openstack-infra18:29
fungiclarkb: the "yellow" shards are ones which currently lack redundancy now that their counterparts on es6 are unavailable?18:31
*** prad_ has joined #openstack-infra18:31
clarkbfungi: correct, once they have been replicated from the other copy the cluster will go back to green18:31
openstackgerritJames E. Blair proposed a change to openstack-infra/zuul: Handle builds without gearman jobs  https://review.openstack.org/7661118:32
clarkbfungi: those yellow ones are the ones that are recovering18:32
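One simple way to watch that replication settle is to poll the cluster health API until the status goes from yellow back to green; the endpoint below assumes elasticsearch's default port on the local host.

    import json
    import time
    import urllib.request

    HEALTH_URL = "http://localhost:9200/_cluster/health"   # assumed default endpoint

    while True:
        with urllib.request.urlopen(HEALTH_URL) as resp:
            health = json.load(resp)
        print(health["status"], "unassigned shards:", health.get("unassigned_shards"))
        if health["status"] == "green":
            break            # every shard has its replicas again
        time.sleep(30)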
jeblairfungi: so sorry ^ :(18:32
fungijeblair: was flake8 complaining about directly accessing a private method from another module?18:32
jeblairfungi: no it was complaining that job was never used18:32
fungioh!18:32
jeblairfungi: but __ is extra magic and you can't test it with hasattr18:33
fungiright18:33
jeblairOH!18:33
*** harlowja_away is now known as harlowja18:34
jeblairjhesketh added worker information to the data model though, and i think that would probably be a good place to put that information so we can stop using __18:34
jeblairbut that's for another day18:34
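The "__ is extra magic" remark refers to Python's name mangling: a double-underscore attribute is stored under a class-qualified name, so hasattr with the bare name cannot find it. A tiny illustration with made-up names, not zuul's classes:

    class Build:
        def __init__(self, job):
            self.__job = job              # actually stored as _Build__job

    b = Build("gate-tempest-dsvm-full")
    print(hasattr(b, "__job"))            # False: mangled away
    print(hasattr(b, "_Build__job"))      # True: the mangled name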
clarkb76610 has been approved onto 11 no118:34
*** saju_m has quit IRC18:35
*** reed has joined #openstack-infra18:35
*** morganfainberg_Z is now known as morganfainberg18:36
*** Ryan_Lane1 has quit IRC18:36
*** Ryan_Lane1 has joined #openstack-infra18:36
openstackgerritA change was merged to openstack-infra/zuul: Ignore approvals without descriptions  https://review.openstack.org/7661018:37
*** nicedice has joined #openstack-infra18:38
jesusaurusclarkb: i just had a thought: if i push a change to gerrit, then quickly make a small change and push a second changeset to the change, will zuul abort the tests on the first changeset?18:39
jesusaurusthat probably doesnt happen /too/ often...18:39
clarkband 76611 approved18:40
clarkbjesusaurus: it will18:40
jesusaurusah, cool18:40
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add the bugdaystats to openstack-infra  https://review.openstack.org/6948918:40
*** jpeeler has quit IRC18:40
*** jpeeler has joined #openstack-infra18:40
* SergeyLukjanov is too slow to review Jim's patches to zuul :)18:41
clarkbSergeyLukjanov: oh sorry18:42
clarkbjeblair put it on the asap list so I didn't wait to approve18:42
SergeyLukjanovclarkb, nope, everything is ok ;)18:42
openstackgerritA change was merged to openstack-infra/zuul: Handle builds without gearman jobs  https://review.openstack.org/7661118:43
Ryan_Lane1so it seems you're slowly replacing jenkins with zuul18:43
Ryan_Lane1any idea on when that'll be completely replaced? I have basically no love for jenkins :)18:44
jeblairRyan_Lane1: yep.  the end is in sight.18:44
clarkbjeblair: log gearman client has a cpu pegged onlogstash.o.o is that possibly related to the stats reporting?18:44
Ryan_Lane1I'd love to have a python alternative18:44
*** davidhadas has quit IRC18:44
clarkbRyan_Lane1: if turbohipster meets your needs you can do it today18:44
jeblairRyan_Lane1: jhesketh wrote turbo-hipster which is python-based job runner for zuul18:45
clarkbRyan_Lane1: mikal and jhesketh and mattoliverau are doing it with zuul in rax land for third part db tests18:45
Ryan_Lane1ah. is this what openstack will go with in the future?18:45
jeblairRyan_Lane1: close but not exactly as it is now...18:45
jeblairRyan_Lane1: we still want to use jenkins-job-builder18:45
openstackgerritA change was merged to openstack-infra/config: Set Zuul gear server logs to debug  https://review.openstack.org/7659618:45
jeblairRyan_Lane1: (because of the templating, not because we love xml)18:45
*** chandan_kumar has quit IRC18:46
Ryan_Lane1heh. docs for turbohipster are basically non-existent18:46
* Ryan_Lane1 nods18:46
*** valentinbud has quit IRC18:46
jeblairRyan_Lane1: so we'll have something like turbo-hipster (maybe turbo-hipster if it wants to grow these features) that will read jjb-style yaml and do simple shell-based builds18:46
*** esker has quit IRC18:47
jeblairRyan_Lane1: but before we get there, we're working on swift-based artifact archiving18:47
Ryan_Lane1I think I'll wait a while and take a look again. I'd like to replace jenkins with something better, but I'm also not using gerrit18:47
jeblairRyan_Lane1: so that our logs and tarballs can be uploaded to swift, replacing the jenkins scp stuff.18:47
Ryan_Lane1so it'll be quite a bit of work for me18:47
Ryan_Lane1ah. nice18:47
*** esker has joined #openstack-infra18:48
*** krotscheck has joined #openstack-infra18:48
fungiRyan_Lane1: the solum devs are also eyeballing zuul as a basic work engine, potentially completely detached from gerrit scenarios, so there's a good chance they'll be incented to start hanging new sorts of triggers in there18:48
*** khyati has joined #openstack-infra18:49
jeblairRyan_Lane1: also some python devs are considering attaching it to rietveld18:49
jeblairRyan_Lane1: http://legacy.python.org/dev/peps/pep-0462/18:50
*** Mithrandir has quit IRC18:50
*** Mithrandir has joined #openstack-infra18:50
*** mpanetta has joined #openstack-infra18:51
Ryan_Lane1I'd need github, but hopefully in the future phabricator18:52
*** dcramer___ has joined #openstack-infra18:52
Ryan_Lane1there's no chance I'll introduce gerrit to another organization18:52
*** esker has quit IRC18:52
Ryan_Lane1that was painful enough at wikimedia18:52
mpanettaHey guys, anyone that can help me?  I am having issues signing the contributor agreement.  It is giving me this error:18:52
*** julim_ has quit IRC18:53
clarkbRyan_Lane1: :P adding a github callback trigger to zuul shouldn't be terrible18:53
mpanettaThe request could not be completed <blah blah> Make sure you have joined the foundation...18:53
mpanettaBut I am logged in, so I must have joined no?18:53
clarkbmpanetta: no, login is via launchpad openid18:53
mpanettaAh ok18:53
fungimpanetta: https://wiki.openstack.org/wiki/CLA-FAQ#When_trying_to_sign_the_new_ICLA_and_include_contact_information.2C_why_am_I.27m_getting_an_error_message_saying_that_my_E-mail_address_doesn.27t_correspond_to_a_Foundation_membership.3F18:53
mpanettafungi: Thanks!18:54
clarkbmpanetta: https://www.openstack.org/join/ is where you can join the foundation18:54
mpanettaThank you clarkb :)18:54
SergeyLukjanovfolks, could you, please, take a look at https://review.openstack.org/#/c/74309/? it's direct-release hard-coded list removal from jeepyb18:54
*** pmathews has quit IRC18:55
sdaguewhat do we need to get an elastic-recheck-core made?18:55
sdaguein gerrit18:55
*** shashank__ has quit IRC18:55
fungisdague: is it mentioned in a gerrit acl?18:55
SergeyLukjanovsdague, fix acls in infra/config and ask infra-root to add you to the new group18:56
*** pblaho has quit IRC18:56
sdaguefungi: I don't think so18:56
fungisdague: groups get automagically created in gerrit if they're part of an acl18:56
fungisdague: if they're not used by any gerrit acl, what is the use case?18:56
jeblairsdague: it already exists18:56
jeblairsdague: https://review.openstack.org/#/admin/groups/218,members18:56
SergeyLukjanovsdague, and acls are correct18:56
SergeyLukjanov        label-Approved = +0..+1 group elastic-recheck-core18:57
SergeyLukjanov        label-Code-Review = -2..+2 group elastic-recheck-core18:57
openstackgerritA change was merged to openstack-infra/jeepyb: Remove hardcoded direct-release project list  https://review.openstack.org/7430918:57
SergeyLukjanov+ refs/tags for elastic-recheck-ptl18:57
sdagueoh, right, so we just need to add folks18:57
sdaguemy bad18:57
*** mpanetta has left #openstack-infra18:57
*** CaptTofu has quit IRC18:57
jeblairsdague: i expect that you should have a field on that page to add people; let me know if that's not the case.18:58
*** thomasem has joined #openstack-infra18:58
*** CaptTofu has joined #openstack-infra18:58
*** melwitt has joined #openstack-infra18:58
sdagueyep, I do18:58
sdagueit was just all group oriented before18:58
*** shashank__ has joined #openstack-infra19:00
*** bhuvan_ has joined #openstack-infra19:01
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Remove old elasticsearch cluster members  https://review.openstack.org/7605119:01
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Update logstash doc for an elasticsearch cluster  https://review.openstack.org/7657419:01
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Move primary elasticsearch discover node  https://review.openstack.org/7657519:01
jeblairi plan on restarting zuul when the current gate job finishes19:02
Ryan_Lane1clarkb: yeah, but I don't necessarily want to put the work in for github if we're going to move away from it19:02
*** CaptTofu has quit IRC19:02
clarkbah19:02
jeblairRyan_Lane1: if a github trigger existed, we would use it so that we could CI the repos of our dependencies19:03
Ryan_Lane1oh, interesting.19:03
jeblairRyan_Lane1: i don't know if that interests you, but there's certainly use for a github trigger even for orgs that don't use it as their primary19:03
fungiRyan_Lane1: the solum devs were actually discussing that as possibly one of their first patches to zuul anyway19:03
Ryan_Lane1that would be awesome19:03
*** fbo is now known as fbo_away19:03
* Ryan_Lane1 wants19:04
clarkbfungi: they actually wrote an amqp trigger but haven't upstreamed it19:04
Ryan_Lane1:(19:04
openstackgerritA change was merged to openstack/requirements: Allow projects to use oslo.vmware  https://review.openstack.org/7553919:04
Ryan_Lane1heh19:04
jeblairthe queue is clearing, i'll be restarting zuul in a second19:04
* jeblair watches in #openstack-merges19:05
Ryan_Lane1well, that does interest me, but I don't want to lick that cookie19:05
Ryan_Lane1since I may not have time to actually do it19:05
*** bhuvan has quit IRC19:05
Ryan_Lane1man, there's someone I really need to hire, because he'd be able to do this in a matter of hours19:05
jeblairRyan_Lane1: maybe an openstack company can hire him to do this.  :)19:06
Ryan_Lane1notice I didn't mention who it was ;)19:06
*** tkorochka has quit IRC19:06
jeblairRyan_Lane1: as long as they are hired by someone to work on zuul, i don't care who.  :)19:06
*** ociuhandu has quit IRC19:07
Ryan_Lane1:D19:07
*** dcramer___ has quit IRC19:07
dhellmannjeblair, SergeyLukjanov: I posted a couple of questions on https://review.openstack.org/#/c/76381/19:07
dkranzThe new log links in jenkins to tempest.conf.gz are not setting the uncompress headers. Should I file a bug?19:07
jeblairZuul is restarted19:07
jeblairdkranz: in devstack-gate they should be moved to 'tempest.conf.txt' and then gzipped19:08
dkranzjeblair: ok, I'll take a look.19:09
*** mfer has quit IRC19:09
fungijeblair: you said mergers still need restarting after the main zuul process, right?19:12
jeblairfungi: so i did19:12
fungiif so, they don't seem to have been yet19:12
jeblairthat should really be next on my list19:12
fungior i can restart them if you like19:12
clarkbjeblair: fungi: why is that the case? because the mergers don't rejoin the gearman server?19:13
jeblairfungi: i'm on it19:13
jeblairclarkb: they only re-register one of their functions19:13
openstackgerritDavid Kranz proposed a change to openstack-infra/devstack-gate: Rename tempest.conf so it is gz'ed properly  https://review.openstack.org/7662219:14
jeblairalso, the pidfile is wrong so the init script doesn't work.19:14
clarkbgotcha19:14
jeblairthey are both running now19:14
*** freyes has joined #openstack-infra19:14
*** esker has joined #openstack-infra19:15
fungiwe have changes with completion estimates now, so looks good19:15
*** esker has quit IRC19:15
apevecfungi, your blessing was asked for the evil patch https://review.openstack.org/76058 (workaround for grenade job in stable/havana)19:16
*** prad_ has quit IRC19:16
*** andreaf has joined #openstack-infra19:16
fungiapevec: i proposed https://review.openstack.org/76280 instead19:16
apevecah thanks, I missed that19:17
fungiapevec: we've got a week left on grizzly support, so it didn't seem worthwhile for people to waste any more time fixing upgrade testing on it19:17
wendarjeblair/clarkb: who has access to update the "official tags" list for the Nova project? Only the general OpenStack admins?19:17
apevecI'll add link and abandon evil patch then19:17
fungiapevec: well, mine hasn't exactly been +2'd either19:17
fungiit's an alternate approach, admittedly distasteful, but pragmatic19:17
*** sarob has joined #openstack-infra19:18
fungiapevec: i'm fine with 76058 if it works19:18
clarkbwendar: git tags? usually project ptls and possibly a small group of folks delegated with that responsibility19:18
sdaguewendar: I think nova-drivers19:18
*** sdake_ has quit IRC19:18
jeblairdhellmann: responded.  i think we have two choices: we can proceed with the devstack-gate plan but that only works with py27 for now (until we make some infra improvements on test nodes)19:18
wendarclarckb: Launchpad bug tags.19:18
*** sdake_ has joined #openstack-infra19:18
*** prad has joined #openstack-infra19:18
dhellmannjeblair: that's acceptable19:18
wendarI mean clarkb ^ :)19:19
clarkbwendar: I think sdague is correct19:19
jeblairdhellmann: or if py26-33 testing is more important, then we can go with an asymmetric gate (or even just non-voting tests) on the regular unit test nodes and forego the devstack-gate git repo setup19:19
wendarsdague: In Launchpad, the only "Administrator" in the Nova Bugs team is "OpenStack Administrators".19:19
apevecfungi, that's a good question, not sure that after 76058 is fixed, it won't break in some other place19:20
dhellmannjeblair: the idea is to prevent breaking the other projects, which really means API changes. We do have unit tests for the lib by itself to verify we don't break py26 and py33, I was just being overly cautious19:20
wendarBut then, maybe any team member has access to update that list.19:20
apevece.g. this popped after skipping swift devstack exercises19:20
*** davidhadas has joined #openstack-infra19:21
dhellmannjeblair: and if we'll have the git repo checkout stuff working on unit test nodes at some point, we can add more test jobs then19:21
fungiapevec: yep, i put that previous attempt through, which was what then prompted me to propose just dropping that job a week ahead of schedule19:21
*** dcramer___ has joined #openstack-infra19:21
jeblairdhellmann: okay.  i'll redo that change appropriately.  i'll just hard-code py27 for now.19:22
fungiapevec: i say we merge 76058 first, since it's already written, and then if it still doesn't help we go forward with 76280 and not waste more developer time on it19:22
dhellmannjeblair: passing it to run_cross_tests.sh?19:22
jeblairdhellmann: yeah, so the arg will already be there, and we can plumb it into jjb when we're actually ready to use it.19:22
dhellmannjeblair: makes sense19:22
krotscheckclarkb: Would you mind pinging me once that permissions patch lands?19:22
*** bhuvan has joined #openstack-infra19:23
dhellmannjeblair: I'll rework run_cross_tests.sh to take those args19:23
krotscheckclarkb: I'm about to go eyeballs deep into OpenID, so...19:23
apevecfungi, ok19:23
jeblairdhellmann: oh, do you need the name of the project or just the path?19:23
dhellmannjeblair: oh, can  you pass the root for the repos, too? that way we can use run_cross_tests.sh locally19:23
dhellmannheh19:23
apevecsdague, ^^^ that was fungi re. 7605819:23
dhellmannpaths are probably better19:23
jeblaircool19:23
clarkbkrotscheck: sure, it hasn't even been proposed yet, but I suppose I should do that really quick since everyone wants it19:23
*** thuc_ has quit IRC19:24
fungiapevec: sdague: i +1'd and updated the change accordingly19:24
dhellmannjeblair: does my hacky two-calls-to-tox approach still make sense?19:24
openstackgerritA change was merged to openstack-infra/config: Add new elasticsearch cluster members to cacti  https://review.openstack.org/7657319:24
fungiclarkb: did you do something earlier to put logstash-worker05 and 09 back into service? (i noticed last night they were missing in the status monitors)19:25
jeblairdhellmann: yes.  one downside to that is we're missing some of the sanity checks that the run-unittests script gives us, but since these are essentially duplicate tests as far as that goes, we're probably okay.19:25
jeblairdhellmann: (sanity checks like did it oom, or sudo, or run zero tests)19:25
jeblairdhellmann: actually, that last one might be a good one to have in run_cross_tests19:25
*** thuc has joined #openstack-infra19:26
clarkbfungi: I did `sudo restart logstash-indexer`. https://review.openstack.org/#/c/75966/ should automate that for us19:26
*** CaptTofu has joined #openstack-infra19:26
*** vkozhukalov_ has joined #openstack-infra19:26
*** bhuvan_ has quit IRC19:27
clarkbfungi: that does a simple curl query against the cluster and runs jq over it to determine if the local node is a member of the cluster. If it isn't the service is restarted19:27
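A rough sketch of the check clarkb describes, not the exact contents of 75966: it assumes an elasticsearch HTTP endpoint is reachable (shown here as localhost:9200) and that jq is installed.

    # ask the cluster for its node list and restart the indexer if this
    # host is not a member (endpoint and field names are assumptions)
    if ! curl -s http://localhost:9200/_cluster/state \
        | jq -r '.nodes[].name' | grep -q "$(hostname)"; then
        restart logstash-indexer
    fi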
fungiclarkb: is there an upstream bug which 75966 is working around?19:27
clarkbfungi: I haven't gone digging in logstash and elasticsearch's jiras but either logstash or the es client lib is at fault. It should try reconnecting imo19:28
fungijust curious whether we've been good citizens and reported it upstream, at least19:28
clarkbI haven't had time yet, but should do that19:28
clarkbbut jira :P19:28
fungiyuck, agreed19:28
fungiclarkb: you should ask olaph to help19:28
dhellmannjeblair: I would hate to complicate that script by making it know how to do this dance19:28
clarkbha19:28
clarkbI might be able to con someone in their irc channels to do it for me too19:29
dhellmannjeblair: I could put a tox target in the remote project that knows what to do, but that also feels gross19:29
jeblairdhellmann: yeah.  maybe we can refactor run-unittests in the future to have phases we can call externally19:29
dhellmannjeblair: what if we refactor run-unittests.sh into run-unittests.sh and check-unittest-issues.sh or something19:29
dhellmannhaha19:30
dhellmannjeblair: great minds...19:30
jeblairwe're on a roll today19:30
dhellmannindeed19:30
*** oubiwann_ has joined #openstack-infra19:30
jeblairi think we may need more git servers19:31
dhellmannso should I go ahead and refactor now, or copy the relevant bits into my new script for the time being?19:31
fungijeblair: we can make more19:31
fungijeblair: or do we just need bigger git servers?19:31
jeblairdhellmann: i'd recommend copying for now.  that one is a fairly easy code block that you can just run after the real tox run.19:31
dhellmannjeblair: ack19:31
clarkbjeblair: more git servers for the mirror?19:32
jeblairfungi: i think we found the ram/cpu sweet-spot last time with benchmarking.  of course, they are probably standard, not performance nodes.19:32
jeblairclarkb: yes.19:32
clarkbthey are standard19:32
fungiyeah we're spiking up to 50mbps on them, looks like19:32
jeblairwe should probably add one then take each of the other 4 out and replace them.19:33
fungioh, and we are tapping out cpu on them occasionally as well19:33
fungiwith load averages spiking up to ~40 on zuul restart19:33
*** dcramer___ has quit IRC19:34
clarkbare there pvhvm centos images?19:34
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Add oslo.test integration test  https://review.openstack.org/7638119:34
clarkbif so we can go that route too19:34
openstackgerritMichael Krotscheck proposed a change to openstack-infra/storyboard-webclient: MVP Storyboard Client  https://review.openstack.org/7089719:34
fungiclarkb: checking now19:34
dhellmannjeblair: looks good19:35
fungiclarkb: yep... CentOS 6.5 (PVHVM)19:36
fungiclarkb: i'll whip up a change to add a new pvhvm performance node, and then a second change to put it into the haproxy pool19:36
fungijeblair: ^19:36
jeblairfungi: awesome, thx19:36
clarkbsounds good19:36
*** jp_at_hp has quit IRC19:37
*** bhuvan has quit IRC19:37
jeblairso the new graph tells us that zuul would be running 1000 simultaneous jobs now if it could, but is currently only able to run a bit more than 500 of them.19:37
fungiclarkb: on the es front, we're down to 2 remaining shards being replicated i think19:38
fungiclarkb: once those finish, we stop elasticsearch on es5?19:38
*** coolsvap1 has joined #openstack-infra19:38
*** dstanek_afk has joined #openstack-infra19:38
*** unicell1 has joined #openstack-infra19:38
clarkbfungi: once those finish and the cluster shows green19:39
clarkbbigdesk will update its color19:39
fungiyeah19:40
fungii assumed as much19:40
fungifancy, fancy colors19:40
*** wchrisj_ has joined #openstack-infra19:40
*** stevebaker has joined #openstack-infra19:40
*** dprince_ has joined #openstack-infra19:41
*** vkozhukalov_ has quit IRC19:41
*** moted_ has joined #openstack-infra19:41
*** julim has joined #openstack-infra19:41
*** mbacchi_ has joined #openstack-infra19:42
*** julienvey1 has joined #openstack-infra19:42
*** miqui_ has joined #openstack-infra19:42
*** smarcet1 has joined #openstack-infra19:42
apevecfungi, mordred - why do we have trove in stable/havana requirements-integration? http://logs.openstack.org/09/62209/1/check/check-requirements-integration-dsvm/aceddc0/console.html.gz#_2014-02-18_17_30_41_29419:43
*** dprince_ has quit IRC19:43
apevectrove stable/havana is not maintained https://github.com/openstack/trove/commits/stable/havana19:44
*** dstanek has quit IRC19:44
*** dstanek_afk is now known as dstanek19:44
fungiapevec: presumably they requested it as they were working toward incubation19:44
apevecit's not part of stable-maint in Havana19:44
apevecfungi, right, but they got integrated at Havana GA, i.e. will be part of the stable release in the stable/icehouse cycle19:45
fungiapevec: if the goal is to only include projects which are under stable-maint in requirements integration, then that probably merits wider discussion19:45
*** dhellmann is now known as dhellmann_19:45
apeveclike Ceilo and Heat Grizzly were not19:45
clarkbfungi: you can also `curl -XGET http://localhost:9200/_cluster/status?pretty=true` if you want to query it directly19:45
fungiclarkb: oh, cool!19:45
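For reference, a sketch of the read-only query most elasticsearch versions expose for this, assuming any cluster member answers on port 9200:

    # overall status (green/yellow/red) plus shard and node counts
    curl -s 'http://localhost:9200/_cluster/health?pretty=true'
    # the same, broken down per index
    curl -s 'http://localhost:9200/_cluster/health?level=indices&pretty=true'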
openstackgerritPaul Michali proposed a change to openstack/requirements: Update requests to 2.1.0 and add httmock to tests  https://review.openstack.org/7529619:45
apevecfungi, somebody should actively work on stable/* branch and that's not happening for trove stable/havana19:46
apevece.g. ceilo and heat team did work on stable/grizzly19:46
fungiapevec: i believe currently the list of projects which are in requirements integration uses the same list as the projects which wish to receive requirements sync updates, so there are a number of projects which are not official or even currently incubated in the list19:46
*** markmcclain has quit IRC19:47
*** krotscheck has quit IRC19:47
*** reed has quit IRC19:47
*** changbl has quit IRC19:47
*** coolsvap has quit IRC19:47
*** moted has quit IRC19:47
*** gyee has quit IRC19:47
*** dprince has quit IRC19:47
*** rfolco has quit IRC19:47
*** wchrisj has quit IRC19:47
*** ryanpetrello has quit IRC19:47
*** sdake has quit IRC19:47
*** smarcet has quit IRC19:47
*** mbacchi has quit IRC19:47
*** che-arne has quit IRC19:47
*** bogdando has quit IRC19:47
*** yassine has quit IRC19:47
*** miqui has quit IRC19:47
*** vishy has quit IRC19:47
*** unicell has quit IRC19:47
*** vponomaryov has quit IRC19:47
*** Hunner has quit IRC19:47
*** bradm has quit IRC19:47
*** akscram has quit IRC19:47
*** moted_ is now known as moted19:47
fungiapevec: but i recommend following up with hub_cap or SlickNik or one of the other trove heavies on this topic19:47
fungiapevec: if they don't want to support their stable/havana branch, then it's a one-line patch to remove them from the stable/havana branch of openstack/requirements:projects.txt19:48
*** vkozhukalov has joined #openstack-infra19:48
apevecok, I'll check w/ trove team19:49
openstackgerritClark Boylan proposed a change to openstack-infra/config: Optionally give mysql user all global privs.  https://review.openstack.org/7663419:49
sdaguejeblair: is it possible for you to update the er graph with background colors at warning and oh crap thresholds19:49
apevecthis is blocking requirements stable/havana so should be fixed one way or the other19:49
*** Ajaeger has joined #openstack-infra19:49
fungiapevec: if they're unresponsive, we can certainly make an in absentia call to just remove them anyway19:49
AjaegerInfra team, do we have some jenkins that has not been updated?19:50
fungiAjaeger: jobs which run on workers with names including "hpcloud-az2" tend to often be days behind on image updates because of a current problem in that provider19:51
fungiAjaeger: you can look at the top of the console log to see the name of the worker where a job ran19:51
apevecfungi, there was an attempt to fix their havana branch https://review.openstack.org/#/c/7538619:51
Ajaegerfungi, so  bare-precise-hpcloud-az2-1677190  is old?19:52
apevecbut ihrachys just gave up when issues started piling up19:52
fungiAjaeger: well, the vm is very new, but it was built from a nightly image which hasn't updated in a while. i'll find out how long19:52
*** cadenzajon_ has joined #openstack-infra19:52
*** gyee has joined #openstack-infra19:53
AjaegerThe change I'm missing is infra change Ia8f60c8a4b9d1b18583366d83ddb82dc61bff9f5 which was merged on Monday19:53
*** andreaf has quit IRC19:53
fungiAjaeger: just over 6 days oldf19:53
fungiold19:53
*** krotscheck has joined #openstack-infra19:53
*** sdake has joined #openstack-infra19:54
*** rfolco has joined #openstack-infra19:54
AjaegerIt changed the checkbuild link from logs to docs-draft19:54
*** reed has joined #openstack-infra19:54
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Add query for bug 1285323  https://review.openstack.org/7663519:54
*** yassine has joined #openstack-infra19:54
clarkbAjaeger: I don't think that depends on the slave19:54
clarkbya thats a zuul config. Do we have puppet running on zuul.o.o?19:54
Ajaegerclarkb: Ah. So, why does https://review.openstack.org/#/c/76096/ show as link for the checkbuild logs...19:55
fungiclarkb: yes, but also Last reconfigured: Wed Feb 26 2014 19:06:46 GMT+0000 (UTC)19:55
AjaegerOoops, I see that for most recent builds19:55
*** cadenzajon has quit IRC19:56
AjaegerIt's fine here: https://review.openstack.org/#/c/75978/19:56
fungiclarkb: and the success-pattern from that change appears in /etc/zuul/layout.yaml on zuul.o.o19:56
*** changbl has joined #openstack-infra19:57
*** jraim has quit IRC19:57
*** thuc has quit IRC19:57
*** mestery_ has joined #openstack-infra19:57
*** enikanorov__ has joined #openstack-infra19:57
AjaegerSo, working on operations-guide but failing in openstack-manuals19:57
* Ajaeger searches for a success for openstack-manuals...19:57
clarkbfungi: ya I think Ajaeger has indicated it is working on newer builds19:57
*** thuc has joined #openstack-infra19:57
clarkbanything run before zuul reloaded its config would have old links19:57
*** rossella_s has joined #openstack-infra19:58
*** unicell has joined #openstack-infra19:58
*** hogepodge is now known as 20WABDE5819:58
*** hogepodge has joined #openstack-infra19:58
*** pmathews has joined #openstack-infra19:58
*** che-arne has joined #openstack-infra19:58
*** vponomaryov has joined #openstack-infra19:58
*** bogdando has joined #openstack-infra19:58
*** vishy has joined #openstack-infra19:58
*** Hunner has joined #openstack-infra19:58
*** bradm has joined #openstack-infra19:58
*** akscram has joined #openstack-infra19:58
*** dkehn__ has joined #openstack-infra19:58
*** ArxCruz_ has joined #openstack-infra19:58
*** jraim has joined #openstack-infra19:58
Ajaegerclarkb: It seems to fail on gate-openstack-manuals-tox-doc-publish-checkbuild  but works on gate-operations-guide-tox-doc-publish-checkbuild19:58
*** jraim has quit IRC19:59
*** hogepodge has quit IRC19:59
*** pmathews has quit IRC19:59
*** che-arne has quit IRC19:59
*** bogdando has quit IRC19:59
*** vishy has quit IRC19:59
*** vponomaryov has quit IRC19:59
*** Hunner has quit IRC19:59
*** bradm has quit IRC19:59
*** akscram has quit IRC19:59
*** dripton_ has joined #openstack-infra19:59
openstackgerritMatthew Treinish proposed a change to openstack-infra/elastic-recheck: Add query for bug 1285323  https://review.openstack.org/7663519:59
clarkbAjaeger: are you sure it isn't just a timing thing?19:59
fungiperhaps we didn't reload the zuul config successfully between when that change merged and a few minutes ago when zuul was restarted?19:59
*** alexpilotti_ has joined #openstack-infra19:59
Ajaegerclarkb: I'm right now confused ;(19:59
*** thuc has quit IRC20:00
clarkboh hrm, the test that doesn't have the correct link reported after the one that does20:00
*** ruhe- has joined #openstack-infra20:00
*** thuc has joined #openstack-infra20:01
clarkbAjaeger: I bet it has something to do with order of matching20:01
AjaegerIs my regex somehow wrong?20:01
*** unicell1 has quit IRC20:01
*** mestery has quit IRC20:01
*** yassine has quit IRC20:01
*** alexpilotti has quit IRC20:01
*** ArxCruz has quit IRC20:01
*** 20WABDE58 has quit IRC20:01
*** ruhe has quit IRC20:01
*** branen has quit IRC20:01
*** ilyashakhat has quit IRC20:01
*** enikanorov_ has quit IRC20:01
*** dripton has quit IRC20:01
*** yassine has joined #openstack-infra20:01
*** miqui_ has quit IRC20:01
*** dims has quit IRC20:01
*** dkehn_ has quit IRC20:01
*** alexpilotti_ is now known as alexpilotti20:01
*** ruhe- is now known as ruhe20:01
*** mestery_ has quit IRC20:01
*** mestery_ has joined #openstack-infra20:01
*** thuc has quit IRC20:01
clarkbhttps://review.openstack.org/#/c/73185/2/modules/openstack_project/files/zuul/layout.yaml just above where you added the new lines is an entry for the manuals20:01
clarkbwhich may be winning20:01
*** ilyashakhat has joined #openstack-infra20:01
* clarkb reads zuul code20:01
*** dims has joined #openstack-infra20:01
*** thuc has joined #openstack-infra20:02
Ajaegerclarkb: That would explain it!20:02
*** SumitNaiksatam has quit IRC20:02
AjaegerI might have not noticed that it worked everywhere except in openstack-manuals...20:02
*** miqui has joined #openstack-infra20:02
Ajaegerclarkb: So, fix would be to copy the success-pattern to the entry above?20:03
*** sabari_ has joined #openstack-infra20:03
clarkbAjaeger: maybe, I am trying to understand how that is coalesced in zuul now20:03
*** amcrn has quit IRC20:03
*** sabari_ is now known as sabari20:03
fungiyeah, because we definitely have a variety of parameters combined from multiple matches onto a given job20:03
*** kiall has quit IRC20:04
fungiunless it's because one is a regex and the other is a literal job name?20:04
fungidoes the literal cause it to short-circuit subsequent matches?20:04
*** sabari is now known as 92AAAAI7R20:05
*** branen has joined #openstack-infra20:05
*** jraim has joined #openstack-infra20:05
*** hogepodge has joined #openstack-infra20:05
*** pmathews has joined #openstack-infra20:05
*** che-arne has joined #openstack-infra20:05
*** vponomaryov has joined #openstack-infra20:05
*** bogdando has joined #openstack-infra20:05
*** vishy has joined #openstack-infra20:05
*** Hunner has joined #openstack-infra20:05
*** bradm has joined #openstack-infra20:05
*** akscram has joined #openstack-infra20:05
Ajaegerfungi, clarkb: Thanks a lot for looking into this.20:05
*** kiall has joined #openstack-infra20:06
fungiAjaeger: thanks for bringing it to our attention... it does look like it's probably an unintentional corner-case20:06
*** hashar has joined #openstack-infra20:07
openstackgerritAndreas Jaeger proposed a change to openstack-infra/config: Use success-pattern for openstack-manuals  https://review.openstack.org/7663920:07
Ajaegerfungi: ;)20:07
*** freyes has quit IRC20:07
*** cadenzajon has joined #openstack-infra20:07
clarkbso the way it works is that regexes are 'metajobs' these are applied to non regex names when they are declared20:07
clarkbso you can just put the regex lines above the non regex lines and that should work20:07
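A hypothetical layout.yaml fragment showing the ordering clarkb describes; the regex "metajob" has to be declared before the literal job name so its attributes get applied. The pattern URL is illustrative, not copied from the real config.

    jobs:
      # metajob: declared first so literal jobs below pick up its attributes
      - name: ^.*-tox-doc-publish-checkbuild$
        success-pattern: http://docs-draft.openstack.org/...  # actual pattern elided
      # pre-existing literal entry: now inherits the success-pattern above
      # instead of masking it
      - name: gate-openstack-manuals-tox-doc-publish-checkbuild
        # whatever project-specific settings were already here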
Ajaegerclarkb: ok, will update the change.20:08
openstackgerritafazekas proposed a change to openstack-infra/devstack-gate: Allow tempest to dump the ovs db  https://review.openstack.org/7664020:08
*** dstufft_ has joined #openstack-infra20:09
*** dkehn__ is now known as dkehn_20:09
*** SumitNaiksatam has joined #openstack-infra20:09
*** nati_uen_ has joined #openstack-infra20:10
openstackgerritAndreas Jaeger proposed a change to openstack-infra/config: Fix success-pattern usage for openstack-manuals  https://review.openstack.org/7663920:10
clarkbfungi: so it doesn't short circuit, but when a normal job is evaluated it can't apply the metajob stuff20:10
Ajaegerclarkb: here's the updated patch - hope it does the right think.20:10
*** dripton_ is now known as dripton20:10
Ajaegers/think/thing/20:11
*** doddstack has joined #openstack-infra20:11
*** oubiwann_ has quit IRC20:11
*** salv-orlando_ has joined #openstack-infra20:11
*** Ryan_Lane2 has joined #openstack-infra20:12
*** rlandy has joined #openstack-infra20:12
*** wenlock has quit IRC20:12
*** CaptTofu_ has joined #openstack-infra20:12
*** Sukhdev has joined #openstack-infra20:13
*** SumitNaiksatam has quit IRC20:13
*** smarcet has joined #openstack-infra20:13
*** dkehn__ has joined #openstack-infra20:13
clarkbAjaeger: yup lgtm20:13
clarkbfungi: cluster looks happy now20:14
Ajaegerclarkb and fungi: thanks a lot!20:15
*** ildikov_ has joined #openstack-infra20:15
*** dstufft has quit IRC20:15
*** dstufft_ is now known as dstufft20:15
*** amrith_ has joined #openstack-infra20:15
*** oubiwann_ has joined #openstack-infra20:17
fungiclarkb: yep, all clear to stop services on es5 then?20:18
*** dims has quit IRC20:18
*** yassine has quit IRC20:18
*** cadenzajon_ has quit IRC20:18
*** smarcet1 has quit IRC20:18
*** CaptTofu has quit IRC20:18
*** nati_ueno has quit IRC20:18
*** thedodd has quit IRC20:18
*** dkliban has quit IRC20:18
*** sandywalsh has quit IRC20:18
*** salv-orlando has quit IRC20:18
*** Ryan_Lane has quit IRC20:18
*** dkehn has quit IRC20:18
*** amrith has quit IRC20:18
*** salv-orlando_ is now known as salv-orlando20:18
*** amrith_ is now known as amrith20:18
clarkbfungi: yup20:19
fungidone20:19
*** dims has joined #openstack-infra20:19
*** sandywalsh has joined #openstack-infra20:19
*** flaper87 has quit IRC20:20
*** flaper87 has joined #openstack-infra20:20
*** afazekas has joined #openstack-infra20:20
fungiclarkb: later when es4 gets to the chopping block, is there anything we need to do to force a graceful master election first, or just let it sort that out on its own?20:20
*** yassine has joined #openstack-infra20:20
*** cadenzajon has quit IRC20:21
*** cadenzajon has joined #openstack-infra20:21
*** ildikov_ has quit IRC20:21
*** ildikov_ has joined #openstack-infra20:21
*** dkliban has joined #openstack-infra20:21
*** dkliban has quit IRC20:21
*** dkliban has joined #openstack-infra20:21
clarkbit will sort that out on its own, but let me read es docs to see if we can force a different master to be elected (I have never worried about it in the past and it has been fine as long as the network doesn't partition)20:21
*** shashank__ has quit IRC20:21
clarkbya doesn't look like we can force a different master, it's fine20:21
*** yolanda_ has joined #openstack-infra20:21
*** lcheng_ has quit IRC20:22
*** SumitNaiksatam has joined #openstack-infra20:22
*** wenlock has joined #openstack-infra20:23
*** oubiwann_ has quit IRC20:23
* clarkb is going to grab lunch now that cluster seems to be happily recovering20:24
*** oubiwann_ has joined #openstack-infra20:24
mtreinishclarkb, fungi, jeblair: shouldn't that be pointed at the infra pypi mirror?: http://logs.openstack.org/35/76635/2/check/gate-elastic-recheck-pep8/b5a7e8c/console.html#_2014-02-26_20_17_58_34620:26
jeblairmtreinish: it doesn't track openstack/requirements so it doesn't use the mirror20:27
mtreinishahh ok that makes sense20:27
*** mspreitz has joined #openstack-infra20:27
mspreitzHelp, I made a procedural mistake.  I submitted a patch for review without making a branch for it.  How do I recover?20:28
fungimspreitz: how do you want to recover? (what end state do you seek?)20:29
mspreitzFirst and foremost, no harm to anything else.  Second, eventually a patch submitted correctly.  The edits are few and easily reproduced.20:30
openstackgerritA change was merged to openstack-infra/config: Fix success-pattern usage for openstack-manuals  https://review.openstack.org/7663920:30
fungimspreitz: so, as long as the state of your work is captured in the review, you can always pull it back to your local system from gerrit20:31
jeblairfungi, clarkb, sdague: i don't think we have the space to hold zuul gearman server debug logs.  :(  i think 3 days will take like 97G.20:31
jeblairso i think we're going to have to revert the logging config patch for now, then rework gear's logging so we can get useful information like that with less verbosity, then try again.20:31
jeblaireither that or attach a cinder volume for logs20:31
mspreitzLet me start with the first part.  Having submitted something on master, does that harm anything besides my submission?20:32
fungimspreitz: nope, other than it probably has a blank topic in review.openstack.org, but that's not at all uncommon20:32
fungimspreitz: in which case you can reset your local master branch to the same commit as the remote master with 'git reset --hard origin/master' and then you can retrieve your change into a local topic branch with 'git review -d NNNNN' (where NNNNN is the change number in gerrit)20:32
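In shell form, the recovery fungi outlines (NNNNN stands in for the gerrit change number, it is not a real value):

    cd path/to/your/clone
    git checkout master
    git reset --hard origin/master   # put local master back on the remote tip
    git review -d NNNNN              # pull the uploaded change into its own topic branch
    # rework as needed, then resubmit with: git review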
openstackgerritA change was merged to openstack-infra/os-loganalyze: Remove tox locale overrides  https://review.openstack.org/7221820:33
*** jamielennox is now known as jamielennox|away20:34
mspreitzfungi: thanks.  I'll try that when I get a chance.20:34
fungimspreitz: git review -l will also list open reviews for the current project, in case you're looking for an easy way to spot the review number for yours20:35
mspreitzBTW, I noticed one other harm to my submission, which is a bug fix: it is not listed in the bug.20:36
fungimspreitz: right, you'll want to make sure at the end of your commit message you include a separate line like Closes-Bug: #XXXXXX20:37
*** Ryan_Lane1 has quit IRC20:38
fungimspreitz: however, if that wasn't present on the initial patchset for your change, the bug doesn't get automatically updated until your change gets merged, so you may want to set the bug to in-progress, assign it to yourself, and leave a comment with the url to your proposed fix20:38
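A hypothetical commit message showing where that footer goes (the summary line and bug number are made up):

    Fix frobnicator timeout handling

    Longer explanation of the change goes here.

    Closes-Bug: #1234567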
*** mestery_ is now known as mestery20:38
mspreitzI had the Closes-Bug line in my original commit message.20:38
*** lcostantino has quit IRC20:39
*** dhellmann_ is now known as dhellmann20:39
fungimspreitz: what is the url to your review and your bug?20:39
clarkbjeblair wow20:39
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Add git05 to cacti and gerrit replication  https://review.openstack.org/7664920:42
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Add git05 to the git.openstack.org haproxy farm  https://review.openstack.org/7665020:42
fungiclarkb: jeblair: git05 is up (centos 6.5 8g performance pvhvm) and those ^ do what we need next20:42
*** yolanda_ has quit IRC20:43
*** alexpilotti has quit IRC20:43
*** mattymo has quit IRC20:43
*** mattymo has joined #openstack-infra20:44
jeblairfungi: neato.  aprvd20:44
jeblair(the first)20:44
jeblairfungi: has create_cgit_repos run on 05?20:44
*** mspreitz has quit IRC20:45
*** jhesketh has joined #openstack-infra20:45
fungijeblair: yep, looks like it20:45
fungijeblair: thanks. i'll keep tabs on it and then we can add it with the second patch and start cycling the other 4 out and replacing them20:46
jheskethMorning20:46
clarkbyou can disable nodes with the admin socket to nicely remove nodes20:46
fungiright now /var/lib/git has empty repos and the interface is up at http://git05.openstack.org:8080/cgit20:47
fungiclarkb: in haproxy?20:47
*** rhsu has joined #openstack-infra20:47
fungiclarkb: yeah, will do when the time comes20:47
clarkbyup20:47
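A sketch of the admin-socket dance clarkb mentions; the socket path and the backend/server names here are guesses rather than the real haproxy configuration.

    # drain a backend member before rebuilding it
    echo "disable server balance_git_https/git01.openstack.org" \
        | sudo socat stdio /var/run/haproxy/stats
    # ...rebuild the node, then put it back in rotation
    echo "enable server balance_git_https/git01.openstack.org" \
        | sudo socat stdio /var/run/haproxy/stats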
*** 92AAAAI7R is now known as sabari20:48
*** lcheng_ has joined #openstack-infra20:50
openstackgerritDavanum Srinivas (dims) proposed a change to openstack-infra/config: Support filtering by review id(s)  https://review.openstack.org/7244620:53
*** mfer has joined #openstack-infra20:54
openstackgerritBhuvan Arumugam proposed a change to openstack-infra/config: log analyzer for openstack IRC logs  https://review.openstack.org/7244520:55
*** bhuvan has joined #openstack-infra20:55
*** markmcclain has joined #openstack-infra20:56
*** markmcclain1 has joined #openstack-infra20:59
*** ryanpetrello has joined #openstack-infra20:59
*** mrodden1 has joined #openstack-infra20:59
dhellmannjeblair: updated https://review.openstack.org/#/c/7440821:00
*** markmcclain has quit IRC21:00
*** mrodden has quit IRC21:01
*** oubiwann_ has quit IRC21:02
*** denis_makogon_ has joined #openstack-infra21:03
openstackgerritA change was merged to openstack-infra/config: Add git05 to cacti and gerrit replication  https://review.openstack.org/7664921:04
openstackgerritDoug Hellmann proposed a change to openstack-infra/config: Add oslotest jobs for oslo.config  https://review.openstack.org/7665421:06
openstackgerritChad Lung proposed a change to openstack-infra/config: Add DevStack job for Barbican  https://review.openstack.org/7453021:06
Ajaegerfungi, clarkb: The docs-draft change works as expected - just tested with https://review.openstack.org/76647 ! Thanks again!21:07
*** yassine has quit IRC21:09
*** Ajaeger has quit IRC21:12
*** CaptTofu_ has quit IRC21:14
*** shashank_ has joined #openstack-infra21:14
fungiclarkb: jeblair: do you recall whether we need a gerrit restart to reread the replication config? i want to say so...21:14
*** CaptTofu has joined #openstack-infra21:14
*** CaptTofu has quit IRC21:14
*** CaptTofu has joined #openstack-infra21:14
jeblairfungi: hrm, i don't recall.  you could try replicating one project and see...21:15
fungiyeah, that's where i was headed next. thanks21:15
fungiassuming it doesn't just magically start working once puppet updates the config here in a bit21:16
*** bhuvan has quit IRC21:16
*** sandywalsh has quit IRC21:16
jeblairi'm self-approving https://review.openstack.org/#/c/76657/ and will restart zuul so we don't run out of disk21:16
*** dhellmann_ has joined #openstack-infra21:17
*** NikitaKonovalov is now known as NikitaKonovalov_21:17
*** jd__` has joined #openstack-infra21:17
*** dims_ has joined #openstack-infra21:17
fungidid we lose openstackgerrit? or is it just slow to the party on that change?21:18
*** bookwar1 has joined #openstack-infra21:18
jeblairfungi: i'm thinking it's lost.  i'll kick it.21:20
*** openstackgerrit has quit IRC21:20
*** dhellmann has quit IRC21:20
*** dims has quit IRC21:20
*** jd__ has quit IRC21:20
*** bookwar has quit IRC21:20
*** sbadia has quit IRC21:20
*** samalba has quit IRC21:20
*** shortstop has quit IRC21:20
*** sirushti has joined #openstack-infra21:20
*** jd__` is now known as jd__21:20
*** Ryan_Lane has joined #openstack-infra21:20
jeblair    sender = self.ssl.write if self.ssl else self.socket.send21:20
jeblairAttributeError: 'NoneType' object has no attribute 'send'21:20
*** bhuvan_ has joined #openstack-infra21:20
jeblairyeah ^ that again21:20
fungiyep, ping timeout finally registered21:20
*** sbadia has joined #openstack-infra21:21
jog0sdague: so I think I introduced instability into the voting grenade job yesterday21:21
*** dhellmann_ is now known as dhellmann21:21
sdaguejog0: ok, how?21:21
jog0sdague jeblair: also https://review.openstack.org/#/c/76658/21:21
*** openstackgerrit has joined #openstack-infra21:21
jog0sdague: so my grenade patch resulted in n-cpu not being killed when we expect it to21:21
*** samalba has joined #openstack-infra21:22
jog0sdague: I think we will see this failure in regular grenade: http://logs.openstack.org/82/76582/2/check/check-grenade-dsvm-partial-ncpu/53ed49c21:22
fungiyeah, it looks like there's definitely the potential for other settings to get wiped without the append operator21:22
sdaguejog0: so https://review.openstack.org/#/c/76658/ doesn't change anything21:23
*** david_lyle_ has joined #openstack-infra21:23
*** coolsvap_ has joined #openstack-infra21:23
*** mestery_ has joined #openstack-infra21:23
sdaguebecause there is no other place that localrc for grenade should be written21:23
jog0look up21:23
*** gokrokve_ has joined #openstack-infra21:24
jog0line 5321:24
*** davidhadas_ has joined #openstack-infra21:24
*** e0ne has quit IRC21:24
jog0I am getting 500 errors from logstash21:24
fungijog0: on every query?21:25
jog0fungi:  just most21:25
*** Ryan_Lane has quit IRC21:25
*** bhuvan has joined #openstack-infra21:25
jeblairjog0: line 53 is writing the devstack localrc21:25
sdagueyeh, what jeblair said21:25
jeblairjog0: line 251 is writing the grenade localrc21:25
jog0jeblair: ohh21:25
fungi$BASE/new/grenade/localrc specifically21:25
*** jeckersb is now known as jeckersb_gone21:25
jog0thanks21:25
jeblairi think gerritbot is happy gain21:26
*** rcleere_ has joined #openstack-infra21:26
sdaguebut, if there is a different grenade race, we should revert the patch you think is an issue21:26
fungiseems like a safe enough change in case we decide to start writing to $BASE/new/grenade/localrc prior to that line eventually21:26
sdaguethen come back to it21:26
jeblairfungi: hrm.  i don't think we should do that unless we really mean to.21:26
* anteaya is back, after nearly getting hit by a car and getting off the phone with the police21:27
fungijeblair: fair enough21:27
jeblairfungi: because it means a change in grenade could surprise us21:27
jeblairanteaya: oh no!  are you okay?21:27
clarkbok food has been consumed21:27
clarkbanteaya: :/21:27
openstackgerritJoe Gordon proposed a change to openstack-infra/devstack-gate: Write DO_NOT_UPGRADE_SERVICES to grenade's localrc not devstacks  https://review.openstack.org/7665821:27
jog0sdague: that should be the fix I was looking for21:27
sdagueoh... so you were *ALWAYS* doing it :)21:27
sdaguejog0: so that's still wrong21:28
jog0sdague: doh21:28
*** rcleere has quit IRC21:28
*** rcleere_ is now known as rcleere21:28
*** Mithrandir has quit IRC21:28
jog0sdague: ohh I see21:28
jog0derp21:28
jog0$BASE/new/grenade/localrc21:28
sdaguedo the translation to the line in question you care about in -wrap21:28
anteayajeblair: yes I am fine, missed me by a whole 12 inches21:28
*** sandywalsh has joined #openstack-infra21:29
sdaguethen put that variable in the existing block21:29
sdaguethat's how we do everything else here21:29
anteayaotherwise I would be dead or no longer able to use the lower half of my body21:29
*** dhellman_ has joined #openstack-infra21:29
jeblairanteaya: i'm glad you're okay!21:29
anteayaI figure it was some high school good ole boy impressing his friends, passing a car with a pedestrian on the other side21:29
anteayajeblair: thanks, just in shock a bit21:29
anteayaclarkb: thanks21:30
sdagueso actually what you are saying is you didn't actually change the partial upgrade case21:30
* anteaya will get something to eat and read scrollback21:30
*** ildikov_ has quit IRC21:31
*** ildikov_ has joined #openstack-infra21:31
*** arborism has joined #openstack-infra21:31
openstackgerritJoe Gordon proposed a change to openstack-infra/devstack-gate: Write DO_NOT_UPGRADE_SERVICES to grenade's localrc not devstacks  https://review.openstack.org/7665821:32
jog0sdague: third time's a charm?21:32
*** SumitNaiksatam has quit IRC21:32
*** coolsvap has joined #openstack-infra21:32
sdaguejog0: no, really, do the translation elsewhere, and lets only write to the grenade log once21:33
jog0fungi: so logstash.o.o is unusable21:33
jeblairsdague: i think i need you to explain to me what was wrong with ps221:33
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add support for Fedora 20 to nodepool  https://review.openstack.org/6951021:33
jog0sdague: not sure what you mean by do the translation elsewhere21:33
jeblairsdague, jog0: because i think it was correct and ps3 looks very wrong to me.21:33
sdaguejeblair: it was not writing to grenade localrc21:33
jeblairsdague: it was writing to $BASE/new/grenade/localrc21:34
jeblairsdague: where should it have been writing?21:34
fungiclarkb: "If it helps, I received a 500 Internal Server Error from: api/search"21:34
*** Mithrand1r has joined #openstack-infra21:34
sdaguejeblair: no, it was writing to >>localrc21:34
*** gokrokve has quit IRC21:34
clarkbfungi: ?21:34
jeblairsdague: ah you are right21:34
*** dhellmann has quit IRC21:34
*** Mithrandir has joined #openstack-infra21:34
*** dhellmann has joined #openstack-infra21:34
*** mestery has quit IRC21:34
fungiclarkb: that's the message kibana gives for any search at the moment21:34
clarkboh scrollback21:34
jog0https://review.openstack.org/#/c/76658/2/devstack-vm-gate.sh21:34
*** _sirushti has joined #openstack-infra21:34
*** jhesketh has quit IRC21:34
*** dhellman_ has quit IRC21:34
jog0so the only issue is wrong file21:35
*** coolsvap1 has quit IRC21:35
*** davidhadas has quit IRC21:35
*** jcooley_ has quit IRC21:35
*** david-lyle has quit IRC21:35
*** rossella_s has quit IRC21:35
*** apevec has quit IRC21:35
*** Mithrandir has quit IRC21:35
jog0but right block?21:35
jeblairsdague: but the correction to that should have just been to change "localrc" to "$BASE/new/grenade/localrc" on line 25921:35
jog0I thought sdague said move it somewhere else21:35
jeblairor...21:35
*** rossella_s has joined #openstack-infra21:35
*** SumitNaiksatam has joined #openstack-infra21:35
sdaguejeblair: so I think we should be doing the translation in -wrap21:35
jeblairhehe, move the cd above the first heredoc and drop the long path from that one too.21:35
sdagueand there should be21:35
*** sirushti has quit IRC21:35
*** _sirushti is now known as sirushti21:35
*** zhiyan_ has quit IRC21:35
*** coolsvap_ has quit IRC21:35
sdagueDO_NOT_UPGRADE_SERVICES=$DO_NOT_UPGRADE_SERVICES21:35
clarkbfungi: logstash.o.o:/opt/kibana/kibana/KibanaConfig.rb21:35
sdaguein the grenade block21:36
fungijeblair: jog0: it sounded like the suggestion was to always put that variable in the grenade localrc, but define the value of that variable in an earlier conditional prior to the heredoc21:36
*** jcooley_ has joined #openstack-infra21:36
*** sandywalsh has quit IRC21:36
clarkbfungi: notice it still has all of the nodes in that list. We may want to manually restrict it to the new ones for now21:36
jog0sdague: ahh21:36
jog0so even if it's set to nothing we should include it?21:36
*** dkliban has quit IRC21:36
*** zhiyan_ has joined #openstack-infra21:36
*** sandywalsh has joined #openstack-infra21:36
openstackgerritA change was merged to openstack-infra/config: Revert "Set Zuul gear server logs to debug"  https://review.openstack.org/7665721:36
sdagueso all the grenade stuff happens at once, just to keep it easy to not run around realizing we modify it somewhere else21:36
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add support for Fedora 20 to nodepool  https://review.openstack.org/6951021:36
jeblairsdague: wfm21:36
sdaguejog0: yes, it should work21:36
sdagueand if it doesn't, we have an issue21:36
jog0thats the part I was missing21:37
*** ildikov_ has quit IRC21:37
clarkbfungi: did you want to stab at that? you will need to restart the kibana service afterwards too21:37
fungiclarkb: oh, it won't dynamically select just the cluster members which are up and responding?21:37
*** ildikov_ has joined #openstack-infra21:37
clarkbfungi: apparently not21:37
sdaguethe conditional write is logically fine, but I'm really concerned that with the number of localrc files being written out we'll miss something in review if grenade's localrc gets written to more than once21:37
*** oubiwann_ has joined #openstack-infra21:37
*** mestery_ is now known as mestery21:38
fungiclarkb: should i slip in a change to remove all the old servers from the kibana config earlier than we remove them from everywhere else in that case?21:38
sdaguehonestly, bonus points for making the grenade_localrc write into a function, just to keep that all isolated21:38
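A hypothetical shape of what sdague is asking for: do the conditional translation once in the -wrap script, then keep every write to grenade's localrc in a single function so it only ever happens in one place. Variable and function names here are illustrative, not the real devstack-gate code.

    # in the -wrap script (illustrative): translate the job flavor into the
    # variable once, up front
    if [[ "$DEVSTACK_GATE_GRENADE" == "partial-ncpu" ]]; then
        export DO_NOT_UPGRADE_SERVICES="n-cpu"
    else
        export DO_NOT_UPGRADE_SERVICES=""
    fi

    # in devstack-vm-gate.sh (illustrative): one function owns grenade's localrc
    function setup_localrc_grenade {
        local rc=$BASE/new/grenade/localrc
        echo "DO_NOT_UPGRADE_SERVICES=$DO_NOT_UPGRADE_SERVICES" >> $rc
    }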
jog0sdague: thats why I put it right under the other block  in https://review.openstack.org/#/c/76658/2/devstack-vm-gate.sh21:38
clarkbfungi: ya probably21:38
clarkbfungi: or, remove it by hand for now21:38
fungiclarkb: or is there a chance that it will try to reach a shard for which all replicas are on the old servers?21:38
*** derekh has joined #openstack-infra21:38
clarkbfungi: yes, but elasticsearch should do internal routing for us21:39
sdaguejog0: well, the fact that jeblair missed the issue, means I want to be extra careful here, because it's easy to miss21:39
openstackgerritafazekas proposed a change to openstack/requirements: Remove incompatible boto versions  https://review.openstack.org/7666321:39
fungioh, got it21:39
clarkbso you can hit any node regardless of the shards it contains and it will figure it out for you21:39
fungiclarkb: in that case i'll stop puppet on logstash.o.o and make that config change manually on the server for the moment21:39
clarkbok21:39
*** rossella_s has quit IRC21:40
fungisudo vi /opt/kibana/kibana/KibanaConfig.rb21:40
fungiha21:40
fungiwaiting for the inevitable "fungi is not allowed to sudo on this host" joke21:40
*** rossella_s has joined #openstack-infra21:40
jog0sdague: where should the translation be done?21:41
fungiclarkb: jog0: logstash queries seem to be working again21:42
jog0fungi: yup21:42
jog0thanks21:42
sdaguejog0: I assume over in the -wrap script somewhere, that's where we do all the conditional translations21:44
*** rossella_ has joined #openstack-infra21:44
openstackgerritBhuvan Arumugam proposed a change to openstack-infra/config: log analyzer for openstack IRC logs  https://review.openstack.org/7244521:46
*** rossella_ has quit IRC21:47
*** rossella_s has quit IRC21:47
openstackgerritJoe Gordon proposed a change to openstack-infra/devstack-gate: Write DO_NOT_UPGRADE_SERVICES to grenade's localrc not devstacks  https://review.openstack.org/7665821:48
jog0sdague: ^21:48
*** rossella_s has joined #openstack-infra21:48
sdaguejog0: tabs21:48
*** prad has quit IRC21:48
jog0ohh pastemode21:49
* sdague wonders how interested jeblair would be in bash8 running on d-g21:49
*** rossella_s has quit IRC21:50
*** sarob has quit IRC21:50
*** rossella_s has joined #openstack-infra21:50
*** prad has joined #openstack-infra21:51
*** sarob has joined #openstack-infra21:51
openstackgerritJoe Gordon proposed a change to openstack-infra/devstack-gate: Write DO_NOT_UPGRADE_SERVICES to grenade's localrc not devstacks  https://review.openstack.org/7665821:51
jeblairsdague: probably ok21:51
jog0sdague: working back down the stack - filename:"console.html" AND message:"+ echo \'The following services are still running:  nova-compute\'" AND NOT build_name:"check-grenade-dsvm-partial-ncpu"21:51
jog0that is the issue21:52
*** oubiwann_ has quit IRC21:52
harlowjahi guys, for the following, i think a reverify would be correct right21:52
harlowjahttp://logs.openstack.org/20/54220/77/check/gate-taskflow-tox-py27-sa8-mysql/374be3e/console.html21:52
jog0that I introduced21:52
harlowja"git.openstack.org: Temporary failure in name resolution" ?21:52
sdaguejog0: sure, but that's not voting, right?21:52
jog0sdague: not the AND NOT21:52
jeblairharlowja: yeah, i think there's already a bug for that21:52
harlowjakk21:52
harlowjathx jamespage21:53
jog0I only see two hits so far21:53
harlowjaoops jeblair21:53
jog0so not sure21:53
sdagueoh21:53
sdaguewait, why is that not shutting down NCPU21:53
jeblairharlowja: 127038221:53
jog0sdague: exactly21:54
harlowjayup thx jeblair found it21:54
jog0sdague: for some reason that bug happens a lot more on the current partial-ncpu job21:54
jog0which wasn't running partial-ncpu21:54
sdagueheh21:55
sdagueoh, well, remember es is still in world of hurt21:55
*** e0ne has joined #openstack-infra21:55
jog0sdague: true21:55
sdagueso you just might be missing data21:55
*** sarob has quit IRC21:55
jog0but look at filename:"console.html" AND message:"+ echo \'The following services are still running:  nova-compute\'" AND  build_name:"check-grenade-dsvm-partial-ncpu"21:55
jog0(without the NOT)21:55
sdaguewhich makes me feel like we might want to revert the patch you think caused this21:56
*** mfer has quit IRC21:56
*** jhesketh has joined #openstack-infra21:56
*** jhesketh has quit IRC21:56
jog0anyway I will continue to babysit this, and if it is bad we revert until understand what went wrong21:56
*** jhesketh has joined #openstack-infra21:56
*** sarob has joined #openstack-infra21:56
*** e0ne has quit IRC21:56
*** jhesketh__ has joined #openstack-infra21:56
sdaguesure21:56
sdagueI'm going to wait for results on your patch before voting on it21:56
*** e0ne has joined #openstack-infra21:57
sdagueto make sure that's working in the base grenade case21:57
jeblairfungi, clarkb: i'm trying a new thing with this zuul restart21:57
*** markwash has joined #openstack-infra21:57
jeblairfungi, clarkb: i've shut down the mergers already so that no new builds will start21:57
fungioh! neat idea21:57
jeblairfungi, clarkb: i'm thinking that will help stabilize things a little bit during the actual restart so that, say, the gate queue doesn't launch a bunch of jobs right before i kill it21:58
*** e0ne has quit IRC21:58
*** e0ne has joined #openstack-infra21:58
*** CaptTofu has quit IRC21:59
clarkbjeblair: will that cause those builds to fail instead because the merger can't merge anything?21:59
jeblairi'm also going to "update node set state=4 where state=3;" to speed things along a bit.21:59
clarkbif it works \o/21:59
jeblairclarkb: no because gerrit's the thing that merges things.22:00
SukhdevFolks we are running devstack (stack.sh) and are hitting following issue - wonder if somebody could shed some light22:00
Sukhdev2014-02-26 13:46:05.451 | + git clone git://git.openstack.org/openstack/nova.git /opt/stack/nova22:00
Sukhdev2014-02-26 13:46:26.358 | Cloning into '/opt/stack/nova'...22:00
Sukhdev2014-02-26 13:47:26.365 | fatal: The remote end hung up unexpectedly22:00
Sukhdev2014-02-26 13:47:26.369 | fatal: early EOF22:00
sdagueso does anyone know if git.openstack.org is getting overloaded, because that's been happening a bit in logs22:01
*** sdake_ has quit IRC22:01
fungisdague: it's pretty loaded. i'm adding a fifth git server to the farm and rebuilding the other 4 on performance+pvhvm servers22:01
*** sarob has quit IRC22:01
sdagueok22:01
clarkbsdague: are you behind a proxy?22:01
sdaguenot I22:02
clarkbgah Sukhdev ^22:02
sdagueI mean I've seen a bunch of people reporting the issue22:02
jeblairrestarting zuul22:02
clarkbsdague: right, there are two incarnations: one is being behind a proxy and hitting the problem 100% of the time22:02
*** julim has quit IRC22:02
clarkbor not being behind a proxy and hitting it occasionally22:02
*** mgagne has quit IRC22:02
sdaguethis was proposed - https://review.openstack.org/#/c/74910/22:02
sdagueto address it22:02
*** mgagne has joined #openstack-infra22:03
*** melwitt1 has joined #openstack-infra22:03
*** e0ne has quit IRC22:03
sdagueI guess it just makes me wonder if we are doing ourselves a service by pointing most people to git.o.o vs. github here, if we are hitting load issues.22:03
Sukhdevclarkb: ?22:04
jeblairsdague: we are doing people a service because we are able to expand the capacity, which we're doing now.22:04
clarkbSukhdev: are you behind a proxy?22:04
*** mrmartin has joined #openstack-infra22:04
Sukhdevclarkb: what do you mean by that? - firewal?22:04
clarkbSukhdev: yes firewall or proxy22:05
*** mrmartin has quit IRC22:05
*** melwitt has quit IRC22:05
Sukhdevclarkb: yes, we are - but, we checked with the firewall folks, they claim that they do not see any drops or connection resets22:06
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Remove deprecated elasticsearch nodes from kibana  https://review.openstack.org/7666922:06
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Remove old elasticsearch cluster members  https://review.openstack.org/7605122:06
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Update logstash doc for an elasticsearch cluster  https://review.openstack.org/7657422:06
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Move primary elasticsearch discover node  https://review.openstack.org/7657522:06
fungiclarkb: is https://review.openstack.org/76051 there safe?22:06
clarkbSukhdev: is port 9418 allowed through?22:06
sdaguejeblair: ok22:06
jeblairzuul is restarted22:07
fungiclarkb: er, sorry, meant https://review.openstack.org/7666922:08
Sukhdevclarkb: I believe so. I ran a script doing a git clone of the same repository over 100 times - no failures. But when I run devstack (stack.sh) I see occasional failures22:08
jeblairSukhdev: that's really interesting22:08
*** CaptTofu has joined #openstack-infra22:08
Sukhdevjeblair: yes it is I am so confused22:09
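One way to chase that outside of stack.sh, a sketch assuming outbound tcp/9418 is open and there is scratch space for repeated clones:

    # repeat the clone with packet tracing so an intermittent hangup
    # leaves something to look at afterwards
    for i in $(seq 1 20); do
        rm -rf /tmp/nova-clone-test
        GIT_TRACE=1 GIT_TRACE_PACKET=1 \
            git clone git://git.openstack.org/openstack/nova.git \
            /tmp/nova-clone-test > /tmp/clone-$i.log 2>&1 || echo "run $i failed"
    done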
*** Mithrand1r is now known as Mithrandir22:09
*** smarcet has quit IRC22:10
*** mgagne has quit IRC22:10
fungiclarkb: jeblair: after a bit of playing around with gerrit replication, it does indeed seem that adding a new replication target is going to require a gerrit restart22:11
*** rossella_s has quit IRC22:11
jeblairfungi: i think we should go ahead and restart it whenever you are ready22:11
jeblairit's fast so i think we can just announce it in irc22:12
clarkbwfm22:12
*** openstackstatus has joined #openstack-infra22:12
*** Sukhdev has quit IRC22:12
fungi#status alert gerrit service on review.openstack.org will be down momentarily for a restart to add an additional git server22:13
openstackstatusNOTICE: gerrit service on review.openstack.org will be down momentarily for a restart to add an additional git server22:13
*** ChanServ changes topic to "gerrit service on review.openstack.org will be down momentarily for a restart to add an additional git server"22:13
*** mgagne has joined #openstack-infra22:13
*** wchrisj has joined #openstack-infra22:14
*** DuncanT has joined #openstack-infra22:14
fungi#status ok22:14
*** ChanServ changes topic to "Discussion of OpenStack Project Infrastructure | Docs http://ci.openstack.org/ | Bugs https://launchpad.net/openstack-ci | Code https://git.openstack.org/cgit/openstack-infra/"22:14
*** wchrisj is now known as Guest3560922:14
*** wchrisj_ has quit IRC22:17
clarkbfungi: es cluster looks green again. I think you can stop es on 04 whenever you are ready22:19
fungiwe're closer--now it's at least trying to replicate, though we never puppeted the known_hosts for the gerrit2 user on review.o.o22:20
jeblairfungi: interesting problem since each of the git servers has a different key22:21
fungijeblair: agreed. we'd presumably need to puppet each in advance of adding to replication configuration22:21
*** sarob has joined #openstack-infra22:21
*** Ryan_Lane has joined #openstack-infra22:22
clarkbwe could cave and use exported resources now that we have puppetdb22:22
jeblairfungi: yeah.  it's not ideal.  you'll just 'ssh' by hand now?22:22
fungihowever, i wonder whether gerrit caches the known_hosts file...22:22
fungialready did that part22:22
clarkbexported resources can copy the key from gitXX to puppetdb when puppet runs on gitXX then when puppet runs on gerrit grab that value from puppetdb and put it in place22:22
fungibut it still seems to be logging Feb 26 22:21:23 git05 sshd[18001]: Received disconnect from 198.101.231.251: 3: com.jcraft.jsch.JSchException: reject HostKey: git05.openstack.org22:22
jeblairclarkb: that actually sounds like the correct solution.22:22
clarkbexported resources tend to be a bit ugly because you have to double tap puppet22:22
clarkband require puppetdb but we have one of those now22:23
jeblairclarkb: but are eventually consistent22:23
clarkbjeblair: correct22:23
fungigrr! https://review.openstack.org/Documentation/config-replication.html "Host keys for any destination SSH servers must appear in the user’s ~/.ssh/known_hosts file, and must be added in advance, before Gerrit starts. If a host key is not listed, Gerrit will be unable to connect to that destination, and replication to that URL will fail."22:24
* fungi should memorize all of gerrit's documentation, clearly22:24
fungi_must be added in advance, before Gerrit starts_22:24
jeblairoh well.  let's kick it again.22:25
clarkbthat caching22:25
fungiyep22:25
*** Guest35609 has quit IRC22:25
fungi#status alert gerrit service on review.openstack.org will be down momentarily for another brief restart--apologies for the disruption22:25
openstackstatusNOTICE: gerrit service on review.openstack.org will be down momentarily for another brief restart--apologies for the disruption22:25
*** ChanServ changes topic to "gerrit service on review.openstack.org will be down momentarily for another brief restart--apologies for the disruption"22:25
*** pdmars has quit IRC22:26
*** mrodden has joined #openstack-infra22:26
*** Ryan_Lane has quit IRC22:26
*** sarob has quit IRC22:26
*** wchrisj_ has joined #openstack-infra22:26
fungi#status ok22:27
*** ChanServ changes topic to "Discussion of OpenStack Project Infrastructure | Docs http://ci.openstack.org/ | Bugs https://launchpad.net/openstack-ci | Code https://git.openstack.org/cgit/openstack-infra/"22:27
*** mrodden1 has quit IRC22:27
dhellmannis the status notice supposed to go out to all of the channels where the logging bot runs, or are those separate bots?22:27
fungiFeb 26 22:27:43 git05 sshd[20928]: Accepted publickey for cgit from 198.101.231.251 port 37644 ssh222:27
fungidhellmann: only where the status bot (openstackstatus) runs22:28
dhellmannfungi: ok, thanks22:28
fungidhellmann: we originally had it notifying in here and in #-dev since, you know, everyone was in there ;)22:28
*** sarob has joined #openstack-infra22:28
dhellmannfungi: now it's a ghost town over there22:28
*** hashar has quit IRC22:28
fungitoo bad people flee channels where we notify important things ;)22:29
jeblairwe'll put it everywhere when https://bugs.launchpad.net/openstack-ci/+bug/1190296 is complete22:29
*** sarob_ has joined #openstack-infra22:29
*** weshay has quit IRC22:29
fungihey, lookit! http://git05.openstack.org:8080/cgit/openstack-infra/zuul/22:29
dhellmannok, I was just trying to figure out if something was broken or if I was just in a channel that didn't see the notices, so not a problem22:29
*** markwash has quit IRC22:29
jeblairfungi: yay!  so we can trigger a full run and then approve that other change now22:30
fungiyep22:30
clarkbthat other change?22:30
clarkbthe one that adds 05?22:30
jeblairt22:30
fungithough rebuilding the other git servers will be touchy. i may want to backup and replace their ssh host keys22:30
clarkbfungi: good idea22:30
jeblairgood point22:31
fungiso as to prevent needing multiple gerrit restarts22:31
*** CaptTofu has quit IRC22:31
fungithis one was unavoidable, but the others we can solve i think22:31
fungigerrit seems to have done a full replicate on its own, maybe triggered by the restart22:32
fungibut i'll spot check anyway22:32
*** sarob__ has joined #openstack-infra22:33
*** sarob_ has quit IRC22:33
openstackgerritMatt Riedemann proposed a change to openstack-infra/elastic-recheck: Add query for bug 1248757  https://review.openstack.org/7667722:33
*** thomasem has quit IRC22:34
*** denis_makogon_ has quit IRC22:34
sarobGuys, have a moment to look at https://review.openstack.org/#/c/76419/22:35
fungilooks like all the replicate processes in gerrit's queue have finished22:35
*** changbl has quit IRC22:35
fungikicking off one manually now just to be sure22:35
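On the 2.4-era Gerrit in use here, replication was still built into core and a full run could be kicked off over the SSH admin API; a sketch, with the hostname and standard Gerrit SSH port taken from the discussion (newer Gerrit moves this to the replication plugin's "replication start" command):

    # trigger replication of every project to every configured target
    ssh -p 29418 review.openstack.org gerrit replicate --all
    # watch the replication tasks drain from the queue
    ssh -p 29418 review.openstack.org gerrit show-queue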
sarobNew stackforge project with no ops for tests22:35
sarobBut running python tests anyway and failing22:35
fungisarob: is it intended to eventually have tests?22:36
*** pdmars has joined #openstack-infra22:36
sarobIn a few months22:37
fungiif so, adding a simple tox.ini that just runs /bin/true or something would be an easy way to make your test addition changes self-testing22:37
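A sketch of the placeholder fungi suggests, written as a heredoc so the whole file is visible; the env name is arbitrary, and skipsdist/whitelist_externals assume a reasonably recent tox of that era:

    cat > tox.ini <<'EOF'
    [tox]
    envlist = noop
    skipsdist = True

    [testenv:noop]
    # /bin/true is external to the virtualenv, so it has to be whitelisted
    whitelist_externals = /bin/true
    commands = /bin/true
    EOF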
sarobNo ops for right now22:37
sarobOr at least I thought22:37
* fungi doesn't know what "ops" means in this context22:37
*** sarob__ has quit IRC22:37
*** pdmars_ has joined #openstack-infra22:38
clarkbgate-noop22:38
fungiswitching zuul to run gate-noop for that project in the place of actual jobs is an alternative, but does require a change to openstack-infra/config to enact that22:39
sarobNoop22:39
sarobYup22:39
sarobSpellchk22:39
*** thuc has quit IRC22:39
sarobFungi It's set that way now22:39
*** thuc has joined #openstack-infra22:40
fungisarob: okay, please let us know the name of the project, or provide a url to a change where you're seeing the broken behavior22:40
fungioh, you did22:41
*** mbacchi_ has quit IRC22:41
fungilooking at the layout.yaml to confirm22:41
*** pdmars has quit IRC22:41
*** sarob_ has joined #openstack-infra22:41
*** mrda_away is now known as mrda22:41
*** Sukhdev has joined #openstack-infra22:41
saroblayout has it as milk22:42
fungisarob: as i suspected, stackforge/mil is set in layout.yaml for a variety of jobs, like the ones you see failing on your change... http://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/zuul/layout.yaml#n355322:42
sarobWith noop22:42
fungier, stackforge/milk22:42
sarobArrg22:42
sarobRight stackforge/milk22:42
Sukhdevjeblair clarkb: any idea?22:43
fungisarob: so you'll want to submit a change to openstack-infra/config:modules/openstack_project/files/zuul/layout.yaml around line 3553 to change that so that it runs gate-noop in the check and gate pipelines, and probably get rid of the post pipeline entirely for now22:44
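A sketch of the change being described, with the desired end state shown as comments and a grep to locate the stanza in a checkout of openstack-infra/config (the exact line number will drift):

    # desired end state for the project entry (sketch):
    #   - name: stackforge/milk
    #     check:
    #       - gate-noop
    #     gate:
    #       - gate-noop
    grep -n -A 6 'name: stackforge/milk' modules/openstack_project/files/zuul/layout.yaml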
sarobI just did a pull and I show stackforge/milk22:44
*** thuc has quit IRC22:44
sarobSet for check and gate noop22:44
*** mriedem has quit IRC22:44
sarobProjects and python-jobs has no entry22:45
*** dangers is now known as dangers_away22:46
fungisarob: please show me what you're talking about on http://git.openstack.org/cgit/openstack-infra/config/tree/ (i think maybe you're not looking at the layout.yaml, or you have a locally-modified branch you've been pulling into for some reason)22:46
sarobI checked github and you are correct22:47
sarobI thought I was losing my mind22:47
fungisarob: you probably have a dirty master branch you modified at some point in the past and then pulled new remote changes into22:48
sarobYup22:48
fungisarob: you may want to reset it to origin/master22:48
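A sketch of the reset being suggested; note that this throws away any local commits on master, so branch or stash anything worth keeping first:

    git fetch origin
    git checkout master
    git reset --hard origin/master
    # confirm the branch now matches the remote
    git status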
sarobDirty is bad22:48
sarobWill check22:48
jeblairSukhdev: i don't.  if you are willing to do some serious debugging, you could do a tcpdump on your host and, if that fails, compare it with a packet capture or session log from the firewall.  if you get that far and suspect that the problem is on the git.o.o side, we can schedule a time to do packet captures there as well.22:50
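A sketch of the capture being suggested, run on the host doing the clones while reproducing the failure; the interface name is an assumption, and the port list covers https, git:// and plain http:

    # capture traffic to the git farm for later comparison with the firewall's view
    sudo tcpdump -i eth0 -w git-clone-fail.pcap host git.openstack.org and '(port 443 or port 9418 or port 80)'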
*** mayu_ has joined #openstack-infra22:51
jeblairSukhdev: but honestly, we know that git.o.o is slightly overloaded at this point (we're almost done adding a new server), so it could just be bad luck.22:51
*** dkranz has quit IRC22:51
mayu_jaypipes: hi, there22:51
jeblairclarkb, fungi: logstash queue seems on a decidedly downward trend.22:52
fungijeblair: agreed, it seems to have made up its mind about that22:52
jeblairfungi: is replication complete?22:53
*** hogepodge has quit IRC22:53
clarkbfungi: jeblair yup22:53
fungiclarkb: jeblair: okay, the re-run of a full replicate has completed all pending tasks, but i'm testing git/http/https direct cloning now just to make sure nothing's screwy22:53
mayu_clarkb: help, http://paste.openstack.org/show/69992/22:53
fungithen we can merge 76650 as soon as that's done22:53
clarkbfungi: roger22:54
jeblairfungi: cool.  i +2d, ready for your go on merging.22:54
mayu_clarkb: Following jaypipes's blog to construct third-party ci, there is an error at the link http://paste.openstack.org/show/69992/22:55
*** lnxnut has quit IRC22:55
fungitesting by cloning nova is a bit slow, but if anything's going to break that's where we'd see it first i suspect, so good canary22:55
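The canary check amounts to timing a clone of one of the largest repos; a sketch using the public clone URL (the spot checks above hit the new backend directly, which would use the backend's own hostname instead):

    # time a full clone of a large repo as a health canary
    time git clone https://git.openstack.org/openstack/nova /tmp/nova-canary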
clarkbmayu_: it looks like you cut off the traceback, was there more to the traceback?22:55
mayu_clarkb: no, remote hung up22:56
jeblairmayu_: we think the git servers are overloaded, we're adding a new one now.  however, if you are behind a firewall or proxy, that could be the problem as well.22:56
jeblairthere has been a significant increase in git.o.o traffic at around 19:15 today.22:58
mayu_the other projects' code clones successfully via git, so it is not a firewall problem22:58
jeblairmayu_: just try again22:58
*** e0ne has joined #openstack-infra22:58
fungijeblair: clarkb: interestingly, i got this from git05, even though i'm probably the only client hitting it... http://paste.openstack.org/show/69998/22:59
clarkbhuh23:00
fungimaybe we have apache tuning we need to do, or does haproxy time out active sockets under some circumstances?23:00
clarkbfungi: does the apache error log say anything interesting?23:00
mayu_I did; I ran install_slave.sh, but it is not trusted.23:00
fungii'm digging now23:00
clarkbalso the haproxy log should tell you stuff23:00
jeblairthat's not going through haproxy23:00
fungioh, wait, i'm also not going through haproxy in this case23:00
*** Ryan_Lane has joined #openstack-infra23:00
fungiright23:00
jeblair[Wed Feb 26 22:58:27 2014] [warn] [client 2001:470:8:d2f:96de:80ff:feec:f9e7] Timeout waiting for output from CGI script /usr/libexec/git-core/git-http-backend23:01
jeblair[Wed Feb 26 22:58:27 2014] [error] [client 2001:470:8:d2f:96de:80ff:feec:f9e7] (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read() failed23:01
fungiyep23:01
mayu_clarkb: I ran install_slave.sh again; this is the output: http://paste.openstack.org/show/69999/. it seems it is not trusted.23:02
fungihttp://httpd.apache.org/docs/2.0/mod/core.html#timeout23:02
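The errors above point at Apache's Timeout directive (300 seconds by default) cutting off long-running git-http-backend responses. A quick way to see what is currently configured, assuming a Debian/Ubuntu-style layout; raising the value in the git vhost is one possible mitigation:

    # find where Timeout is set and to what value
    grep -Rni '^[[:space:]]*Timeout' /etc/apache2/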
*** e0ne has quit IRC23:03
*** ociuhandu has joined #openstack-infra23:03
jeblairfungi: we should pack refs23:03
jeblairfungi: we should run the git repo maintenance crons manually on git0523:03
fungijeblair: running them now23:03
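A sketch of the sort of maintenance those crons perform, with the path to the bare mirrors assumed; the real cron job on the servers may differ in detail:

    # pack refs and repack objects for every mirrored repository
    for repo in /var/lib/git/*/*.git; do
        git --git-dir="$repo" pack-refs --all
        git --git-dir="$repo" repack -a -d -q
    done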
*** dims_ has quit IRC23:04
*** harlowja is now known as harlowja_away23:05
mayu_jeblair: I ran install_slave.sh again; this is the output: http://paste.openstack.org/show/69999/. it seems it is not trusted.23:05
clarkbmayu_: it looks like it worked23:05
*** bknudson has quit IRC23:05
*** yamahata has quit IRC23:06
*** ianw has quit IRC23:07
*** ianw has joined #openstack-infra23:07
fungiclarkb: that paste looks like it failed out on a git clone retry which didn't work23:08
fungiremember recently we tweaked that script to retry to clone once if the first attempt fails23:08
clarkbright the first half of the paste is the old failure23:08
clarkbthe second half was successful23:08
fungioic23:09
openstackgerritSean Roberts proposed a change to openstack-infra/config: modified stackforge/milk project in zuul/layout.yaml  https://review.openstack.org/7668423:09
mayu_clarkb: It seems that the download did not continue because of the last exception.23:09
fungiyes, re-running does seem to have worked23:09
*** jamielennox|away is now known as jamielennox23:10
fungioh, actually no the install_slave.sh worked, the prep_node.sh did not23:10
fungiif i'm interpreting that correctly23:10
*** jgrimm has quit IRC23:11
jeblairi just spot checked a devstack-precise node from each region, and they are not cloning repos.  so i don't think the additional traffic is because we broke caching there.23:12
*** doddstack has quit IRC23:12
fungijeblair: i suspect if we do some log analysis on the haproxy server we'll find a recent uptick in utilization from some third-party ci23:13
mayu_fungi: I will launch dsvm-tempest-full, expecting it to run well23:14
clarkbfungi: jeblair 192.237.223.224 is persistent23:14
clarkbwhich is haproxy23:15
clarkbso I don't think we are getting out of band requests23:15
Sukhdevjeblair: I did the tcpdump yesterday as well - Had the firewall guys look at the traffic as well - Did not see any issues23:15
clarkbgrep -v makes it seem like most of the traffic is from the load balancer23:15
jeblairSukhdev: i'd recommend seeing if the problem persists until after we add the new git server to the pool, and if so, let's dig into it more then.23:17
jeblairs/until//23:17
Sukhdevjeblair: when do you plan on adding additional server?23:17
jeblairSukhdev: moments from now23:18
*** pmathews has quit IRC23:18
*** pmathews1 has joined #openstack-infra23:18
openstackgerritA change was merged to openstack/requirements: Upgrade six to 1.5.2  https://review.openstack.org/6842423:18
Sukhdevjeblair: Ah then it makes sense to wait it out23:18
fungithe repack is almost done with nova, looks like. that one takes quite a while23:18
jeblairgrep 18:..:.. haproxy.log|wc -l  ==  9211823:19
jeblairgrep 20:..:.. haproxy.log|wc -l ==  15180823:19
jeblairthere definitely seems to be a jump in number of requests23:19
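A hedged sketch of the per-client breakdown fungi pastes next; field 6 is the client ip:port in haproxy's default syslog output, but the position depends on the local log format, so adjust as needed:

    # top source addresses by request count (strip the trailing :port first)
    awk '{print $6}' haproxy.log | sed 's/:[0-9]*$//' | sort | uniq -c | sort -rn | head -20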
fungihttp://paste.openstack.org/show/70006/23:20
fungibluebird.ibm.com23:20
*** dims_ has joined #openstack-infra23:20
lifelessword23:20
jeblairfungi wins23:20
*** VijayT has joined #openstack-infra23:21
* fungi has had far too many years analyzing service logs to spot denial of service attacks23:21
fungii suppose we could correlate with gerrit logs to see whether there are any service accounts authenticating from ip addresses with similar reverse dns entries23:22
jeblairfungi: i think they only account for an increase of 600 hits from the 18:00 hour to the 20:00 hour23:23
fungiahh23:23
fungiwe can narrow that then23:23
clarkbvmware23:23
*** markwash has joined #openstack-infra23:23
jeblairfungi: btw, i think if this approach produces a negative result, then it's likely our own ddos that we call "the gate".23:24
jeblairsince we cycle through ips pretty quickly23:24
SpamapSUpdating cache of https://git.openstack.org/openstack/keystone.git in /root/.cache/image-create/repository-sources/6bd7a6223fe50b4a267bc924641d331df94c833e with ref master23:24
SpamapSerror: RPC failed; result=7, HTTP code = 023:24
SpamapSIsolated incident or ongoing problem?23:24
clarkbjeblair: yes, the log file is 35X MB; of that, 40MB appears to be not gate23:24
jeblairSpamapS: seems decreasingly isolated23:24
*** dstanek is now known as dstanek_afk23:25
openstackgerritJenkins proposed a change to openstack-dev/hacking: Updated from global requirements  https://review.openstack.org/7668723:25
*** shashank_ has quit IRC23:25
*** eharney has quit IRC23:26
*** VijayT has left #openstack-infra23:26
sdaguekrtaylor: is there any way to deal with mimetypes on softlayer's swift to make it so you can view those files directly in browser, instead of them pushing as attachment?23:26
SpamapSjeblair: would it be helpful if I report a bug on launchpad? Or just watch for another one?23:26
*** VijayT has joined #openstack-infra23:26
jeblairSpamapS: we're almost done adding another git server to the pool, that should help.23:27
SpamapSahh backscroll.. moar servers ... ok :)23:27
jeblairclarkb, fungi: i think 19:15 is about when i started doing zuul restarts.  that makes me think this is mostly us.23:28
fungiand then hopefully identifying the recent uptick in utilization so we can thwappp it solidly23:28
fungithwappp ourselves then23:28
clarkbjeblair: just queued load that is hammering the gate?23:28
fungirepacks just finished. retesting real quick23:28
*** vkozhukalov has quit IRC23:28
anteaya what will it take for git05 to show up in the git farm cacti grouping? http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=223:29
jeblairclarkb: i think so.  i'm glad we found this now though since we're getting better at sustaining this kind of load in the zuul-nodepool-jenkins.  it's looking more like git.o.o is not overpowered anymore.23:29
jeblairanteaya: some gui clicking.  i'll do it23:30
anteayajeblair: k23:30
Sukhdevjeblair: can you ping me as soon as the additional server comes online - I would like to try it then and report back the findings23:30
anteayaSukhdev: I will let you know23:30
*** shashank_ has joined #openstack-infra23:30
Sukhdevanteaya: cool - thanks23:30
mayu_anteaya: hi23:30
anteayamayu_: hello23:30
*** bhuvan has quit IRC23:31
Sukhdevanteaya: BTW, in the morning discussion, you were correct.23:31
anteayaokay23:31
mayu_anteaya: things are not going as expected23:31
anteayaI haven't gotten back to that yet23:31
jeblairanteaya: done23:32
*** bhuvan_ has quit IRC23:32
anteayamayu_: what are you expecting?23:32
* anteaya refreshes cacti git farm page23:32
*** thuc has joined #openstack-infra23:32
anteayajeblair: thanks23:32
jeblairnp23:32
*** thuc has quit IRC23:32
reedin case you'd like to know how things end, I went and bought the Nexus 5 from a T-Mobile store (the Google store had a 2-3 week wait)23:33
*** bhuvan__ has joined #openstack-infra23:33
*** bhuvan_ has joined #openstack-infra23:33
clarkbreed: it was the correct choice23:33
*** thuc has joined #openstack-infra23:33
clarkbthe Moto G's lack of LTE is annoying23:33
anteayareed: are you happy?23:33
reedclarkb, I follow your orders :)23:33
clarkbperfect cheap smartphone though23:33
reedanteaya, happy is a big word... no, I'm not: I am $396 poorer and some jerk has my perfectly find Nexus 423:34
*** lcheng__ has joined #openstack-infra23:34
reeds/find/fine23:34
anteayareed: well at least you are honest23:34
anteayathat scores big points with me23:34
*** lcheng__ has quit IRC23:35
fungijeblair: clarkb: successfully cloned nova via git and http directly, so approving 76650 now23:35
*** lcheng_ has quit IRC23:36
anteayagit03 is still being worked very hard23:36
anteayareed: and I understand how you feel23:36
*** ryanpetrello has quit IRC23:36
mayu_Sukhdev: how do I judge whether my third-party ci works well with openstack ci?23:37
clarkbjeblair: when you have a second can you hop on logstash.o.o and see if the log-gearman-client behavior is geard related?23:37
*** bhuvan has joined #openstack-infra23:38
*** bhuvan___ has joined #openstack-infra23:38
clarkbit still has a cpu pegged23:38
mayu_anteaya: how do I judge whether my third-party ci works well with openstack ci?23:39
jeblairclarkb: ack23:39
*** eharney has joined #openstack-infra23:39
*** rlandy has quit IRC23:39
mayu_anteaya: how do I judge whether my third-party ci works well with openstack ci?23:39
jeblairclarkb: it could be.  it's sending a lot of data.23:40
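One way to see what geard has to walk on every stats pass is the standard gearman admin protocol it implements; a sketch, assuming the default gearman port 4730 on the logstash host:

    # dump per-function queue counts; output is FUNCTION<TAB>queued<TAB>running<TAB>workers,
    # terminated by a lone "." (the -w 2 makes nc exit once the output goes idle)
    echo status | nc -w 2 localhost 4730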
*** bhuvan__ has quit IRC23:40
*** bhuvan_ has quit IRC23:40
mayu_jaypipes: how do I judge whether my third-party ci works well with openstack ci?23:40
anteayamayu_: okay okay slow down23:40
anteayawe all can see the questions, sometimes it takes a minute to get to them23:41
mayu_sorry23:41
openstackgerritA change was merged to openstack-infra/config: Add git05 to the git.openstack.org haproxy farm  https://review.openstack.org/7665023:41
*** lcheng__ has joined #openstack-infra23:41
anteayait is okay, you are learning23:41
anteayamayu_: okay so right now your system can listen to changes to openstack/sandbox correct?23:42
mayu_yes23:42
anteayaokay so post a change to openstack/sandbox repo23:42
anteayaand post the url for that change23:42
anteayaand then we will watch what happens23:42
mayu_https://review.openstack.org/#/c/75953/23:43
*** bhuvan_ has joined #openstack-infra23:43
*** bhuvan__ has joined #openstack-infra23:43
anteayacan you post a new patchset to that change, please?23:43
anteayalet's see what your system is doing right now23:43
mayu_ok23:44
*** bhuvan___ has quit IRC23:44
jeblairclarkb: the stats run does a full iteration over all the jobs, so performance should improve as the queue goes down.  i'm kind of inclined to not apply any emergency fixes at the moment since i'm about to go afk for 2 days....23:44
clarkbjeblair: wfm23:45
jeblairclarkb: but i think we could switch geard to just using some internal counters so it doesn't have to do that23:45
*** bhuvan has quit IRC23:45
clarkbit appeared a bit odd so I asked23:45
anteayaSukhdev: you can try running .stack.sh again23:45
clarkbbut it iterating over the list each time explains why it is slow23:45
jeblairclarkb: and if you decide you'd rather shut it off, removing the env vars from the defaults file and restarting should do the trick23:45
clarkbjeblair: ok23:45
clarkbthank you for looking23:45
jeblair"tcpdump port 8125 -A" on logstash is pretty cool, btw.  :)23:46
* clarkb tries that23:46
clarkboh neat you can watch the waiting metric go crazy23:47
dtroyerclarkb: I just left a response to your question in https://review.openstack.org/#/c/74910/, if you're happy it can go in.23:47
*** bhuvan has joined #openstack-infra23:48
*** bhuvan___ has joined #openstack-infra23:48
clarkbdtroyer: and responded; I think I can live with it as-is after you pointed that out23:48
*** bhuvan__ has quit IRC23:49
*** bhuvan_ has quit IRC23:49
dtroyercool, thanks23:50
jeblairclarkb: ?23:51
jeblairclarkb: devstack does fetches in the gate?23:51
jeblairclarkb: it shouldn't do _any_ git operations23:51
*** cadenzajon has quit IRC23:51
clarkbjeblair: I think at least one of those fetches was outside of the erroronclone condition23:51
clarkbI didn't look at it too long /em looks again23:51
dtroyerone is outside, but inside RECLONE=True23:52
clarkbah23:52
clarkbok then no fetches in the gate23:52
*** dolphm is now known as dolphm_50323:52
fungijeblair: clarkb: i think i agree that we're the ones ddosing ourselves... over 50% of the entries in the haproxy.log are from rackspace's ipv6 assignment23:53
fungifor the past hour23:53
*** bhuvan_ has joined #openstack-infra23:53
*** bhuvan__ has joined #openstack-infra23:53
*** nati_ueno has joined #openstack-infra23:53
*** mayu__ has joined #openstack-infra23:53
*** bhuvan has quit IRC23:53
jeblaircool, so this is the new normal.  :)23:53
mayu__anteaya: https://review.openstack.org/#/c/76711/23:54
*** bhuvan___ has quit IRC23:54
*** mrodden has quit IRC23:55
krtaylorsdague, I am looking into it, I agree it would be much nicer, it may be possible if I dig deeper23:55
mayu__anteaya: it is the new patch that I submitted just now.23:55
anteayamayu__: okay so let's look at the comments your system leaves on that patch23:55
anteayafirst of all it comments, that is a big thing to notice, well done23:55
*** lcheng__ has quit IRC23:56
fungiclarkb: jeblair: another 25% is from hp's class-a ipv4 assignment (and i'm willing to bet most of the remaining 25% are from other smaller hp allocations)23:56
anteayathat is the hard part23:56
anteayamayu__: now you have to fine tune it so that it comments only when you want it to23:56
anteayaand the comments contain useful information23:56
mayu__ok23:56
anteayamayu__: so it leaves this comment: "Starting check jobs. http://10.250.201.20/zuul/status"23:57
clarkbfungi: ya23:57
anteayayou need to ensure it does not leave that comment, mayu__23:57
anteayamayu__: yours is one of about 40 ci systems23:57
*** e0ne has joined #openstack-infra23:57
fungijeblair: clarkb: though it's worth pointing out that i'm only analyzing the number of requests, so it's entirely possible that these are tiny fetches and someone else is doing whopping large reclones over and over which account for far fewer actual calls but much more system load23:57
*** nati_uen_ has quit IRC23:57
*** nati_ueno has quit IRC23:57
clarkbfungi: good point23:57
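To address that caveat, the same log can be summed by bytes instead of request count; another sketch, with the field positions (client in $6, bytes_read in $11 for a tcp-mode log) as assumptions to be checked against the local format:

    # bytes served per client rather than raw request counts
    awk '{ gsub(/:[0-9]*$/, "", $6); bytes[$6] += $11 }
         END { for (c in bytes) print bytes[c], c }' haproxy.log | sort -rn | head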
anteayamayu__: and if all 40 ci systems left that message when they are starting jobs, that is just a lot of noise23:58
*** bhuvan has joined #openstack-infra23:58
*** bhuvan___ has joined #openstack-infra23:58
clarkbfungi: there is an haproxy top thing we might want to look into using23:58
*** nati_ueno has joined #openstack-infra23:58
clarkbfungi: but I am pretty sure it isn't packaged anywhere23:58
anteayamayu__: the only system that can leave the starting jobs message is our system, jenkins23:58
clarkbwhich is why I ignored it in the past23:58
anteayamayu__: does that make sense?23:58
mayu__not clear23:59
fungiclarkb: hatop?23:59
*** bhuvan__ has quit IRC23:59
*** bhuvan_ has quit IRC23:59
fungiclarkb: http://feurix.org/projects/hatop/23:59
*** sarob has quit IRC23:59
clarkbfungi: yeah23:59
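hatop talks to haproxy's stats socket, so it only helps if that socket is enabled in haproxy.cfg; a sketch, with the socket path as an assumption:

    # interactive top-style view of frontends, backends and per-server sessions
    hatop -s /var/run/haproxy/stats.sock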
anteayamayu__: your system leaves a comment on the patch to say it is starting jobs23:59
anteayamayu__: you need to ensure it does not leave that message23:59
