*** dina_belova has quit IRC | 00:00 | |
*** rfolco has quit IRC | 00:00 | |
clarkb | jeblair: mordred: do you want to review https://review.openstack.org/#/c/39992/1 ? it checks that the IPV6 flag is set in the launch node env files before checking that ipv6 works, to accommodate providers without ipv6 | 00:00 |
openstackgerrit | A change was merged to openstack-infra/config: pbx: update SIP config to help deal with NAT issues https://review.openstack.org/39616 | 00:03 |
bodepd | clarkb: let me get back to a recreat... | 00:03 |
jeblair | clarkb: has the flag been set in the rc files? | 00:03 |
clarkb | jeblair: yup, fungi set them according to IRC and the first comment on that change | 00:04 |
jeblair | clarkb: ah yes | 00:04 |
jeblair | just read that | 00:04 |
clarkb | I missed it too initially :) | 00:04 |
openstackgerrit | A change was merged to openstack-infra/config: More launch improvements https://review.openstack.org/39992 | 00:06 |
*** gyee has quit IRC | 00:06 | |
*** pabelanger has quit IRC | 00:11 | |
*** pcrews has joined #openstack-infra | 00:12 | |
*** nijaba_ has joined #openstack-infra | 00:12 | |
*** nijaba has quit IRC | 00:13 | |
* fungi needs to use bold+underline font decoration more | 00:13 | |
fungi | gerrit wishlist item: support <blink> tag in review comments ;) | 00:13 |
clarkb | fungi: I have colors turned off in my irc client config. I am glad now :) | 00:14 |
fungi | me too actually. too many years on monochrome displays to tolerate too much color assault | 00:14 |
notmyname | you say that, but do you use syntax coloring in source editors? | 00:15 |
*** sarob has joined #openstack-infra | 00:16 | |
clarkb | notmyname: that's different. vim doesn't give me rainbow text of random things | 00:16 |
clarkb | rainbow text in IRC is quite annoying | 00:16 |
fungi | tame, subdued highlighting, yes. controlled/structured colorization is good for me. random user-selected color highlighting, not so much | 00:16 |
lifeless | paint all the things | 00:17 |
fungi | bikesheds first! | 00:17 |
notmyname | heh. kinda like how I have auto inlined images turned off in my irc client (who would do that?!), but I occasionally do like to see an animated gif. (openstackreactions notwithstanding) | 00:17 |
fungi | my irc client wouldn't do inlined images unless aalib counts | 00:18 |
jeblair | openstackreactions are animated? i don't do animated gifs in my browser, so i guess i've been "missing out". | 00:18 |
clarkb | jeblair: yes :) | 00:18 |
notmyname | jeblair: I wouldn't say you've been "missing" it | 00:18 |
jeblair | i am so sad. | 00:18 |
clarkb | a couple of them are really good. I like the crash test dummy one >_> | 00:19 |
notmyname | jeblair: also, if you want to stop using lynx, I hear netscape has a new browser you can try out | 00:19 |
clarkb | pleia2: I have left comments on some of your cgit related changes. Let me know if you have any questions because at least one of them is sad panda making | 00:19 |
jeblair | notmyname: it's harder to have it email me web pages. | 00:20 |
*** sarob has quit IRC | 00:20 | |
pleia2 | clarkb: the name one I knew would be trouble, trying to figure out how to do it in a way that makes lint happy (I really don't know yet) | 00:21 |
notmyname | jeblair: don't worry. all projects expand in scope until they include email. | 00:21 |
mgagne | pleia2: see inline comment, I'm sorry ^^' | 00:22 |
jeblair | notmyname: that has proven to be true even for openstack-infra. | 00:22 |
fungi | notmyname: s/netscape/ncsa mosaic/ | 00:22 |
pleia2 | mgagne :) | 00:22 |
*** wu_wenxiang has quit IRC | 00:23 | |
notmyname | fungi: well, yeah, if you want the old stuff. but navigator has support for HTML4 and even the marquee tag. I think it may even be better than that newfangled IE4 | 00:23 |
notmyname | hard to believe that the netscape IPO (ie start of first bubble) was 19 years ago | 00:24 |
fungi | my first impression of mosaic was "inline images will never catch on" | 00:24 |
*** jinkoo has joined #openstack-infra | 00:24 | |
openstackgerrit | A change was merged to openstack-infra/devstack-gate: Use Jenkins credentials store if specified https://review.openstack.org/40310 | 00:25 |
notmyname | fungi: "you mean like anyone can put an image on my screen? have you seen the kind of people who are on the internet?" | 00:25 |
fungi | basically. and then it only got worse | 00:25 |
notmyname | fungi: when life hands you lemons, throw a party ;-) | 00:26 |
* fungi shudders | 00:26 | |
clarkb | Now I have Cave Johnson talking in my head. Thankfully that is much much better than the alternative | 00:28 |
fungi | j.k. simmons was such a great choice of voice actor. i too hear him in my head all the time. telling me to do things. science things | 00:29 |
clarkb | What was the line "If life gives you lemons, make hand grenades!"? | 00:30 |
fungi | it was far more verbose | 00:30 |
fungi | a full on diatribe | 00:31 |
clarkb | ya just found it on the HL2 wiki | 00:31 |
clarkb | "All right, I've been thinking. When life gives you lemons, don't make lemonade. Make life take the lemons back! Get mad! I don't want your damn lemons, what am I supposed to do with these? Demand to see life's manager! Make life rue the day it thought it could give Cave Johnson lemons! Do you know who I am? I'm the man who's gonna burn your house down! With the lemons! I'm gonna get my engineers to | 00:31 |
clarkb | invent a combustible lemon that burns your house down!" | 00:31 |
clarkb | so much win | 00:32 |
clarkb | fungi: if you haven't seen kerbal space program you should play the demo | 00:32 |
fungi | i have not been finding sufficient time for video games of late. but i do mean to check that one out | 00:32 |
clarkb | I technically do not have time either, but my last Mun attempt ended with me on a Kerbin escape vector. It was awesome | 00:33 |
*** rcleere has joined #openstack-infra | 00:36 | |
*** rcleere has quit IRC | 00:37 | |
clarkb | I feel like I need to do something that isn't logstash or code review related. Anything in particular people would like to see get done? | 00:37 |
*** jinkoo_ has joined #openstack-infra | 00:43 | |
*** jinkoo has quit IRC | 00:44 | |
*** jinkoo_ is now known as jinkoo | 00:44 | |
lifeless | clarkb: do you really want to ask that :> | 00:47 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Prepare to test git-review https://review.openstack.org/40319 | 00:48 |
fungi | clarkb: ^ | 00:48 |
fungi | clarkb: once we land that, we can try the pending integration tests live | 00:50 |
clarkb | lifeless: probably not, but I really feel like I need a change of scenery for tomorrow | 00:50 |
clarkb | fungi: reviewing | 00:50 |
lifeless | clarkb: so something that would be cool | 00:50 |
lifeless | clarkb: would be a zuul element for dib/tripleo - as part of the whole 'and you can CI as a downstream easily.' | 00:50 |
lifeless | clarkb: clearly not on the -infra roadmap, but if you just want a change of pace... | 00:50 |
clarkb | fungi: reviewed | 00:51 |
clarkb | lifeless: dib is something I have been meaning to get into. I will probably start with mordred's kexec awesomesauce for d-g but can look at bigger things too | 00:52 |
fungi | oops, forgot the py33 jobs aren't part of the job group | 00:52 |
mgagne | clarkb: try updating all puppet modules to the latest version :D | 00:52 |
clarkb | mgagne: uh that isn't a change of pace that is a death march :) | 00:52 |
mgagne | clarkb: well, now that I think of it... sorry ^^' | 00:53 |
*** jinkoo has quit IRC | 00:53 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Prepare to test git-review https://review.openstack.org/40319 | 00:53 |
fungi | should be better ^ | 00:53 |
jeblair | clarkb: there are also 153 open openstack-ci bugs :) | 00:54 |
clarkb | jeblair: ya, I was going to resort to looking at the list if no one had a pressing thing | 00:54 |
*** jinkoo has joined #openstack-infra | 00:54 | |
jog0 | is it possible to get py33 jobs for all clients? | 00:54 |
jog0 | as non gating | 00:55 |
jog0 | since those are some of the early targets for py33 compat | 00:55 |
jeblair | jenkins01 and jenkins02 are up again | 00:55 |
clarkb | yes, we may want more slaves. Not sure how much contention exists for those yet | 00:55 |
*** dina_belova has joined #openstack-infra | 00:56 | |
clarkb | jog0: the only concern I have enabling them all like that is whether or not we will see active development to correct the issues in all of them | 00:56 |
fungi | i'm not sure what volume of changes the clients get, but i get the impression it's only a fraction of the changes for server projects | 00:56 |
clarkb | jog0: maybe we should start with a few that we know will get attention? | 00:56 |
jog0 | clarkb: nova client is getting some | 00:58 |
*** jinkoo has quit IRC | 00:58 | |
jog0 | I was thinking if we have the py33 tests people may try fixing them | 00:58 |
*** ^d has quit IRC | 00:59 | |
jog0 | there are some Canonical guys making sure things are py33 compat | 00:59 |
jeblair | I'm going to simplify the overview tab on the new jenkins servers; it's slow as-is. | 00:59 |
clarkb | jeblair: ++ I expect with multiple masters jenkins will get even less direct viewership | 00:59 |
*** dina_belova has quit IRC | 01:00 | |
clarkb | jog0: did you want to propose the change? it should be pretty straightforward | 01:02 |
clarkb | edit openstack-infra/config/modules/openstack_project/files/jenkins_job_builder/config/projects.yaml to include gate-{name}-python33 under the job list for each client then add the new jobs to each client's check tests in modules/openstack_project/files/zuul/layout.yaml | 01:03 |
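For illustration, here is a minimal sketch of the two edits clarkb describes above, using python-novaclient as a hypothetical example; the exact contents of the files in openstack-infra/config may differ in detail.

```yaml
# modules/openstack_project/files/jenkins_job_builder/config/projects.yaml
# (sketch) add the python33 template to the client's job list
- project:
    name: python-novaclient
    github-org: openstack
    node: precise
    jobs:
      - python-jobs
      - gate-{name}-python33

# modules/openstack_project/files/zuul/layout.yaml
# (sketch) run the resulting job in the client's check pipeline
projects:
  - name: openstack/python-novaclient
    check:
      - gate-python-novaclient-python33
```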
jeblair | clarkb: maybe we should add it to python-jobs in jjb? | 01:03 |
jeblair | the number of py33 jobs seems to be increasing at a rate that might be useful. | 01:04 |
clarkb | jeblair: we can do that too | 01:04 |
clarkb | I asked jd__ to leave it out of the group initially because I wasn't sure what demand would be like | 01:04 |
jeblair | yeah, i think at the point we're discussing adding it to "all clients" is probably the time to reconsider that | 01:05 |
fungi | so it might make sense to double the precisepy3k slave count to 8... or do we want to take a wait-and-see approach to contention over those for now? | 01:07 |
clarkb | fungi: we can probably wait and see on that. You are right in that their patchset load is low | 01:07 |
jog0 | jeblair: https://review.openstack.org/#/dashboard/24 | 01:08 |
jeblair | jog0: nice. i support this. it does seem like tests there would be useful. | 01:08 |
jog0 | zul: ^ has been doing py33 compat | 01:08 |
jeblair | also, we should have git-review do something better with detached branches. | 01:09 |
jog0 | jeblair: heh yeah | 01:09 |
jog0 | the py33 stuff is mostly low hanging fruit | 01:09 |
jog0 | (fixing wise) | 01:09 |
* clarkb whips a change up really quick | 01:10 | |
zul | hmmm? | 01:11 |
jog0 | also a nice email to the ML would help get attention | 01:11 |
jog0 | zul: talking about getting py33 tests for all clients | 01:12 |
*** nijaba_ has quit IRC | 01:12 | |
zul | jog0: ah yeah i was going to do that this week, i had the day off today | 01:12 |
*** nijaba has joined #openstack-infra | 01:12 | |
*** jhesketh_ is now known as jhesketh | 01:12 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Make the python33 template part of python-jobs https://review.openstack.org/40321 | 01:13 |
fungi | jog0: clarkb: jeblair: ^ | 01:13 |
clarkb | fungi: cool one less thing I need to do in this change :) | 01:13 |
jog0 | zul: think I just talked clarkb / fungi into doing it for you | 01:13 |
zul | jog0: yay! | 01:13 |
* locke105 is off to blow up some rockets in kerbal space program. :) | 01:13 |
fungi | well, i figured the "make py33 standard in the job group" deserved to be a separate change from "add to all clients" | 01:14 |
zul | better make it non-voting though | 01:14 |
fungi | zul: i assumed it would be for those, yes | 01:14 |
zul | fungi: ack | 01:14 |
clarkb | locke105: have fun | 01:15 |
fungi | separate non-voting settings for each of the clients, and then we can peel those back individually as each one reaches compliance | 01:15 |
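A hedged sketch of what fungi's per-client non-voting settings might look like in zuul's layout.yaml; the job name here is hypothetical and the real file may express this differently.

```yaml
# modules/openstack_project/files/zuul/layout.yaml
jobs:
  # keep the new py33 job advisory until the client actually passes;
  # remove this stanza (or set voting: true) once it reaches compliance
  - name: gate-python-novaclient-python33
    voting: false
```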
locke105 | clarkb: i'm actually pretty good at this game... been playing since version 0.15 or so | 01:16 |
zul | fungi: i was going to make sure the tox.ini/requirements are kosher as well, keystoneclient needed a newer oslo.config | 01:16 |
locke105 | they redid all the SAS stuff in the latest update so I have to figure out how to fly again though :( | 01:16 |
fungi | zul: awesome | 01:17 |
jeblair | clarkb, fungi: devstack-gate updated to target 7 ready nodes from each az on each server (7*3*3=63) | 01:17 |
jeblair | so all three jenkins should equalize on 21 ready nodes each | 01:18 |
clarkb | jeblair: perfect | 01:18 |
openstackgerrit | Steve Baker proposed a change to openstack-infra/config: Enable pypi jobs for diskimage-builder https://review.openstack.org/40322 | 01:18 |
fungi | jeblair: sounds ideal | 01:19 |
jeblair | fungi, clarkb, mordred: if you add or edit a slave on the new nodes, be sure to re-use the existing jenkins ssh credential (in the dropdown) | 01:19 |
openstackgerrit | Clark Boylan proposed a change to openstack-infra/config: Add python33 tests to all openstack python clients https://review.openstack.org/40323 | 01:19 |
jeblair | see https://jenkins02.openstack.org/computer/centos6-2/configure for an example | 01:19 |
clarkb | jog0: ^ and that adds the jobs to the clients | 01:19 |
clarkb | jeblair: so by setting the credential in the config and using it in the jenkins API calls you don't have to create a new one for each host? | 01:20 |
jeblair | clarkb, fungi: puppet is running on all the jenkins and devstack-launch nodes. | 01:20 |
jeblair | clarkb: correct; the version of jenkins.o.o created and deleted one for every slave | 01:20 |
stevebaker | jeblair: hey, could you please take a look at my reply to your comments? https://review.openstack.org/#/c/38226/ | 01:20 |
jeblair | clarkb: the version on jenkins01/02 created but did not delete them, which was obviously problematic. | 01:20 |
jeblair | stevebaker: sure thing | 01:21 |
clarkb | jeblair: is there a manual step in there of creating a credential so that launch node can use that? | 01:21 |
clarkb | zaro: jeblair the ZMQ event publisher plugin just showed up on http://updates.jenkins-ci.org/download/plugins/ | 01:22 |
jeblair | clarkb: sort-of -- yes this once, but it's in an xml file that can be copied into place (i have it in my tarball of boilerplate openstack jenkins config) | 01:22 |
jeblair | clarkb: it was created on 01, and copied to 02 that way. | 01:22 |
*** tianst20 has joined #openstack-infra | 01:24 | |
*** tian has quit IRC | 01:24 | |
clarkb | do you plan to have puppet untar that on jenkins masters? | 01:25 |
jeblair | clarkb: the tarball is for convenience, until the files are in puppet individually. | 01:26 |
clarkb | gotcha | 01:26 |
clarkb | ok time for me to head home | 01:26 |
clarkb | multi master jenkins is very exciting | 01:26 |
clarkb | I will need to remove my jenkins.o.o pinned tab now :) | 01:27 |
*** jrex_laptop has quit IRC | 01:27 | |
clarkb | sdague: fwiw the devstack neutron instability affected non-neutron too | 01:29 |
clarkb | if you look at nati's graphite graphs you see they spike together | 01:29 |
jeblair | stevebaker: i guess i still don't see why stackforge is the right place. it looks to me like you made an excellent argument that the repo shouldn't exist at all. | 01:31 |
jeblair | (which i agree with!) | 01:31 |
jeblair | as a project, we're clearly not averse to hosting deprecated code in the openstack org: https://github.com/openstack/melange | 01:32 |
jeblair | (though maybe we should be) | 01:32 |
*** markmcclain has quit IRC | 01:32 | |
clarkb | ++ | 01:32 |
stevebaker | quite probably it won't in the long term, but I thought doing one step at a time would have the best buy-in - otherwise the debates get bogged down in orthogonal issues | 01:33 |
*** lcestari has quit IRC | 01:34 | |
fungi | clarkb: i doubt that's neutron/devstack/tempest instability when they spike together. more likely changes which got approved but were broken and failing tests legitimately | 01:35 |
clarkb | fungi good point | 01:35 |
clarkb | also possibly d-g + jenkins unhappiness | 01:35 |
*** reed has quit IRC | 01:35 | |
stevebaker | jeblair: the dependencies and timeline for heat-cfn, heat-boto and heat-watch being deleted is different for each one | 01:35 |
*** pcrews has quit IRC | 01:37 | |
jeblair | stevebaker: i think it should go into stackforge if it really is not an openstack project at all. that means, different core group, its own ptl, its own bug tracker, and no support from infra, docs, qa, the tc, etc. | 01:38 |
jeblair | stevebaker: but if it's part of the heat project, even if it's a part you want to deprecate, it seems like it should go in the openstack org. | 01:38 |
*** pcrews has joined #openstack-infra | 01:38 | |
jeblair | stevebaker: or perhaps openstack-dev, if it's just a 'developer tool' as you suggest in your 4th point. | 01:38 |
stevebaker | openstack-dev doesn't seem like a great fit either | 01:39 |
jeblair | hacking, pbr and devstack are all in openstack-dev because they're not part of the finished product, but are used in developing it. | 01:40 |
lifeless | so the boto stuff is closer to heat-client than to devs-of-heat | 01:40 |
lifeless | isn't it? | 01:40 |
stevebaker | at this point it is only useful to heat developers who are debugging cfn api issues | 01:41 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Prepare to test git-review https://review.openstack.org/40319 | 01:41 |
*** UtahDave has quit IRC | 01:43 | |
*** jrex_laptop has joined #openstack-infra | 01:44 | |
fungi | rebased that ^ on the change to add python33 tests to python-jobs | 01:44 |
*** pcrews has quit IRC | 01:45 | |
jeblair | stevebaker: i have to run to dinner now; we'll have to continue this later, sorry. | 01:45 |
stevebaker | jeblair: no problem | 01:45 |
*** yaguang has joined #openstack-infra | 01:45 | |
Alex_Gaynor | Following links to jenkins builds from http://status.openstack.org/zuul/ leads you to pages where the SSL cert isn't right, known? | 01:49 |
*** pcrews has joined #openstack-infra | 01:49 | |
Alex_Gaynor | err, or its just a self signed cert? | 01:49 |
clarkb | Alex_Gaynor: are they links to jenkins01.o.o or jenkins02.o.o? | 01:52 |
Alex_Gaynor | clarkb: 01 | 01:52 |
clarkb | Alex_Gaynor: I am guessing jeblair used self signed certs on those new hosts. Probably just missed it | 01:52 |
clarkb | Alex_Gaynor: we now have multi master jenkins :) | 01:52 |
Alex_Gaynor | clarkb: but... this means there won't be monotonically increasing job numbers. This is bad for my number fetish. | 01:53 |
bodepd | clarkb: just recreated | 01:53 |
clarkb | Alex_Gaynor: it is also bad for our old log dir format, which we thankfully fixed | 01:53 |
clarkb | Alex_Gaynor: but it is awesome in so many ways | 01:53 |
clarkb | Alex_Gaynor: zero downtime jenkins upgrades | 01:54 |
clarkb | Alex_Gaynor: jenkins can scale now | 01:54 |
clarkb | and so on | 01:54 |
Alex_Gaynor | Yeah that's cool I guess, but the numbers! | 01:54 |
dstufft | jenkins… scale..? I'm not sure I believe you | 01:54 |
clarkb | dstufft: you just spin up more :) | 01:54 |
Alex_Gaynor | clarkb: how's it work, anything in jenkins itself, or does zuul/gerrit just distribute jobs amongst them? | 01:54 |
clarkb | Alex_Gaynor: zuul speaks gearman and there is a jenkins gearman plugin | 01:55 |
clarkb | so gearman distributes jobs among them | 01:55 |
clarkb | bodepd: now is the time to try the weird setuptools upgrade stuff | 01:55 |
clarkb | bodepd: does pip --version work? | 01:55 |
bodepd | I just did that unintall you recommended. | 01:56 |
*** dina_belova has joined #openstack-infra | 01:56 | |
clarkb | oh cool | 01:56 |
clarkb | bodepd: now you need to reinstall and before you do anything else use pip to upgrade setuptools | 01:56 |
*** nati_ueno has quit IRC | 01:57 | |
clarkb | then try puppet again | 01:58 |
harlowja_ | qq, have u guys seen this | 01:58 |
harlowja_ | https://review.openstack.org/#/c/29862/ | 01:58 |
harlowja_ | http://logs.openstack.org/62/29862/59/check/gate-grenade-devstack-vm/a0dbf6d : LOST in 11m 01s :-/ | 01:59 |
harlowja_ | someone lost it, ha | 01:59 |
*** thomasem has joined #openstack-infra | 01:59 | |
bodepd | clarkb: https://gist.github.com/bodepd/6161371 | 01:59 |
clarkb | bodepd: weird | 02:00 |
clarkb | harlowja_: yes, that is a bug in the current build system | 02:00 |
harlowja_ | kk | 02:00 |
harlowja_ | recheck should be ok then? | 02:00 |
clarkb | harlowja_: I think it means zuul was not updated on the progress of that job within its timeout period | 02:00 |
harlowja_ | kk | 02:00 |
clarkb | harlowja_: you will want jeblair to confirm that though | 02:00 |
harlowja_ | thx clarkb | 02:00 |
clarkb | and yes a recheck should be fine | 02:00 |
harlowja_ | sounds great | 02:01 |
bodepd | clarkb: I can re-run puppet and see if anything magical happens... | 02:01 |
clarkb | bodepd: worth a shot I guess | 02:01 |
*** dina_belova has quit IRC | 02:01 | |
fungi | clarkb: the self-signed nature of the jenkinsXX.o.o certs was discussed previously and jeblair settled on not getting separate ca-signed certs for them | 02:01 |
fungi | but maybe we revisit if people complain | 02:01 |
openstackgerrit | lifeless proposed a change to openstack-dev/hacking: Fix typo in HACKING.rst. https://review.openstack.org/40326 | 02:01 |
openstackgerrit | lifeless proposed a change to openstack-dev/hacking: Add editor files to .gitignore. https://review.openstack.org/40327 | 02:01 |
lifeless | jog0: I bet those ^ are going to error on H803. | 02:02 |
lifeless | jog0: any objections to making H803 be ignored in hacking itself ? | 02:02 |
clarkb | fungi: when was that? I seem to have completely missed it. Or I paid attention and simply don't remember | 02:02 |
fungi | clarkb: late last week i think? i'll grep my log | 02:04 |
clarkb | oh if it was on Friday then I probably did pay attention but then completely forgot as I had a cold and Friday was not good | 02:05 |
*** prad has joined #openstack-infra | 02:06 | |
fungi | clarkb: a couple weeks looks like. set wabac machine for 2013-07-29 16:21:16 | 02:07 |
fungi | er, about a week i guess | 02:07 |
clarkb | I am fine with it. We are deemphasizing jenkins itself, so it shouldn't be a major issue | 02:09 |
*** nijaba has quit IRC | 02:12 | |
mordred | god scrollback | 02:14 |
mordred | jenkins is a bullshit | 02:14 |
mordred | having more than one jenkins lets us care less about jenkins | 02:14 |
openstackgerrit | Joe Gordon proposed a change to openstack-dev/hacking: Fix typo in HACKING.rst https://review.openstack.org/40329 | 02:14 |
clarkb | mordred: now tell us how you feel about buildbot | 02:14 |
mordred | clarkb: hah. not even real code | 02:15 |
*** nijaba has joined #openstack-infra | 02:15 | |
*** nijaba has quit IRC | 02:15 | |
*** nijaba has joined #openstack-infra | 02:15 | |
mordred | clarkb: total pile of garbage | 02:15 |
jog0 | lifeless: I would object on principle | 02:15 |
jog0 | hacking shouldn't ignore any of its own rules | 02:15 |
mordred | if you have an error in your config, you find out when the twisted python sends you error logs that tell you about a callback that didn't work for unknown reasons | 02:15 |
*** jrex_laptop has quit IRC | 02:16 | |
* mordred agrees with jog0 | 02:16 |
*** lifeless has quit IRC | 02:16 | |
clarkb | me too | 02:18 |
mordred | before agreeing with jog0, I had lovely drinks on a rooftop | 02:19 |
clarkb | I know what I can do tomorrow to mix it up. I can fix BREACH | 02:19 |
clarkb | mordred: I am drinking a beer called "PigWar" | 02:19 |
mordred | clarkb: your beer is good | 02:20 |
clarkb | named after http://en.wikipedia.org/wiki/Pig_War | 02:20 |
*** melwitt has quit IRC | 02:21 | |
mordred | clarkb: goo.gl/5zo849 | 02:23 |
clarkb | mordred: that is much nicer than my apartment | 02:23 |
SpamapS | Whats the status on the Babel issue? | 02:25 |
Alex_Gaynor | clarkb: FWIW jenkins02 also seems to have a self-signed cert | 02:25 |
clarkb | Alex_Gaynor: ya, see above. Apparently jeblair decided not to pay for certs | 02:25 |
clarkb | SpamapS: I think it was resolved shortly after breaking unless there is a new babel issue | 02:26 |
SpamapS | File "/opt/stack/venvs/heat/local/lib/python2.7/site-packages/heat/openstack/common/gettextutils.py", line 34, in <module> | 02:26 |
clarkb | SpamapS: we pinned to the old version and after that upstream fixed the problem | 02:26 |
SpamapS | from babel import localedata | 02:26 |
mordred | SpamapS: there is a babel issue? | 02:26 |
SpamapS | ImportError: No module named babel | 02:26 |
SpamapS | clarkb: after pip installing heat in a virtualenv I get that.. | 02:26 |
mordred | SpamapS: I blame evil | 02:26 |
SpamapS | Checking why now. Just wanted to see if pypi resolved it or if we're all still carrying hacks. | 02:26 |
SpamapS | mordred: if you're into evil you're a friend of mine | 02:27 |
clarkb | SpamapS: we may still be carrying the hack and upstream fix didn't fix everything | 02:27 |
SpamapS | Heat does not have the >=0.9.6 | 02:27 |
clarkb | looks like we unpinned the upper bound https://github.com/openstack/requirements/blob/master/global-requirements.txt#L5 | 02:27 |
clarkb | SpamapS: you are probably installing 1.0.X | 02:27 |
clarkb | er 1.X | 02:28 |
clarkb | SpamapS: works locally. `virtualenv venv ; source venv/bin/activate ; pip install babel ; python -> from babel import localedata` | 02:29 |
clarkb | SpamapS: perhaps babel is not part of your requirements? | 02:30 |
clarkb | maybe it is only in test-requirements? | 02:30 |
*** prad has quit IRC | 02:36 | |
*** nijaba_ has joined #openstack-infra | 02:42 | |
*** changbl_ has joined #openstack-infra | 02:43 | |
*** jhesketh has quit IRC | 02:46 | |
*** jhesketh has joined #openstack-infra | 02:46 | |
*** nijaba has quit IRC | 02:47 | |
*** changbl has quit IRC | 02:47 | |
*** morganfainberg has quit IRC | 02:47 | |
*** adalbas has quit IRC | 02:47 | |
*** morganfainberg has joined #openstack-infra | 02:47 | |
SpamapS | clarkb: it is not, I think a sync was done from oslo without testing heat-manage | 02:47 |
*** morganfainberg has quit IRC | 02:48 | |
*** morganfainberg has joined #openstack-infra | 02:48 | |
*** pabelanger has joined #openstack-infra | 02:49 | |
mordred | SpamapS: where is heat-manage | 02:51 |
mordred | SpamapS: and is its code path exercised in devstack-gate? | 02:52 |
mordred | SpamapS: we just landed comprehensive requirements gating today | 02:52 |
SpamapS | mordred: heat/bin ... and if heat is installed in devstack, heat-manage is run | 02:53 |
SpamapS | ./lib/heat: $HEAT_DIR/bin/heat-manage db_sync | 02:53 |
mordred | is heat enabled in the main gate? I thought it was? | 02:54 |
mordred | also - we're now forcing all projects to sync with openstack/requirements inside of devstack | 02:54 |
mordred | but that just started today | 02:54 |
SpamapS | 2013-08-05 21:06:48.460 | 2013-08-05 21:06:48 + /opt/stack/new/heat/bin/heat-manage db_sync | 02:54 |
SpamapS | mordred: yeah it is run | 02:54 |
SpamapS | but, devstack puts everything on one system | 02:55 |
SpamapS | so you'd end up with babel | 02:55 |
SpamapS | but we're putting it in a venv | 02:55 |
* SpamapS files and fixes bug | 02:56 | |
*** yaguang has quit IRC | 02:56 | |
mordred | but would we wind up with the right or wrong version of babel? | 02:56 |
bodepd | clarkb: if I remove that setuptools package resource, it works | 02:56 |
*** xchu has joined #openstack-infra | 02:56 | |
*** dina_belova has joined #openstack-infra | 02:57 | |
*** adalbas has joined #openstack-infra | 02:57 | |
clarkb | bodepd: there is a setuptools package resource? | 02:57 |
clarkb | bodepd: we should've removed that a while back | 02:58 |
*** lifeless has joined #openstack-infra | 03:00 | |
SpamapS | mordred: right version | 03:01 |
*** dina_belova has quit IRC | 03:01 | |
lifeless | jog0: I don't follow what your link to the git ignore docs is for | 03:01 |
lifeless | jog0: Are you saying you'd like to remove all the in-repo rules ? | 03:02 |
*** yaguang has joined #openstack-infra | 03:08 | |
bodepd | clarkb: can I just remove it? | 03:13 |
*** nijaba_ has quit IRC | 03:14 | |
*** nijaba has joined #openstack-infra | 03:15 | |
*** dguitarbite has joined #openstack-infra | 03:19 | |
*** UtahDave has joined #openstack-infra | 03:36 | |
*** beagles has quit IRC | 03:39 | |
*** pcrews has quit IRC | 03:42 | |
*** jhesketh has quit IRC | 03:43 | |
bodepd | one other thing that I noticed, I was installing python-setuptools for jjb | 03:43 |
bodepd | and Puppet installs both | 03:43 |
*** jhesketh has joined #openstack-infra | 03:43 | |
zaro | clarkb: hey, the zmq plugin made it on the jenkins plugin manager. yeah! | 03:47 |
*** afazekas has quit IRC | 03:47 | |
fungi | trying to scrape branch-specific pypi mirroring out of my branes... i think we'll want to update all mirrors periodically and when a new release of a requirement under our control is uploaded to pypi | 03:50 |
fungi | the only situation where we might only need to update the mirror for a single branch is when a change is merged to openstack/requirements | 03:51 |
fungi | and i'm not sure it's worth the effort to optimize the jobs for that? | 03:51 |
*** yaguang has quit IRC | 03:55 | |
SpamapS | http://logs.openstack.org/34/40334/1/check/gate-heat-requirements/bf3d116/console.html .. is there a way to tell what reviews were included there? | 03:56 |
fungi | SpamapS: what do you mean by included? | 03:57 |
SpamapS | fungi: well zuul gathers all the pending things to merge, right? it's not just one merge. | 03:57 |
*** dina_belova has joined #openstack-infra | 03:57 | |
fungi | SpamapS: zuul tests your change against the tip of the target branch | 03:58 |
SpamapS | I think we got our swords crossed.. two approved changes did the same fix.. but.. can't find the one that wasn't mine. :-P | 03:58 |
SpamapS | fungi: but it doesn't just test one change at a time. | 03:59 |
SpamapS | n/m | 03:59 |
SpamapS | my fix is just borked | 03:59 |
fungi | SpamapS: it does, in fact. in an integration test your change is tested against the currently merged state of the relevant branches of all other projects being integrated | 03:59 |
*** dina_belova has quit IRC | 04:00 | |
fungi | you're probably thinking of in the gate pipeline, where tests are run merged on top of other changes being gated, but the end result you see is only testing on top of other changes which successfully merged | 04:01 |
fungi | so what you're probably interested in, in that case, is what the state of that project's branch was at the time your change was being tested merged onto it | 04:02 |
*** SergeyLukjanov has joined #openstack-infra | 04:03 | |
fungi | in that log the relevant lines are... | 04:04 |
fungi | HEAD is now at c8634c2 Move heat-cfn, heat-boto, heat-watch out of heat. | 04:04 |
fungi | HEAD is now at 019c878 Add Babel missing requirement | 04:05 |
fungi | the first is the change yours got merged onto | 04:05 |
*** rcleere has joined #openstack-infra | 04:10 | |
*** yaguang has joined #openstack-infra | 04:13 | |
*** jinkoo has joined #openstack-infra | 04:13 | |
*** nijaba has quit IRC | 04:14 | |
*** nijaba has joined #openstack-infra | 04:16 | |
*** vogxn has joined #openstack-infra | 04:16 | |
*** changbl_ has quit IRC | 04:22 | |
*** jinkoo has quit IRC | 04:25 | |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: Branch-specific PyPI mirrors https://review.openstack.org/40003 | 04:27 |
*** SergeyLukjanov has quit IRC | 04:30 | |
*** SergeyLukjanov has joined #openstack-infra | 04:31 | |
*** UtahDave has quit IRC | 04:33 | |
*** boris-42 has joined #openstack-infra | 04:44 | |
*** zul has quit IRC | 04:48 | |
*** rcleere has quit IRC | 04:52 | |
*** dina_belova has joined #openstack-infra | 04:58 | |
*** changbl_ has joined #openstack-infra | 05:00 | |
*** dina_belova has quit IRC | 05:02 | |
*** jjmb has joined #openstack-infra | 05:04 | |
*** jjmb has quit IRC | 05:04 | |
*** jhesketh has quit IRC | 05:05 | |
*** jhesketh has joined #openstack-infra | 05:05 | |
*** changbl_ has quit IRC | 05:09 | |
*** nijaba has quit IRC | 05:15 | |
*** nijaba has joined #openstack-infra | 05:16 | |
*** nijaba has quit IRC | 05:16 | |
*** nijaba has joined #openstack-infra | 05:16 | |
*** SergeyLukjanov has quit IRC | 05:24 | |
*** nicedice has quit IRC | 05:27 | |
*** amotoki has joined #openstack-infra | 05:34 | |
*** dguitarbite has quit IRC | 05:35 | |
*** vogxn has quit IRC | 05:52 | |
*** vogxn has joined #openstack-infra | 06:04 | |
*** dkliban_afk has quit IRC | 06:13 | |
*** nijaba has quit IRC | 06:13 | |
*** nijaba has joined #openstack-infra | 06:16 | |
*** nijaba has quit IRC | 06:16 | |
*** nijaba has joined #openstack-infra | 06:16 | |
*** Ryan_Lane has joined #openstack-infra | 06:31 | |
*** odyssey4me has joined #openstack-infra | 06:34 | |
*** yolanda has joined #openstack-infra | 06:43 | |
*** olaph has quit IRC | 06:58 | |
*** dina_belova has joined #openstack-infra | 06:59 | |
*** dina_belova has quit IRC | 07:03 | |
*** tianst20 has quit IRC | 07:07 | |
*** yaguang has quit IRC | 07:07 | |
*** dina_belova has joined #openstack-infra | 07:11 | |
*** olaph has joined #openstack-infra | 07:13 | |
*** nijaba has quit IRC | 07:15 | |
*** Ryan_Lane has quit IRC | 07:16 | |
*** nijaba has joined #openstack-infra | 07:16 | |
*** yaguang has joined #openstack-infra | 07:20 | |
*** dina_belova has quit IRC | 07:25 | |
amotoki | hi, i have a problem that devstack on Ubuntu 12.04 fails due to some version conflicts in python modules (boto, paramiko, cmd2). | 07:25 |
amotoki | devstack updates requirements in each project based on global-requirements. | 07:26 |
amotoki | on the other hand, some python modules are installed from the distribution. They are defined in files/apts/*. This causes a version conflict and nova-api fails to start. | 07:27 |
*** Ryan_Lane has joined #openstack-infra | 07:28 | |
amotoki | After removing euca2ools, python-boto, and python-cmd2 from files/apts/*, I succeeded in running stack.sh. | 07:29 |
*** CliMz has joined #openstack-infra | 07:29 | |
*** yaguang has quit IRC | 07:31 | |
*** vogxn has quit IRC | 07:34 | |
*** jpich has joined #openstack-infra | 07:35 | |
ttx | amotoki: sounds like a devstack bug you should file | 07:36 |
amotoki | ttx: sure. | 07:36 |
amotoki | I am searching through devstack patches, but it has not been filed so far. | 07:36 |
*** fbo_away is now known as fbo | 07:39 | |
*** yaguang has joined #openstack-infra | 07:44 | |
*** dina_belova has joined #openstack-infra | 07:46 | |
*** vogxn has joined #openstack-infra | 07:50 | |
*** Ryan_Lane has quit IRC | 07:54 | |
*** Ryan_Lane has joined #openstack-infra | 08:04 | |
CliMz | hi | 08:07 |
*** SergeyLukjanov has joined #openstack-infra | 08:08 | |
*** nijaba has quit IRC | 08:13 | |
*** nijaba has joined #openstack-infra | 08:17 | |
*** derekh has joined #openstack-infra | 08:23 | |
openstackgerrit | A change was merged to openstack-infra/odsreg: Allow multiple allocations for a topic https://review.openstack.org/40212 | 08:29 |
*** Ryan_Lane has quit IRC | 08:29 | |
*** sdake_ has quit IRC | 08:30 | |
*** jjmb has joined #openstack-infra | 09:11 | |
*** dina_belova has quit IRC | 09:13 | |
*** Ng is now known as Ng_holiday | 09:15 | |
*** nijaba has quit IRC | 09:16 | |
*** nijaba has joined #openstack-infra | 09:17 | |
*** jjmb has quit IRC | 09:19 | |
*** woodspa has joined #openstack-infra | 09:24 | |
*** woodspa has quit IRC | 09:24 | |
kiall | Any of the infra team online? | 09:26 |
*** dina_belova has joined #openstack-infra | 09:37 | |
amotoki | (FYI) I reported the devstack issue on Ubuntu in https://bugs.launchpad.net/devstack/+bug/1208718 | 09:57 |
uvirtbot | Launchpad bug 1208718 in devstack "n-api fails to start with the latest devstack on Ubuntu 12.04" [Undecided,New] | 09:57 |
*** giulivo has joined #openstack-infra | 09:58 | |
giulivo | guys, if I wanted to learn more (and maybe write a few lines) about which/how many gate configurations and periodic jobs we have, where should I start? | 09:59 |
*** dina_belova has quit IRC | 09:59 | |
kiall | giulivo: probably these links.. | 10:00 |
kiall | http://ci.openstack.org/jenkins-job-builder/ <-- The took used to describe and configure the jenkins jobs | 10:00 |
giulivo | oh kiall , thanks | 10:02 |
giulivo | so that is the tool and where is the YAML repo? | 10:02 |
*** vogxn has quit IRC | 10:04 | |
kiall | http://ci.openstack.org/zuul/ <-- the tool used to configure "Zuul", which handles gating and coordination of jenkins jobs with Gerrit reviews | 10:04 |
kiall | (sorry - was AFK for a min) | 10:04 |
kiall | https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml | 10:04 |
kiall | and JJB config: https://github.com/openstack-infra/config/tree/master/modules/openstack_project/files/jenkins_job_builder/config | 10:04 |
kiall | That should keep you going for a while ;) | 10:05 |
giulivo | indeed, thanks a lot | 10:07 |
kiall | no problem | 10:07 |
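To give a flavour of what Zuul's layout.yaml configures, here is a rough, abbreviated sketch of a check pipeline definition in the style used at the time; it is written from memory and may not match the linked file exactly.

```yaml
# modules/openstack_project/files/zuul/layout.yaml (abbreviated sketch)
pipelines:
  - name: check
    description: Newly uploaded patchsets enter this pipeline for an initial +/-1 Verified vote.
    manager: IndependentPipelineManager
    trigger:
      # react to new patchsets uploaded to Gerrit
      - event: patchset-created
    success:
      verified: 1
    failure:
      verified: -1
```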
*** xchu has quit IRC | 10:09 | |
*** vogxn has joined #openstack-infra | 10:10 | |
*** boris-42 has quit IRC | 10:15 | |
*** nijaba has quit IRC | 10:16 | |
*** nijaba has joined #openstack-infra | 10:20 | |
*** dina_belova has joined #openstack-infra | 10:20 | |
*** nayward has joined #openstack-infra | 10:30 | |
*** CliMz has quit IRC | 10:35 | |
*** yaguang has quit IRC | 10:43 | |
*** dina_belova has quit IRC | 10:46 | |
*** vogxn has quit IRC | 10:46 | |
*** dina_belova has joined #openstack-infra | 10:50 | |
*** nayward has quit IRC | 10:54 | |
*** lcestari has joined #openstack-infra | 11:00 | |
*** ruhe has joined #openstack-infra | 11:08 | |
*** CliMz has joined #openstack-infra | 11:11 | |
*** vogxn has joined #openstack-infra | 11:12 | |
*** nijaba has quit IRC | 11:17 | |
*** nijaba has joined #openstack-infra | 11:18 | |
ianw | BobBall: ping | 11:26 |
*** woodspa has joined #openstack-infra | 11:27 | |
*** zul has joined #openstack-infra | 11:33 | |
*** boris-42 has joined #openstack-infra | 11:34 | |
*** vogxn has quit IRC | 11:34 | |
*** Shrews has joined #openstack-infra | 11:41 | |
*** beagles has joined #openstack-infra | 11:50 | |
*** weshay has joined #openstack-infra | 11:52 | |
*** dina_belova has quit IRC | 11:53 | |
*** thomasem has quit IRC | 11:54 | |
*** ArxCruz has joined #openstack-infra | 12:03 | |
*** sandywalsh has quit IRC | 12:03 | |
*** nayward has joined #openstack-infra | 12:03 | |
BobBall | ianw: here now | 12:07 |
*** dkranz has joined #openstack-infra | 12:09 | |
ianw | BobBall: have you ever looked at Anvil? | 12:10 |
BobBall | only in the last couple of days | 12:10 |
BobBall | but not to any real degree | 12:10 |
BobBall | i.e. only very vaguely | 12:10 |
BobBall | do you think its method for RPMs is the one we should adopt then? | 12:11 |
ianw | well, I've surely missed a lot of context being fairly new | 12:11 |
ianw | but it seems like a good idea to me :) | 12:11 |
BobBall | I think my main concerns are it solves the issue in the "same" sort of way that virtual environments do | 12:12 |
ianw | harlowja_ helped me this morning (au time) and we got it working fairly quickly | 12:12 |
BobBall | i.e. it side-steps what the distributions are providing and just compiling things ourselves | 12:12 |
BobBall | I'm not convinced I understand why we don't use venvs in devstack | 12:12 |
ianw | i'm not sure either, maybe daemons calling out to other things via various paths "breaks out" of the venv | 12:13 |
ianw | just a guess | 12:13 |
BobBall | http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2013-08-01.log - look for TRACK_DEPENDS for some discussion around venv | 12:15 |
BobBall | "the reason we can't use venv in the gate is because we need to be able to have a system that works without them" - installing the python libs globally | 12:16 |
*** sandywalsh has joined #openstack-infra | 12:17 | |
BobBall | so it seems that we don't want to use packages, but we don't want to use anything that means we can't use packages... hehe | 12:17 |
*** nijaba has quit IRC | 12:17 | |
*** nijaba has joined #openstack-infra | 12:18 | |
ianw | i see | 12:20 |
BobBall | I suppose if anvil is creating RPMs for everything that needs them automatically then that's okay | 12:21 |
BobBall | but isn't that effectively gating on packages | 12:21 |
*** dina_belova has joined #openstack-infra | 12:24 | |
ianw | BobBall: what do you mean by that (gating on packages?) | 12:28 |
BobBall | Having the OpenStack gate depending on packages and packaging. i.e. if something changes in the python dependencies then the package scripts need to be updated (or is that automatic in Anvil?) for the gate to pass - thus putting an extra burden of package maintenance on the openstack developers | 12:29 |
ianw | I believe it's all automatic | 12:30 |
ianw | anvil scans the requirements.txt and downloads anything it can from yum | 12:30 |
*** rfolco has joined #openstack-infra | 12:30 | |
ianw | it then downloads the rest from pip ... which drags lots of deps back with it | 12:30 |
ianw | it then scans those deps again, and kicks out any of them that are packages | 12:30 |
ianw | finally it creates rpms of the remaning pip downloads | 12:30 |
ianw | then it installs everything in one transaction | 12:31 |
*** dina_belova has quit IRC | 12:32 | |
giulivo | ianw, I think sdague is a supporter of that approach; yesterday we discussed a bit about it | 12:33 |
giulivo | my proposal is that packaging should be a task for the distributors, not for the development | 12:33 |
giulivo | developers set and maintain the requirements; distributors eventually package those if they want to | 12:34 |
giulivo | not that my idea counts much, I'm just sparing my 2cents | 12:37 |
giulivo | why would you want to use some distro packages instead? | 12:37 |
ianw | giulivo: because the rest of the distro is (presumably) working well with them | 12:39 |
ianw | from a rh point of view, it would be nice to know that the RDO packages are being used for example | 12:40 |
giulivo | yeah so as per previous discussion with sdague , I think it really boils down to if we want to "gate" the actual capability of the new code to run on some particular distro | 12:40 |
giulivo | or if we want to "gate" the code changes and ensure they work in a vanilla environment | 12:40 |
BobBall | I agree ianw - and that's something that dan prince would also push for I'm sure - but the problem there is the RDO packages can't (by definition) keep up with the commits that update the requirements | 12:41 |
ianw | BobBall: no, but when they do update, we start using them automatically | 12:41 |
giulivo | ianw, I'm not sure what are the plans for RH, but I think we're not discussing the opportunity to run on RDO, just to use some pre-packaged python modules | 12:41 |
*** psedlak has joined #openstack-infra | 12:42 | |
ianw | giulivo: well, you get that for free. you add the RDO rpm, and if anvil finds packages there it uses them instead of building it itself | 12:42 |
BobBall | I guess the problem is that sometimes it will break - and when it does we don't want the gating tests to prevent people from committing to the OS repos | 12:42 |
ianw | s/rpm/repo/ | 12:42 |
*** psedlak has quit IRC | 12:44 | |
giulivo | ianw, as per BobBall comment, how could RDO keep up with the actual commit you are gating? | 12:44 |
ianw | BobBall: yes, possibly there is a bug in a distro package, which isn't your fault but stops you committing i guess. is that what you mean? | 12:44 |
ianw | giulivo: i'm not really talking about the gate. but you or i or anyone interested could run a CI job that runs anvil with the RDO repos | 12:45 |
BobBall | or we add a new dependency and if it's not in anvil (or the anvil auto-generator doesn't know how to cope with that new dependency or auto generating based on a change) then things could break there | 12:45 |
giulivo | ianw, okay so I've probably brought up a different topic then because what has been partially discussed with sdague was the usage of packages vs. pip for the devstack requirements (and at gate); a discussion mostly related to this https://review.openstack.org/#/c/40019/ | 12:47 |
ianw | BobBall: harlowja_ should speak to that when it's a normal time for him; but to my understanding, it's building an rpm from a pip download. so if pip can get it, there isn't really much to go wrong; it's a glorified copy | 12:47 |
giulivo | and not the usage of distro packages for the openstack components themselves | 12:48 |
*** dprince has joined #openstack-infra | 12:48 | |
ianw | giulivo: yes, well that change comes into it. that throws out all distro packages. but as my comment there says, it doesn't fix the case of, say python-setuptools the package being needed by xen | 12:49 |
*** ruhe has quit IRC | 12:50 | |
BobBall | actually it's python-lxml needed by xen :) | 12:52 |
ianw | oh, sorry, but yeah | 12:53 |
*** whayutin_ has joined #openstack-infra | 12:57 | |
*** weshay has quit IRC | 12:57 | |
*** anteaya has joined #openstack-infra | 12:58 | |
giulivo | ping kiall | 13:00 |
*** dkehn_ has joined #openstack-infra | 13:00 | |
*** dina_belova has joined #openstack-infra | 13:00 | |
giulivo | I see the gating jobs are in devstack-gate.yaml ; may I ask what is in python-jobs.yaml and what in python-bitrot-jobs.yaml ? | 13:01 |
*** dkehn has quit IRC | 13:01 | |
*** dkehn_ is now known as dkehn | 13:02 | |
*** vogxn has joined #openstack-infra | 13:04 | |
kiall | giulivo: I'm not totally familiar with the jobs :) But devstack-gate.yaml is not ALL the gating jobs, just the devstack ones | 13:06 |
kiall | python-jobs has things like the py27 / py26 / pep8 gate jobs, and some publication stuff etc | 13:06 |
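Roughly what one of those entries looks like: a hypothetical job-template sketch in the style of python-jobs.yaml, not a verbatim copy of the real file.

```yaml
# modules/openstack_project/files/jenkins_job_builder/config/python-jobs.yaml
# (sketch) JJB expands '{name}' once for each project that includes the template
- job-template:
    name: 'gate-{name}-pep8'
    node: precise

    builders:
      - gerrit-git-prep
      - pep8

    publishers:
      - console-log
```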
openstackgerrit | A change was merged to openstack-dev/hacking: python3: Fix tracebacks while running testsuites https://review.openstack.org/40052 | 13:06 |
openstackgerrit | A change was merged to openstack-dev/hacking: Fix typo in HACKING.rst https://review.openstack.org/40329 | 13:07 |
openstackgerrit | A change was merged to openstack-dev/hacking: Import exceptions list is now configurable https://review.openstack.org/39140 | 13:09 |
*** fbo is now known as fbo_away | 13:11 | |
*** woodspa_ has joined #openstack-infra | 13:13 | |
*** dkliban_afk has joined #openstack-infra | 13:14 | |
*** psedlak has joined #openstack-infra | 13:14 | |
*** woodspa has quit IRC | 13:16 | |
*** nijaba has quit IRC | 13:17 | |
*** nijaba has joined #openstack-infra | 13:18 | |
*** nijaba has joined #openstack-infra | 13:18 | |
*** dina_belova has quit IRC | 13:19 | |
*** fbo_away is now known as fbo | 13:20 | |
*** mriedem has joined #openstack-infra | 13:22 | |
mordred | amotoki: I removed netaddr the other day for a similar reason | 13:24 |
*** dina_belova has joined #openstack-infra | 13:24 | |
*** zul has quit IRC | 13:29 | |
*** ruhe has joined #openstack-infra | 13:31 | |
*** krtaylor has quit IRC | 13:36 | |
*** changbl_ has joined #openstack-infra | 13:36 | |
*** pentameter has joined #openstack-infra | 13:36 | |
mordred | sdague: https://review.openstack.org/40418 | 13:42 |
mordred | sdague: and https://review.openstack.org/40417 | 13:42 |
fungi | giulivo: the bitrot jobs are periodic re-runs of unit/functional and integration tests on supported stable release branches, to confirm new dependency releases and tooling changes don't break them | 13:42 |
* mordred is happy - there is so much scrollback from all sorts of different people this morning! | 13:42 | |
BobBall | our aim in life is to make you spend the first half hour of the day reading scrollback! | 13:43 |
* fungi feels like he spends most of his day reading scrollback | 13:43 | |
giulivo | you guys indeed turn the lights on and off | 13:44 |
fungi | i prefer to work with the lights off | 13:44 |
fungi | makes the loud techno music feel even louder | 13:44 |
sdague | sorry folks, was getting car serviced... reading scrollback with lots of my name pinged | 13:45 |
pentameter | Hey mordred, is a package headed my way? | 13:45 |
mordred | giulivo, ianw, BobBall: the only reason I'd really be interested in auto-generating packages is if it made things easier | 13:46 |
mordred | pentameter: no. I went to ship it before I left and the store was closed. I really need a personal assistant for this sort of thing | 13:46 |
BobBall | well at the moment I'm struggling to see many alternatives mordred... | 13:47 |
sdague | mordred: so why - https://review.openstack.org/#/c/40418/1/functions ? | 13:47 |
sdague | we don't protect people from borking over their local changes elsewhere | 13:47 |
mordred | sdague: we don't change their source repos elsewhere | 13:47 |
sdague | reclone=yes | 13:48 |
sdague | we sure do | 13:48 |
mordred | but they ask us to do that, no? | 13:48 |
mordred | chmouel: can you chime in on this one? ^^ | 13:48 |
sdague | my feeling is that it's actually dangerous to run stack.sh on changed repos for lots of other reasons already | 13:48 |
*** woodspa__ has joined #openstack-infra | 13:49 | |
sdague | if you want to rerun stack.sh you really need to be running with local forks elsewhere that you reference in localrc | 13:49 |
* BobBall has been doing that lots in the last few days | 13:49 | |
chmouel | sdague: well i guess for most people you use devstack to dev | 13:49 |
chmouel | so you would modify your source code | 13:49 |
chmouel | rerun devstack to clean etc.. | 13:49 |
mordred | BobBall: I think the question, as you guys were discussing above, is what we want to test and what we want to solve by adding packaging into the mix | 13:50 |
chmouel | test/restart the services etc... | 13:50 |
*** changbl_ has quit IRC | 13:50 | |
sdague | chmouel: right, so manually going into screen and stop / restarting a service, all good | 13:50 |
sdague | but stack.sh goes and clobbers all kinds of things | 13:51 |
chmouel | sdague: what about cleaning volumes and such? | 13:51 |
chmouel | clobering the repo before? | 13:51 |
mordred | BobBall: first and foremost, the most important thing we want to test is the code itself and how that interacts with the library versions _we_ have specified we require | 13:52 |
*** dina_belova has quit IRC | 13:52 | |
chmouel | sdague: we probably can't expect that a user is not going to rerun a stack.sh over and over again | 13:52 |
*** woodspa_ has quit IRC | 13:52 | |
BobBall | mordred: Perhaps - but I'm starting from the point of we've got a broken system at the moment because we're subverting the packaging system of the distro we're installing on. I think we either need to be fully independent of distro packaging (perhaps dropping support for RHEL or using venvs) or we have no real choice but to play in the packaging sandpit... | 13:52 |
sdague | chmouel: reclone does that indiscriminately | 13:52 |
*** vijendar has joined #openstack-infra | 13:52 | |
BobBall | chmouel, sdague: I run stack.sh many times - reclone is off (of course) and it's very useful | 13:53 |
BobBall | if we don't want people to run stack.sh multiple times, let's delete unstack.sh? | 13:53 |
sdague | BobBall: right, but there is actually a completely supported workflow for that | 13:53 |
mordred | BobBall: I've been arguing the opposite direction - we should stop trying to install _any_ of our depends via distro packages | 13:53 |
chmouel | sdague: so right, i wasn't using reclone. should we then force people to only be able to rerun stack.sh with reclone? | 13:53 |
sdague | SWIFT_REPO=/home/myuser/code/swift | 13:53 |
sdague | in localrc | 13:53 |
mordred | BobBall: because we _know_ that the distros do not have the right requirements for us | 13:53 |
BobBall | mordred: I'm happy with that as long as we don't have a conflict like we do with RHEL - if we can install things independently then it works fine | 13:54 |
mordred | BobBall: there shouldn't be a conflict if we don't try to install the same thing with both packaging systems | 13:54 |
chmouel | sdague: what's the difference of having it with the default ? | 13:54 |
chmouel | sdague: we are still going to have the requirements files updated automatically there, right? | 13:55 |
BobBall | mordred: but the problem is we have to in the case of python-lxml and python-crypto isn't it? | 13:55 |
mordred | nope | 13:55 |
mordred | why? | 13:55 |
BobBall | mordred: or anyone might have installed _anything_ in their system | 13:55 |
mordred | _that_ I don't care about | 13:55 |
chmouel | I thought Alex_Gaynor was working on removing those C deps ^ | 13:55 |
mordred | the second thing | 13:55 |
BobBall | well, ok, devstack doesn't have to install them, but if they exist already we have to tolerate them being there | 13:55 |
mordred | I cannot control how a person might have broken their system before running devstack | 13:55 |
mordred | if they are doing something more complex, they should use something that isn't devstack to install | 13:56 |
BobBall | why is it broken? They have just installed python-lxml | 13:56 |
mordred | devstack is not an arbitrary and rich deployment tool | 13:56 |
mordred | why have they installed _anything_ | 13:56 |
BobBall | In my case, xenserver-core depends on xen which depends on python-lxml | 13:56 |
BobBall | so that package exists in the system already | 13:56 |
mordred | right. so xenserver is a specific issue we need to solve | 13:56 |
BobBall | so my assertion is that devstack needs to tolerate python-lxml (or any other python-*) packages being installed | 13:57 |
sdague | chmouel: ok, I'm a soft -0 right now on the patch right now. | 13:57 |
BobBall | otherwise you might remove packages that the user is expecting to exist | 13:57 |
sdague | I'll let dtroyer weigh in later | 13:57 |
BobBall | I don't agree that devstack should break a system by removing packages a user has installed | 13:57 |
sdague | and think about it more today | 13:57 |
mordred | BobBall: I think we need to be more specific. I think devstack needs to engineer for python-lxml for sure, because xenserver is a thing we can be expected to know about | 13:57 |
chmouel | sdague: cool thanks | 13:57 |
chmouel | sdague: it does really feel like that isn't it http://openstackreactions.enovance.com/2013/08/understanding-global-requirements-in-the-gate/ | 13:58 |
mordred | BobBall: is the removing packages thing about lxml getting removed by us removing setuptools? | 13:58 |
sdague | :) | 13:58 |
BobBall | But what if the user has a differencing tool installed that uses python-foo which we want to install the pip version of... python-foo gets removed, and so does the tool they are using | 13:58 |
mordred | why would python-foo get removed? | 13:58 |
BobBall | because devstack force removes things that will conflict by installing them through pip | 13:59 |
BobBall | atm that's only python-lxml and python-crypto - but if we say we'll do everything through pip and ignore all packages, it's the same breakage problem | 13:59 |
mordred | actually, I think if we go my route, we should not remove things either | 14:00 |
BobBall | stack.sh line 602 | 14:00 |
mordred | I know - we're talking planning here - how do we fix it - ignore what it does right now | 14:00 |
*** _TheDodd_ has left #openstack-infra | 14:01 | |
BobBall | ok | 14:01 |
*** _TheDodd_ has joined #openstack-infra | 14:01 | |
BobBall | my point was that if we install things through pip (even if it's everything) then we're still overwriting files that might be installed by the packaging system and thus potentially breaking things | 14:02 |
mordred | we won't overwrite things if the system depend satisfies the requirement | 14:02 |
mordred | because pip won't install | 14:02 |
mordred | it's only if the python-lxml on the system installed by rpms is too old and does not meet the minimum version we assert we need, that we will take action | 14:03 |
BobBall | but if it doesn't then we'll upgrade the files underneath the RPM | 14:03 |
mordred | right | 14:03 |
mordred | so | 14:03 |
mordred | thing a) someone needs to make an rpm for lxml that is new enough and can be installed on xenserver boxes | 14:03 |
mordred | that, it seems, is a known important task | 14:03 |
BobBall | which violates what the packaging systems expect - perhaps I'm thinking too theoretically :) | 14:03 |
mordred | yes. you can't solve the full theory here | 14:03 |
mordred | because you try to solve a meta distribution that emerges magically without a distro team making it | 14:04 |
mordred | believe me, I fought that battle for about 2 years | 14:04 |
BobBall | hehe | 14:04 |
mordred | it's a really tempting path to explore though | 14:04 |
mordred | on this specific issue, if you, or someone, made a python-lxml rpm of a more modern lxml | 14:05 |
mordred | and installed that on your xenserver node | 14:05 |
mordred | would that break xenserver? | 14:05 |
BobBall | we'd need to change stack.sh to not forcibly remove it | 14:05 |
*** dina_belova has joined #openstack-infra | 14:05 | |
BobBall | just that change makes it work for me ;) | 14:05 |
*** whayutin_ has quit IRC | 14:05 | |
*** dkranz has quit IRC | 14:05 | |
mordred | right. I mean pre-devstack | 14:05 |
mordred | without devstack in the mix | 14:06 |
mordred | would that lxml package break the xenserver? | 14:06 |
BobBall | I'm sure a new python-lxml would work with xen, sure | 14:06 |
mordred | great. because if it wouldn't, there would be bigger fish to fry | 14:06 |
BobBall | indeed | 14:06 |
mordred | so what we need is to get one of those packages and get it into a location that can be trusted | 14:06 |
BobBall | I'm running a system with the old package but replaced via pip and that seems to work through most tests so far (other tests fail for known reasons that we're working on) | 14:07 |
mordred | so, removing lxml was done because not removing it was breaking someone else, no? | 14:07 |
BobBall | and assume that python-lxml won't need to be upgraded again (or assert that if it does then the gate fails until it's fixed?) | 14:07 |
mordred | wait - I think I don't understand you there | 14:08 |
BobBall | tbh I got confused by the comment in devstack... | 14:08 |
BobBall | I mean that if we require another newer version of python-lxml then we shouldn't accept that until the package is built for that newer version - otherwise we have the same problem | 14:08 |
BobBall | I have a meeting now - can we resume this later? :) | 14:09 |
mordred | we can | 14:09 |
mordred | I don't think what you are suggesting will work | 14:09 |
BobBall | great | 14:09 |
mordred | because we used to do it that way :) | 14:09 |
BobBall | you're probably right! :D | 14:09 |
mordred | but come back after your meeting | 14:09 |
*** nayward has quit IRC | 14:10 | |
*** weshay has joined #openstack-infra | 14:10 | |
*** rnirmal has joined #openstack-infra | 14:14 | |
*** datsun180b has joined #openstack-infra | 14:15 | |
*** cppcabrera has joined #openstack-infra | 14:15 | |
*** nijaba has quit IRC | 14:15 | |
*** psedlak has quit IRC | 14:15 | |
*** krtaylor has joined #openstack-infra | 14:16 | |
*** yaguang has joined #openstack-infra | 14:17 | |
*** markmcclain has joined #openstack-infra | 14:18 | |
*** nijaba has joined #openstack-infra | 14:19 | |
sdague | clarkb: did I make logstash sad? | 14:21 |
*** ^d has joined #openstack-infra | 14:24 | |
clarkb | sdague: maybe, did you ask it for info across a big chunk of time? | 14:24 |
*** burt has joined #openstack-infra | 14:24 | |
clarkb | it seems to not like that but should recover if the cache settings work like they should | 14:25 |
dkehn | clarkb, I was bitching on Sat that neutron devstack was all f-ed up, which it was, seems mo better now, FYI, built 2 VMs and all is working as it should | 14:26 |
*** zul has joined #openstack-infra | 14:26 | |
clarkb | dkehn yup it is passing the gate now too | 14:27 |
sdague | yes | 14:27 |
*** thomasbiege has joined #openstack-infra | 14:27 | |
dkehn | clarkb, I love how changes make it in as working | 14:27 |
sdague | I was trying to figure out the floating ip fail occurance | 14:27 |
*** yaguang has quit IRC | 14:28 | |
clarkb | sdague: I believe the issue here is elasticsearch needs to load more stuff into memory than it is capable of to perform the query | 14:29 |
clarkb | because our test logs are too big... more restricted queries by test time and other fields should help | 14:29 |
*** pabelanger_ has joined #openstack-infra | 14:31 | |
fungi | sdague: are you still working with grenade on that temporary 166.78.161.26 or should i tear it back down now? | 14:32 |
*** dolphm has joined #openstack-infra | 14:33 | |
*** pabelanger_ has quit IRC | 14:33 | |
*** pabelanger_ has joined #openstack-infra | 14:33 | |
*** pabelanger has quit IRC | 14:33 | |
anteaya | can this storyboard patch get some eyes on it please? https://review.openstack.org/#/c/40014/ | 14:33 |
*** pabelanger_ is now known as pabelanger | 14:33 | |
*** pabelanger_ has joined #openstack-infra | 14:34 | |
*** pabelanger has quit IRC | 14:34 | |
*** pabelanger has joined #openstack-infra | 14:34 | |
jeblair | fungi: know anything about "Fatal error: puppet-3.1.1-1.fc18.noarch requires hiera >= 1.0.0 : Success - empty transaction"? | 14:37 |
sdague | fungi: oh, that's solved | 14:37 |
sdague | sorry | 14:37 |
jeblair | fungi: fedora18-1.slave is sending emails with that | 14:37 |
sdague | fungi: so kill it | 14:37 |
mordred | jeblair: wow. that looks amazing | 14:37 |
mordred | and no | 14:37 |
fungi | jeblair: i thought i'd downed fedora18-1's puppetry | 14:37 |
fungi | i'll go ahead and tear that slave down | 14:38 |
*** cody-somerville_ is now known as cody-somerville | 14:38 | |
LinuxJedi | mordred: any way to convert a draft review into a non-draft review if the person who created it is away on vacation? | 14:39 |
fungi | LinuxJedi: by editing a row in the database | 14:39 |
LinuxJedi | ouch, ok | 14:40 |
*** yaguang has joined #openstack-infra | 14:40 | |
fungi | LinuxJedi: "draft" is a bool column in the patchsets table. just toggle it | 14:40 |
mordred | LinuxJedi: or - grab it, and recommit it removing the change-id line | 14:40 |
mordred | LinuxJedi: and upload it as a new changeset | 14:40 |
fungi | LinuxJedi: your gerrit or openstack's? | 14:40 |
LinuxJedi | mordred: I've already pushed up a changeset on top of the draft one, so that will make a mess | 14:41 |
LinuxJedi | fungi: openstack | 14:41 |
fungi | i can un-draft something in review.o.o, just let me know the change number | 14:41 |
LinuxJedi | fungi: 39608 | 14:41 |
LinuxJedi | normally it wouldn't matter, but I need to deploy this feature before he gets back from vacation :) | 14:42 |
clarkb | sdague: I am able to perform queries over the last hour. I think you are safe searching 12 hour chunks or so | 14:44 |
clarkb | jeblair: I think the zuul status page isn't properly accounting for d-g nodes belonging to the new jenkins servers | 14:44 |
jeblair | clarkb: that's correct; the 3 dg systems are overwriting each other | 14:45 |
*** tianst has joined #openstack-infra | 14:45 | |
jeblair | LinuxJedi: tell him not to use drafts next time. only unhappiness can result. | 14:45 |
LinuxJedi | why do we support them if they are bad? :) | 14:46 |
mordred | BobBall, sdague: https://review.openstack.org/40431 | 14:46 |
*** edleafe has joined #openstack-infra | 14:46 | |
mordred | jeblair: do we need to turn d-g into a system driven by gearman with a single reporting entity? :) | 14:47 |
*** ruhe has quit IRC | 14:48 | |
*** changbl_ has joined #openstack-infra | 14:51 | |
fungi | LinuxJedi: turns out i had to update the draft column for that patchset from Y to N in the patch_sets table but also update the status column for the change from d to w in the changes table | 14:53 |
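A hedged sketch of the database tweak fungi describes; the table and column names are the ones stated above for Gerrit 2.x, while the database name "reviewdb" and the patch_set_id are placeholders.

    mysql reviewdb -e "
      UPDATE patch_sets SET draft = 'N' WHERE change_id = 39608 AND patch_set_id = 1;
      UPDATE changes SET status = 'w' WHERE change_id = 39608;
    "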
edleafe | mordred: I'm getting a pbr installation on travis-ci: https://travis-ci.org/rackspace/pyrax/jobs/9901960 | 14:53 |
edleafe | mordred: alex gaynor suggested pinging you about it | 14:53 |
LinuxJedi | fungi: eek... sorry to be a pain. Many thanks for sorting that :) | 14:53 |
mordred | edleafe: wow. that's the weirdest problem anyone has come to me with in this channel :) | 14:53 |
mordred | I see travis-ci.org and pyrax in that question ;) | 14:53 |
fungi | LinuxJedi: no worries--i wanted to have a better understanding of how that worked underneath anyway | 14:53 |
edleafe | mordred: well, I maintain pyrax | 14:54 |
mordred | edleafe: yes yes. I'm being snarky. one sec | 14:54 |
mordred | edleafe: I believe you have hit the distribute upgrade problem ... | 14:55 |
edleafe | mordred: fwiw, it installs without issue on every other system I've tried | 14:55 |
mordred | edleafe: does travis give the ability to tell it what version of setuptools you want in the virtualenv it uses? | 14:55 |
edleafe | mordred: not that I know of. it seems to be a known issue after googling: https://github.com/travis-ci/travis-ci/issues/1197 | 14:56 |
mordred | yes. basically, anything that depends on anything that depends on distribute is going to break | 14:57 |
mordred | if the travis guys don't do something systemically to fix it | 14:57 |
edleafe | but setup.py is using setuptools | 14:57 |
edleafe | not distribute | 14:57 |
mordred | edleafe: nono. this has nothing to do with you | 14:58 |
mordred | this has to do with a 3rd level transitive dependency causing things to get upgraded in a specific sequence which breaks things | 14:58 |
edleafe | mordred: yeah, I know - just wanted to clarify | 14:58 |
mordred | so - in your install: section, before your python setup.py install | 14:59 |
mordred | edleafe: add "pip install -U setuptools" | 14:59 |
mordred | and everything should work | 14:59 |
*** ftcjeff has joined #openstack-infra | 15:00 | |
edleafe | mordred: trying it now... | 15:01 |
edleafe | mordred: no love from travis: https://travis-ci.org/rackspace/pyrax/jobs/9904417 | 15:05 |
*** ruhe has joined #openstack-infra | 15:05 | |
mordred | let the record show, I have been helpful to travis people: https://github.com/travis-ci/travis-ci/issues/1197 | 15:05 |
clarkb | is the virtualenv being created with distribute? | 15:06 |
edleafe | mordred: duly noted | 15:06 |
edleafe | mordred: and thanks for looking into this | 15:06 |
mordred | edleafe: that is a different issue | 15:07 |
mordred | or, maybe it's not. hrm... one sec | 15:07 |
mordred | edleafe: will you try one more thing for me, just for giggles? | 15:08 |
mordred | edleafe: will you replace "python setup.py install" with "pip install ." | 15:09 |
edleafe | mordred: ok, gimme a sec... | 15:09 |
edleafe | mordred: leaving in the 'pip install -U setuptools'? | 15:09 |
mordred | edleafe: yes | 15:09 |
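Putting mordred's two suggestions together, a hedged sketch of what ends up in the travis install steps; the exact .travis.yml layout for pyrax is assumed, only the two commands come from the conversation.

    pip install -U setuptools   # make sure the virtualenv has a modern setuptools rather than old distribute
    pip install .               # install via pip instead of "python setup.py install",
                                # avoiding the in-process easy_install code path that bleeds pbr state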
*** ianw has quit IRC | 15:10 | |
*** yaguang has quit IRC | 15:10 | |
*** pcrews has joined #openstack-infra | 15:12 | |
mordred | WOW. why in the WORLD is pbr's override, pulled in by python-swiftclient, getting called by pyrax's easy_install call | 15:13 |
mordred | that's AMAZING bleeding | 15:13 |
*** dolphm has quit IRC | 15:13 | |
mordred | oh -right - easy_install does everything in function calls in the same process | 15:14 |
fungi | jeblair: fyi, i've deleted fedora18-1.slave.openstack.org from jenkins.o.o, from rackspace nova and from rackspace dns (a and aaaa rrs). i checked jenkins01 (and 02) also just to be safe, but didn't see it in there yet. also would have deleted it from puppet-dashboard but looks like i may have already done that in the past couple weeks | 15:14 |
mordred | so python-swiftclient consumes pbr, which does things for its install that are still in-process afterwards, so subsequent easy_install invocations FAIL | 15:14 |
BobBall | mordred: looks like your change will work - but I'm just testing it to be doubly sure | 15:14 |
mordred | WOW | 15:14 |
mordred | what a giant pile of terrible design! | 15:14 |
BobBall | mordred: and while I think a long term solution is probably needed let's get the short term fix in too :) | 15:15 |
jeblair | fungi: cool, thx! | 15:15 |
fungi | jeblair: out of curiosity where were you seeing those errors show up? puppet agent was disabled and the cron job was commented out for weeks | 15:15 |
mordred | BobBall: yeah. that way we won't feel pressure to solve the very tricky and intricate problem NOW | 15:15 |
edleafe | mordred: travis is all unicorns and rainbows now | 15:15 |
fungi | jeblair: oh, nevermind. it was on the autoupdate cronspam--i see it now | 15:16 |
clarkb | jeblair: fungi mordred translation change proposals to gerrit are working again on proposal.slave. I think we can safely delete the old tx slave now | 15:16 |
mordred | edleafe: awesome. that's because a) easy_install is TERRIBLE and b) easy_install is TERRIBLE and c) you guessed it, easy_install is the worst piece of software ever written by humans | 15:16 |
fungi | clarkb: yay! | 15:16 |
mordred | clarkb: woot | 15:16 |
jeblair | clarkb: cool | 15:16 |
edleafe | mordred: yeah, but it's *easy*!! | 15:16 |
*** reed has joined #openstack-infra | 15:17 | |
*** dkranz has joined #openstack-infra | 15:17 | |
clarkb | most patchsets ever? https://review.openstack.org/#/c/27091/ | 15:18 |
*** nijaba has quit IRC | 15:18 | |
*** danger_fo_away is now known as danger_fo | 15:19 | |
* mordred throws setuptools at edleafe | 15:19 | |
*** nijaba has joined #openstack-infra | 15:19 | |
*** nijaba has quit IRC | 15:19 | |
*** nijaba has joined #openstack-infra | 15:19 | |
fungi | clarkb: and no votes on it for nearly 4 months... does anybody look at glance translation update proposals? i'm guessing not :/ | 15:20 |
mordred | clarkb: do we have a plicy that a single core can +2 a translation change? | 15:21 |
clarkb | mordred: I don't think we do | 15:21 |
clarkb | I am open to that though | 15:22 |
*** dolphm has joined #openstack-infra | 15:22 | |
clarkb | glance dev seems to be really slow right now | 15:22 |
*** vogxn has quit IRC | 15:22 | |
* mordred just pinged in channel | 15:23 | |
jeblair | fungi: you mentioned yesterday you might be interested in helping to move jenkins slave nodes... still up for that? | 15:23 |
fungi | jeblair: sure, i'm happy to pitch in on that | 15:24 |
jeblair | fungi: okay, want to work on the precise nodes while i continue on centos? | 15:24 |
fungi | you're setting the node offline in jenkins.o.o, then waiting for any running job to complete, then adding the node on jenkins01 or 02 (odd vs even), then deleting it in jenkins.o.o? | 15:25 |
fungi | any steps i'm missing there? | 15:25 |
jeblair | fungi: yes -- except make sure to click the delete button on jenkins before hitting the save button on jenkins0[12] so that there's no chance of stepping on toes | 15:25 |
fungi | aha, got it. so delete from jenkins.o.o before adding to a new jenkins | 15:26 |
jeblair | yep | 15:26 |
fungi | i'll tackle precises | 15:26 |
fungi | now that my advisories for the day are sent | 15:26 |
jeblair | keepin ya busy, huh? | 15:27 |
fungi | i like being busy ;) | 15:27 |
*** CliMz has quit IRC | 15:27 | |
fungi | i do want to pick your brain about branch-specific mirroring at some point today too. i have a wip change for the jobs up, but i'll wait to pester you until i have the corresponding jeepyb patch up for review this afternoon | 15:28 |
jeblair | ok | 15:28 |
*** yaguang has joined #openstack-infra | 15:28 | |
*** vijendar has quit IRC | 15:31 | |
*** markmcclain has quit IRC | 15:31 | |
jeblair | fungi: i'm done with centos, i'll do precise3k now | 15:33 |
fungi | k | 15:33 |
*** psedlak has joined #openstack-infra | 15:34 | |
HenryG | This is new to me. What happened here? http://logs.openstack.org/17/39017/4/gate/gate-neutron-python27/ed7be43 | 15:35 |
*** odyssey4me has quit IRC | 15:36 | |
jeblair | clarkb: can you triage that? ^ it doesn't make sense to me. | 15:36 |
jeblair | fungi: ok, done with precise3k; i'm going to go turn down the devstack-gate knobs | 15:37 |
clarkb | jeblair: ya | 15:38 |
clarkb | HenryG: if you open up the subunit log and scroll to the end | 15:38 |
clarkb | HenryG: it appears that the test failures are captured in there. Still not sure why testr didn't report them to stderr correctly | 15:38 |
*** pabelanger has quit IRC | 15:39 | |
clarkb | oh interesting the thing I thought was a failure reported successful. /me keeps looking | 15:39 |
*** pabelanger__ has joined #openstack-infra | 15:39 | |
fungi | jeblair: okay. i'm slowly ramping up on the precise slaves and getting the migration pattern down, but will be a bit before i get through the rest of them | 15:39 |
*** pabelanger__ has quit IRC | 15:39 | |
*** pabelanger__ has joined #openstack-infra | 15:39 | |
jeblair | fungi: *nod* | 15:39 |
*** pabelanger__ is now known as pabelanger | 15:39 | |
clarkb | oh I see | 15:40 |
*** pabelanger has joined #openstack-infra | 15:40 | |
clarkb | HenryG: those are process return code failures, indicating the test runners exited uncleanly without returning 0 to the calling testr process | 15:40 |
clarkb | HenryG: is the code under test prematurely exiting? | 15:41 |
HenryG | clarkb: My change did not touch these bits :( | 15:42 |
clarkb | looks like py26 and py27 failed in the same way so it isn't completely inconsistent | 15:43 |
clarkb | HenryG: is it possible that your change rebased atop master tickled something? | 15:43 |
HenryG | clarkb: Trying to think how that might be, but extremely unlikely for unit tests | 15:44 |
*** beagles has quit IRC | 15:44 | |
clarkb | HenryG: looking in the subunit logs I think the log capturing has the ERRORs | 15:45 |
clarkb | things like 2013-08-06 14:37:29,904 ERROR [neutron.api.v2.resource] create failed | 15:45 |
*** vijendar has joined #openstack-infra | 15:45 | |
clarkb | and 2013-08-06 14:37:34,512 ERROR [neutron.api.v2.resource] update failed | 15:46 |
clarkb | these feel more like functional tests and they are bombing out because something changed | 15:46 |
HenryG | clarkb: please bear with me while I ask ignorant questions ... | 15:49 |
*** rcleere has joined #openstack-infra | 15:49 | |
HenryG | Are these the same tests that I can run locally with 'tox -e py27' in neutron? | 15:50 |
clarkb | HenryG: yes | 15:50 |
clarkb | HenryG: however, your change was rebased atop the state of the gate when it was tested and failed | 15:50 |
clarkb | HenryG: so if you want that exact state you need to fetch the ref that was tested | 15:51 |
clarkb | git fetch http://zuul.openstack.org/p/openstack/neutron refs/zuul/master/Z333d40755cb1488680f87e4af90ec7d6 && git checkout FETCH_HEAD | 15:51 |
clarkb | that ref is near the beginning of the job's console log | 15:51 |
*** sarob has joined #openstack-infra | 15:52 | |
fungi | clarkb: mordred: looks like we recently broke pip on the centos slaves? http://puppet-dashboard.openstack.org:3000/reports/778887 "Could not locate the pip command." started on sunday... | 15:53 |
clarkb | on sunday? I didn't approve anything over the weekend /me looks in git logs | 15:54 |
fungi | i'm going to guess something locally depending on pbr, upgrading pbr, doing something to globally-installed pip | 15:55 |
clarkb | pbr did do a new release right? | 15:55 |
fungi | so might have been a pbr change or release tagged on sunday which triggered it? | 15:55 |
clarkb | fungi: new pbr release on the 4th | 15:55 |
clarkb | (which was sunday) I bet that is what caused the problem | 15:56 |
fungi | we have a loose temporal correspondence with that then | 15:56 |
fungi | stronger if that changed handling of pip | 15:56 |
*** tianst has quit IRC | 15:56 | |
mordred | it did not change handling of pip | 15:56 |
zaro | clarkb: did you see that the zmq plugin is available now? | 15:56 |
*** avtar has joined #openstack-infra | 15:56 | |
clarkb | zaro: yup I saw that. It showed up late yesterday | 15:57 |
clarkb | zaro: so you think the pom.xml upload did it? | 15:57 |
zaro | yes, fo sure. | 15:57 |
clarkb | if so we should add uploading that file to the plugin upload job | 15:57 |
zaro | clarkb: https://bugs.launchpad.net/openstack-ci/+bug/1208901 | 15:57 |
uvirtbot | Launchpad bug 1208901 in openstack-ci "deploy jenkins plugin pom.xml file " [Undecided,New] | 15:57 |
clarkb | perfect | 15:58 |
clarkb | fungi: mordred remember centos is broken and doesn't use /usr/local | 15:58 |
clarkb | fungi: mordred isn't it possible that in upgrading pbr or $other thing dependencies were munged and pip was removed? similar to what we see with devstack on rhel? | 15:59 |
* anteaya wonders what life would be like if smalltalk had gained more traction and we could just issue devstack images | 15:59 | |
*** beagles has joined #openstack-infra | 16:00 | |
BobBall | hmmmm | 16:00 |
BobBall | Running the latest devstack removes python-setuptools which in turn removes nose, coverage, pip, numpy... | 16:00 |
clarkb | anteaya: ship around squeak VMs? | 16:00 |
anteaya | clarkb: would eliminate the dependency issues, would it not? | 16:01 |
mordred | clarkb: oh god | 16:01 |
anteaya | I take it mordred doesn't like the idea | 16:01 |
mordred | ok. so - I'm starting to believe that we REALLY need a setuptools 0.9.8 rpm | 16:01 |
mordred | because the dance we're doing with redhat right now is not really working for me | 16:01 |
*** mrodden has quit IRC | 16:02 | |
clarkb | anteaya: it would, but have you used squeak? I can't get over the fact that it tries to be so self contained to the point of uselessness | 16:02 |
mordred | I don't mean we need all of the rpms in the world - but a setuptools 0.9.8 rpm will solve MANY things | 16:02 |
clarkb | anteaya: turns out my existing editor and browser and all these other tools are better than the conglomeration they ship. | 16:02 |
clarkb | mordred: do we need a corresponding deb? | 16:03 |
anteaya | clarkb: I have just done the tutorials, but I know a guy who has used smalltalk for years, he loves it | 16:03 |
clarkb | mordred: so that we can apply it symmetrically across platforms? | 16:03 |
anteaya | clarkb: fair enough | 16:03 |
mordred | clarkb: debian isn't broken in quite the same way - but yeah, that would be nice | 16:03 |
clarkb | anteaya: one of my professors was a big Squeak fan. wrote http://www.squeakbyexample.org/ | 16:03 |
mordred | clarkb: I've asked zul for one | 16:03 |
clarkb | anteaya: we were happy when he made us use mosml | 16:03 |
anteaya | but devstack is fairly self contained, and meant to be so, no? | 16:03 |
anteaya | clarkb: ? https://www.itu.dk/~sestoft/mosml.html | 16:05 |
clarkb | anteaya: ya | 16:06 |
anteaya | you liked it better than squeak? | 16:06 |
pleia2 | good morning | 16:06 |
clarkb | anteaya: yes, I mean it is completely different, but we felt the pain of mosml was more bearable than squeak | 16:06 |
anteaya | morning pleia2 | 16:07 |
clarkb | pleia2: morning | 16:07 |
anteaya | clarkb: okay, guess I have never felt the pain of squeak, just that it requires a structure that is very limited because the paradigm was never as popular as files | 16:07 |
anteaya | but I like prolog and forth, so I am odd | 16:08 |
*** Ryan_Lane has joined #openstack-infra | 16:08 | |
*** salv-orlando has joined #openstack-infra | 16:08 | |
clarkb | HenryG: any luck? | 16:08 |
*** sarob_ has joined #openstack-infra | 16:08 | |
*** sarob has quit IRC | 16:08 | |
HenryG | clarkb: I fetched that ref and ran tox locally. No problems. | 16:09 |
clarkb | zaro: for deploying the pom.xml I think we should extract the file from the hpi archive in the publish job then push both. Instead of pushing both to tarballs.o.o | 16:09 |
clarkb | HenryG: it is possible this is an interaction between tests in that case | 16:10 |
clarkb | HenryG: with that ref checked out, you can download the subunit file from the failing test run. Then `source .tox/py27/bin/activate ; testr load $path_to_downloaded_subunit_file ; testr run --analyze-isolation` this will attempt to determine which tests interact poorly with each other given the order the tests were run in on the gate | 16:11 |
salv-orlando | clarkb, HenryG: we were chasing this issue in openstack-neutron | 16:11 |
clarkb | the other thing to try is running those tests on their own. It may be that they have some hidden dependency on other tests | 16:12 |
salv-orlando | as a matter of fact, we have already seen this particular failure intermittently, where the test runner appears to crash | 16:12 |
*** mrodden has joined #openstack-infra | 16:12 | |
salv-orlando | has there been any recent change in the gate which might have increased the level of test concurrency? | 16:13 |
clarkb | salv-orlando: no, the unittest slaves are still 4 core machines | 16:13 |
clarkb | salv-orlando: but order isn't necessarily deterministic | 16:13 |
clarkb | testr will attempt to group tests optimally given previous test run times, jenkins does not keep the .testrepository dir though | 16:14 |
salv-orlando | clarkb: thanks. My question was about the fact that from today we're noticing a much higher impact of this particular failure | 16:14 |
clarkb | so to reproduce locally you may have an easier time if you moved .testrepository aside each time you run the tests | 16:14 |
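Collecting clarkb's suggestions above into one hedged sequence; the zuul ref is the one quoted earlier for this particular job, and $path_to_downloaded_subunit_file stands in for wherever the failing run's subunit log was saved.

    git fetch http://zuul.openstack.org/p/openstack/neutron \
        refs/zuul/master/Z333d40755cb1488680f87e4af90ec7d6 && git checkout FETCH_HEAD
    tox -e py27                                  # build the py27 virtualenv and run the suite once
    source .tox/py27/bin/activate
    testr load $path_to_downloaded_subunit_file  # import the gate run's results and ordering
    testr run --analyze-isolation                # look for tests that only fail in combination
    # between local runs, move the accumulated timing data aside so the test
    # grouping is not skewed by earlier local results:
    mv .testrepository .testrepository.old && testr init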
*** cppcabrera has left #openstack-infra | 16:14 | |
clarkb | salv-orlando: last time we saw something like this in neutron there was a sys.exit() call in a plugin iirc | 16:15 |
clarkb | which caused the test suite to bomb out early | 16:15 |
*** nicedice has joined #openstack-infra | 16:16 | |
*** nayward has joined #openstack-infra | 16:16 | |
salv-orlando | sys.exit however should be deterministically reproducible… shouldn't it? | 16:17 |
clarkb | salv-orlando: not if it is hidden in error handling code or some other dark corner | 16:17 |
clarkb | but sys.exit does seem less likely if this is not deterministic | 16:17 |
*** vogxn has joined #openstack-infra | 16:18 | |
salv-orlando | I am more inclined towards a concurrency issues, but I'll check the sys.exit too. | 16:18 |
*** ^d is now known as ^demon|away | 16:18 | |
salv-orlando | This particular test case basically just executes a set of common unit tests with another plugin - but in turn this plugin just uses the same mixin | 16:19 |
salv-orlando | as all the other plugins. | 16:19 |
*** nijaba has quit IRC | 16:19 | |
salv-orlando | The thing that is probably weird is that this test case inherits from another plugin's test case. | 16:19 |
*** yolanda has quit IRC | 16:19 | |
*** nijaba has joined #openstack-infra | 16:19 | |
salv-orlando | Anyway, I will keep trying to fix it. I Might stress the gate a bit with a few patches. | 16:20 |
mordred | notmyname: any chance I could convince you to release python-swiftclient? | 16:20 |
salv-orlando | If the gate becomes pretty much blocked by this, I suggest temporarily removing this test case. It adds very little to code coverage, and it's plugin specific. | 16:20 |
*** sarob_ has quit IRC | 16:21 | |
mordred | notmyname: with the requirments update that drops d2to1 from the reqs? it's causing people to trip over the distribute/setuptools bug. | 16:21 |
clarkb | salv-orlando: good to know. Let me know if you have testr or jenkins related questions. happy to help as much as I can | 16:21 |
clarkb | HenryG: ^ | 16:21 |
*** vogxn has quit IRC | 16:22 | |
mordred | notmyname: this one: https://review.openstack.org/#/c/40286/ specifically | 16:22 |
*** sarob has joined #openstack-infra | 16:22 | |
*** vogxn has joined #openstack-infra | 16:22 | |
*** dkranz has quit IRC | 16:22 | |
mordred | clarkb: one of the ways we can stop tripping over this setuptools/distribute thing | 16:22 |
* clarkb AFKs a bit to facilitate a move to the office | 16:22 | |
mordred | clarkb: is to get everything out of our pipeline that tries to install distribute | 16:22 |
clarkb | mordred: basically kill distribute with fire? | 16:23 |
mordred | which, at the moment, are pip installs of d2to1 and python-mysql | 16:23 |
mordred | yeah | 16:23 |
mordred | we can safely install python-mysql from apt in both devstack and our machines that need it | 16:23 |
mordred | and if we can get python-*client to cut new releases with the pbr dep updated to post d2to1 | 16:23 |
mordred | I think we're out of the woods | 16:23 |
mordred | I'm thinking that instead of trying to solve setuptools globally, we just stop using things that trip the bug | 16:24 |
mordred | and go back to system installed setuptools | 16:24 |
jeblair | that would be great | 16:24 |
mordred | yeah - I think we've been chasing the wrong rabbit | 16:24 |
clarkb | ++ | 16:25 |
clarkb | can we also work with mysql-python to fix their depend? | 16:25 |
reed | interesting... the UX team is evaluating using askbot to manage their discussions | 16:26 |
clarkb | that dependency on distribute has caused other problems | 16:26 |
mordred | yup. well, I think mysql-python is actually safe for us right now. I have no idea when andy might cut another release | 16:27 |
mordred | but yeah | 16:27 |
*** yaguang has quit IRC | 16:27 | |
salv-orlando | clarkb: good news. enikanorov found the nasty sys.exit | 16:28 |
clarkb | \o/ can you link me to the review that fixes the problem? I am curious now | 16:30 |
*** odyssey4me has joined #openstack-infra | 16:32 | |
salv-orlando | clarkb: enikanorov is working. The sys.exit merely uncovered the real reason for the exception | 16:33 |
reed | do you know if there is a special process that users need to follow in order to change their SSH key in gerrit? | 16:33 |
reed | or just change the key and go push a review? | 16:33 |
salv-orlando | reed: settings --> ssh public keys? | 16:34 |
reed | I have a user reporting that reviews are refused after changing the key | 16:34 |
fungi | reed: they go to https://review.openstack.org/#/settings/ssh-keys | 16:34 |
reed | salv-orlando, that's what I assumed | 16:34 |
pleia2 | do we want to allow http access to cgit, or force https for everything? came up in the add ssl review: https://review.openstack.org/#/c/40253/5/modules/cgit/templates/git.vhost.erb | 16:34 |
*** dkranz has joined #openstack-infra | 16:35 | |
clarkb | pleia2: for all other https capable hosts we force https | 16:35 |
*** BobBall is now known as BobBall_Away | 16:35 | |
*** sdake_ has joined #openstack-infra | 16:35 | |
pleia2 | clarkb: ah, makes sense then | 16:36 |
fungi | i'm torn. on the one hand we've got people who have trouble getting git+https through their work proxies reliably (ibm?), on the other hand https-everywhere is a compelling idea | 16:36 |
fungi | and since we're adding git protocol too, which should be faster than plaintext http in theory, maybe the http benefits are not significant | 16:37 |
*** thomasbiege has quit IRC | 16:37 | |
fungi | also, having http and http copies of everything visible to crawlers means search results are diluted | 16:38 |
fungi | because they index it twice | 16:38 |
*** sandywalsh has quit IRC | 16:39 | |
*** UtahDave has joined #openstack-infra | 16:40 | |
*** yaguang has joined #openstack-infra | 16:40 | |
*** boris-42 has quit IRC | 16:41 | |
*** fbo is now known as fbo_away | 16:43 | |
*** zul has quit IRC | 16:43 | |
openstackgerrit | Elizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add httpd ssl support to git.openstack.org https://review.openstack.org/40253 | 16:46 |
*** gyee has joined #openstack-infra | 16:48 | |
clarkb | fungi: ya git protocol should be much better and I think we can recommend it in cases where authenticating the remote end isn't super important | 16:51 |
jeblair | +1 | 16:52 |
jeblair | the devstack nodes on jenkins.o.o are almost run down; i'll increase the numbers on 01 and 02 now. | 16:52 |
Alex_Gaynor | jeblair: can't we just yell at teh tempest authors to make their tests faster? (snark) | 16:52 |
Alex_Gaynor | Also, has anyone tried to quantify if openstack has the most used CI system in all of open source? | 16:53 |
jeblair | Alex_Gaynor: we do that too ;) | 16:53 |
*** krtaylor has quit IRC | 16:53 | |
Alex_Gaynor | jeblair: yell, or quantify? | 16:53 |
clarkb | Alex_Gaynor: making the tests faster means we need more nodes :) as we can run more tests in a shorter period of time | 16:53 |
jeblair | Alex_Gaynor: a little of both? | 16:53 |
clarkb | Alex_Gaynor: jeblair had a test hours per hour graph | 16:53 |
jeblair | clarkb: it's not done yet, i'm pretty sure the last version was wrong | 16:54 |
fungi | jeblair: precise slaves are all moved to their respective new masters, and i've confirmed at least one completed job ran on each now | 16:54 |
jeblair | Alex_Gaynor: but snark aside, what i'm actually doing is moving the load off of our old jenkins server and on to two newly minted jenkins masters. | 16:54 |
clarkb | jeblair: it occurred to me that jenkins + gearman is what you would need to compete with travis >_> | 16:55 |
*** sandywalsh has joined #openstack-infra | 16:55 | |
Alex_Gaynor | jeblair: are we going to get a nice shiny SSL cert for 01 and 02? | 16:55 |
clarkb | Alex_Gaynor: I think travisci as a whole does a lot more open source testing | 16:55 |
Alex_Gaynor | clarkb: I guess OS is 2nd to "all of travis", which isn't half bad :) | 16:56 |
clarkb | Alex_Gaynor: but on a per project basis I think we run more tests than any travisci project | 16:56 |
Alex_Gaynor | Maybe GCC, they have some crazy platform testing | 16:56 |
jeblair | Alex_Gaynor: well, if it's important.... we've been doing a lot to make it so you don't actually have to visit jenkins most of the time... | 16:56 |
jeblair | Alex_Gaynor: so given that, i didn't think it was worth the hassle, especially if we start spinning up more jenkins with more weird names... | 16:56 |
Alex_Gaynor | jeblair: I often watch the stdout, since I find jenkins time estimates are... inaccurate | 16:56 |
Alex_Gaynor | But if it's too much of a hassle, no worries | 16:57 |
*** vogxn has quit IRC | 16:57 | |
jeblair | we could: ignore it (i have 'accepted this certificate permanently'); turn off https (but some of us still log in so it might be nice to avoid passing openid nonces in the clear); reverse proxy it from jenkins.o.o; or buy some certs. | 16:58 |
clarkb | Alex_Gaynor: you can create an exception and trust us :) | 16:58 |
fungi | i think reverse-proxying the realtime log stream through status.o.o might make sense | 16:58 |
Alex_Gaynor | clarkb: That's what I did :) | 16:59 |
fungi | Alex_Gaynor: TRUST US | 16:59 |
fungi | ;) | 16:59 |
*** yaguang has quit IRC | 16:59 | |
*** sandywalsh has quit IRC | 17:00 | |
*** dkranz has quit IRC | 17:00 | |
*** ruhe has quit IRC | 17:04 | |
mordred | jeblair: we could also publish the ca that we use to allow people to choose to trust us if they wanted :) | 17:06 |
mgagne | Is an additional right required to approve stuff on the stable/grizzly branch? ref.: https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/gerrit/acls/stackforge/puppet-modules.config | 17:07 |
sdague | clarkb: so is there anything that can be done in the logstash config to get > 24hr searches to work? | 17:10 |
*** odyssey4me has quit IRC | 17:11 | |
jeblair | mgagne: yes, there's a global rule that allows the release team exclusive access to the stable branches. stackforge projects need an extra acl to override the override. | 17:11 |
jeblair | s/release/stable-maint/ | 17:12 |
mgagne | I'll grep for it, thanks! | 17:12 |
jeblair | mgagne: i don't think the all-projects acls have made it into puppet yet; you can see them here: https://review.openstack.org/gitweb?p=All-Projects.git;a=history;hb=refs%2Fmeta%2Fconfig;f=project.config | 17:12 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file. https://review.openstack.org/40455 | 17:12 |
*** sandywalsh has joined #openstack-infra | 17:13 | |
jeblair | mgagne: also, documentation here may explain some of the reasoning: http://ci.openstack.org/gerrit.html#access-controls | 17:13 |
*** zul has joined #openstack-infra | 17:13 | |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file. https://review.openstack.org/40455 | 17:14 |
mgagne | jeblair: as a core member, I should therefore be able to review stuff? | 17:14 |
mgagne | jeblair: is a reload of gerrit required? | 17:14 |
clarkb | sdague: more, bigger nodes (not likely to happen) or index less stuff | 17:15 |
jeblair | mgagne: what project? | 17:15 |
sdague | clarkb: hmph, the shards aren't working out? | 17:15 |
*** derekh has quit IRC | 17:15 | |
clarkb | sdague: also, I realize right now I am not getting events from the new servers; will fix in a bit | 17:16 |
*** dkranz has joined #openstack-infra | 17:16 | |
sdague | ok, yeh, I was just trying to confirm if that floating ips bug was seen on any non neutron jobs | 17:16 |
mgagne | jeblair: stackforge/puppet-quantum | 17:16 |
clarkb | sdague: not really, as the amount of data over a 24 hour period doesn't change | 17:16 |
sdague | it only shows up twice in the last 24 hrs, both neutron jobs | 17:16 |
clarkb | so you can add shards but without a bunch of nodes there is little value | 17:17 |
*** Ryan_Lane has quit IRC | 17:17 | |
clarkb | sdague: you can search older time periods; just put upper and lower time bounds | 17:17 |
clarkb | or search on particular indexes | 17:17 |
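A hedged example of the second suggestion, querying a single day's index directly so elasticsearch never has to load the whole history; the stock daily index naming (logstash-YYYY.MM.DD), the host, the field name and the search phrase are all assumptions here, not the actual infra setup.

    # URI search against one daily index only (assumed names, URL-encoded quotes):
    curl -s 'http://localhost:9200/logstash-2013.08.06/_search?q=@message:%22floating+ip%22&size=10'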
*** nijaba has quit IRC | 17:19 | |
*** yaguang has joined #openstack-infra | 17:19 | |
mordred | mgagne: no, if you read the doc above, it will explain why | 17:19 |
mordred | mgagne: you can override that inyour own project | 17:19 |
mgagne | mordred: thanks, I'll reread the doc | 17:19 |
*** nijaba has joined #openstack-infra | 17:20 | |
*** nijaba has quit IRC | 17:20 | |
*** nijaba has joined #openstack-infra | 17:20 | |
*** afazekas has joined #openstack-infra | 17:20 | |
mordred | mgagne: look at modules/openstack_project/files/gerrit/acls/openstack-dev/devstack.config for an example of a project that overrides it | 17:20 |
clarkb | sdague: we could try indexing only the gate to rein in the amount of data | 17:21 |
mgagne | mordred: thanks, that's what I was looking for | 17:21 |
*** rfolco has quit IRC | 17:22 | |
sdague | clarkb: hmmmm.... what if we discarded the DEBUG logs in the services | 17:22 |
sdague | because I don't think in the base case we're going to need that | 17:23 |
*** amotoki has quit IRC | 17:23 | |
sdague | we'll punch back out to the real logs for DEBUG | 17:23 |
*** nati_ueno has joined #openstack-infra | 17:23 | |
openstackgerrit | Mathieu Gagné proposed a change to openstack-infra/config: Allow puppet-manager-core to review stable branches https://review.openstack.org/40456 | 17:23 |
clarkb | sdague we can do that | 17:24 |
zaro | jenkins01 is exciting, but orig jenkins looks boring now. why so? | 17:24 |
clarkb | I will write that patch | 17:24 |
clarkb | zaro we are killing it | 17:25 |
zaro | clarkb: to be replaced with new one? jenkins02? | 17:26 |
*** odyssey4me has joined #openstack-infra | 17:26 | |
*** Ryan_Lane has joined #openstack-infra | 17:27 | |
clarkb | yes | 17:27 |
*** dina_belova has quit IRC | 17:28 | |
zaro | ahh.. can't wait! | 17:28 |
*** thomasm has joined #openstack-infra | 17:30 | |
*** krtaylor has joined #openstack-infra | 17:31 | |
*** SergeyLukjanov has quit IRC | 17:32 | |
*** SergeyLukjanov has joined #openstack-infra | 17:33 | |
*** ruhe has joined #openstack-infra | 17:33 | |
*** ruhe has quit IRC | 17:34 | |
*** SergeyLukjanov has quit IRC | 17:34 | |
fungi | zaro: for the near term, untrusted slaves will be split between jenkins01 and jenkins02 (odds and evens), while jenkins.openstack.org will continue to handle trusted slaves (proposal, pypi, mirrors, et cetera) | 17:34 |
fungi | though it sounds like jenkins.openstack.org will likely also get a rebuilt replacement to move the trusted slaves onto | 17:35 |
*** Ryan_Lane has quit IRC | 17:36 | |
harlowja_ | woah lots of scrollback, ha | 17:37 |
harlowja_ | i blame mordred | 17:37 |
harlowja_ | ha | 17:37 |
*** dina_belova has joined #openstack-infra | 17:38 | |
zaro | fungi: i was wondering about that, thnx. | 17:38 |
*** yaguang has quit IRC | 17:38 | |
*** dina_belova has quit IRC | 17:40 | |
*** vipul is now known as vipul-away | 17:40 | |
fungi | mordred: clarkb: i may have missed part of the discussion, but did we have a suggested solution to the broken pip installation on our centos6 slaves? reinstall python-pip from rpm? | 17:43 |
mordred | fungi: oh - no, we got sidetracked | 17:43 |
mordred | fungi: there is a script in devstack that does the right thing... but I'm not really sure that's what we want to do on our centos slaves | 17:44 |
mordred | let me think about it for another second | 17:44 |
fungi | rpm -qa shows python-pip installed, but no pip executable in the path | 17:45 |
dtroyer | fungi: I've been testing a script to encapsulate all of that, try https://review.openstack.org/#/c/39827/ | 17:45 |
dtroyer | tools/install_pip.sh | 17:45 |
fungi | mmm, well these aren't devstack machines, but maybe | 17:46 |
*** ftcjeff has quit IRC | 17:46 | |
openstackgerrit | Clark Boylan proposed a change to openstack-infra/config: Fix logstash.o.o elasticsearch discover node list. https://review.openstack.org/40459 | 17:46 |
fungi | if this is generally useful outside of devstack, maybe it needs a different home | 17:46 |
dtroyer | maybe…it uses devstack's functions file but could be re-done standalone | 17:47 |
*** ruhe has joined #openstack-infra | 17:47 | |
dtroyer | hmmm…that one is old | 17:47 |
dtroyer | I just pushed up my current one. it installs pip from a tarball as get-pip.py was not reliable over the weekend | 17:48 |
*** dolphm has quit IRC | 17:51 | |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file. https://review.openstack.org/40455 | 17:51 |
openstackgerrit | James E. Blair proposed a change to openstack-infra/gear: Server: make job handle safer https://review.openstack.org/40462 | 17:51 |
clarkb | fungi: mordred for our centos slaves maybe we should prioritize the puppet refactor | 17:51 |
*** edleafe has left #openstack-infra | 17:51 | |
clarkb | and fix this across the board? | 17:52 |
mordred | clarkb: can you expand on "prioritize the puppet refactor" | 17:52 |
mordred | too many balls in air - I don't know which thing you mean there | 17:52 |
openstackgerrit | James E. Blair proposed a change to openstack-infra/gear: Server: make job handle safer https://review.openstack.org/40462 | 17:53 |
clarkb | mordred: if we refactor the setuptools/pip madness out of our base manifest we can apply it to the static slaves without breaking d-g | 17:53 |
mordred | ah - right | 17:53 |
clarkb | mordred: basically fix it in a generic way in puppet but only where we need it so that d-g and devstack are still independently testable | 17:53 |
*** salv-orlando has quit IRC | 17:53 | |
mordred | yes. I agree. in fact, I don't think we need it on our static slaves | 17:53 |
clarkb | sdague: logstash should now be talking to all three jenkins masters. Will work on the filtering of debug messages shortly | 17:54 |
jeblair | don't need what on our static slaves? | 17:54 |
clarkb | jeblair: I think we do as it would correct the centos problems aiui | 17:55 |
jeblair | (i'm having trouble following the conversation because mordred said he agreed with clarkb, but appeared to contradict him) | 17:55 |
openstackgerrit | Elizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add replication of git from gerrit to git.o.o https://review.openstack.org/37794 | 17:55 |
clarkb | jeblair: I think he did that | 17:56 |
jeblair | okay, i'll check back in a minute and see if this conversation got any more linear. | 17:56 |
sdague | clarkb: coolness | 17:56 |
clarkb | sdague: do you know if the oslo.config log levels that are not standard are documented somewhere? | 17:56 |
clarkb | sdague: so that I can drop DEBUG and below | 17:56 |
*** dkranz has quit IRC | 17:57 | |
clarkb | also can I just say that doing non standard python logging log levels seems like a bug | 17:58 |
mordred | clarkb, jeblair: I agree that we should do the refactor, which should allow us to apply the pip/setuptools fix to the things where it is needed | 17:58 |
*** pabelanger has quit IRC | 17:58 | |
mordred | separately, I do not believe that we need the pip/setuptools fix on our static slaves | 17:59 |
clarkb | mordred: can you explain the reason you don't believe that is necessary? | 17:59 |
openstackgerrit | Elizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add replication of git from gerrit to git.o.o https://review.openstack.org/37794 | 17:59 |
sdague | clarkb: I thought it was standard somewhere | 17:59 |
*** dolphm has joined #openstack-infra | 18:00 | |
*** boris-42 has joined #openstack-infra | 18:00 | |
mordred | clarkb: I do not believe we are performing global actions on those machines that would cause setuptools/pip breakage | 18:01 |
*** dina_belova has joined #openstack-infra | 18:02 | |
*** afazekas has quit IRC | 18:02 | |
*** vipul-away is now known as vipul | 18:02 | |
*** sarob has quit IRC | 18:03 | |
*** sarob has joined #openstack-infra | 18:03 | |
jeblair | mordred: aside from running pip globally in puppet, which eventually always leads to breakage, which is why we want to replace the use of system pip with packages on our slaves. | 18:03 |
mordred | jeblair: yes | 18:04 |
*** markmcclain has joined #openstack-infra | 18:05 | |
clarkb | sdague: python logging only does DEBUG INFO WARNING ERROR CRITICAL | 18:05 |
clarkb | sdague: oslo.config does TRACE and AUDIT too | 18:05 |
clarkb | s/config// | 18:06 |
clarkb | I don't know why I wanted to say config there, it is in oslo logging which is still in incubation | 18:07 |
*** koolhead17 has joined #openstack-infra | 18:07 | |
*** sarob has quit IRC | 18:07 | |
koolhead17 | can some one help me with finding monty`s nick. I keep forgetting it | 18:07 |
clarkb | https://github.com/openstack/oslo-incubator/blob/master/openstack/common/log.py#L163-L167 | 18:08 |
clarkb | koolhead17: mordred | 18:08 |
koolhead17 | clarkb, aah thanks. stupid me | 18:08 |
*** melwitt has joined #openstack-infra | 18:09 | |
clarkb | sdague: looks like the new levels are all above the debug level | 18:09 |
clarkb | so I don't have to worry about them in this case | 18:09 |
*** dkranz has joined #openstack-infra | 18:10 | |
sdague | clarkb: yep | 18:12 |
*** SergeyLukjanov has joined #openstack-infra | 18:16 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/gearman-plugin: Add OFFLINE_NODE_WHEN_COMPLETE option https://review.openstack.org/40468 | 18:16 |
jeblair | clarkb, mordred: ^ the beginning of the end for the devstack inprogress/complete jobs. :) | 18:17 |
bodepd | clarkb: should I submit a patch to not install setuptools as a pip? | 18:19 |
*** nijaba has quit IRC | 18:19 | |
bodepd | clarkb: that is the thing that was causing errors from the pip class | 18:19 |
clarkb | bodepd: I think we already removed that | 18:20 |
*** nijaba has joined #openstack-infra | 18:20 | |
clarkb | bodepd: it caused problems with devstack gate. Unless we do it in multiple places | 18:20 |
mordred | jeblair: well - so I think the global breakage when using pip globally is the thing we decided to chase the other rabbit on | 18:21 |
mordred | jeblair: as in, actually, nothing we install _should_ be trying to upgrade setuptools | 18:21 |
mordred | and now that new pbr is out that doesn't use d2to1, that should be the case with all of the software that we do for ourselves | 18:21 |
mordred | which means system apt/yum installed setuptools should be fine | 18:22 |
*** ruhe has quit IRC | 18:22 | |
jeblair | ok that makes me happy. :) | 18:22 |
bodepd | clarkb: oh. that must have been recently... | 18:23 |
mordred | jeblair, clarkb, bodepd: I think we got down a rabbit hole, and I'd like to try to unwind the complexity | 18:24 |
bodepd | clarkb mordred jeblair I am probably good to go | 18:24 |
mordred | cool | 18:24 |
bodepd | it looks like that resources was already removed | 18:24 |
bodepd | I still have a jenkins-job reload failure, but I'll sort that out today | 18:25 |
*** dina_belova has quit IRC | 18:25 | |
openstackgerrit | A change was merged to openstack-infra/config: Add more jobs for Savanna projects https://review.openstack.org/37987 | 18:26 |
fungi | mordred: so... just yum reinstall python-pip on those slaves for now and we should be fine? | 18:28 |
mordred | fungi: let's call that a yes | 18:29 |
fungi | mordred: clarkb: jeblair: i'm happy to patch up the system pip on centos6-dev1 in that case, and then follow up on the production ones if that works out | 18:31 |
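A hedged sketch of the quick fix being proposed for the centos6 slaves; the final check is an assumption, since EPEL's package has historically shipped the executable as pip-python rather than pip.

    sudo yum reinstall -y python-pip        # put the rpm-owned pip files back in place
    pip --version || pip-python --version   # confirm something answering to pip exists again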
dstufft | mordred: do you use pip install -U | 18:32 |
mordred | dstufft: for? | 18:32 |
dstufft | installing things globally | 18:32 |
dstufft | in openstack infra | 18:32 |
fungi | oh, though right now that server's got its own separate issues... "Failed to apply catalog: Invalid parameter psql_path at /etc/puppet/modules/postgresql/manifests/role.pp:54" | 18:32 |
mordred | dstufft: we use the puppet pip module | 18:32 |
*** harlowja_ has quit IRC | 18:32 | |
dstufft | sorry I'm only catching the tail end of things | 18:32 |
mordred | dstufft: it's all good... it's a long and confusing tail | 18:33 |
*** ftcjeff has joined #openstack-infra | 18:33 | |
mordred | dstufft: new theory - stop trying to upgrade setuptools everywhere, because it's a nightmare | 18:33 |
dstufft | but if anything depends on setuptools at all, and you execute ``pip install -U something-with-a-dep-on-setuptools-somewhere-in-dep-graph`` it will try to upgrade setuptools | 18:33 |
dstufft | because pip recursively upgrades | 18:33 |
mordred | dstufft: instead, stop installing things via pip that depend on setuptools or distribute | 18:33 |
mordred | right | 18:33 |
mordred | I believe the only things we've been installing so far that depend on setuptools|distribute | 18:33 |
dstufft | mordred: ok, just making sure that you know the recursive behavior | 18:34 |
mordred | are things that were using pbr, which was depending on d2to1 which depended on distribute | 18:34 |
mordred | but we killed d2to1 | 18:34 |
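A hedged illustration of the recursive-upgrade behaviour dstufft is warning about, using python-swiftclient as the example since its released version is said elsewhere in this conversation to still depend on d2to1.

    pip install -U python-swiftclient
    # -U recurses through the dependency graph: swiftclient -> pbr -> d2to1 -> distribute,
    # so setuptools/distribute gets "upgraded" as a side effect and the install breaks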
*** morganfainberg has quit IRC | 18:34 | |
mordred | so I _believe_ we're in good shape on our servers | 18:34 |
dstufft | are you writing plain setup.py files now? | 18:34 |
mordred | god no | 18:34 |
*** morganfainberg has joined #openstack-infra | 18:34 | |
*** chuck__ has joined #openstack-infra | 18:34 | |
mordred | that's a disaster and crazy | 18:34 |
*** dina_belova has joined #openstack-infra | 18:34 | |
mordred | pbr just no longer depends on d2to1 | 18:34 |
*** zul has quit IRC | 18:34 | |
dstufft | Are you parsing the setup.cfg yourself? | 18:35 |
*** harlowja has joined #openstack-infra | 18:35 | |
mordred | yup. we merged the d2to1 code directly into our tree, then fixed it up when upstream d2to1 would not respond to patches removing the distribute dep | 18:35 |
*** rfolco has joined #openstack-infra | 18:35 | |
openstackgerrit | James E. Blair proposed a change to openstack-infra/config: Add jenkins-job-builder-core group https://review.openstack.org/40470 | 18:36 |
dstufft | gotcha | 18:36 |
mordred | since I talk to you all the time, I think that gives us more ability to morph and be compliant as the world in general progresses | 18:36 |
dstufft | makes sense | 18:36 |
dstufft | mordred: ALso how come no SSL on pypi.openstack.org | 18:37 |
clarkb | dstufft: because when we built it pip and easy install didn't care :) | 18:37 |
dstufft | clarkb: makes sense | 18:38 |
mordred | dstufft: good point | 18:38 |
dstufft | clarkb: so who do I bug to make it forced ssl? ;P | 18:38 |
mordred | clarkb: we shoudl fix that | 18:38 |
*** chuck__ is now known as zul | 18:39 | |
mordred | dstufft: us | 18:39 |
openstackgerrit | A change was merged to openstack-infra/jenkins-job-builder: Added some more scm options https://review.openstack.org/39298 | 18:39 |
dstufft | mordred: *bugs* | 18:39 |
dstufft | :) | 18:39 |
mordred | jeblair: when you get a chance, can you buy an ssl cert for pypi.openstack.org ? | 18:40 |
* mordred is assuming that jeblair is the ssl cert buying ninja | 18:43 | |
jeblair | mordred: why? | 18:44 |
mordred | jeblair: don't you normally do it? | 18:46 |
jeblair | mordred: no i mean why ssl for that host? | 18:46 |
mordred | so that pip installs from that host can be assured that they aren't being MITM'd now that dstufft has added ssl support to pip for us | 18:47 |
*** sarob has joined #openstack-infra | 18:47 | |
*** sarob has quit IRC | 18:47 | |
mordred | also, pip 1.5 is going to be ssl only, iirc | 18:47 |
*** sarob has joined #openstack-infra | 18:48 | |
dstufft | Nah it won't be SSL only, but there will be scary messages if your index doesn't have SSL | 18:48 |
*** dina_belova has quit IRC | 18:48 | |
openstackgerrit | lin-hua-cheng proposed a change to openstack/requirements: Add support for Keystone V3 Auth in Horizon. https://review.openstack.org/39779 | 18:50 |
jeblair | mordred: i'm a big fan of ssl, but this is getting to be a death by a thousand cuts. if we want to start adding ssl to non-user-facing hosts, we may want to consider an alternate setup. | 18:50 |
*** dina_belova has joined #openstack-infra | 18:51 | |
mordred | jeblair: I believe openstack devs consume pypi.openstack.org from time to time because it's so much faster ... but I'm also fine with talking about different approaches | 18:51 |
jeblair | mordred: we'd need to split that out into another host to get a new ip address | 18:54 |
mordred | jeblair: ok. let's come back to it later then | 18:55 |
jeblair | mordred: i'd like to defer this until we've solved some of our more pressing problems. | 18:55 |
mordred | ++ | 18:55 |
openstackgerrit | Clark Boylan proposed a change to openstack-infra/config: Don't index logs with DEBUG log level. https://review.openstack.org/40474 | 18:55 |
clarkb | sdague: ^ testing that took far too much time... but I have locally tested it so it should work | 18:55 |
mordred | jeblair, clarkb: re global pip - we'll also need the new release of python-swiftclient released, since that currently also depends on d2to1 | 18:55 |
mordred | and we install that as a dep of things | 18:55 |
mordred | but that should be coming real soon now | 18:55 |
clarkb | ok good to know | 18:56 |
mordred | python-novaclient has been fixed already | 18:56 |
clarkb | jeblair: mordred: we could possibly use a floating IP for the pypi.o.o IP | 18:56 |
clarkb | we should probably start trying to use those things where sensible | 18:56 |
clarkb | but yes focus on more pressing needs | 18:56 |
jeblair | mordred: are there stable branches of things that will break due to d2to1 issues? | 18:57 |
openstackgerrit | Dan Bode proposed a change to openstack-infra/config: Add puppet-pip https://review.openstack.org/39833 | 18:57 |
clarkb | jeblair: I don't think so as d2to1 and pbr have been a havana thing right? or did we sneak it in at the end of grizzly? | 18:57 |
jeblair | clarkb: last i checked, rax didn't have floating ips. | 18:57 |
clarkb | :( | 18:57 |
jeblair | clarkb: that was a long time ago. | 18:57 |
mordred | jeblair: what clarkb said | 18:58 |
clarkb | meeting time | 19:00 |
*** UtahDave has quit IRC | 19:00 | |
*** krtaylor has quit IRC | 19:01 | |
*** vijendar has quit IRC | 19:02 | |
*** vijendar has joined #openstack-infra | 19:02 | |
hub_cap | hey guys im going to tag and bag the troveclient, and i had a quick question. i follow https://wiki.openstack.org/wiki/GerritJenkinsGithub#Tagging_a_Release right? but just pull from master and tag off that? | 19:03 |
mordred | yup | 19:03 |
mordred | make sure you use git tag -s | 19:03 |
mordred | :) | 19:03 |
hub_cap | coo. and the tag is _just_ the version? | 19:03 |
clarkb | hub_cap: make sure your tip of master matches github/gerrit | 19:03 |
*** vijendar has quit IRC | 19:03 | |
mordred | hub_cap: yup | 19:03 |
hub_cap | yes yes with the gpg stuff! | 19:03 |
mordred | so, git tag -s 1.2.3 | 19:03 |
mordred | and then git push gerrit 1.2.3 | 19:04 |
mordred | and you're gold | 19:04 |
hub_cap | mellow gold? | 19:04 |
hub_cap | clarkb: you threw me a wrench. how do i go about doing that? | 19:04 |
hub_cap | ps said wrench was caught in my beard. it might be gone forever | 19:04 |
clarkb | hub_cap: make sure the commit sha1 matches the commit sha1 on github | 19:04 |
clarkb | hub_cap: which will be the case if you have kept your local master pristine, e.g. no local dev or merges | 19:05 |
hub_cap | oh ya iz gonna check out fresh to make sure | 19:05 |
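A hedged recap of the tagging flow just described; the version number and tag message are placeholders, and "gerrit" is assumed to be the name of the review remote.

    git checkout master && git pull --ff-only   # fresh, pristine master: no local dev or merges
    git log -1 --format=%H                      # compare this sha1 against the tip shown in gerrit/github
    git tag -s 1.2.3 -m 'python-troveclient 1.2.3 release'
    git push gerrit 1.2.3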
nati_ueno | clarkb: Hi How can I rename quantum to neutron in this page http://status.openstack.org/reviews/ ? | 19:08 |
pleia2 | nati_ueno: the script is called reviewday, hang on | 19:09 |
pleia2 | nati_ueno: https://github.com/openstack-infra/reviewday | 19:09 |
nati_ueno | pleia2: gotcha! | 19:09 |
hub_cap | clarkb: mordred do the tag msgs matter at _all_ | 19:09 |
pleia2 | nati_ueno: the bin/reviewday file in there is where they are defined | 19:10 |
clarkb | hub_cap: I like to put something in there. They show up if you git show $tag | 19:10 |
hub_cap | i just noticed that | 19:10 |
hub_cap | the last troveclient says | 19:10 |
hub_cap | A | 19:10 |
nati_ueno | pleia2: it looks easy fix. I'll send a patch | 19:10 |
hub_cap | classy | 19:10 |
pleia2 | nati_ueno: yep! thanks | 19:10 |
nati_ueno | pleia2: Thanks! | 19:11 |
*** cp16net has quit IRC | 19:12 | |
*** cp16net has joined #openstack-infra | 19:13 | |
markmcclain | Ok running into a strange dependency problem… this failed to merge https://review.openstack.org/#/c/37461/ | 19:13 |
markmcclain | yet we're getting unit tests failing with requests=1.2.3 installed | 19:13 |
markmcclain | https://jenkins02.openstack.org/job/gate-neutron-python27/4/console | 19:14 |
openstackgerrit | Nachi Ueno proposed a change to openstack-infra/reviewday: Rename quantum to neutron https://review.openstack.org/40480 | 19:15 |
clarkb | markmcclain: enikanorov and salv-orlando were fixing that | 19:15 |
clarkb | markmcclain: there is apparently an error and a bad sys.exit() | 19:16 |
markmcclain | no the error is in entrypoints loading | 19:16 |
markmcclain | and mismatched deps | 19:16 |
markmcclain | which triggers the exit() | 19:16 |
mordred | hrm | 19:17 |
markmcclain | latest nova client is 1.2.2 but somehow 1.2.3 is getting installed | 19:17 |
markmcclain | looks like we have 1.2.3 in the mirror | 19:18 |
clarkb | oh new fail I should look closer apparently | 19:18 |
*** enikanorov_ has joined #openstack-infra | 19:18 | |
clarkb | fungi: ^ | 19:18 |
enikanorov_ | hi folks | 19:18 |
SergeyLukjanov | there are both 1.2.2 and 1.2.3 versions in mirror - http://pypi.openstack.org/openstack/requests/ | 19:18 |
SergeyLukjanov | btw we have the same problem with requests 1.2.3 in savanna | 19:19 |
*** anteaya has quit IRC | 19:19 | |
enikanorov_ | oh, i see you're already discussing this | 19:19 |
*** zul has quit IRC | 19:19 | |
*** prad has joined #openstack-infra | 19:19 | |
*** gyee has quit IRC | 19:19 | |
*** nijaba has quit IRC | 19:20 | |
*** nijaba has joined #openstack-infra | 19:20 | |
*** nijaba has quit IRC | 19:20 | |
*** nijaba has joined #openstack-infra | 19:20 | |
mordred | so - we just rolled out new stuff in devstack | 19:22 |
clarkb | mordred: this is unittests | 19:23 |
mordred | yah. just making sure | 19:23 |
clarkb | I think it is related to removing the requests upper bound | 19:23 |
clarkb | as nova fixed their issues | 19:23 |
clarkb | apparently other projects aren't so happy with it | 19:23 |
mordred | ah. interesting that they break in unittests but not in devstack :( | 19:26 |
*** krtaylor has joined #openstack-infra | 19:27 | |
mordred | markmcclain: we're trying to get more and more testing on openstack/requirements to prevent us from getting into these, but I'm guessing there are still a couple to sort out | 19:27 |
markmcclain | yeah. was just wondering if the failed merge left it in a strange state | 19:28 |
*** dolphm has quit IRC | 19:28 | |
clarkb | markmcclain: it shouldn't as Gerrit should handle that cleanly | 19:30 |
clarkb | it is possible that requests 1.2.3 snuck in transitively | 19:30 |
*** vijendar has joined #openstack-infra | 19:31 | |
markmcclain | yeah.. I'm guessing something required it uncapped | 19:31 |
clarkb | oh you know what | 19:32 |
markmcclain | keystone client is uncapped and nova is capped | 19:32 |
*** prad has quit IRC | 19:32 | |
*** prad has joined #openstack-infra | 19:32 | |
clarkb | markmcclain: at this point you may need to cap in neutron and -1 37461. I will remove my approval and +2 from there | 19:33 |
mordred | requests>=1.1,<1.2.3 is the thing in openstack/requirements | 19:33 |
mordred | we need to get all of the client libs synced with openstack/requirements asap | 19:33 |
mordred | although I did just request that everyone do that | 19:33 |
*** vipul is now known as vipul-away | 19:34 | |
clarkb | markmcclain: my approval is gone so we won't directly break you | 19:34 |
mordred | so hopefully they will comply soon | 19:34 |
mtreinish | mordred: so devstack isn't working for me with the current master: http://paste.openstack.org/show/43347/ | 19:34 |
*** anteaya has joined #openstack-infra | 19:34 | |
mtreinish | it looks like it's installing the old version of jsonschema for glanceclient and openstackclient and using the new correct version for glance | 19:34 |
mtreinish | but when glance launches it uses .8 and fails | 19:35 |
mordred | why is it installing old versions of anything? | 19:35 |
markmcclain | clarkb: it is better to do that or just update novaclient which is still capped? | 19:36 |
mtreinish | mordred: don't know, I didn't think it should be, but that looks like what is happening | 19:36 |
sdague | mordred: yeh, things are weird for sure | 19:36 |
clarkb | markmcclain: if you can get a proper fix through quickly I would go with that | 19:37 |
clarkb | mtreinish: I just wouldn't count on that being speedy | 19:37 |
mordred | markmcclain: wait - what state do we _want_ to be in? | 19:37 |
markmcclain | I'm happy with them uncapped | 19:38 |
mordred | markmcclain: do we want to be in the state where everyone is uncapped and we're installing 1.2.3? | 19:38 |
mordred | great | 19:38 |
mordred | so what we want is to get requirements updated, which might mean fixing code somewhere first? | 19:38 |
mordred | and then getting the clients to sync requirements? | 19:38 |
markmcclain | right | 19:38 |
markmcclain | the only client that caps is nova | 19:38 |
markmcclain | I pushed this up: https://review.openstack.org/#/c/40483/ | 19:39 |
fungi | markmcclain: the revert in nova happened in https://review.openstack.org/#/c/38012/ | 19:39 |
fungi | rationale is in that commit message and the commit message of the commit it was reverting (including links to the bug) | 19:40 |
mordred | but I thought 1.2.3 is breaking neutron? | 19:41 |
markmcclain | 1.2.3 is breaking because of transitive deps | 19:41 |
markmcclain | entrypoints picking up the nova limit | 19:41 |
markmcclain | and then when the obj is loaded the installation has changed to a newer version | 19:42 |
mordred | AH | 19:42 |
mordred | I fully understand the problem now | 19:42 |
mordred | so we _do_ want to land the requirements update, and then we want to get novaclient to sync it, and then honestly you want novaclient to do another release | 19:43 |
mordred | because that's the only thing that's going to unbreak your unittests | 19:43 |
markmcclain | well I don't think we can now | 19:43 |
markmcclain | fungi linked the revert from aug 1 | 19:43 |
markmcclain | but that was in nova server not nova client | 19:44 |
clarkb | I think capping in the near future will unblock you, then you can do what mordred said as things fall into place | 19:44 |
clarkb | unless you can get a couple things to happen really quick | 19:44 |
markmcclain | russellb around? | 19:44 |
fungi | i can read the scrollback in more detail after the meeting concludes | 19:44 |
russellb | markmcclain: yep | 19:44 |
mordred | then - I think sdague and I will need to spend some time thinking about how to avoid this | 19:45 |
sdague | mordred: ok, looking at scrollback, what's the new issue? | 19:45 |
markmcclain | russellb: tl;dr capped version of requests in novaclient is transitively breaking things | 19:45 |
mordred | sdague: trapping for incompat requirements things around client libs and unittests | 19:46 |
markmcclain | russellb: thoughts on moving this through quickly: https://review.openstack.org/#/c/40483/ | 19:46 |
mordred | sdague: it might be something that will shake out once we're up to date with requirements on a more frequent basis | 19:46 |
sdague | right, this is pushing back towards running unit tests on devstack nodes, right? | 19:46 |
mordred | maybe - or maybe not | 19:46 |
mordred | I'm not sure I have thought about it long enough to know whether or not I think something new needs to change | 19:47 |
mordred | or whether we just need to give the current changes we just made time to percolate | 19:47 |
sdague | so.... ceilometer pulls the upstream master tarball for nova in its unit tests | 19:47 |
mordred | that's a whole other issue | 19:47 |
mordred | although also needs addressing | 19:47 |
russellb | markmcclain: sure, if that went into global requirements, that's fine | 19:48 |
russellb | mordred: sdague ^ yes? | 19:48 |
mordred | I think that seems correct | 19:48 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file. https://review.openstack.org/40455 | 19:48 |
sdague | right, so we land this: https://review.openstack.org/#/c/40483/1, then get everyone to update | 19:49 |
*** boris-42 has quit IRC | 19:50 | |
sdague | hmmmm.... isn't zuul supposed to be running jobs on that? | 19:50 |
enikanorov_ | it is | 19:50 |
sdague | oh, sorry, that's a python-novaclient change | 19:51 |
sdague | so global-requirements is still capped in master | 19:51 |
*** _TheDodd_ has quit IRC | 19:51 | |
*** dolphm has joined #openstack-infra | 19:51 | |
clarkb | sdague: yes, 37461 needs a rebase | 19:52 |
sdague | clarkb: ok, let me do that | 19:52 |
sdague | oh, right, everyone has now modified the wrong file | 19:53 |
*** UtahDave has joined #openstack-infra | 19:53 | |
*** ftcjeff has quit IRC | 19:54 | |
*** vipul-away is now known as vipul | 19:54 | |
openstackgerrit | Matthew Treinish proposed a change to openstack-dev/pbr: Add option to run testr serially https://review.openstack.org/39811 | 19:54 |
openstackgerrit | Sean Dague proposed a change to openstack/requirements: removal invalid pin of python-requests<=1.2.2 https://review.openstack.org/37461 | 19:54 |
sdague | ok, so that should give us test runs to see if it works | 19:55 |
*** _TheDodd_ has joined #openstack-infra | 19:55 | |
*** odyssey4me has quit IRC | 19:56 | |
*** dina_belova has quit IRC | 19:58 | |
*** odyssey4me has joined #openstack-infra | 19:58 | |
sdague | mordred: so something about the latest pip-ness has meant that we are no longer overwriting local versions it looks like | 19:58 |
sdague | which we used to do | 19:58 |
openstackgerrit | Dan Bode proposed a change to openstack-infra/config: Add puppet-pip https://review.openstack.org/39833 | 19:59 |
*** dprince has quit IRC | 20:00 | |
anteaya | jeblair: thanks for shoehorning me in | 20:01 |
clarkb | I didn't get to sneak this in but I will probably be AFK for much of Thursday and definitely most of Friday | 20:01 |
jeblair | anteaya: sorry we didn't have much time | 20:01 |
anteaya | I'll see if I can mix up a 1.5 upgrade patch for storyboard | 20:01 |
anteaya | jeblair: no worries | 20:01 |
mordred | sdague: can you point me to something? | 20:01 |
* jeblair points mordred to sdague | 20:02 | |
*** rfolco has quit IRC | 20:02 | |
anteaya | you have a lot to cover | 20:02 |
anteaya | that was about all I needed anyway | 20:02 |
anteaya | ttx said he isn't opposed to a 1.5 patch, so that is something | 20:02 |
clarkb | anteaya: maybe start with trying to support both like horizon? | 20:02 |
ttx | If the choice is =1.4 or >=1.5.1, I prefer the latter | 20:03 |
clarkb | horizon has an example of running tests to check both | 20:03 |
anteaya | clarkb: can you expand that comment? | 20:03 |
ttx | but I think >=1.4 is possible too | 20:03 |
clarkb | anteaya: it is possible to support 1.4 and 1.5 concurrently. Horizon does this. They enforce it with two unittest jobs. One runs with 1.4, the other 1.5 | 20:03 |
anteaya | ah I did not know, I will look at horizons tests, I have to do that anyway | 20:03 |
clarkb | anteaya: check out the tox.ini | 20:03 |
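As a rough illustration of the two-unittest-job approach mentioned above (the version pins and test runner invocation here are assumptions, not Horizon's actual tox.ini contents):

```bash
# Hypothetical sketch of running the same suite against both Django versions,
# mirroring the two-job approach described above; pins and runner are assumptions.
for pin in "Django>=1.4,<1.5" "Django>=1.5,<1.6"; do
  venv=$(mktemp -d)
  virtualenv "$venv"
  "$venv/bin/pip" install "$pin" -e .        # install the project plus the pinned Django
  "$venv/bin/python" -m unittest discover || exit 1
done
```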
clarkb | fungi: will you be working from seattle? | 20:04 |
anteaya | ttx ack >=1.5.1 is preferred to =1.4 | 20:04 |
fungi | clarkb: yeah, though hours will be fragmented i expect since we'll want to do some touristy things during the day | 20:04 |
*** lcestari has quit IRC | 20:04 | |
clarkb | cool, we should definitely grab beers if you can swing it | 20:04 |
fungi | clarkb: so might be mornings and late nights hacking to accommodate that | 20:04 |
anteaya | clarkb: I will, thank you. Though since we are early in the game, would there be much fallout if we just supported >=1.5.1? | 20:05 |
fungi | clarkb: definitely. any time you're free that week. zaro too | 20:05 |
clarkb | anteaya: the problem with that is it may restrict your potential install base to people willing to install from source or build their own packages | 20:05 |
clarkb | anteaya: not an issue for us, but if you want to get outside contributors it may become important | 20:05 |
zaro | fungi: thought we were going to computer museum? | 20:06 |
anteaya | clarkb: what would happen if we supported >=1.5.1 and welcomed anyone who wanted to support =1.4? | 20:06 |
fungi | zaro: that's on the to do list for that week too | 20:06 |
openstackgerrit | Dan Bode proposed a change to openstack-infra/config: Add puppet-pip https://review.openstack.org/39833 | 20:06 |
clarkb | fungi: early in the week will probably be better. I have a second round of family/friend stuff beginning the 23rd, but I can do whenever | 20:06 |
fungi | zaro: so we can always work in computer museum and beer in one time slot | 20:07 |
anteaya | since we have NO storyboard tests, it isn't like we currently have created an expectation that we are dropping | 20:07 |
anteaya | well no storyboard tests of consequence | 20:07 |
clarkb | anteaya: that is doable too, but if you have to put in extra work it might scare some people away | 20:07 |
zaro | fungi: wfm | 20:07 |
clarkb | anteaya: I don't think there is a solid answer, just things to consider :) | 20:07 |
fungi | clarkb: that works better for us anyway, because there are more people from our cruise group showing up later in the week as we get closer to departing, and they'll have stuff they want to do anyway | 20:07 |
clarkb | elysian fields is down that way >_> | 20:07 |
anteaya | clarkb: fair enough | 20:07 |
anteaya | considering my django foo at present, I want to keep the focus as narrow as possible | 20:08 |
anteaya | it increases my ability to actually have some output | 20:08 |
anteaya | rather than getting lost in documentation | 20:08 |
sdague | mordred: what's up? | 20:08 |
sdague | sorry, was in mtreinish's office | 20:08 |
clarkb | food time. back in a bit | 20:09 |
mordred | sdague: god. I've already forgotten | 20:09 |
sdague | so warlock 0.8.1, which is satisfied by g-r, requires incompatible jsonschema | 20:09 |
mordred | and you have offices? | 20:09 |
clarkb | mordred: I was just about to ask that same question | 20:09 |
sdague | yeh, he's 3 doors down from me | 20:09 |
mordred | sdague: oh - - yeah - can you link me to the pip based problem evil? | 20:09 |
*** krtaylor has quit IRC | 20:10 | |
sdague | I think mtreinish had the dump | 20:10 |
*** odyssey4me has quit IRC | 20:11 | |
openstackgerrit | A change was merged to openstack-infra/config: Add Python 3.3 PyPI mirror jobs https://review.openstack.org/39999 | 20:11 |
fungi | i'll watch those ^ later this evening when they kick off | 20:12 |
openstackgerrit | A change was merged to openstack-infra/config: Make the python33 template part of python-jobs https://review.openstack.org/40321 | 20:12 |
mgagne | I would like a review on this change: https://review.openstack.org/#/c/40456/ | 20:12 |
fungi | and that ^ one frees us up to start running the empty pyXX jobs for git-review changes now | 20:12 |
*** dolphm has left #openstack-infra | 20:12 | |
openstackgerrit | Sean Dague proposed a change to openstack/requirements: Raise warlock requirement https://review.openstack.org/37616 | 20:13 |
sdague | mordred: honestly, I think part of the problem is we've had a pretty big backup on requirements review | 20:14 |
sdague | jd__ is really the only person who's been reviewing | 20:14 |
fungi | mgagne: lgtm | 20:14 |
sdague | I'm going to run around and rebase things which should probably land | 20:14 |
jd__ | hm, can I help? | 20:14 |
openstackgerrit | A change was merged to openstack-infra/config: Ensure /var/lib/zuul is owned by zuul https://review.openstack.org/39612 | 20:14 |
fungi | jd__: apparently you already do help | 20:14 |
jd__ | fantastic! | 20:15 |
fungi | yes, i agree | 20:15 |
*** psedlak has quit IRC | 20:16 | |
mordred | sdague: I've been waiting on them until we got the gating in place | 20:17 |
sdague | yep, that's cool | 20:17 |
sdague | they all need rebases though | 20:17 |
sdague | and now we'll get test results! | 20:17 |
mordred | w00t | 20:18 |
mordred | zaro: next time you hack on gerrit ... | 20:19 |
*** nijaba has quit IRC | 20:20 | |
mordred | I think that topic links defaulting to creating a link that also includes the project name is crazypants | 20:20 |
openstackgerrit | Sean Dague proposed a change to openstack/requirements: Removes the cap for SQLAlchemy https://review.openstack.org/38035 | 20:21 |
*** nijaba has joined #openstack-infra | 20:21 | |
*** nijaba has quit IRC | 20:21 | |
*** nijaba has joined #openstack-infra | 20:21 | |
sdague | also, auto merge is funny | 20:21 |
sdague | and in the sqla case changed tests/files/gr-base.txt instead of global-requirements.txt | 20:21 |
sdague | maybe we should take these in batches, as I wasn't smart enough to rebase them all on each other | 20:22 |
mordred | haha | 20:25 |
dstufft | mordred: jeblair just catching up on things, I just assumed Openstack had a wildcard cert :/ | 20:28 |
jeblair | dstufft: if we did, i'm not sure we'd put it on some of the hosts we run (at least, not yet) | 20:28 |
jeblair | those things are dangerous if they get loose. :) | 20:29 |
dstufft | jeblair: heh, yea I tend to run a single host that unwraps TLS and use internal networks (or self signed certs) going from that single node to everything else | 20:30 |
*** dkliban_afk has quit IRC | 20:30 | |
*** sdake has quit IRC | 20:31 | |
fungi | dstufft: what are "internal" networks? ;) | 20:31 |
jeblair | dstufft: yeah, that's one of the things i think we should consider if we want to do https more places. might be easier if we had access to www.openstack.org and openstack.org webservers. | 20:31 |
jeblair | fungi: tor hidden services! mordred: ;) | 20:32 |
fungi | for us, internal networks would be ptp vpn | 20:32 |
fungi | heh, onion routing ftw! | 20:32 |
dstufft | fungi: that's the "or self signed certs" option ;P | 20:32 |
fungi | dstufft: self-signed certs steal food from starving certificate authority executives | 20:32 |
dstufft | good, they're assholes anyways | 20:33 |
fungi | how will they ever make their car payments now? what about the mortgage on their third vacation home? | 20:33 |
*** dkliban_afk has joined #openstack-infra | 20:35 | |
bodepd | ridiculously vague jjb question | 20:35 |
bodepd | I'm trying to install/configure jenkins with jjb in one run | 20:36 |
bodepd | it always fails during the puppet run, but then always works as soon as I invoke manually from the machine | 20:36 |
mgagne | bodepd: logs? | 20:36 |
bodepd | jenkins-jobs update /etc/jenkins_jobs/config | 20:36 |
jeblair | bodepd: maybe jenkins hasn't really started yet (it takes a while?) | 20:36 |
mgagne | bodepd: did you find a way to get the API token after the Jenkins installation? | 20:37 |
bodepd | https://gist.github.com/bodepd/6168382 | 20:37 |
bodepd | I also added a 15 second sleep before jenkins-jobs update | 20:38 |
mgagne | bodepd: jenkins takes time to start and shows "Jenkins is preparing to work..." for a while before being available | 20:38 |
bodepd | in the exec | 20:38 |
*** ^demon|away is now known as ^d | 20:38 | |
bodepd | 20 seconds though? | 20:38 |
*** sarob has quit IRC | 20:38 | |
bodepd | I guess I could bump it up to a whole minute and try it out | 20:39 |
mgagne | bodepd: just to make sure it's this problem and not something else | 20:39 |
bodepd | mgagne: I am using the password (and setting that with some nasty XML templates) | 20:39 |
bodepd | mgagne: I can also set the api key via XML template | 20:39 |
jeblair | bodepd, mgagne: depending on whether you are trying to manage an existing install, or bootstrap a new one, you can use a predetermined api key | 20:39 |
fungi | more complication, but maybe you want something retrying to query the api endpoint for a configurable length of time | 20:39 |
jeblair | bodepd: ah, you're probably onto this then. but yeah, if you supply 'secret.key' and the user xml file for the jjb user, you can set all that up beforehand | 20:40 |
*** sdake has joined #openstack-infra | 20:40 | |
*** sdake has quit IRC | 20:40 | |
*** sdake has joined #openstack-infra | 20:40 | |
jeblair | (assuming you have already obtained those from an existing jenkins install; doesn't help the new user who is spinning one up for the first time) | 20:41 |
jeblair | (supporting that would probably require implementing jenkins brain-dead encryption protocol in puppet) | 20:41 |
*** sdake has quit IRC | 20:41 | |
*** sdake has joined #openstack-infra | 20:41 | |
*** sdake has quit IRC | 20:41 | |
*** sdake has joined #openstack-infra | 20:41 | |
fungi | jenkins rolled its own encryption protocol? i'm afraid to look | 20:42 |
mordred | and then you can hit an api endpoint, and if it responds, then jenkins is up - and if it doesn't, it's not | 20:42 |
*** gyee has joined #openstack-infra | 20:42 | |
*** gyee_ has joined #openstack-infra | 20:43 | |
*** dkliban_afk has quit IRC | 20:43 | |
*** gyee_ has quit IRC | 20:43 | |
jeblair | bodepd: maybe that's how you should exit from the jenkins service restart? ^ | 20:43 |
*** sarob has joined #openstack-infra | 20:43 | |
bodepd | jeblair: by trying to connect? | 20:44 |
bodepd | jeblair: of course that is the real solution | 20:44 |
bodepd | jeblair: but I'd rather just have it work atm | 20:44 |
bodepd | (the real solution is for the jjb update script to be a native type that blocks and waits a predetermined time for the service to be ready) | 20:45 |
mordred | bodepd: thing is - I have seen jenkins take >30 minutes to start before | 20:45 |
bodepd | also, I noticed in the code, that it tries to create a job, then checks if the job is there | 20:45 |
bodepd | so the logs don't really lead me to the right spot in the code | 20:45 |
bodepd | mordred: say what :) | 20:45 |
mordred | bodepd: not kidding. | 20:46 |
bodepd | this goes back to my question from yesterday about just ripping it out :) | 20:46 |
bodepd | build bot anyone? | 20:46 |
* mordred stabs bodepd | 20:47 | |
mordred | bodepd: use it for anything at all for more than 5 minutes and then come back and say that | 20:47 |
mordred | :) | 20:47 |
bodepd | I'm mostly just making an educated guess it can't be worse than jenkins | 20:48 |
mordred | you'd almost be right | 20:48 |
mordred | thing is- we've already added most of what we need to jenkins | 20:48 |
mordred | and we'd have to start completely from scratch with buildbot | 20:49 |
mordred | as in, no part of our current infrastructure would carry over | 20:49 |
mordred | and if we're going to do that - then we might as well just continue working on non-jenkins gearman workers for zuul | 20:49 |
bodepd | mordred: I understand. I'm just so frustrated with its lack of usable APIs | 20:49 |
mordred | :) | 20:49 |
Mithrandir | buildbot is made of cheese. | 20:50 |
fungi | is it runny? | 20:51 |
bodepd | mordred: non-jenkins gearman workers you say... | 20:51 |
bodepd | mordred: and zuul becomes the portal? | 20:51 |
mordred | bodepd: yes | 20:51 |
mordred | this is already in the brainstorm stage | 20:52 |
Mithrandir | fungi: moldy. | 20:52 |
mordred | because, it turns out - gearman is pretty amazing at distributing work to be done by people :) | 20:52 |
*** dkliban_afk has joined #openstack-infra | 20:56 | |
*** dina_belova has joined #openstack-infra | 20:58 | |
clarkb | back from lunch and so much scrollback | 21:00 |
openstackgerrit | Monty Taylor proposed a change to openstack-infra/config: Add oslo.version https://review.openstack.org/40498 | 21:00 |
*** CaptTofu has quit IRC | 21:00 | |
*** CaptTofu has joined #openstack-infra | 21:01 | |
*** krtaylor has joined #openstack-infra | 21:01 | |
*** dina_belova has quit IRC | 21:02 | |
clarkb | mordred: I just tried your topic without project search and it is slow, maybe that is why the behavior today defaults to including the project | 21:02 |
*** ArxCruz has quit IRC | 21:03 | |
mordred | hrm. | 21:04 |
mordred | maybe an ... INDEX ... would be helpful | 21:04 |
mordred | that was an index on topic, project | 21:04 |
mordred | so that both topic and topic, project could benefit | 21:04 |
*** dina_belova has joined #openstack-infra | 21:08 | |
markmcclain | anyone want to +2, Approve this: https://review.openstack.org/#/c/37461/ | 21:08 |
*** SergeyLukjanov has quit IRC | 21:08 | |
clarkb | markmcclain: looking | 21:09 |
*** pabelanger_ has quit IRC | 21:09 | |
clarkb | done | 21:09 |
markmcclain | clarkb: thanks | 21:09 |
* clarkb settles in to do more code review | 21:09 | |
*** pabelanger has joined #openstack-infra | 21:09 | |
bodepd | I got it working! https://gist.github.com/bodepd/6168663 | 21:09 |
bodepd | (but I'm not proud of the solution :( ) | 21:10 |
clarkb | bodepd: you need a while loop that curls the jenkins server and doesn't exit until after the please wait page goes away >_> | 21:10 |
bodepd | I probably shouldn't use the word solution either | 21:10 |
bodepd | yeah, I'll open a ticker for that <_< | 21:11 |
*** emagana has joined #openstack-infra | 21:11 | |
bodepd | ticket | 21:11 |
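A rough sketch of the wait loop clarkb suggests above; the URL, port, and timeout are assumptions for a default local Jenkins install:

```bash
# Rough sketch of the "wait until Jenkins answers" loop suggested above;
# URL, port, and timeout are assumptions. curl -f fails on the 503 that the
# "Jenkins is preparing to work..." page returns, so the loop keeps polling.
deadline=$((SECONDS + 600))
until curl -sf http://localhost:8080/api/json >/dev/null; do
  if [ "$SECONDS" -ge "$deadline" ]; then
    echo "jenkins did not come up in time" >&2
    exit 1
  fi
  sleep 5
done
jenkins-jobs update /etc/jenkins_jobs/config
```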
*** derekh has joined #openstack-infra | 21:12 | |
zaro | mordred: i'll inquire about it next time the gerrit channel is active. seems pretty dead right now. | 21:13 |
*** dina_belova has quit IRC | 21:13 | |
mordred | zaro: or, ignore me really. I'm just bitching | 21:13 |
clarkb | ha | 21:14 |
*** sdake has quit IRC | 21:14 | |
openstackgerrit | A change was merged to openstack-infra/gear: Server: make job handle safer https://review.openstack.org/40462 | 21:15 |
*** vipul is now known as vipul-away | 21:16 | |
*** sarob_ has joined #openstack-infra | 21:17 | |
*** salv-orlando has joined #openstack-infra | 21:17 | |
*** sdake has joined #openstack-infra | 21:19 | |
*** sdake has quit IRC | 21:19 | |
*** sdake has joined #openstack-infra | 21:19 | |
*** sarob has quit IRC | 21:20 | |
*** sarob has joined #openstack-infra | 21:20 | |
fungi | pleia2: on 36593 were you wanting to amend that part of the init.pp with commentary about being centos-only, or are we clear to merge it? | 21:20 |
*** nijaba has quit IRC | 21:20 | |
*** sarob_ has quit IRC | 21:21 | |
*** nijaba has joined #openstack-infra | 21:21 | |
*** nijaba has quit IRC | 21:21 | |
*** nijaba has joined #openstack-infra | 21:21 | |
*** nati_ueno has quit IRC | 21:21 | |
fungi | mordred: clarkb: jeblair: worth noting, the "missing pip" problem seems to have struck all our centos machines. git and pbx are both suffering from it too | 21:23 |
fungi | i did 'yum reinstall python-pip' on git.o.o just now, and will see how that works out for subsequent puppet runs | 21:24 |
*** thomasm has quit IRC | 21:24 | |
clarkb | good to know | 21:25 |
*** vijendar has quit IRC | 21:25 | |
mordred | fungi: awesome | 21:26 |
clarkb | I am going to ninja approve a couple things related to proposal.slave and logstash | 21:27 |
mordred | go forit | 21:27 |
fungi | clarkb: i apologize for not reviewing those yet, if i haven't | 21:29 |
clarkb | fungi: you reviewed most of them. thank you | 21:29 |
jeblair | i'm working in chrono order | 21:29 |
fungi | i guess i feel more behind than i am, in that case | 21:29 |
openstackgerrit | A change was merged to openstack-infra/config: Replace tx node label with proposal in jobs. https://review.openstack.org/40018 | 21:29 |
openstackgerrit | A change was merged to openstack-infra/config: Fix logstash.o.o elasticsearch discover node list. https://review.openstack.org/40459 | 21:29 |
clarkb | They are all low impact with relatively high return rates. ^ Is like the last step before turning off tx.slave | 21:30 |
openstackgerrit | A change was merged to openstack-infra/config: Don't index logs with DEBUG log level. https://review.openstack.org/40474 | 21:30 |
fungi | i'm all for that | 21:30 |
clarkb | mordred: what do you think about my comments on https://review.openstack.org/#/c/39967/1 | 21:31 |
*** woodspa__ has quit IRC | 21:31 | |
fungi | huh, so the next puppet run seems to have blown away pip again... http://puppet-dashboard.openstack.org:3000/reports/780987 | 21:32 |
clarkb | fungi: mordred: for the project registry, I am almost beginning to think we need to host that with etcd or in a database eg someplace queryable through an api | 21:32 |
fungi | clarkb: probably so. we keep churning out more bits and pieces which depend on project-specific metadata lists | 21:33 |
clarkb | fungi: any chance the symlink is breaking things? | 21:34 |
*** emagana has quit IRC | 21:34 | |
fungi | clarkb: it's entirely possible | 21:34 |
fungi | if something is trying to pip install -U pip | 21:34 |
fungi | OR! maybe epel just added a new python-pip rpm... | 21:35 |
fungi | i will start investigating in that direction | 21:35 |
clarkb | LOLOLOLOL | 21:35 |
clarkb | fungi: that is the problem | 21:35 |
clarkb | pip is a symlink to pip-python. pip-python is a symlink to pip | 21:35 |
*** echohead has joined #openstack-infra | 21:36 | |
clarkb | LOLs all the way down | 21:36 |
fungi | yep | 21:36 |
clarkb | that is hilarious | 21:36 |
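A quick way to see the circular symlink described above; the paths assume EPEL's layout on CentOS 6:

```bash
# Quick check for the circular symlink described above; paths assume EPEL's layout.
ls -l /usr/bin/pip /usr/bin/pip-python
# If each one points at the other, reinstalling the package restores the real entry point:
yum reinstall -y python-pip
```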
echohead | hi, #openstack-infra. | 21:36 |
clarkb | echohead: ohai | 21:36 |
echohead | i'm having some trouble with cloning things from review.openstack.org on ipv6. | 21:36 |
fungi | we saw this on fedora 18, so i already have a workaround in the provider for it. just need to make that more ubiquitous | 21:36 |
*** sdake has quit IRC | 21:36 | |
clarkb | "Tonight on when hacks go bad, pip, epel and symlinks" | 21:37 |
echohead | like the ipv6 address for review.openstack.org always times out. is this expected? | 21:37 |
* fungi whips up emergency puppetry | 21:37 | |
clarkb | echohead: it is not expected | 21:37 |
*** sdake has joined #openstack-infra | 21:37 | |
*** sdake has quit IRC | 21:37 | |
*** sdake has joined #openstack-infra | 21:37 | |
*** mriedem has quit IRC | 21:37 | |
harlowja | clarkb i think they recently did that weirdness with the pip, pip-python, python-pip symlinks | 21:37 |
harlowja | not quite sure why, ha | 21:37 |
clarkb | echohead: does ssh -p user@review.openstack.org gerrit ls-projects timeout too? | 21:37 |
fungi | echohead: are you seeing it do that when connecting via ssh from a virtual machine in rackspace possibly? | 21:37 |
clarkb | echohead: and does adding -vvv to that ssh command show anything interesting? | 21:38 |
echohead | fungi: no, this is a physical machine in my co's datacenter. | 21:38 |
echohead | clarkb: trying now. | 21:38 |
fungi | ah, okay. cloning via http then i take it? | 21:38 |
clarkb | echohead: you need -p29418 sorry missed the port | 21:38 |
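For reference, the full form of the connectivity check being discussed ("username" is a placeholder):

```bash
# Full form of the gerrit connectivity check discussed above; "username" is a placeholder.
ssh -p 29418 username@review.openstack.org gerrit ls-projects
# Adding -vvv shows where the connection stalls; -6 forces IPv6 to isolate address-family issues.
ssh -6 -vvv -p 29418 username@review.openstack.org gerrit ls-projects
```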
*** sparkycollier has joined #openstack-infra | 21:38 | |
mordred | clarkb: if it's in a database, I still want the source of it to be a thing in the repo | 21:39 |
*** ^d has quit IRC | 21:39 | |
fungi | harlowja: yep. the puppet pip package provider had a workaround to symlink pip-python to pip, so having it suddenly move took it by surprise | 21:39 |
clarkb | mordred: ya, I think that will be necessary to make it editable | 21:39 |
harlowja | def fungi | 21:39 |
echohead | fungi: yes, http. | 21:39 |
fungi | er, pip to pip-python, but regardless | 21:39 |
clarkb | mordred: but curl foo/projects is a lot easier than clone this thing, update it make sure you don't diverge and so on | 21:39 |
jeblair | clarkb: project registry? | 21:39 |
harlowja | i think the new epel package even symlinks python-pip | 21:40 |
clarkb | jeblair: the list of projects that we copy and paste all over for different reasons | 21:40 |
harlowja | all variations possible, ha | 21:40 |
clarkb | echohead: I ask because we have seen weirdness with ipv6 and different protocols, just wondering if ssh performs any differently | 21:40 |
fungi | harlowja: it does. what we ended up with was a circular symlink between pip and python-pip pointing at each other | 21:40 |
echohead | clarkb: ssh hangs while trying to establish a connection: | 21:40 |
echohead | debug1: Connecting to review.openstack.org [2001:4800:780d:509:3bc3:d7f6:ff04:39f0] port 29418. | 21:40 |
harlowja | fungi nice, good ole epel :-p | 21:40 |
clarkb | yay it is at least consistent | 21:40 |
clarkb | echohead: do you have routable ipv6? | 21:41 |
clarkb | echohead: eg can you hit anything else over ipv6? | 21:41 |
*** nati_ueno has joined #openstack-infra | 21:41 | |
echohead | yes, i can hit ipv6.google.com, for example. | 21:41 |
fungi | echohead: ping6 2001:470:8:d2f::34 (it's my house) | 21:41 |
echohead | fungi: i can ping you just fine. | 21:42 |
fungi | echohead: but ping6 2001:4800:780d:509:3bc3:d7f6:ff04:39f0 fails for you? | 21:42 |
echohead | fungi: that's correct. | 21:42 |
clarkb | what about 2001:4800:780d:509:3bc3:d7f6:ff04:359b jenkins.o.o | 21:42 |
fungi | i wonder if rackspace is having trouble with v6 at one of their peering points | 21:42 |
clarkb | which is in the same DC presumably | 21:42 |
echohead | clarkb: that address works fine from here too. | 21:42 |
fungi | that would be a good confirmation | 21:42 |
clarkb | echohead: hmm ok. do you have mtr available? can you mtr review.o.o via ipv6? | 21:43 |
fungi | huh, so two addresses within the same /112 and you can reach one but not the other | 21:43 |
*** vipul-away is now known as vipul | 21:44 | |
echohead | clarkb: http://paste.openstack.org/show/43363/ | 21:44 |
clarkb | so packets are getting lost within rackspace | 21:45 |
clarkb | not a peering issue | 21:45 |
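The kind of trace being exchanged above can be produced non-interactively; using the AAAA literal from earlier in the conversation avoids depending on name resolution picking the v6 address:

```bash
# One-shot IPv6 trace report of the sort pasted above; ten probes per hop.
mtr --report --report-wide --report-cycles 10 2001:4800:780d:509:3bc3:d7f6:ff04:39f0
```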
fungi | his hop #13 there is the penultimate hop for me as well | 21:46 |
clarkb | jlk: ^ any idea of what is going on there? | 21:46 |
fungi | echohead: what global v6 address are you coming from? i'll do a return trace and see if the path is majorly asymmetrical | 21:47 |
jeblair | clarkb: seems like just having the projects.yaml in the config repo should be sufficient, and ensure it's copied to /etc/projects.yaml everywhere it's needed | 21:47 |
echohead | fungi: 2607:f700:3460:fa0:230:48ff:fecc:d632 | 21:47 |
jlk | clarkb: looking, but networks… ugh. | 21:47 |
clarkb | jeblair: that will work too | 21:48 |
fungi | echohead: looks like the packets might be vanishing on the return path after the hand-off from he to spectrum | 21:48 |
jeblair | mordred: you may want to revisit https://review.openstack.org/#/c/40068/ | 21:48 |
jlk | and ipv6 -- even more eww. | 21:48 |
mordred | jeblair: I'm excited to | 21:48 |
fungi | echohead: oh, actually routing loop now in bbgrp.net | 21:48 |
clarkb | fungi: asymmetrical routes? ugh | 21:48 |
clarkb | asymmetrical routes are how you confuse all the firewalls | 21:49 |
mordred | jeblair: I will fix after dinner | 21:49 |
clarkb | *stateful firewalls | 21:49 |
jlk | clarkb: I'm not seeing any systemic issues or outages | 21:49 |
clarkb | jlk: thanks, fungi seems to have found some asymmetric badness upstream | 21:49 |
jeblair | i have noticed dropped packets from here | 21:49 |
fungi | echohead: clarkb: http://paste.openstack.org/show/43364 | 21:50 |
*** dkliban_afk is now known as dkliban | 21:51 | |
echohead | ok, so it looks like you can route to me, but i can't route to you? | 21:51 |
fungi | echohead: so i'll wager your packets are getting to review.o.o and then the responses are hitting a loop in your provider | 21:51 |
sparkycollier | hope this is not too off topic, but wondering if any mac users out there prefer macports or homebrew? | 21:52 |
fungi | echohead: probably they have multiple paths in their network and one of them has a bad default back to the other or similar, and load is distributed based on an address hash so it works for some remote addresses and not others seemingly at random | 21:52 |
echohead | fungi: i see. thanks for your help. i'll ask the network guy here about it. | 21:53 |
jeblair | sparkycollier: it's so nice to see you, regardless of the topic! :) | 21:54 |
fungi | echohead: specifically, it looks like they have a secondary-side core router sending traffic back up to a border router, judging from their naming conventions. maybe cr02 is announcing routes via an igp which it doesn't have a non-default next hop to | 21:54 |
sparkycollier | jeblair it's a new half-year's resolution | 21:54 |
echohead | fungi: showed your traceroute to a guy here and he's working to remove the loop now. thanks! | 21:56 |
fungi | echohead: np. that used to be my job, so he's got my sympathies | 21:56 |
clarkb | jeblair: any reason for not merging https://review.openstack.org/#/c/36593/ | 21:57 |
jeblair | clarkb: just to give fungi or mordred a chance to see it | 21:58 |
dtroyer | sparkycollier: I've been happy with homebrew for a couple of years now. | 22:00 |
clarkb | jeblair: ok | 22:01 |
clarkb | pleia2: the https for cgit change has been rereviewed | 22:01 |
ttx | anteaya: so, to recap: i would prefer to make storyboard compatible with >=1.4, but if for whatever reason we need an 1.5-specific feature (or have to choose between being 1.4 or 1.5-compatible) then we'd pick 1.5 | 22:02 |
*** sarob has quit IRC | 22:02 | |
fungi | clarkb: on 36593 i was wondering if pleia2 was wanting to amend that part of the init.pp with commentary about being centos-only, or if we were clear to merge it | 22:03 |
*** sarob has joined #openstack-infra | 22:03 | |
ttx | anteaya: I haven't seen that we were in that hard place yet, but maybe you know better (I run 1.4) | 22:03 |
*** giulivo has quit IRC | 22:03 | |
fungi | i'll +2 and ask in the review instead | 22:03 |
ttx | sparkycollier: will skip the phone call and go to bed, nothing specific to report | 22:03 |
ttx | sparkycollier: put all on the etherpad | 22:03 |
clarkb | fungi: I think that can happen in a subsequent change because cgit being attached to centos has been an operating assumption aiui | 22:04 |
jeblair | fungi, clarkb: i don't think that's important, just mentioning it as an interesting assumption. | 22:04 |
fungi | clarkb: just saw your new comment there and agree. +2 and approved | 22:05 |
* ttx eods | 22:05 | |
openstackgerrit | A change was merged to openstack-infra/config: Add git-daemon to cgit server. https://review.openstack.org/36593 | 22:06 |
clarkb | mordred: https://review.openstack.org/#/c/36634/ if you haven't seen that yet you should take a look | 22:06 |
anteaya | ttx okay, I was just noticing that 1.5 deals with urls in a completely different fashion than 1.4 | 22:07 |
*** _TheDodd_ has quit IRC | 22:07 | |
anteaya | and if we are building something new, I favour the newest release to build it on | 22:07 |
*** sarob has quit IRC | 22:07 | |
anteaya | plus we don't have db migrations to worry about yet, but that is coming soon | 22:07 |
clarkb | https://www.djangoproject.com/weblog/2013/aug/06/breach-and-django/ | 22:07 |
openstackgerrit | A change was merged to openstack/requirements: removal invalid pin of python-requests<=1.2.2 https://review.openstack.org/37461 | 22:07 |
anteaya | so if we were to upgrade I would favour the upgrade sooner rather than later | 22:07 |
clarkb | fungi: jeblair mordred ^ we may want to disable compression on review.o.o and jenkins.o.o maybe? | 22:08 |
openstackgerrit | A change was merged to openstack/requirements: Add support for Keystone V3 Auth in Horizon. https://review.openstack.org/39779 | 22:08 |
clarkb | I think jenkins does a lot of compression internally, not sure about gerrit | 22:08 |
*** sarob has joined #openstack-infra | 22:08 | |
fungi | oh, ouch | 22:09 |
*** dina_belova has joined #openstack-infra | 22:09 | |
*** CaptTofu has quit IRC | 22:09 | |
clarkb | and we should probably update our ciphersuite like ryan_lane did to reduce the broadness of available attacks | 22:09 |
clarkb | fungi: did you want to take a swipe at that? I can propose a change but I feel like I will be cargo culting what other people say is secure | 22:10 |
*** burt has quit IRC | 22:10 | |
clarkb | apparently ubuntu apache2.2.2 supports turning SSLCompression off | 22:11 |
fungi | clarkb: i'll take a look. compressing before encrypting is known to leak badly | 22:12 |
*** CaptTofu has joined #openstack-infra | 22:12 | |
fungi | so turning it off for https sites is probably a good call | 22:12 |
clarkb | it will still be a problem on centos but there is no private data there | 22:12 |
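A sketch of the Apache-side change being discussed, assuming mod_ssl and a build new enough to have the SSLCompression directive; the conf file path and cipher string are placeholder choices, not the configuration actually deployed:

```bash
# Sketch of the SSL hardening discussed above; the conf path and cipher string
# are placeholders, and SSLCompression requires a new-enough mod_ssl build.
cat > /etc/apache2/conf.d/ssl-hardening.conf <<'EOF'
SSLCompression off
SSLHonorCipherOrder on
SSLCipherSuite HIGH:!aNULL:!MD5:!RC4
EOF
service apache2 reload
```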
*** dina_belova has quit IRC | 22:13 | |
clarkb | sdague: we are now not indexing DEBUG level logs in the d-g screen logs | 22:18 |
clarkb | sdague: should make a difference as the vast bulk of the data was there I think | 22:19 |
SlickNik | Hey guys... | 22:19 |
SlickNik | did jenkins.openstack.org change? I don't see the usual jobs there… | 22:19 |
clarkb | sdague: I am still indexing all of the console logs and things like syslog, swift, and so on | 22:19 |
clarkb | sdague: because they don't differentiate levels | 22:19 |
clarkb | SlickNik: yes | 22:19 |
clarkb | SlickNik: we now have 3 jenkins masters. Jenkins.o.o jenkins01.o.o and jenkins02.o.o | 22:19 |
clarkb | SlickNik: jenkins.o.o is going to slowly die with 01 and 02 running most of the jobs | 22:20 |
SlickNik | ah, I see | 22:20 |
SlickNik | thanks! | 22:20 |
clarkb | note the zuul status page will link you directly to the job on the correct master | 22:20 |
*** nijaba has quit IRC | 22:20 | |
lifeless | clarkb: https://review.openstack.org/#/c/40322/ <- seen this? | 22:20 |
clarkb | lifeless: I hadn't I am slowly getting through the list. reviewing that one now | 22:21 |
lifeless | fungi: the compress-encrypt leaks are quite specific | 22:21 |
lifeless | clarkb: thanks! | 22:21 |
*** nijaba has joined #openstack-infra | 22:21 | |
lifeless | fungi: IIRC you need a block cipher with constant prefixed compressed output, same as the cookie leak issues | 22:22 |
*** weshay has quit IRC | 22:22 | |
*** jrex_laptop has joined #openstack-infra | 22:22 | |
openstackgerrit | A change was merged to openstack-infra/config: Enable pypi jobs for diskimage-builder https://review.openstack.org/40322 | 22:26 |
fungi | lifeless: yeah, nobody's demonstrated similar material leaks with stream ciphers *yet* anyway | 22:26 |
lifeless | clarkb: thanks! | 22:26 |
clarkb | fungi: lifeless see this is why I get you guys to look at it. Less cargo cult and more understanding :) | 22:27 |
clarkb | fungi: lifeless I would be interested in the why when we end up with a config that we want to use though | 22:28 |
*** datsun180b has quit IRC | 22:28 | |
*** jrex_laptop has quit IRC | 22:28 | |
fungi | clarkb: i'm not a cryptographer, i only play one on cypherpunks mailing lists | 22:28 |
lifeless | what's needed to get https://review.openstack.org/#/c/40140/ landed ? | 22:30 |
lifeless | clarkb: ^ fungi: ^ | 22:30 |
clarkb | lifeless: jeblair | 22:30 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file. https://review.openstack.org/40455 | 22:30 |
openstackgerrit | Steve Baker proposed a change to openstack-infra/config: Create new repo to host legacy heat-cfn client. https://review.openstack.org/38226 | 22:32 |
*** changbl_ has quit IRC | 22:33 | |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file. https://review.openstack.org/40455 | 22:34 |
fungi | clarkb: yeah, the puppetlabs rsync module was actually a very serendipitous find. i thought to look for one before writing it because... you never know... but didn't expect it to be so nearly spot on for what we actually wanted to implement | 22:34 |
*** avtar has quit IRC | 22:35 | |
clarkb | fungi: with default parameters even :) | 22:36 |
openstackgerrit | Khai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file. https://review.openstack.org/40455 | 22:37 |
lifeless | clarkb: fungi: http://support.novell.com/security/cve/CVE-2012-4929.html | 22:37 |
uvirtbot | lifeless: The TLS protocol 1.2 and earlier, as used in Mozilla Firefox, Google Chrome, Qt, and other products, can encrypt compressed data without properly obfuscating the length of the unencrypted data, which allows man-in-the-middle attackers to obtain plaintext HTTP headers by observing length differences during a series of guesses in which a string in an HTTP request potentially matches an unknown string in an HTTP header, aka a "CRIME" attack | 22:37 |
clarkb | thank you uvirtbot | 22:38 |
lifeless | clarkb: fungi: and http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039356 claims | 22:38 |
lifeless | It has been confirmed that CRIME is ineffective against vCenter Operations Manager (vCOps) 5.6 and higher. The TLS CRIME vulnerability appears to be isolated to the use of the libqt4 libraries for compression. | 22:38 |
lifeless | (not that we're using vcenter) | 22:38 |
lifeless | just - tl;dr afaict - a) server can obfuscate the length of the data and avoid the compressor state leak crime depends on, and upstreams are doing that now. | 22:39 |
lifeless | and b) browsers generally disable tls compression themselves now | 22:39 |
lifeless | so I wouldn't worry about the sslcompression setting; but turn it off if you have any concerns. | 22:39 |
*** sparkycollier has quit IRC | 22:40 | |
*** prad has quit IRC | 22:42 | |
clarkb | gotcha, thanks | 22:43 |
*** zaro0508 has joined #openstack-infra | 22:45 | |
*** CaptTofu has quit IRC | 22:45 | |
*** sarob has quit IRC | 22:46 | |
*** jpich has quit IRC | 22:46 | |
*** sarob has joined #openstack-infra | 22:46 | |
*** ianw has joined #openstack-infra | 22:47 | |
*** CaptTofu has joined #openstack-infra | 22:48 | |
pabelanger | clarkb, pyflakes was kept because jeblair asked for it. Since JJB gates on it, I had another review up to remove it | 22:49 |
clarkb | pabelanger: I see, feel free to ignore that comment then | 22:49 |
pabelanger | np | 22:49 |
clarkb | I will note that in the review | 22:49 |
*** sarob has quit IRC | 22:51 | |
fungi | lifeless: so, crime relied on tls compression but breach extends those techniques to leverage compression applied by higher layers (such as gzip encoding of responses from the server) | 22:52 |
fungi | so while the attack is similar, the attack surface is not | 22:53 |
clarkb | I think BREACH is worth worrying about | 22:54 |
*** sarob has joined #openstack-infra | 22:55 | |
fungi | breach basically happens when the connection is mitm below the encryption layer and the attacker doesn't have the ability to decrypt the session but can coerce the victim to elicit server responses with some content determined by the attacker | 22:55 |
clarkb | mordred: https://review.openstack.org/#/c/40470/ adds a JJB core group. Did you want to review that before I approve? | 22:56 |
fungi | say i'm connected to an https web application and you're able to force my packets to flow through your sniffer, and you're also able to get me to send requests to the site with a string of your choosing which the server will echo within the rest of its response, then you can trigger the request and observe the compression ratio to fine-tune guesses of a session key which is also contained within the | 22:58 |
fungi | response | 22:58 |
fungi | that sort of fun | 22:58 |
clarkb | there is now an OSX wine like emulator | 22:59 |
jeblair | clarkb, fungi: about rsync -- how will/could this interact with the other things we serve there? | 22:59 |
jeblair | it looks like it defines a pypi module which is appropriately rooted | 22:59 |
jeblair | and i guess we could add a tarballs module too? | 23:00 |
jeblair | if we wanted to make tarballs rsyncable? | 23:00 |
clarkb | jeblair: yes we could have multiple defines for each of the paths we want to make rsyncable | 23:00 |
fungi | jeblair: yes, it allows for multiple modules each with a distinct name | 23:00 |
fungi | so we could even add tarballs right now if we wanted, same way, just one more rsync::server::module block | 23:01 |
jeblair | and i assume rsync://pypi.o.o or rsync://static.o.o would work equally well? | 23:01 |
jeblair | we may want to only publicise rsync://pypi.o.o/pypi though, so we have flexibility to move it around | 23:01 |
fungi | jeblair: yes, rsync does not have anything like http 1.1 host headers | 23:01 |
fungi | so if we wanted to limit it to a specific host name, that name would need to resolve to one and only one ip address where the daemon was listening | 23:02 |
*** pentameter has quit IRC | 23:02 | |
fungi | but i agree just limiting the name we publicize with it should be sufficient to deter people being surprised if it moves in the future | 23:03 |
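For illustration, the extra tarballs module mentioned above would boil down to one more stanza like this in rsyncd.conf; the path and module name are assumptions, and in practice it would be another rsync::server::module block in puppet rather than a hand-edited file:

```bash
# Illustration only: what one more rsync module amounts to in rsyncd.conf.
# Path and module name are assumptions.
cat >> /etc/rsyncd.conf <<'EOF'
[tarballs]
    path = /srv/static/tarballs
    comment = OpenStack release tarballs
    read only = yes
    list = yes
EOF
```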
stevebaker | jeblair: hey, I updated https://review.openstack.org/#/c/38226/ to host heat-cfnclient in openstack-dev | 23:03 |
jeblair | fungi: are you okay with adding a new daemon on a host that (will eventually) host our distribution artifacts? | 23:04 |
jeblair | increased attack surface and all | 23:04 |
clarkb | jeblair: distribution artifacts == deb/rpm packages? | 23:05 |
clarkb | jeblair: that may be an argument to host those things elsewhere? | 23:05 |
jeblair | clarkb: no, we make tarballs. :) | 23:06 |
clarkb | oh those things | 23:06 |
*** rcleere has quit IRC | 23:07 | |
clarkb | in other security news researchers are recommending a switch to elliptic curve crypto (eg ecdsa) as RSA may become vulnerable after new progress made on solving the discrete logarithm problem | 23:09 |
*** dina_belova has joined #openstack-infra | 23:09 | |
clarkb | I think our centos6 hosts have too old openssl/openssh to support ecdsa :( | 23:09 |
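Generating the key type mentioned here is one command, though as noted it needs a newer OpenSSH than CentOS 6 ships (ECDSA support arrived in OpenSSH 5.7; CentOS 6 carries 5.3):

```bash
# Generating an ECDSA key as mentioned above; requires OpenSSH >= 5.7.
ssh-keygen -t ecdsa -b 521 -f ~/.ssh/id_ecdsa -C "example ecdsa key"
```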
lifeless | fungi: clarkb: yes, the higher level stuff has made the http/2 design 'fun'. | 23:09 |
*** zaro0508 has quit IRC | 23:10 | |
*** david-lyle has quit IRC | 23:11 | |
*** david-lyle has joined #openstack-infra | 23:11 | |
*** danger_fo is now known as danger_fo_away | 23:12 | |
lifeless | jeblair: the rsync equivalent of vhosts is the exported path - like nfs | 23:12 |
fungi | jeblair: i am not terribly worried about rsyncd itself, and we can at least audit its configuration to make sure it's not more promiscuous. though maybe we should consider adding a dedicated unprivileged account to run it under. i'll amend the patch to do that (the module takes it as an option i believe) | 23:13 |
jeblair | fungi: probably a good idea; nobody has a notoriously high level of access | 23:13 |
lifeless | jeblair: in terms of being able to move it round, I think just docs - tell folk clearly that its rsync on pypi.o.o/pypi; in the description for the pypi export note the canonical name for it. | 23:13 |
*** dina_belova has quit IRC | 23:13 | |
fungi | jeblair: i'm more worried about people trusting downloads obtained from an unauthenticated protocol without actually comparing signed checksum lists | 23:13 |
*** mrodden has quit IRC | 23:14 | |
fungi | and the rsync protocol does not cryptographically authenticate the server to the client (which is why rsync over ssh is generally better) | 23:14 |
lifeless | fungi: thats the TUF stuff which will kick in as upstream adopts it | 23:14 |
fungi | lifeless: yes, i'm very much looking forward to that | 23:14 |
lifeless | I do take your point that there is a risk here... | 23:15 |
lifeless | does run-mirror retrieve gpg sigs from pypi? | 23:16 |
clarkb | the signatures are independent of the package right? in that case no | 23:16 |
clarkb | or are they in the tarball? | 23:16 |
fungi | lifeless: at the moment our auto-published pypi tarballs aren't even signed either | 23:16 |
lifeless | .asc files | 23:16 |
lifeless | fungi: ok, so this isn't really a new risk | 23:16 |
fungi | something else i'd love to see improved | 23:16 |
jeblair | fungi: i was more getting at the fact that we're increasing the attack surface of the server itself by running another C daemon. | 23:16 |
fungi | jeblair: agreed. i think we rely on local permissions enforcement to protect us some there | 23:17 |
fungi | and rsyncd hasn't had too ugly of a history of remote exploits | 23:18 |
fungi | last one i remember well was about a decade ago | 23:18 |
jeblair | which could be good news or bad. :) | 23:19 |
openstackgerrit | A change was merged to openstack-infra/config: Add jenkins-job-builder-core group https://review.openstack.org/40470 | 23:19 |
*** rnirmal has quit IRC | 23:19 | |
* fungi will not comment on the positive and negative aspects of his lack of memories... smells like a set up | 23:20 | |
clarkb | I went ahead and approved ^ I didn't hear any disagreement of the idea when it was proposed to the list | 23:20 |
*** UtahDave has quit IRC | 23:21 | |
*** nijaba has quit IRC | 23:21 | |
lifeless | jeblair: fungi: 2011 last vuln I think? | 23:21 |
*** nijaba has joined #openstack-infra | 23:22 | |
lifeless | but that affects receiver only | 23:22 |
jeblair | so they either fixed everything or no one's paying attention anymore. | 23:22 |
*** dnavale has joined #openstack-infra | 23:22 | |
lifeless | indeed | 23:23 |
reed | dnavale, what happened with your new ssh key? | 23:23 |
dnavale | i'd followed all the steps in the https://wiki.openstack.org/wiki/Documentation/HowTo#First-time_Contributors. | 23:24 |
dnavale | had everything set up but hadnt submitted anything until yday | 23:24 |
clarkb | fungi: were you going to write the fix for centos pip symlinkage? | 23:24 |
clarkb | fungi: I can if you are busy | 23:24 |
dnavale | when i tried to, it kept asking me for the passphrase and would then give an error.. | 23:25 |
clarkb | dnavale: this is happening when running git-review? | 23:25 |
dnavale | so one of my colleagues who had similar issues suggested to get a new ssh key.. did that.. | 23:25 |
dnavale | yes.. | 23:25 |
dnavale | thats right | 23:25 |
jeblair | so tbh, i'm not sure i'm keen about running an entirely new public facing service for this. it just seems like such a heavyweight solution to lifeless's problem. | 23:25 |
*** sarob has quit IRC | 23:25 | |
dnavale | with the new key, the git review still gave me the error.. and asked me to sign the agreement.. | 23:26 |
jeblair | lifeless: but then i still don't understand why running 'run-mirror' doesn't work. it keeps a pip cache, it should be quite fast. | 23:26 |
*** sarob has joined #openstack-infra | 23:26 | |
clarkb | dnavale: can you go to https://review.openstack.org/#/settings/ and give me your account id number? | 23:26 |
jeblair | dnavale, reed: that page looks like it needs updating | 23:26 |
jeblair | you do not need to ad an ssh key to launchpad | 23:27 |
openstackgerrit | Elizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add httpd ssl support to git.openstack.org https://review.openstack.org/40253 | 23:27 |
jeblair | dnavale: you do need to add an ssh key to gerrit | 23:27 |
lifeless | jeblair: you have to have a suitable dev environment, so you have to install mysql-dev etc etc etc | 23:27 |
reed | jeblair, indeed | 23:27 |
dnavale | sure.. account id# 8103 | 23:27 |
jeblair | dnavale: https://review.openstack.org/#/settings/ssh-keys | 23:27 |
lifeless | jeblair: this is a big footprint on the host machine, and overlaps with the actions inside the chroot as well | 23:27 |
jeblair | dnavale: make sure your ssh key is in there | 23:27 |
pleia2 | had a bit of a messy rebase there, I think I got it though | 23:27 |
* reed yells at the gods of online passwords! | 23:27 | |
dnavale | yes.. it is in there | 23:27 |
lifeless | jeblair: just copying a working mirror is /much/ easier. I'm not against plumbing up a chrooted dedicated mechanism for running run-mirror eventually. | 23:28 |
lifeless | jeblair: but it's also /slow/, because run-mirror is spidering. | 23:28 |
jeblair | lifeless: what's the 'host machine' you're talking about here? | 23:28 |
lifeless | jeblair: my laptop, for instance. | 23:28 |
clarkb | dnavale: looking in the DB you haven't signed a CLA agreement | 23:28 |
clarkb | dnavale: https://review.openstack.org/#/settings/agreements on that page is there a row that says verified individual account agreement? | 23:29 |
dnavale | yes.. but when i try to, it says the address is not one of the registered ones and ask me to fill a new form | 23:29 |
clarkb | *OpenStack Individual Contributor Account Agreement | 23:29 |
lifeless | jeblair: pypi, like most REST services, doesn't understand latency at all, and dies the death of a thousand cuts. | 23:29 |
reed | why is the documentation team saying that you need a github account? | 23:30 |
lifeless | jeblair: the cache avoids repeated bulk data download by run-mirror, but the spider-everything overhead doesn't go away | 23:30 |
clarkb | dnavale: you need to use the same email address in gerrit as was provided to the openstack foundation. And if you haven't signed up with the openstack foundation you will need to do that first | 23:30 |
dnavale | Application Error | 23:30 |
dnavale | Server Error | 23:30 |
dnavale | The request could not be completed. You may not be a member of the foundation registered under this email address. Before continuing, please make sure you have joined the foundation at http://openstack.org/register/ | 23:30 |
*** sarob has quit IRC | 23:30 | |
clarkb | dnavale: have you done that? | 23:30 |
dnavale | hmmm.. i've been using the same email id everywhere | 23:30 |
harlowja | ianw: all that is needed for qpid, seems to work when i tried it on my new vm, https://review.openstack.org/#/c/40481/ | 23:31 |
ianw | harlowja: thanks, was just restoring my vm to give it a try :) | 23:31 |
jeblair | lifeless: i see, so this is for your local dev environment. i know mordred uses run-mirror, but i guess he doesn't have the latency issues due to only occasionally being in the southern hemisphere. | 23:31 |
harlowja | ianw cool | 23:32 |
dnavale | i dont think so, i was told that since we are a part of red hat, thats already done.. | 23:32 |
dnavale | but i'll try and do that now.. | 23:32 |
harlowja | ianw make sure u use -p to specify that persona, instead of the default rabbit one | 23:32 |
clarkb | dnavale: they were probably talking about the corporate CLA stuff, all of this is done at the individual level on top of that | 23:32 |
dnavale | Ah.. ok.. thanks.. i'll register and try again.. | 23:32 |
lifeless | jeblair: I wrote up a fuller explanation about run-mirror at the end of the listing-files patch we've abandoned; dunno if you saw that | 23:32 |
*** mrodden has joined #openstack-infra | 23:33 | |
*** mrodden1 has joined #openstack-infra | 23:35 | |
reed | dnavale, what were you told exactly? | 23:37 |
dnavale | clarkb: thanks.. i was able to submit the CLA.. | 23:37 |
*** mrodden has quit IRC | 23:37 | |
fungi | clarkb: i had started and then got sucked away, sorry. settled in now and can focus on patches | 23:38 |
*** CaptTofu has quit IRC | 23:39 | |
dnavale | i guess i thought that we wouldn't need to fill in the individual form when contributing as part of red hat.. sorry about that.. | 23:39 |
*** CaptTofu has joined #openstack-infra | 23:39 | |
reed | dnavale, oh, ok. Please tell the person that told you the incorrect info to talk to me about that and fix the internal documentation | 23:40 |
dnavale | ok.. will do.. | 23:43 |
clarkb | dnavale: are you able to submit changes to gerrit now? | 23:45 |
dnavale | thanks again.. | 23:45 |
dnavale | yes.. i'm able to do that.. | 23:46 |
jeblair | lifeless: why do you notice run-mirror's runtime? i mean, we run it in the background, we're not exactly waiting for it to finish. | 23:46 |
openstackgerrit | Jeremy Stanley proposed a change to openstack-infra/config: No longer link pip to pip-python on Red Hat https://review.openstack.org/40516 | 23:47 |
fungi | clarkb: ^ testing that on git.o.o momentarily | 23:47 |
lifeless | jeblair: you don't notice it because you have a long lived server you can just use; start from scratch some day | 23:49 |
*** zul has joined #openstack-infra | 23:49 | |
lifeless | jeblair: you're on a low latency inter-server environment in the gate; someone that just downloaded tripleo and is building an image is not. | 23:49 |
lifeless | jeblair: if you'd be happier I can spin up a dedicated HP Cloud instance to deliver this, but that really seems like shadowing the -infra role, which I really don't want to do. | 23:51 |
jeblair | lifeless: when we set up the pypi mirror, we explicitly agreed it would not be a publicized and recommended distribution channel for openstack dependencies | 23:51 |
lifeless | jeblair: interesting; I didn't know that. | 23:52 |
jeblair | lifeless: i thought this was for you on your laptop? lots of people use cloud servers for their own development. is this a public service that the tripleo program needs to provide for its users? | 23:52 |
lifeless | jeblair: thats the intent yes. | 23:53 |
lifeless | (service for tripleo users, on by default) | 23:53 |
jeblair | lifeless: you threw me with the laptop thing there. | 23:53 |
lifeless | https://review.openstack.org/#/c/38543/ in the comments in there | 23:53 |
lifeless | 5th up from the bottom | 23:53 |
lifeless | "Our developers - myself, spamaps, pleia2, ng etc are all building lots of images, and each image has up to ~12 virtual envs. We spend a huge amount of time downloading stuff from pypi, since the ssl change means squid no longer caches the content." | 23:53 |
clarkb | wait what? why can't squid mitm you? | 23:54 |
lifeless | clarkb: pip forces SSL now. | 23:54 |
clarkb | does this have to do with the proxy behavior of pip? | 23:54 |
clarkb | lifeless: yeah but squid can mitm ssl | 23:54 |
lifeless | clarkb: Getting every tripleo dev+user to configure ssl-bump with a snakeoil cert is a huge barrier to entry. | 23:55 |
clarkb | thats fair | 23:55 |
clarkb | it does make the proxy setup more complex | 23:55 |
lifeless | you'd also need to configure tcp interception | 23:55 |
lifeless | which is doable, but again - barrier to entry. | 23:56 |
lifeless | and since iptables works on IPs, not DNS names, it's /complex/ to get right | 23:56 |
lifeless | you'd need a thing intercepting all dns lookups, pulling out the A and CNAME records for the pypi CDN, mapping those to iptables rules just in time... | 23:56 |
nati_ueno | clarkb: do you have a bug report for this morning's neutron UT issue? It looks like it's still broken, so I wanna know the current status | 23:56 |
lifeless | or you need squid wildcard mitming all SSL | 23:57 |
lifeless | which frankly is a bad idea as most SSL only websites have terrible caching headers and squid will end up caching stuff not meant to be written to disk. | 23:57 |
lifeless | clarkb: ^ | 23:57 |
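To show why the "just-in-time iptables rules" approach above is fragile, here is a hypothetical sketch that resolves the current A records for a pypi hostname and emits interception rules. The hostname, squid port, and REDIRECT target are assumptions for illustration, not a recommended deployment, and the rule set goes stale as soon as the CDN rotates addresses:

```python
# Hypothetical sketch of the "map the pypi CDN's A records to iptables rules
# just in time" idea described above.  Hostname, squid port, and the REDIRECT
# target are all assumptions, not a recommended setup.
import socket

CDN_HOSTS = ["pypi.python.org"]   # placeholder; a real CDN advertises many names
SQUID_INTERCEPT_PORT = 3129       # assumed ssl-bump intercept port

def redirect_rules(hosts, port=SQUID_INTERCEPT_PORT):
    rules = set()
    for host in hosts:
        # Only the IPv4 A records, since these feed plain iptables rules.
        for info in socket.getaddrinfo(host, 443, socket.AF_INET,
                                       socket.SOCK_STREAM):
            ip = info[4][0]
            rules.add("iptables -t nat -A PREROUTING -p tcp -d %s --dport 443 "
                      "-j REDIRECT --to-port %d" % (ip, port))
    return sorted(rules)

if __name__ == "__main__":
    print("\n".join(redirect_rules(CDN_HOSTS)))
```

The rules are only as fresh as the last lookup, and every dev machine would need this plus squid ssl-bump and a snakeoil cert, which is the barrier to entry lifeless is describing and why he prefers just copying a working mirror.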
clarkb | nati_ueno: which issue? I got a late ish start today and don't remember any neutron things | 23:58 |
clarkb | nati_ueno: is this what sdague was talking about yesterday? | 23:58 |
openstackgerrit | A change was merged to openstack-infra/config: No longer link pip to pip-python on Red Hat https://review.openstack.org/40516 | 23:58 |
fungi | so that ^ kept puppet from blowing away the pip executable on git.o.o after i reinstalled the rpm. once the master pulls it down, i'll reinstall python-pip on git, pbx and all the centos6 jenkins slaves | 23:58 |
nati_ueno | clarkb: Ahh sorry. Maybe I talked with sdague | 23:58 |
lifeless | jeblair: so the basic issue then is that what i'm asking for is in contradiction with a prior -infra decision. Do we revisit that decision? | 23:58 |
clarkb | nati_ueno: I know he filed a bug about something. Not sure if that is the same thing you are talking about | 23:59 |
nati_ueno | clarkb: so we faced a requirements version issue | 23:59 |
jeblair | lifeless: i had not seen the latest comments in the original change (my review workload has something like a 4 day cycle currently). they clarify quite a bit, thanks. | 23:59 |
clarkb | nati_ueno: with requests? | 23:59 |
nati_ueno | clarkb: changes to requirements.txt broke the neutron unit tests | 23:59 |