Wednesday, 2015-04-22

*** achanda has quit IRC00:01
<openstackgerrit> Joshua Harlow proposed openstack/tooz: Avoid using a thread local token storage  https://review.openstack.org/176133  00:04
<openstackgerrit> Joshua Harlow proposed openstack/tooz: Avoid using a thread local token storage  https://review.openstack.org/176133  00:05
<openstackgerrit> Joshua Harlow proposed openstack/tooz: Avoid using a thread local token storage  https://review.openstack.org/176133  00:06
*** tsekiyam_ has joined #openstack-oslo00:14
*** tsekiyama has quit IRC00:18
*** tsekiyam_ has quit IRC00:19
*** mtanino has quit IRC00:19
<openstackgerrit> Joshua Harlow proposed openstack/tooz: Heartbeat on acquired locks copy  https://review.openstack.org/176142  00:23
*** achanda has joined #openstack-oslo00:23
*** david-lyle has quit IRC00:24
*** david-lyle has joined #openstack-oslo00:27
*** sputnik13 has quit IRC00:27
*** achanda has quit IRC00:28
*** jaypipes has quit IRC00:37
*** AAzza_afk has joined #openstack-oslo00:43
*** AAzza has quit IRC00:43
*** AAzza_afk is now known as AAzza00:43
*** prad has quit IRC00:53
<bknudson> openstack/common/service.py seems to be doing a lot of work in the child just to catch SIGTERM so that it can exit with 1 rather than a signal indicator...  00:55
<bknudson> anyone have a problem with having the child just not catch SIGTERM and go away on the signal?  00:56
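A minimal sketch of the two behaviours bknudson is contrasting (illustrative only, not the actual openstack/common/service.py code): a child that traps SIGTERM exits "normally" with status 1, while a child that keeps the default handler is killed by the signal, so the parent sees a signal indicator in the wait status.

    import os, signal, sys, time

    def run_child(trap_sigterm):
        pid = os.fork()
        if pid == 0:
            if trap_sigterm:
                # incubator-style behaviour: catch SIGTERM and exit with 1
                signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(1))
            else:
                # proposed behaviour: let the default action terminate the child
                signal.signal(signal.SIGTERM, signal.SIG_DFL)
            while True:
                time.sleep(1)
        time.sleep(0.5)
        os.kill(pid, signal.SIGTERM)
        _, status = os.waitpid(pid, 0)
        return status

    # trapped: normal exit with code 1; untrapped: killed by SIGTERM
    print(os.WIFEXITED(run_child(True)), os.WIFSIGNALED(run_child(False)))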
*** browne has quit IRC01:00
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Retain chain of missing dependencies (WIP)  https://review.openstack.org/176148  01:02
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Retain chain of missing dependencies (WIP)  https://review.openstack.org/176148  01:03
*** prad has joined #openstack-oslo01:08
<openstackgerrit> Brant Knudson proposed openstack/oslo-incubator: service child process normal SIGTERM exit  https://review.openstack.org/176151  01:14
<Kennan> hi :bknudson  01:32
<Kennan> there ?  01:32
<Kennan> :krotscheck ?  01:35
*** flwang has quit IRC01:38
*** zzzeek has quit IRC01:48
*** flwang has joined #openstack-oslo01:50
*** flwang has left #openstack-oslo01:51
*** jamesllondon has quit IRC01:53
*** zzzeek has joined #openstack-oslo01:54
*** zzzeek has quit IRC01:54
*** jamesllondon has joined #openstack-oslo01:54
*** stevemar has joined #openstack-oslo01:56
*** gtt116__ has quit IRC01:59
*** gtt116 has joined #openstack-oslo01:59
*** browne has joined #openstack-oslo02:01
*** harlowja is now known as harlowja_away02:03
*** _amrith_ is now known as amrith02:05
*** alexpilotti has quit IRC02:27
*** ChuckC has joined #openstack-oslo02:29
*** jamielennox is now known as jamielennox|away02:39
*** sputnik13 has joined #openstack-oslo02:39
*** sputnik13 has quit IRC02:44
*** sputnik13 has joined #openstack-oslo02:44
*** sputnik13 has quit IRC02:47
*** amrith is now known as _amrith_03:06
*** sputnik13 has joined #openstack-oslo03:14
*** sputnik13 has quit IRC03:14
*** arnaud___ has joined #openstack-oslo03:50
*** joesavak has joined #openstack-oslo04:16
*** enikanorov__ has quit IRC04:21
*** joesavak has quit IRC04:29
*** sdake_ has joined #openstack-oslo04:32
*** prad has quit IRC04:45
*** prad has joined #openstack-oslo05:04
*** inc0 has joined #openstack-oslo05:06
*** nkrinner has joined #openstack-oslo05:20
*** yamahata has joined #openstack-oslo05:29
*** sdake_ has quit IRC05:42
*** haigang has joined #openstack-oslo05:44
*** achanda has joined #openstack-oslo05:49
*** arnaud___ has quit IRC06:05
*** jamesllondon has quit IRC06:10
*** achanda has quit IRC06:20
*** achanda has joined #openstack-oslo06:21
*** stevemar has quit IRC06:31
*** sdake has joined #openstack-oslo06:35
*** jaosorior has joined #openstack-oslo06:54
*** browne has quit IRC06:58
*** gtt116_ has joined #openstack-oslo07:02
*** shardy has joined #openstack-oslo07:04
*** gtt116 has quit IRC07:05
*** achanda has quit IRC07:29
*** liusheng has joined #openstack-oslo07:30
*** achanda_ has joined #openstack-oslo07:32
*** sdake has quit IRC07:35
*** andreykurilin__ has joined #openstack-oslo07:44
*** achanda_ has quit IRC07:45
*** sdake has joined #openstack-oslo07:48
*** sdake has quit IRC08:00
*** yamahata has quit IRC08:04
*** alexpilotti has joined #openstack-oslo08:04
*** ozamiatin has joined #openstack-oslo08:11
*** rushiagr_away is now known as rushiagr08:13
*** ajo has joined #openstack-oslo08:18
*** jamespage has quit IRC08:24
*** jamespage has joined #openstack-oslo08:24
*** jamespage has quit IRC08:24
*** jamespage has joined #openstack-oslo08:24
*** ccrouch has quit IRC08:31
*** prad has quit IRC08:32
*** haigang has quit IRC08:32
*** prad has joined #openstack-oslo08:34
*** haigang has joined #openstack-oslo08:41
*** gtt116 has joined #openstack-oslo08:45
*** gtt116_ has quit IRC08:45
*** haigang has quit IRC08:46
*** haigang has joined #openstack-oslo08:47
*** andreykurilin__ has quit IRC08:49
*** andreykurilin__ has joined #openstack-oslo08:50
*** haigang has quit IRC08:52
*** prad has quit IRC08:53
*** sheeprine has quit IRC08:56
*** sputnik13 has joined #openstack-oslo08:58
*** haigang has joined #openstack-oslo08:58
*** prad has joined #openstack-oslo08:59
*** sheeprine has joined #openstack-oslo09:00
*** ihrachyshka has joined #openstack-oslo09:01
*** sputnik13 has quit IRC09:02
*** arnaud___ has joined #openstack-oslo09:06
*** nkrinner has quit IRC09:07
*** nkrinner has joined #openstack-oslo09:09
*** arnaud___ has quit IRC09:10
*** rushiagr is now known as rushiagr_away09:17
*** Kennan2 has joined #openstack-oslo09:19
*** Kennan has quit IRC09:19
*** i159 has joined #openstack-oslo09:21
*** e0ne has joined #openstack-oslo09:30
*** rushiagr_away is now known as rushiagr09:37
*** ajo has quit IRC09:41
*** dguitarbite has joined #openstack-oslo09:45
*** ajo has joined #openstack-oslo09:46
*** shardy_ has joined #openstack-oslo09:46
*** shardy has quit IRC09:48
*** shardy_ has quit IRC09:52
*** shardy has joined #openstack-oslo09:52
*** ajo has quit IRC09:58
*** inc0 has quit IRC10:00
*** ozamiatin has quit IRC10:01
*** andreykurilin__ has quit IRC10:04
*** ajo has joined #openstack-oslo10:15
*** inc0 has joined #openstack-oslo10:22
*** inc0 has quit IRC10:25
*** inc0_ has joined #openstack-oslo10:25
*** _amrith_ is now known as amrith10:27
*** rushiagr is now known as rushiagr_away10:48
*** e0ne is now known as e0ne_10:51
*** inc0_ has quit IRC10:55
*** e0ne_ is now known as e0ne10:55
*** haigang has quit IRC10:58
*** Kennan2 has quit IRC11:08
*** haigang has joined #openstack-oslo11:08
*** Kennan has joined #openstack-oslo11:23
*** e0ne is now known as e0ne_11:27
*** shardy has quit IRC11:28
*** haypo has joined #openstack-oslo11:32
*** amrith is now known as _amrith_11:34
*** e0ne_ has quit IRC11:38
*** bknudson has quit IRC11:40
*** inc0_ has joined #openstack-oslo11:50
*** inc0__ has joined #openstack-oslo11:52
*** inc0_ has quit IRC11:56
*** e0ne has joined #openstack-oslo11:58
*** haigang has quit IRC12:02
*** haigang has joined #openstack-oslo12:03
*** inc0__ has quit IRC12:03
*** inc0__ has joined #openstack-oslo12:04
*** inc0__ has quit IRC12:11
*** inc0 has joined #openstack-oslo12:12
*** haigang has quit IRC12:12
*** jaypipes has joined #openstack-oslo12:15
*** cdent has joined #openstack-oslo12:18
*** ozamiatin has joined #openstack-oslo12:19
*** rushiagr_away is now known as rushiagr12:39
*** e0ne is now known as e0ne_12:41
*** gordc has joined #openstack-oslo12:41
*** inc0 has quit IRC12:42
*** bknudson has joined #openstack-oslo12:43
*** kgiusti has joined #openstack-oslo12:46
*** e0ne_ is now known as e0ne12:46
<ihrachyshka> flaper87, who's managing oslo.log? I don't see any oslo-log-core team in gerrit.  12:52
<dhellmann> jd__: can you join #openstack-relmgr-office to discuss https://review.openstack.org/#/c/175851, please?  12:53
*** zzzeek has joined #openstack-oslo13:09
*** rushiagr is now known as rushiagr_away13:12
<openstackgerrit> Merged openstack/oslo-incubator: Revert "Revert "Revert "Optimization of waiting subprocesses in ProcessLauncher"""  https://review.openstack.org/175851  13:13
*** yamahata has joined #openstack-oslo13:19
*** mriedem_away is now known as mriedem13:22
<dhellmann> jd__: never mind, sdague filled me in  13:22
<jd__> dhellmann: ok sorry I was AFK  13:24
<dhellmann> jd__: that's what I figured when I looked at the clock; np  13:24
<dhellmann> I was confused about the number of reverts and what the original 'optimization' had been  13:24
<jd__> dhellmann: are we sure that's the culprit, in the end?  13:25
<dhellmann> jd__: I think it is one of a couple. We want this for keystone, apparently  13:25
<jd__> dhellmann: sigh :| ok!  13:25
*** jungleboyj has quit IRC13:29
*** jecarey has joined #openstack-oslo13:29
*** stevemar has joined #openstack-oslo13:29
*** andreykurilin__ has joined #openstack-oslo13:33
*** stevemar has quit IRC13:37
<openstackgerrit> Ihar Hrachyshka proposed openstack-dev/hacking: Add support for flake8 off_by_default for optional checks  https://review.openstack.org/134052  13:41
<openstackgerrit> Ihar Hrachyshka proposed openstack-dev/hacking: Add support for flake8 off_by_default for optional checks  https://review.openstack.org/134052  13:42
<openstackgerrit> Ihar Hrachyshka proposed openstack-dev/hacking: Add support for flake8 off_by_default for optional checks  https://review.openstack.org/134052  13:43
*** _amrith_ is now known as amrith13:44
*** e0ne is now known as e0ne_13:46
*** amotoki has joined #openstack-oslo13:49
<haypo> " Merged openstack/oslo-incubator: Revert "Revert "Revert ..." hum, someone fell into an infinite loop  13:50
*** e0ne_ has quit IRC13:56
*** e0ne has joined #openstack-oslo13:59
*** nkrinner has quit IRC14:01
*** tsekiyama has joined #openstack-oslo14:01
*** sigmavirus24_awa is now known as sigmavirus2414:05
*** stevemar has joined #openstack-oslo14:07
*** tsekiyama has quit IRC14:17
*** salv-orlando has quit IRC14:18
*** jungleboyj has joined #openstack-oslo14:19
*** ChuckC has quit IRC14:19
*** tsekiyama has joined #openstack-oslo14:20
*** tsekiyama has quit IRC14:20
*** tsekiyama has joined #openstack-oslo14:21
*** mtanino has joined #openstack-oslo14:21
*** stpierre has joined #openstack-oslo14:25
*** stevemar has quit IRC14:26
<ozamiatin> dhellmann, hi, could you please give me a hint where CALL+fanout is used?  14:28
<dhellmann> ozamiatin: have you looked in nova? I think it used to be used there. sileht may know better  14:28
<Kiall> zzzeek: about? Had a SQLA and/or oslo.db question and I'm striking out with Google.... As DB connections are checked in/out of the connection pool, I seem to remember there being a delay between reuse of connections.. i.e. a connection can't be used for X ms after it's checked into the pool.. am I imagining that?  14:29
<zzzeek> moment  14:29
<Kiall> thanks :)  14:29
<ozamiatin> dhellmann, i saw a direct call fanout was not discovered yet )  14:30
<sileht> ozamiatin, https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/rpc/client.py#L140  14:30
<sileht> ozamiatin, this is not possible with the public API  14:31
<sileht> ozamiatin, the rabbit driver doesn't implement that either  14:31
<ozamiatin> sileht, thanks! that's what i assumed it should be :)  14:31
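For reference, a sketch of how fanout is reached through the public oslo.messaging API — only cast() supports it, which matches sileht's point that a fanout call() isn't exposed (the topic, method name, arguments, and empty context below are placeholders):

    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_transport(cfg.CONF)   # driver chosen from config
    target = messaging.Target(topic='compute')
    client = messaging.RPCClient(transport, target)

    # fanout cast: fire-and-forget to every server listening on the topic
    client.prepare(fanout=True).cast({}, 'sync_state', report='full')

    # there is no fanout call(): call() waits for exactly one reply, so the
    # public API never combines it with fanout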
<haypo> dhellmann, hi. do you know when patches on requirements will be accepted again (i would like to upgrade eventlet)?  14:32
<dhellmann> haypo: not until the release is done  14:32
<haypo> dhellmann, ok. so it should be short ;)  14:33
<dhellmann> haypo: no idea, at the rate things are going  14:33
<openstackgerrit> Ihar Hrachyshka proposed openstack/oslo.context: Add user_name and project_name to context object  https://review.openstack.org/176333  14:36
<ihrachyshka> dhellmann, weren't you supposed to be on vacation? :)  14:37
<ihrachyshka> dhellmann, so who's managing oslo repos that do not have separate oslo-$project-core teams?  14:38
<ihrachyshka> is it oslo-core?  14:38
<dhellmann> ihrachyshka: dims is out this week, I was just out Monday  14:38
<ihrachyshka> I see  14:39
<dhellmann> ihrachyshka: yes, if there's not a separate core team yet, it's just oslo-core. We have a few repos where we should probably create separate core teams  14:39
*** sdake has joined #openstack-oslo14:43
*** yamahata has quit IRC14:43
*** yamahata has joined #openstack-oslo14:44
*** sdake_ has joined #openstack-oslo14:44
<zzzeek> Kiall: oh you're still waiting, sorry :)  14:44
*** jungleboyj has quit IRC14:44
<zzzeek> Kiall: um that is not true. a connection that is checked in is ready to go again immediately  14:44
<Kiall> zzzeek: Humm, Ok.. I'm imagining it so..  14:45
<zzzeek> Kiall: there's a timeout that causes a connection that is X seconds old to be closed and reopened before use, that's it  14:45
<zzzeek> openstack sets that timeout to 3600  14:45
*** jungleboyj has joined #openstack-oslo14:45
<Kiall> zzzeek: Yea, we're hitting an issue where, no matter how large we set the pool size (and mysql connection limits), we're consuming them ALL and ending up with a QueuePool timeout :( Trying to get a handle on what's happening, and noticed we have a very SQL-heavy method that's grabbing a new connection for each and every query, but serially, so one at a time. Clearly a bug.. But, without a reuse delay, that likely isn't the cause of the issue  14:47
<Kiall> I'm looking into it!  14:47
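A sketch of the SQLAlchemy pool settings being discussed, and of reusing one checkout for a burst of queries instead of letting each statement grab its own connection (the database URL and statements are placeholders; oslo.db maps its own config options onto these create_engine arguments):

    from sqlalchemy import create_engine, text

    engine = create_engine(
        'mysql://user:password@dbhost/mydb',  # placeholder URL
        pool_size=10,       # steady-state connections held by QueuePool
        max_overflow=20,    # extra connections allowed under burst load
        pool_timeout=30,    # seconds to wait for a checkout before QueuePool raises TimeoutError
        pool_recycle=3600,  # the 3600 zzzeek mentions: refresh connections older than this before reuse
    )

    statements = ['SELECT 1', 'SELECT 2']     # placeholder queries

    # one checkout for the whole batch; executing each statement straight on the
    # engine would check a connection out of (and back into) the pool every time
    with engine.connect() as conn:
        for stmt in statements:
            conn.execute(text(stmt))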
*** sdake has quit IRC14:48
*** browne has joined #openstack-oslo14:50
*** jungleboyj has quit IRC14:52
*** jungleboyj has joined #openstack-oslo14:53
*** russellb has quit IRC15:00
*** amrith is now known as _amrith_15:02
*** russellb has joined #openstack-oslo15:05
*** ozamiatin has quit IRC15:12
*** yassine_ has joined #openstack-oslo15:12
<openstackgerrit> Victor Stinner proposed openstack/oslo.db: Add Python 3 classifiers to setup.cfg  https://review.openstack.org/176355  15:13
<openstackgerrit> Victor Stinner proposed openstack/oslo.db: Add Python 3 classifiers to setup.cfg  https://review.openstack.org/176355  15:15
*** andreykurilin__ has quit IRC15:18
*** yassine_ has quit IRC15:22
*** yassine_ has joined #openstack-oslo15:22
*** i159 has quit IRC15:22
*** yassine_ has quit IRC15:24
*** lefais has joined #openstack-oslo15:24
*** lefais has quit IRC15:25
*** yassine_ has joined #openstack-oslo15:25
*** ihrachyshka has quit IRC15:35
*** e0ne has quit IRC15:36
*** e0ne has joined #openstack-oslo15:37
*** yamahata has quit IRC15:43
<ttx> jd__, dhellmann: would like to clear some confusion around https://bugs.launchpad.net/oslo-incubator/+bug/1446583  15:46
<openstack> Launchpad bug 1446583 in Keystone "services no longer reliably stop in stable/kilo" [Critical,In progress] - Assigned to Julien Danjou (jdanjou)  15:46
<ttx> if you happen to be around  15:46
<jd__> ttx: yep  15:46
<ttx> so, fix is in oslo-incubator  15:47
<ttx> jd__: you proposed it for keystone/master and keystone/kilo  15:47
<ttx> dhellmann proposed it for keystone/kilo  15:47
<ttx> comments on the bug say it's useless  15:47
<ttx> unclear if anything else is affected  15:48
*** sdake_ has quit IRC15:48
<jd__> so far that's also what I know  15:48
<ttx> maybe sdague has extra info  15:48
<jd__> bknudson seems to have debugged it further  15:48
<ttx> just wondering what is affected and what fixes it  15:49
<ttx> in the meantime I'm holding on the keystone patches  15:49
<bknudson> I don't know why https://review.openstack.org/#/c/175851/ was proposed as a solution? Maybe there was some discussion that I missed.  15:50
<bknudson> was it because of the timing of when it was committed?  15:51
<ttx> that was jd__'s solution to the bug  15:51
<bknudson> how was it verified?  15:51
<bknudson> I was able to recreate a shutdown hang locally by just connecting nc to keystone and then it wouldn't shut down.  15:52
<bknudson> I assume that's the problem that grenade was seeing.  15:52
<bknudson> although maybe it was something else.  15:53
<jd__> I just pushed the patches because dhellmann asked me to do so and based on a git log and changes done to service.py that _seemed_ like the culprit :)  15:54
<jd__> but it might also be a totally different change  15:54
<jd__> someone should do a git bisect?  15:54
<ttx> bknudson: and https://review.openstack.org/#/c/176151/ is the patch for it ?  16:01
<bknudson> ttx: https://review.openstack.org/#/c/176151/ worked for me locally... keystone would shut down if there was a connection open.  16:02
<bknudson> also, I think it's the right thing to do. (KISS principle)  16:02
*** browne has quit IRC16:08
*** arnaud___ has joined #openstack-oslo16:10
*** yassine_ has quit IRC16:12
<dhellmann> bknudson: did you talk to asalkeld about why he had the signal handler in there like that in the first place? I agree it's curious, but I wonder if there is a difference in behavior in some circumstances  16:12
<bknudson> dhellmann: I didn't  16:12
<bknudson> I'll add him to the review.  16:13
<bknudson> I don't know how good the coverage is... obviously the change I made passes so the child exit status isn't tested.  16:13
*** viktors is now known as viktors|afk16:14
<dhellmann> yeah  16:14
<dhellmann> bknudson: are we not running keystone under apache? why is this even an issue?  16:14
<bknudson> dhellmann: we deprecated running eventlet keystone in K.  16:15
<dhellmann> bknudson: I guess we're still testing that way, though?  16:15
<bknudson> so before that devstack was always running keystone with eventlet.  16:15
<bknudson> I think the postgres test runs with eventlet  16:16
<dhellmann> ah, so some jobs  16:16
<dhellmann> ok  16:16
<bknudson> and we have a work item to provide a script for grenade to "migrate" config from eventlet to httpd.  16:16
<bknudson> morganfainberg: ^ was talking about this yesterday.  16:16
<dhellmann> k  16:16
<bknudson> I thought we didn't need one, but whatever.  16:17
<dhellmann> at this point I'm just trying to understand how to fix the issue we have in the gate  16:17
* morganfainberg reads scroll back.  16:17
<bknudson> dhellmann: I think https://review.openstack.org/#/c/176151/ , applied to keystone, will do it.  16:17
<bknudson> it is curious that this seems to have only become a real problem recently... I've been seeing it on my local system for a long time.  16:18
<morganfainberg> bknudson: interaction with a lib somewhere is my guess.  16:18
<bknudson> most shutdown scripts are more aggressive about getting rid of unwanted processes.  16:18
<morganfainberg> Or a chance in event let.  16:18
<morganfainberg> Change*  16:18
<morganfainberg> yeah but not shutting down cleanly is suboptimal. Your app should shut down if it can. If it almost never does, that is a valid issue. Kill -9, while valid, is ugly  16:19
<dhellmann> bknudson: have you proposed a version of 176151 to keystone, too?  16:20
<bknudson> dhellmann: I haven't. I can do that easy enough.  16:20
<dhellmann> bknudson: yeah, if you could do that and mark it as depending on the version in the incubator (to keep us honest) that would be good. We should also test it in the stable/kilo branch of keystone, I suppose  16:21
<dhellmann> it seems innocuous, and the test coverage isn't checking exit codes but there are tests for killing subprocesses  16:21
<dhellmann> still, I'd like to be able to see if it fixes the issues sdague had merging https://review.openstack.org/#/c/175391/  16:22
<dhellmann> ttx: how does this sound? ^^  16:22
<bknudson> do you only want to see it in keystone stable/juno?  16:22
<dhellmann> bknudson: that's the one I care about, but it should go into keystone master, too  16:22
<bknudson> or do I have to go through the whole rigamarole of backporting to oslo-incubator stable/juno, keystone master, etc.  16:22
<dhellmann> oh, no, not oslo-incubator stable  16:23
<dhellmann> oslo-incubator master -> keystone master -> keystone stable/kilo  16:23
<bknudson> oh, right, it's kilo.  16:23
<dhellmann> yeah, we don't have a stable/kilo for the incubator yet  16:23
<bknudson> I didn't think this was just keystone, cinder is also implicated.  16:24
<bknudson> did somebody want to try the fix on cinder?  16:24
<dhellmann> I don't even know if we're seeing the issue with cinder any more, or if it was the same issue  16:25
<ttx> dhellmann: sounds good  16:25
<dhellmann> bknudson: I believe, from what sdague has said, that testing https://review.openstack.org/#/c/175391 with your backport will tell us if keystone is working ok -- although that's why we thought the other patch fixed the problem :-/  16:25
<ttx> oslo-incubator master -> keystone master -> keystone stable/kilo  16:25
<bknudson> I need to use a different change id in keystone since otherwise I don't see how depends-on would work.  16:28
<dhellmann> bknudson: yes, but isn't that what we normally do with syncs?  16:29
<bknudson> I'm doing a "cherry-pick" this time... I could do a full sync.  16:30
<dhellmann> bknudson: you can sync just that module, hang on  16:30
<dhellmann> bknudson: ./update.sh --module service --base keystone ../keystone  16:31
<dhellmann> run that from your incubator checkout and make the last argument the path to the keystone directory  16:31
<bknudson> actually, that's the only module that's changed, so a full sync is the same result.  16:31
<dhellmann> k  16:31
<bknudson> I'll just do a full sync then.  16:31
<dhellmann> bknudson, ttx: I'm keeping some notes about all of these patches in https://etherpad.openstack.org/p/april-21-gate-wedge  16:31
<ttx> ok, I'll disappear for a few hours, beer/dinner  16:32
<bknudson> dhellmann: here's the sync to master: https://review.openstack.org/#/c/176391/  16:35
<bknudson> and here's stable/kilo: https://review.openstack.org/#/c/176392/  16:36
*** haypo has quit IRC16:36
<bknudson> cherry-pick  16:36
<dhellmann> thanks, bknudson  16:37
*** andreykurilin__ has joined #openstack-oslo16:37
<bknudson> no problem.  16:37
*** arnaud___ has quit IRC16:39
<dhellmann> bknudson, ttx: I've set up https://review.openstack.org/#/c/175391/ to run with the new service change. I'm going to grab lunch while zuul does its thing  16:47
*** subscope_ has joined #openstack-oslo16:48
*** browne has joined #openstack-oslo16:50
*** e0ne has quit IRC16:50
*** jungleboyj has quit IRC16:50
* morganfainberg watches zuul crunch away.16:56
*** harlowja_away is now known as harlowja16:59
*** jaosorior has quit IRC17:02
*** andreykurilin__ has quit IRC17:03
*** jungleboyj has joined #openstack-oslo17:05
*** subscope_ has quit IRC17:07
*** subscope_ has joined #openstack-oslo17:07
*** salv-orlando has joined #openstack-oslo17:07
*** russellb has quit IRC17:17
*** russellb has joined #openstack-oslo17:21
*** sdake has joined #openstack-oslo17:24
*** ChuckC has joined #openstack-oslo17:26
*** sdake has quit IRC17:29
*** sdake has joined #openstack-oslo17:30
-openstackstatus- NOTICE: gerrit is restarting to clear hung stream-events tasks. any review events between 16:48 and 17:32 utc will need to be rechecked or have their approval votes reapplied to trigger testing in zuul  17:32
*** russellb has quit IRC17:35
*** russellb has joined #openstack-oslo17:40
*** david-lyle has quit IRC17:42
*** yamahata has joined #openstack-oslo17:50
*** achanda has joined #openstack-oslo17:51
*** e0ne has joined #openstack-oslo17:59
*** yamahata has quit IRC18:03
<openstackgerrit> Joshua Harlow proposed openstack/tooz: Have run_watchers take a timeout and respect it  https://review.openstack.org/176417  18:09
*** alexpilotti has quit IRC18:25
*** yamahata has joined #openstack-oslo18:33
*** yamahata has quit IRC18:40
*** _amrith_ is now known as amrith18:43
*** bkc has joined #openstack-oslo18:47
*** stevemar has joined #openstack-oslo18:50
*** pblaho_ has joined #openstack-oslo19:02
*** pblaho has quit IRC19:05
*** pblaho__ has joined #openstack-oslo19:10
<openstackgerrit> Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions  https://review.openstack.org/176439  19:10
*** pblaho_ has quit IRC19:10
<openstackgerrit> Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions  https://review.openstack.org/176439  19:11
<openstackgerrit> Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions  https://review.openstack.org/176439  19:11
*** stevemar has quit IRC19:15
<openstackgerrit> Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions  https://review.openstack.org/176439  19:17
<sdague> ok, back now for a bit  19:17
<sdague> cinder didn't ever take the patch in question, and it fails a lot less than keystone  19:17
*** achanda has quit IRC19:22
*** jaypipes has quit IRC19:24
*** amrith is now known as _amrith_19:25
*** ozamiatin has joined #openstack-oslo19:25
<bknudson> give me a few minutes and I'll try my nc test on cinder.  19:28
*** achanda has joined #openstack-oslo19:31
<sdague> bknudson: also, do you have a narrower test case for this that could be put into the oslo tree for testing?  19:31
<sdague> as you have apparently gotten closer to the heart of it  19:32
<bknudson> sdague: I'm sure I could come up with a test case, but I also expect it would take a while... I'd probably have to fork&exec an eventlet server and send it a signal.  19:33
<bknudson> and open a tcp connection to it.  19:33
<sdague> bknudson: take a while to run, or to write?  19:34
<bknudson> take a while to write.  19:34
<sdague> yeh, so given that this code in oslo was revert / revert / revert it feels like actually writing the correct test case would be really useful  19:35
<sdague> because otherwise, someone is going to tweak something again in the future in the name of optimization, and break it  19:35
<bknudson> I agree. tests are always good.  19:35
<sdague> bknudson: don't get me wrong, I don't want the test to hold up the fix  19:36
<sdague> I just want a promise of the test coming as soon as possible  19:36
<bknudson> I can work on it.  19:36
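A rough standalone sketch of the reproduction/test bknudson describes: start an eventlet-based service as a subprocess, hold an idle client connection open (the `nc` trick), send SIGTERM, and assert it still stops. `server.py` and the port are placeholders, not the real oslo test harness:

    import signal, socket, subprocess, time

    proc = subprocess.Popen(['python', 'server.py'])      # placeholder eventlet service
    time.sleep(2)                                         # crude wait for it to bind

    conn = socket.create_connection(('127.0.0.1', 8776))  # idle client, like `nc`
    proc.send_signal(signal.SIGTERM)

    deadline = time.time() + 10
    while proc.poll() is None and time.time() < deadline:
        time.sleep(0.2)

    conn.close()
    assert proc.poll() is not None, 'service did not stop while a client was connected'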
*** nkrinner has joined #openstack-oslo19:37
<sdague> thanks  19:40
<ttx> sounds worthwhile. I don't really like critical issues detected at D-8  19:40
* dhellmann catches up with sdague, ttx, bknudson  19:41
<sdague> morganfainberg: so... who's going to approve - https://review.openstack.org/#/c/176392/ ?  19:41
<sdague> oh, that's blocked on oslo change landing?  19:42
<dhellmann> sorry, I've been fighting with the expense report system for the last 45 min or so, and am just catching up with our status  19:42
<morganfainberg> sdague, yes.  19:42
<morganfainberg> sdague, i'll approve it as soon as oslo change lands.  19:42
<dhellmann> if we're confident of this change fixing things, I can approve the oslo side  19:42
<morganfainberg> and we are happy with the change, that is.  19:42
<dhellmann> sdague: it looks like your patch works with bknudson's patch, if I'm looking at https://review.openstack.org/#/c/175391/ correctly  19:43
<sdague> bknudson seems to have the narrower manual reproduce, and likes this, so given the time crunch, I'd say let's do it, and get some tests together for it later  19:43
<dhellmann> ++  19:43
<dhellmann> +2a on https://review.openstack.org/#/c/176151/  19:44
<sdague> dhellmann: yeh, I hit recheck again to give us another round of test results  19:44
*** stevemar has joined #openstack-oslo19:44
<dhellmann> sdague: sounds good  19:44
<morganfainberg> getting keystone master one +2a  19:44
<morganfainberg> right now.  19:44
<dhellmann> fwiw, your patch had passed before with the *other* fix, so I don't know if either is actually related :-/  19:44
<bknudson> btw - I was able to recreate this with cinder... "nc localhost 8776" and then c-api fails to stop.  19:45
<dhellmann> that is, your patch may not have been as good a test case as we thought, sdague  19:45
<sdague> dhellmann: well the other patch puts it back to stable/juno code, which may still have had a race  19:45
<sdague> but narrower  19:45
<dhellmann> yeah  19:45
<sdague> dhellmann: well, my patch was the thing that found this the first time, and was what was blocked  19:45
<bknudson> you should be able to check the log to see if there were any connections open to keystone when it was trying to stop.  19:45
<morganfainberg> sdague, dhellmann, ttx, https://review.openstack.org/#/c/176392/ i don't mind +A this, but i don't have a second stable to +A atm.  19:45
<morganfainberg> s/+A/+2  19:45
<bknudson> since we added that bit.  19:45
<ttx> on it  19:46
<morganfainberg> ttx, thnx  19:46
<dhellmann> sdague: yeah, that's what I thought, but it seems odd that 2 different fixes cleared it up so I'm less confident of either as the "right" fix  19:46
<ttx> hmm, do we need to revert the revert of the revert of the revert on oslo-incub ?  19:46
<morganfainberg> ttx, i .. how many revert of reverts is that?  19:47
<dhellmann> ttx: as things stand we do not have the optimization in place, and I think the optimization is a little suspicious, so I'm comfortable leaving it out  19:47
<ttx> hm, ok, but we would keep it out of stable/kilo, or in ?  19:48
<ttx> i.e. what do we do with https://review.openstack.org/#/c/175859/ ?  19:48
<morganfainberg> interesting... stable/kilo ACLs aren't as locked down as stable/juno  19:49
<openstackgerrit> Merged openstack/oslo-incubator: service child process normal SIGTERM exit  https://review.openstack.org/176151  19:49
<dhellmann> ttx: we should make the service module in keystone stable/kilo match the one in oslo-incubator master  19:49
<ttx> morganfainberg: that's because it's pre-release stable/kilo  19:49
<bknudson> I don't know how up-to-date cinder is with oslo-incubator... seems to not be up to date.  19:49
<dhellmann> I don't know what that means about the specific patches we have open, or if bknudson's sync took care of it  19:49
<morganfainberg> ah  19:49
<stevemar> dhellmann, there's a patch already? https://review.openstack.org/#/c/176391/  19:49
* ttx is already tired of explaining why stable/kilo is different from stable/*  19:50
<dhellmann> bknudson: someone from the cinder team is going to need to get involved in submitting and approving patches  19:50
<ttx> I already regret proposed/*  19:50
<stevemar> err https://review.openstack.org/#/c/176392/1  19:50
<morganfainberg> ttx, you could have just ignored it :P  19:50
<stevemar> thanks ttx :P  19:50
<dhellmann> stevemar: that's master, we want stable/kilo to match too  19:50
<ttx> at least nobody was asking why it was special. And nobody was pushing random patches to it  19:50
<morganfainberg> ttx, it didn't matter to me [i assumed it was the same]  19:50
<stevemar> dhellmann, bad copy pasta, see other msg  19:50
<stevemar> dhellmann, it's merging now :)  19:50
<dhellmann> stevemar: cool  19:51
<morganfainberg> ttx: ok so that is the last outstanding bug for kilo rc2; doing a 2nd check over new bugs  19:52
<morganfainberg> making sure nothing else came in  19:52
<ttx> morganfainberg: we might want https://review.openstack.org/#/c/175859 if bknudson's patch doesn't include it  19:52
<morganfainberg> bknudson, does yours include that? ^  19:53
<morganfainberg> i think it did.  19:53
<bknudson> my patch to keystone included everything in oslo-incubator  19:53
<ttx> looks like it does  19:53
<morganfainberg> ok we're good then  19:53
<bknudson> so if it's merged into oslo-incubator master then it's in keystone  19:53
<ttx> ok, so we can abandon the other one  19:53
<morganfainberg> ttx: no new bugs that look like RC blockers. i'll slate getting request-ids into the context for logging as a backport but it's a much larger change for us than other projects.  19:54
*** openstackgerrit has quit IRC19:54
<ttx> morganfainberg: RC2 will have to wait for requirements syncs to be reenabled so we merge the last ones  19:54
*** openstackgerrit has joined #openstack-oslo19:55
<morganfainberg> ttx: once that merges to stable/kilo i'm happy with RC2, so when reqs lands we're set  19:55
<bknudson> I tried my oslo-incubator patch in cinder and it has the same effect -- exits on SIGINT even when there's a client connected.  19:55
<ttx> bknudson: so we might want the same patch there,  19:56
<ttx> ?  19:56
<bknudson> y, if you're willing to take it.  19:57
<bknudson> I'm not going to try to do a sync of the whole part.  19:57
<bknudson> my patch is only like 5 lines so it's easy to make the change.  19:57
<ttx> bah we'll handle the sync if thingee is fine with it  19:58
*** subscope_ has quit IRC19:59
<bknudson> https://review.openstack.org/#/c/176455/ -- cinder change in master  19:59
<dhellmann> ttx: I'm going to abandon https://review.openstack.org/#/c/176082/ as not needed  20:01
<bknudson> cherry-picked to stable/kilo -- https://review.openstack.org/#/c/176457/  20:01
<ttx> dhellmann: you fine with cherrypicking the change in cinder rather than wholesale sync ?  20:02
<dhellmann> ttx: yeah, at this point let's fix the bug  20:03
<ttx> ok  20:03
<dhellmann> this code is up for graduation next cycle  20:03
<dhellmann> assuming we can find someone to own it  20:03
<dhellmann> we're also talking about deprecating it in favor of supervisord  20:03
*** sputnik13 has joined #openstack-oslo20:11
*** e0ne has quit IRC20:12
*** achanda has quit IRC20:12
*** sputnik13 has quit IRC20:32
*** sputnik13 has joined #openstack-oslo20:34
*** stevemar has quit IRC20:41
*** stevemar has joined #openstack-oslo20:45
*** achanda has joined #openstack-oslo20:53
*** ozamiatin has quit IRC20:57
*** nkrinner has quit IRC21:00
*** kgiusti has left #openstack-oslo21:07
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Retain chain of missing dependencies  https://review.openstack.org/176148  21:10
*** stevemar has quit IRC21:13
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Add + use diagram explaining retry controller area of influence  https://review.openstack.org/176496  21:22
*** openstackgerrit has quit IRC21:29
*** openstackgerrit has joined #openstack-oslo21:30
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Add + use diagram explaining retry controller area of influence  https://review.openstack.org/176496  21:32
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Add + use diagram explaining retry controller area of influence  https://review.openstack.org/176496  21:33
*** yamahata has joined #openstack-oslo21:34
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Test more engine types in argument passing unit test  https://review.openstack.org/176502  21:42
*** sdake has quit IRC21:44
*** yamahata has quit IRC21:46
*** bknudson has quit IRC21:46
<openstackgerrit> Joshua Harlow proposed openstack/taskflow: Expose fake filesystem 'join' and 'normpath'  https://review.openstack.org/176515  21:56
*** cdent has quit IRC21:56
*** ndipanov has quit IRC21:56
*** andreykurilin__ has joined #openstack-oslo22:05
*** bknudson has joined #openstack-oslo22:05
*** bkc__ has joined #openstack-oslo22:08
*** bkc has quit IRC22:08
*** bkc__ has quit IRC22:11
*** openstackgerrit has quit IRC22:11
*** bkc__ has joined #openstack-oslo22:11
*** openstackgerrit has joined #openstack-oslo22:11
*** jaypipes has joined #openstack-oslo22:20
*** hogepodge has quit IRC22:27
*** sdake has joined #openstack-oslo22:32
*** hogepodge has joined #openstack-oslo22:32
<lifeless> zigo: https://bugs.launchpad.net/pbr/+bug/1419860 <- could you update please  22:36
<openstack> Launchpad bug 1419860 in PBR "Folders inside packages aren't installed" [Undecided,Incomplete]  22:36
<lifeless> zigo: and https://bugs.launchpad.net/pbr/+bug/1405101  22:37
<openstack> Launchpad bug 1405101 in PBR "oslo.concurrency doesn't get openstack/common installed correctly" [Undecided,Incomplete]  22:37
<zigo> lifeless: As far as I can remember, no, it wasn't fixed.  22:37
<zigo> (I mean #1419860)  22:37
<zigo> Updating them both.  22:37
<lifeless> zigo: it would help to have non-debbuild reproduction instructions  22:38
<lifeless> zigo: if that's possible  22:38
<zigo> lifeless: I have no idea how to do that !  22:38
<zigo> However, I wrote instructions for upstreams on how to recreate the Debian build env which I use.  22:38
<zigo> http://openstack.alioth.debian.org/  22:39
<lifeless> zigo: if that link isn't in the bugs, please add it.  :)  22:39
<zigo> Will do.  22:39
<lifeless> zigo: (I've closed the tabs for now, will be coming back to it in a bit)  22:39
*** pradk has joined #openstack-oslo22:41
*** gordc has quit IRC22:47
*** andreykurilin__ has quit IRC22:59
*** ChuckC has quit IRC23:00
*** stevemar has joined #openstack-oslo23:01
*** jd__ has quit IRC23:08
*** jd__ has joined #openstack-oslo23:08
*** ajo has quit IRC23:10
*** ChuckC has joined #openstack-oslo23:21
*** david-lyle has joined #openstack-oslo23:24
*** ChuckC has quit IRC23:25
*** ChuckC has joined #openstack-oslo23:25
*** mriedem is now known as mriedem_away23:26
*** amotoki has quit IRC23:27
*** sigmavirus24 is now known as sigmavirus24_awa23:27
*** sputnik13 has quit IRC23:29
*** sdake has quit IRC23:39
*** jecarey has quit IRC23:47
* morganfainberg looks at the gate23:53
*** tsekiyam_ has joined #openstack-oslo23:57
*** tsekiyama has quit IRC23:59
