Tuesday, 2015-04-21

00:05 *** bknudson has joined #openstack-oslo
00:15 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use states.check_task/retry_transition in action(s)  https://review.openstack.org/175611
00:16 *** achanda has joined #openstack-oslo
00:19 *** sputnik13 has quit IRC
00:19 *** achanda_ has quit IRC
00:19 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use states.check_task/retry_transition in action(s)  https://review.openstack.org/175611
00:21 *** achanda has quit IRC
00:22 *** tsekiyam_ has joined #openstack-oslo
00:22 *** openstackgerrit has quit IRC
00:22 *** openstackgerrit has joined #openstack-oslo
00:24 *** tsekiya__ has joined #openstack-oslo
00:24 *** tsekiyam_ has quit IRC
00:24 *** tsekiya__ has quit IRC
00:26 *** tsekiyama has quit IRC
00:36 *** openstack has joined #openstack-oslo
00:37 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use states.check_task/retry_transition in action(s)  https://review.openstack.org/175611
00:59 *** browne has quit IRC
01:01 *** zzzeek has quit IRC
01:02 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use states.check_task/retry_transition in action(s)  https://review.openstack.org/175611
01:28 *** BrianShang_ has joined #openstack-oslo
01:31 *** BrianShang has quit IRC
01:42 *** browne has joined #openstack-oslo
01:45 *** jecarey has joined #openstack-oslo
02:16 *** liusheng has quit IRC
02:29 *** sputnik13 has joined #openstack-oslo
02:35 *** achanda has joined #openstack-oslo
02:35 *** harlowja is now known as harlowja_away
02:37 *** harlowja_away is now known as harlowja
02:39 *** stevemar has joined #openstack-oslo
02:46 *** sputnik1_ has joined #openstack-oslo
02:46 *** exploreshaifali has quit IRC
02:47 *** exploreshaifali has joined #openstack-oslo
02:48 *** sputnik13 has quit IRC
03:30 *** exploreshaifali has quit IRC
03:41 *** achanda has quit IRC
03:45 *** jecarey has quit IRC
03:56 *** harlowja is now known as harlowja_away
04:17 *** achanda has joined #openstack-oslo
04:23 *** sputnik1_ has quit IRC
04:24 *** amotoki has joined #openstack-oslo
04:48 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Add a conductor running example  https://review.openstack.org/129412
05:01 *** flaper87 has quit IRC
05:04 *** flaper87 has joined #openstack-oslo
05:21 *** nkrinner has joined #openstack-oslo
05:37 *** dmn has joined #openstack-oslo
05:40 <dmn> Hi, I'm trying to make use of oslo.db for an openstack project but I find that there are no examples given in the documentation ( http://docs.openstack.org/developer/oslo.db). So Is there any other link that I might be missing where I can find some examples?
05:50 *** arnaud___ has joined #openstack-oslo
05:52 *** achanda has quit IRC
05:57 *** achanda has joined #openstack-oslo
06:01 *** inc0 has joined #openstack-oslo
06:03 *** achanda has quit IRC
06:29 *** salv-orlando has quit IRC
06:30 *** dmn has quit IRC
06:34 *** jamielennox is now known as jamielennox|away
06:44 *** dmn has joined #openstack-oslo
06:53 *** Kennan has quit IRC
06:54 *** Kennan has joined #openstack-oslo
07:00 *** arnaud___ has quit IRC
07:01 *** arnaud___ has joined #openstack-oslo
07:01 *** arnaud___ has quit IRC
07:06 *** browne has quit IRC
07:08 *** SridharGaddam has joined #openstack-oslo
07:11 *** pcaruana has quit IRC
07:22 *** jaosorior has joined #openstack-oslo
07:23 *** flaper87 has quit IRC
07:23 *** flaper87 has joined #openstack-oslo
07:36 *** shardy_z is now known as shardy
07:43 *** jamielennox|away is now known as jamielennox
07:45 *** haypo has joined #openstack-oslo
07:49 *** stevemar has quit IRC
07:51 <haypo> oslo messaging now has a non-voting python 3.4 gate: https://review.openstack.org/#/c/172221/
07:51 <haypo> please report me any issue because i plan to make it voting later
07:52 <haypo> oh. i posted a new patch set yesterday, but i don't see the python34 gate :-( https://review.openstack.org/#/c/172135/
07:52 <haypo> oops, nevermind, it's at the end, and... it failed
08:02 *** arnaud___ has joined #openstack-oslo
08:06 *** arnaud___ has quit IRC
08:10 <openstackgerrit> Victor Stinner proposed openstack/oslo.messaging: Fix test_matchmaker_redis on Python 3  https://review.openstack.org/175759
08:23 <openstackgerrit> Merged openstack-dev/pbr: Revert "Support platform-specific requirements files"  https://review.openstack.org/174646
08:24 <openstackgerrit> Victor Stinner proposed openstack/oslo.messaging: Enable eventlet dependency on Python 3  https://review.openstack.org/172135
08:26 <haypo> jd__, hi. to enable eventlet in oslo messaging on python3, i must add futures dependency, whereas futures is *not* needed on python3 (concurrent.futures is part of the stdlib since python 3.2). https://review.openstack.org/#/c/172135/6/requirements-py3.txt
08:26 <haypo> jd__, it's an issue in tox, https://bitbucket.org/hpk42/tox/issue/236/tox-must-create-the-source-distribution
08:26 <jd__> haypo: so?
08:27 <haypo> jd__, on the issue, we told me that it's not a bug /o\
08:27 <haypo> jd__, do you think that it would be possible to workaround this issue? maybe fix pbr to support environment markers?
08:28 <jd__> haypo: I'm surprised, why do you need the requirements to be install to build a source tarball?
08:30 *** sirushti has joined #openstack-oslo
08:30 *** ozamiatin has joined #openstack-oslo
08:31 *** ihrachyshka has joined #openstack-oslo
08:32 <haypo> jd__, i don't understand everything. why i see is that oslo.messaging.egg-info/requires.txt is created during the creation of the source distribution
08:32 <haypo> and the failing unit test uses this file
08:33 <haypo> jd__, for example, running "python setup.py install" in .tox/py34 creates oslo.messaging.egg-info/requires.txt and so works around the issue
08:33 <jd__> I see
08:33 <jd__> yeah the thing is that requires.txt is generated from intall_requires
08:34 <jd__> which depends whether you are running python 2 or 3 etc
08:34 <jd__> so yeah env markers are the solution once again
08:34 *** e0ne has joined #openstack-oslo
08:35 <haypo> jd__, your patch removed env markers from install_requires no?
08:36 <haypo> maybe we keep survive a few months with futures in requirements-py3.txt until the issue is fixed :-p
08:36 <haypo> we can* survive
08:43 <jd__> yeah because setuptools does not recognize them
08:44 <jd__> so one has to fix it AFAIU
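The environment-marker approach being discussed would let one requirements list serve both interpreters instead of maintaining a separate requirements-py3.txt. A minimal sketch of what that could look like, assuming a setuptools/pbr combination that actually evaluates markers (the project name, versions, and marker-key spelling here are illustrative, not taken from oslo.messaging):

    # setup.py sketch: declare the futures backport only for Python 2, since
    # concurrent.futures is in the stdlib from Python 3.2 onwards.
    import setuptools

    setuptools.setup(
        name='example-lib',                      # hypothetical project
        version='0.1.0',
        install_requires=['six>=1.9.0'],         # needed on both 2 and 3
        extras_require={
            # ":marker" keys are the setuptools/wheel spelling of an
            # environment marker for conditional dependencies
            ':python_version=="2.7"': ['futures>=2.1.6'],
        },
    )

Until that works end to end, the interim workaround haypo describes is simply leaving futures listed in requirements-py3.txt even though Python 3 never imports it.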
08:47 <haypo> sirushti, Steap : i updated https://wiki.openstack.org/wiki/Python3#OpenStack_applications to mention your python3 specs for heat&neutron, and that i'm working on a spec for nova
08:48 <haypo> sirushti, Steap is cyril who wrote the neutron spec for python3
08:48 <haypo> i plan to copy his spec for nova :-) right now, i'm trying to fix most obvious syntax issues just to check that it's possible to run at least a single nova test ;)
08:54 *** f13o has joined #openstack-oslo
09:03 *** arnaud___ has joined #openstack-oslo
09:07 *** arnaud___ has quit IRC
09:08 *** ozamiatin has quit IRC
09:21 <dmn> Hi, I'm trying to make use of oslo.db for an openstack project but I find that there are no examples given in the documentation ( http://docs.openstack.org/developer/oslo.db). So Is there any other link that I might be missing where I can find some examples?
09:24 <sirushti> haypo, thanks. Yeah, I used flake8/2to6 and got a bunch of tests running, not the important ones anyway ;) and btw, hello Steap :)
09:25 <haypo> sirushti, i'm trying to release a new version of Paste, but there are a lot of failing tests even on python 2.7!?
09:25 <openstackgerrit> Merged openstack/oslo.concurrency: Add binary parameter to execute and ssh_execute  https://review.openstack.org/171710
09:30 <sirushti> haypo, not sure!
09:31 <sirushti> haypo, I'll check it out a bit later!
09:32 *** dmn has quit IRC
09:37 *** alexpilotti has quit IRC
09:42 *** salv-orlando has joined #openstack-oslo
09:59 *** cdent has joined #openstack-oslo
10:02 *** e0ne is now known as e0ne_
10:04 *** e0ne_ is now known as e0ne
10:05 *** boris-42 has quit IRC
10:08 *** boris-42 has joined #openstack-oslo
10:12 *** yamahata has quit IRC
10:33 *** e0ne is now known as e0ne_
10:34 *** e0ne_ is now known as e0ne
10:52 *** arnaud___ has joined #openstack-oslo
10:55 *** _amrith_ is now known as amrith
10:56 *** arnaud___ has quit IRC
11:04 *** dmn has joined #openstack-oslo
11:05 <sdague> stable/kilo services no longer shut down, they seem to lose track of children
11:06 <sdague> it's repeatable enough that we can no longer do upgrade testing
11:06 <sdague> I'm looking at changes in oslo-incubator service.py that changed since juno and this looks like the patch that's in play for the services that have failed so far - https://review.openstack.org/#/c/156345/
11:08 <openstackgerrit> Ryan Moore proposed openstack/oslo.config: Print name of invalid option in error message  https://review.openstack.org/172066
11:10 *** f13o has quit IRC
11:28 <sdague> ok, so this is another cross cutting bug that I think we need to address for the release - https://bugs.launchpad.net/oslo-incubator/+bug/1446583
11:28 <openstack> Launchpad bug 1446583 in oslo-incubator "services no long reliably stop in stable/kilo" [Critical,New]
11:41 *** alexpilotti has joined #openstack-oslo
11:54 *** kgiusti has joined #openstack-oslo
12:08 *** dmn has quit IRC
12:10 *** dmn has joined #openstack-oslo
12:14 *** dmn has quit IRC
12:18 *** salv-orlando has quit IRC
12:21 *** salv-orlando has joined #openstack-oslo
12:28 *** dmn has joined #openstack-oslo
12:34 *** gordc has joined #openstack-oslo
12:38 <dhellmann> dmn: there are some very basic examples in http://docs.openstack.org/developer/oslo.db/usage.html but there isn't a lot of detail there. You may have better luck looking at the code for a project that is using the library already, unfortunately. We're still working on improving the documentation for all of the libraries.
12:40 *** arnaud___ has joined #openstack-oslo
12:41 <dhellmann> sdague: so you think patch 156345 is causing bug 1446583?
12:41 <openstack> bug 1446583 in oslo-incubator "services no long reliably stop in stable/kilo" [Critical,New] https://launchpad.net/bugs/1446583
12:41 <dhellmann> sdague: if so, do you want to propose a simple revert?
12:42 <sdague> honestly, I don't know
12:42 <sdague> there were a lot of changes in service.py over the course of the release around signal handling
12:43 <sdague> https://github.com/openstack/oslo-incubator/commits/master/openstack/common/service.py
12:44 <dmn> dhellmann: What exactly does oslo.db do ? Does it do anything else apart from db connectivity?
12:44 <dhellmann> yeah, I know we had a few patches related to making SIGHUP work more usefully
12:44 <sdague> https://github.com/openstack/oslo-incubator/commit/442fc2234e8500fb11633d8d8ee7dfe5492fa4a3 is suspicious, but it's not used anywhere as far as I know
12:44 *** arnaud___ has quit IRC
12:45 <dhellmann> dmn: it does have some other features like helping with database migrations, but it is mostly meant to give you a connection using the standard configuration options and then you use the sqlalchemy API directly
12:45 <sdague> https://github.com/openstack/oslo-incubator/commit/bf92010cc9d4c2876eaf6092713aafa94dcc8b35 seems to be the last patch that merged before the projects synced
12:45 <sdague> Feb 20th is when keystone did a sync
12:46 <sdague> the fact that it was reverted, then reverted reverted also raises eyebrows a bit
12:46 <dhellmann> yeah, I'm just noticing that in the git logs
12:47 <dmn> dhellmann: oh, okay. Thanks :)
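A bare-bones sketch of the pattern dhellmann describes above: oslo.db builds the engine and session from the standard [database] configuration options, and the queries themselves are plain SQLAlchemy. The table, values, and in-memory SQLite URL are made up for illustration; treat the exact calls as a sketch rather than authoritative documentation.

    # Hedged example: wire oslo.db from oslo.config, then use SQLAlchemy directly.
    from oslo_config import cfg
    from oslo_db import options as db_options
    from oslo_db.sqlalchemy import session as db_session
    import sqlalchemy as sa

    CONF = cfg.CONF
    # Normally [database]/connection comes from the service's config file;
    # hard-code an in-memory SQLite URL so the sketch is self-contained.
    db_options.set_defaults(CONF, connection='sqlite://')

    facade = db_session.EngineFacade.from_config(CONF)
    engine = facade.get_engine()

    metadata = sa.MetaData()
    widgets = sa.Table('widgets', metadata,
                       sa.Column('id', sa.Integer, primary_key=True),
                       sa.Column('name', sa.String(64)))
    metadata.create_all(engine)

    session = facade.get_session()
    with session.begin():
        session.execute(widgets.insert().values(name='example'))
    print(session.execute(widgets.select()).fetchall())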
12:47 <dhellmann> jd__: ping
12:47 <jd__> dhellmann: o/
12:48 <dhellmann> jd__: are you familiar with the changes to service.py this cycle? ^^
12:48 *** amotoki has quit IRC
12:48 <jd__> dhellmann: maybe let me read the backlog
12:48 <dhellmann> bnemec: ping ^^
12:49 *** prad has quit IRC
12:50 <jd__> dhellmann: if I had to bet I'd bet on https://review.openstack.org/#/c/156345/
12:51 <dhellmann> jd__: yeah, that one has been reverted and then restored, so I agree
12:51 <jd__> yeah we reverted because neutron was doing bad things and that didn't work
12:51 <jd__> IIRC
12:51 <jd__> so does this impact more projects than Neutron only?
12:51 <dhellmann> yeah, that's what the message says
12:52 <dhellmann> cinder and keystone
12:52 <jd__> damn…
12:52 <dhellmann> sdague: what's a good way to test fixing the problem? is this reliably repeatable in the gate?
12:53 *** bknudson has quit IRC
12:53 <sdague> well, it's been popping up in https://review.openstack.org/#/c/175391/ quite often (often enough that it's not able to pass 4 grenade jobs needed to merge)
12:53 <dhellmann> jd__: I'm thinking we just revert that optimization for good and live with the sleep
12:53 <jd__> dhellmann: we did that already but I think we brought back in after Neutron updated
12:54 <sdague> jd__: what was the neutron issue?
12:54 <dhellmann> sdague: ok, so if we propose a revert in the incubator, and then sync that into cinder and keystone with depends-on for the incubator change, that should give us a test?
12:54 <sdague> dhellmann: I think so
12:54 <dhellmann> neutron had more than one processlauncher instance in their top-level app
12:54 <dhellmann> apparently the class is meant to be a singleton
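In other words, the incubator's service.py expects one ProcessLauncher per process, with every service registered on it; creating several launchers (what the neutron bug below describes) leaves each one trying to reap a different set of children, which is what breaks clean shutdown. A rough sketch of the intended shape, with a hypothetical service class and an illustrative import path (the incubator code is copied into each project, so the module path varies):

    # Sketch of the "single ProcessLauncher" pattern expected by service.py.
    from myproject.openstack.common import service  # hypothetical incubator copy

    class HeartbeatService(service.Service):
        """Hypothetical minimal service; relies on the base start/stop/wait."""

    def main():
        launcher = service.ProcessLauncher()           # exactly one per process
        launcher.launch_service(HeartbeatService(), workers=2)
        launcher.launch_service(HeartbeatService(), workers=1)
        launcher.wait()                                # reaps children, handles signals

    if __name__ == '__main__':
        main()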
12:54 <jd__> sdague: https://bugs.launchpad.net/neutron/+bug/1438321
12:54 <openstack> Launchpad bug 1438321 in neutron "Fix process management for neutron-server" [Low,In progress] - Assigned to Elena Ezhova (eezhova)
12:55 <sdague> so, before doing that, can someone look at keystone and see if it's got the same issue?
12:55 <dhellmann> jd__: do you have time to set up the revert? I told ttx I would work on releasing liberty libraries to unwedge the master requirements thaw
12:55 <sdague> to know if this is actually the same issue, or a different issue with the code
12:55 <jd__> dhellmann: revert in oslo-incubator? I can do that
12:56 <dhellmann> jd__: revert in the incubator, then sync into cinder and keystone with depends-on set to the incubator patch
12:56 <dhellmann> jd__: sync into cinder and keystone in stable/kilo
12:56 <jd__> dhellmann: ok I'll do that
12:56 <jd__> dhellmann: sdague: are they bug number on keystone and cinder that I can use?
12:57 <jd__> s/they/there/
12:57 <sdague> jd__: yes, https://bugs.launchpad.net/oslo-incubator/+bug/1446583
12:57 <openstack> Launchpad bug 1446583 in oslo-incubator "services no long reliably stop in stable/kilo" [Critical,New]
12:57 <ttx> sdague: It's likely to affect more than just cinder and keystone, right
12:58 <sdague> ttx: maybe, those are the only 2 I've seen it on so far
12:58 <dhellmann> let's start by addressing the services we know have the problem and then expand if we have to
12:58 <ttx> ack
12:58 <openstackgerrit> Julien Danjou proposed openstack/oslo-incubator: Revert "Revert "Revert "Optimization of waiting subprocesses in ProcessLauncher"""  https://review.openstack.org/175851
12:59 <ttx> I'd just rather make sure we spread that one during the RC2 timeframe rather than next week
12:59 <dhellmann> jd__: I assigned the bug to you for oslo-incubator
12:59 <dhellmann> ttx: sure, if this fixes the issue in cinder and keystone we can encourage adoption in the other projects, too
12:59 <jd__> dhellmann: https://review.openstack.org/#/c/175851/ should be the first step
12:59 <dhellmann> jd__: thanks
13:00 <dhellmann> jd__: can you add links to the changes in the bug report? I did ^^ but as you submit to the other projects it would be good to have them there for easier tracking
13:00 * dhellmann steps out to get breakfast
13:00 <jd__> sure
13:00 *** stpierre has joined #openstack-oslo
13:00 <dhellmann> jd__: thanks
13:03 *** inc0 has quit IRC
13:04 *** joesavak has joined #openstack-oslo
13:04 *** e0ne is now known as e0ne_
13:04 *** amrith is now known as _amrith_
13:05 <jd__> dhellmann: sdague: do I miss something or this optimization is not present in Cinder? https://github.com/openstack/cinder/blob/stable/kilo/cinder/openstack/common/service.py#L337
13:06 <dhellmann> jd__: I wonder if it was already reverted?
13:06 <dhellmann> maybe we're looking at the wrong patch, maybe it's the signal handling for config reloading that is causing the issue
13:06 <jd__> dhellmann: AFAICS it never went in for Cinder
13:06 *** e0ne_ is now known as e0ne
13:06 *** salv-orlando has quit IRC
13:07 <jd__> so we maybe looking at the wrong patch indeed
13:07 <jd__> Keystone has it though
13:16 *** e0ne is now known as e0ne_
13:16 *** bknudson has joined #openstack-oslo
13:17 *** dmn has quit IRC
13:18 <jd__> dhellmann: I'm pausing until we have the real culprit as it seems too fuzzy to me right now, I've commented on the bug
13:18 <dhellmann> jd__: ok. do you have time to dig into the keystone issue today? I'm going to be swamped with releases and dims is out this week. bnemec probably won't be online for another hour or two
13:19 <jd__> dhellmann: I'll take a look if nobody beat me to it later this (my) afternoon yeah
13:20 <dhellmann> jd__: ok, thanks
13:21 *** e0ne_ is now known as e0ne
13:21 *** jungleboyj has joined #openstack-oslo
13:22 *** tedross has joined #openstack-oslo
13:25 <sdague> jd__: it looks like it's not in cinder, that being said, the keystone-all fail seems to happen a lot more often, so the cinder fail might be unrelated additional race
13:25 *** viktors|afk is now known as viktors
13:26 *** jungleboyj has quit IRC
13:26 *** e0ne is now known as e0ne_
13:26 *** e0ne_ is now known as e0ne
13:28 <jd__> sdague: ok, interesting :) I'll do a revert for Keystone then and we'll see
13:28 <sdague> http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9
13:29 *** prad has joined #openstack-oslo
13:29 <sdague> there seem to be a few nova-conductor fails on unrelated jobs as well in this way, which might be the same race as the cinder one. Who knows, so many races :)
13:36 *** mriedem_away is now known as mriedem
13:39 *** zzzeek has joined #openstack-oslo
13:40 *** tedross has left #openstack-oslo
13:40 *** EmilienM has quit IRC
13:41 *** EmilienM has joined #openstack-oslo
13:49 *** zz_jgrimm is now known as jgrimm
13:52 *** yamahata has joined #openstack-oslo
13:52 *** salv-orlando has joined #openstack-oslo
13:54 *** amotoki has joined #openstack-oslo
13:54 *** salv-orlando has quit IRC
13:58 *** _amrith_ is now known as amrith
14:00 *** salv-orlando has joined #openstack-oslo
14:00 *** jecarey has joined #openstack-oslo
14:04 *** sigmavirus24_awa is now known as sigmavirus24
14:06 *** mtanino has joined #openstack-oslo
14:13 <bnemec> Weird.  There should have been zero functional changes to service.py in Cinder between Juno and Kilo.
14:20 <openstackgerrit> Merged openstack/oslo.versionedobjects: Copy the default value for field  https://review.openstack.org/168590
14:20 <sdague> bnemec: so, realistically, the cinder fail might be a different bug that's just been around
14:21 <sdague> in keystone this is very noticably worse
14:21 *** sputnik13 has joined #openstack-oslo
14:21 <bnemec> Yeah, the revert seems like the right thing to do.
14:22 <bnemec> That's the only functional change in keystone.
14:24 *** pblaho has quit IRC
14:24 <openstackgerrit> Merged openstack/oslo.versionedobjects: New config group for oslo_versionedobjects  https://review.openstack.org/165942
14:25 *** pblaho has joined #openstack-oslo
14:26 *** amrith is now known as _amrith_
14:27 *** _amrith_ is now known as amrith
14:28 *** sputnik13 has quit IRC
14:29 *** arnaud___ has joined #openstack-oslo
14:30 *** browne has joined #openstack-oslo
14:33 *** arnaud___ has quit IRC
14:35 *** achanda has joined #openstack-oslo
14:43 *** achanda has quit IRC
14:44 *** jecarey has quit IRC
14:46 <dhellmann> harlowja_away: I need to release taskflow from master today. Is it safe to release HEAD, or do you want to pick a SHA?
14:47 <dhellmann> harlowja_away: I need at least 0a94d68 to uncap the requirements
14:57 *** sileht has quit IRC
14:58 *** stevemar has joined #openstack-oslo
15:05 *** jecarey has joined #openstack-oslo
15:11 *** jaypipes has quit IRC
15:16 *** yamahata has quit IRC
15:17 <dstanek> sorry...just catching up. is this a keystone bug?
15:26 <sdague> dstanek: it exposes in keystone
15:26 <sdague> whether it's a keystone vs. oslo service bug I think is still unclear
15:26 *** jsavak has joined #openstack-oslo
15:27 *** dguitarbite has joined #openstack-oslo
15:27 *** arnaud___ has joined #openstack-oslo
15:29 *** joesavak has quit IRC
15:31 *** jungleboyj has joined #openstack-oslo
15:35 *** browne has quit IRC
15:36 *** david-lyle has quit IRC
15:41 *** amotoki has quit IRC
15:41 *** andreykurilin__ has joined #openstack-oslo
15:43 *** amotoki has joined #openstack-oslo
15:45 *** sputnik13 has joined #openstack-oslo
15:53 <openstackgerrit> Doug Hellmann proposed openstack/oslo-incubator: Ignore non-numerical releases in highest_semver.py  https://review.openstack.org/175951
15:57 *** sileht has joined #openstack-oslo
16:01 *** tsekiyama has joined #openstack-oslo
16:02 *** david-lyle has joined #openstack-oslo
16:05 <openstackgerrit> Doug Hellmann proposed openstack/oslo-incubator: Adjust the way we find the set of tags to check for "latest"  https://review.openstack.org/175965
16:14 *** haypo has quit IRC
16:30 *** jaypipes has joined #openstack-oslo
16:30 *** exploreshaifali has joined #openstack-oslo
16:34 *** joesavak has joined #openstack-oslo
16:37 *** jsavak has quit IRC
16:39 *** arnaud___ has quit IRC
16:40 *** exploreshaifali has quit IRC
16:42 <dstanek> sdague: how are you reproducing it?
16:43 <sdague> dstanek: please see the bug
16:43 <sdague> trying to get everything documented in it
16:43 *** browne has joined #openstack-oslo
16:44 <sdague> https://bugs.launchpad.net/oslo-incubator/+bug/1446583
16:44 <openstack> Launchpad bug 1446583 in Keystone "services no longer reliably stop in stable/kilo" [Critical,In progress] - Assigned to Julien Danjou (jdanjou)
16:44 <dstanek> sdague: this one? https://bugs.launchpad.net/oslo-incubator/+bug/1446583
16:45 *** e0ne has quit IRC
16:46 <sdague> yes
16:57 *** harlowja_away is now known as harlowja
16:57 <harlowja> dhellmann HEAD is fine afaik
16:58 <dhellmann> harlowja: cool, thanks
16:58 <harlowja> np
17:00 <elarson> pleia2: so there is a repo now for oslo.cache (https://github.com/openstack/oslo.cache) and I wanted to check in ot see how Xiaoyuan is doing. do you know if there any code just yet?
17:00 * elarson is happy to stay on top of the administrative bits if that is more helpful
17:01 *** joesavak has quit IRC
17:08 *** david-lyle has quit IRC
17:14 <pleia2> elarson: now that she has a handle on how keystone uses dogpile.cache, she's currently looking through other oslo projects incorporate things, so I'm hopeful that with the basic framework you have set up that she'll be able to land some code soon
17:14 <pleia2> I just sent off her the code repo link this morning
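For context, the keystone caching layer that oslo.cache is meant to generalize sits on dogpile.cache regions; a bare-bones sketch of that underlying pattern follows (the backend choice, expiry, and function names are illustrative only — the oslo.cache work is about driving this from oslo.config rather than by hand):

    # Minimal dogpile.cache usage of the kind keystone wraps.
    from dogpile.cache import make_region

    region = make_region().configure(
        'dogpile.cache.memory',       # keystone deployments typically use memcached
        expiration_time=600,
    )

    @region.cache_on_arguments()
    def get_project(project_id):
        # stand-in for an expensive lookup
        return {'id': project_id, 'name': 'demo'}

    get_project('abc')   # computed and cached
    get_project('abc')   # served from the region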
17:26 *** achanda has joined #openstack-oslo
17:28 *** sdake has joined #openstack-oslo
17:33 *** e0ne has joined #openstack-oslo
17:33 *** e0ne is now known as e0ne_
17:34 *** nkrinner has quit IRC
17:34 *** e0ne_ is now known as e0ne
17:38 *** dguitarbite has quit IRC
17:44 *** e0ne is now known as e0ne_
17:45 *** e0ne_ is now known as e0ne
17:47 <harlowja> dhellmann we missed u during yesterday's meeting, was like super-fun
17:47 <harlowja> *super-ultra-fun
17:47 <dhellmann> harlowja: I need to read the logs :-)
17:47 <harlowja> haha
17:47 <harlowja> if u dare
17:50 *** ihrachyshka has quit IRC
17:51 *** e0ne is now known as e0ne_
17:53 <harlowja> holy crap thats alot of library releases, lol
17:56 *** e0ne_ has quit IRC
17:59 *** joesavak has joined #openstack-oslo
18:02 *** jamesllondon has joined #openstack-oslo
18:03 *** cdent has quit IRC
18:04 *** shardy has quit IRC
18:04 *** shardy has joined #openstack-oslo
18:09 *** sdake_ has joined #openstack-oslo
18:10 *** e0ne has joined #openstack-oslo
18:12 *** shardy has quit IRC
18:13 *** sdake has quit IRC
18:14 *** shardy has joined #openstack-oslo
18:15 *** achanda has quit IRC
18:20 *** achanda has joined #openstack-oslo
18:24 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use states.check_task/retry_transition in action(s)  https://review.openstack.org/175611
18:29 *** mriedem has quit IRC
18:33 *** shardy_ has joined #openstack-oslo
18:33 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use states.check_task/retry_transition in action(s)  https://review.openstack.org/175611
18:35 *** mriedem has joined #openstack-oslo
18:36 *** shardy has quit IRC
18:37 *** openstackgerrit has quit IRC
18:37 *** openstackgerrit has joined #openstack-oslo
18:38 *** joesavak has quit IRC
18:39 *** david-lyle has joined #openstack-oslo
18:42 <harlowja> dhellmann i also see 'taskflow 0.6.0' (?) vs what appears to be 0.9.0 ?
18:42 <harlowja> another bug maybe?
18:42 *** joesavak has joined #openstack-oslo
18:43 <harlowja> some of those version numbers seem off :-P
18:43 <harlowja> tooz is also @ 0.15.0 and not 0.8.0
18:43 * harlowja confused, ha
18:45 <dhellmann> harlowja: yeah, blah
18:46 *** ccrouch has quit IRC
18:46 *** ccrouch has joined #openstack-oslo
18:46 *** david-ly_ has joined #openstack-oslo
18:46 *** david-lyle has quit IRC
18:47 <dhellmann> harlowja: yeah, I didn't have the repos updated on my email box like I thought I did
18:47 <harlowja> oh noes
18:47 <harlowja> :-/
18:47 <harlowja> *hope the right versions went out
18:48 <dhellmann> harlowja: the tagging script worked right, just not the email script
18:48 <harlowja> kk
19:04 *** Kiall has joined #openstack-oslo
19:05 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use states.check_task/retry_transition in action(s)  https://review.openstack.org/175611
19:07 *** sdake has joined #openstack-oslo
19:11 *** sdake_ has quit IRC
19:18 *** david-ly_ has quit IRC
19:19 *** shardy_ has quit IRC
19:20 *** shardy has joined #openstack-oslo
19:23 <harlowja> sputnik13 for https://review.openstack.org/#/c/152357/ do u want to try the signature api; or do u think we should do that in a different change?
19:23 <harlowja> it does seem like that signature object could be used more (although maybe in another review?)
19:26 *** jamesllondon has quit IRC
19:34 * krotscheck is keeping tabs of the big thaw. It's kinda like a game of whackamole :)
19:36 *** alexpilotti has quit IRC
19:40 <boris-42> dhellmann: maybe
19:40 <boris-42> mfedosin: it's better to make single email
19:40 <boris-42> dhellmann: ^
19:40 <boris-42> dhellmann: for all libs *
19:41 *** achanda has quit IRC
19:41 <dhellmann> boris-42: we won't normally do so many, but I want to have a separate searchable thread for each release
19:42 <boris-42> dhellmann: ok..
19:45 <krotscheck> In case it's helpful, I updated my patch for increasing oslo_config version in global requirements to use the most recent version, and the various builds have already passed the point where they failed last time.
19:45 <krotscheck> That patch is based on the 'remove caps' patch.
19:46 <dhellmann> krotscheck: that's good news!
19:46 <krotscheck> dhellmann: Wasn't there something that fungi needs to re-enable though? Or will that not impact the requirements build patches?
19:47 <dhellmann> krotscheck: yeah, we're flying without a net right now so we have to put some jobs back in place
19:48 <krotscheck> dhellmann: Actually, is there anything on this list that I can do? I can't promise that I'd be as effective as you, but I can contribute.
19:48 <dhellmann> krotscheck: step 9.6 on https://etherpad.openstack.org/p/the-big-thaw if you want to follow along
19:48 <dhellmann> right now we're waiting for things to land one at a time to make sure we don't get into a bad state
19:49 <dhellmann> krotscheck: so thank you, but having just a couple of people actually making changes will keep us on track for now
19:54 <krotscheck> dhellmann: No worries, let me know how I can help :)
19:54 <dhellmann> krotscheck: if we need another pair of hands before we're done, I'll hit you up
19:55 <krotscheck> dhellmann: You got it
19:55 <sputnik13> harlowja: well if we do it in a different change this one should be abandoned yes?
19:55 <harlowja> thats one way of doing it
19:55 <sputnik13> yeah, I'll take a look at the signature api and update the change to use it
19:56 <boris-42> dhellmann: btw did you see post regarding to voting coverage job
19:57 <boris-42> dhellmann: I believe this is very important for libs
19:57 * harlowja wants coveralls integration :-P
19:58 <harlowja> boris-42 is someone giving u a raise if u get more coverage ;)
19:58 <harlowja> hahah
19:58 <harlowja> j/k
19:58 *** inc0 has joined #openstack-oslo
19:59 <dhellmann> boris-42: I did see it, but have been busy generating my own email today so I didn't have time to get involved in that thread, yet
20:00 <sputnik13> coveralls?
20:00 <boris-42> dhellmann: ok
20:01 <harlowja> sputnik13 http://coveralls.io/
20:01 <harlowja> 'Free for open source'
20:01 <harlowja> *apparently might work with gerrit to (not sure)
20:03 <sputnik13> +1 coveralls
20:03 <sputnik13> http://www.sonarsource.com <-- this is nice too but it's not a managed service like coveralls
20:04 <harlowja> cool
20:04 <harlowja> github has nice integrations, which it'd be supercool if openstack could somehow use
20:05 <harlowja> PR suck, but the integrations are nice :)
20:08 <inc0> dansmith, quick question. I'm revising versionedobjects once more, and to me it seems that if we make an RPC call and it lands on *older* node than it originated, most likely it will raise IncompatibleVersion? Or am I missing something?
20:09 <dansmith> inc0: if you pass an object at 1.5 to a node that only supports 1.2, then it will fail to deserialize, yeah
20:09 <dansmith> inc0: so you could backlevel it first,
20:09 *** andreykurilin__ has quit IRC
20:10 <dansmith> inc0: or do what nova does and bounce it to a service that can do the backporting for you
20:10 <inc0> dansmith, but unless I actually know where it lands, I don't know where it lands
20:10 <dansmith> so in nova, conductor is always the newest node, and that means any node can talk to any other node, and if one receives something too new, it asks conductor to backlevel it
20:10 <inc0> it might be good idea tho to make some sort of config - namely "this is oldest working version currently, backport to it."
20:11 <dansmith> inc0: yep, and we were at one point going to report that information from each service
20:11 <inc0> yeah, but if you'll perform rolling upgrade, you might end up with several conductors with different versions
20:11 <dansmith> inc0: but we have compute nodes that talk to other compute nodes, and migrations can take weeks, so we wanted new nodes to speak new messages between them as soon as possible
20:12 <dansmith> inc0: right, for us right now, all the conductors go at once, but they're stateless so it's easy
20:12 *** kgiusti has left #openstack-oslo
20:12 <inc0> ok, thanks for clarification
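A compact sketch of the two options dansmith lists, using the oslo.versionedobjects API; the object, its fields, and the version numbers are invented purely for illustration:

    # Hedged sketch: backlevel a new object before sending it to an older node.
    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    @base.VersionedObjectRegistry.register
    class Widget(base.VersionedObject):
        VERSION = '1.5'
        fields = {
            'name': fields.StringField(),
            'color': fields.StringField(nullable=True),  # pretend this arrived in 1.3
        }

        def obj_make_compatible(self, primitive, target_version):
            # called by obj_to_primitive(); strip fields older schemas lack
            major, minor = map(int, target_version.split('.'))
            if (major, minor) < (1, 3):
                primitive.pop('color', None)

    widget = Widget(name='w1', color='blue')

    # Option 1: the sender backlevels explicitly when it knows the peer is old.
    payload_for_old_node = widget.obj_to_primitive(target_version='1.2')

    # Option 2 (the nova pattern): send the current version; a receiver whose
    # Widget class is older hits IncompatibleObjectVersion while deserializing,
    # catches it, and asks a newer service (conductor) to run the backport.
    payload_current = widget.obj_to_primitive()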
20:12 *** andreykurilin__ has joined #openstack-oslo
20:12 <inc0> I'm thinknig how to do that in not-so-stateless heat-engine
20:13 <dansmith> a version pin may be better for you
20:13 <dansmith> we do that for our normal RPC interfaces
20:13 <inc0> maybe this config would be quick and easy solution
20:13 <dansmith> yep
20:14 <inc0> I'll make a patch with it for core lib, will make stuff easier for other projects
20:14 *** achanda has joined #openstack-oslo
20:14 <dansmith> hmm, what does the library need to do?
20:14 <dansmith> because, you'd need a pin for each object you have
20:14 <dansmith> or some project-specific scheme
20:15 <inc0> expose logic I guess
20:16 <inc0> config itself will be in project conf file
20:16 <inc0> but it should be one generic config option for every project
20:16 *** jamesllondon has joined #openstack-oslo
20:16 *** jamesllondon has quit IRC
20:17 <inc0> because I guess having separate backport engine would be needless effort for most of projects
20:17 *** e0ne is now known as e0ne_
20:17 <inc0> nova might use it too by the way, you'll keep this config on until you'll upgrade all of conductors
20:17 <inc0> this will remove requirement of "all coductors go at once" - at least from object side
20:18 *** amrith is now known as _amrith_
20:18 <krotscheck> Well, at least I'm back to the error I had a week ago ;)
20:20 <inc0> well, I'll dig into it. Food for thought for now:)
20:20 <Kiall> dhellmann: about? I see you cut a release of python-designateclient, up until now, we've handled that outselves.. Should we continue to? (Not sure if it's a once off given the recent client versioning changes etc, or the new plan!)
20:21 *** david-lyle has joined #openstack-oslo
20:23 *** jamesllondon has joined #openstack-oslo
20:23 *** jamesllondon has quit IRC
20:28 <dansmith> inc0: it won't because the code that sits above the object needs to know about the pin in that case
20:28 <dansmith> inc0: either way, I'm not sure what "single config" you'll expose, because it needs to be a specific version for each object separately
20:29 <inc0> dansmith, true...I'll think how to dodge that...maybe some sort of master version of project should be in order
20:30 <inc0> and then add relations between each object versions and master version
20:30 <dansmith> inc0: well, I'd recommend we incubate that in projects that want to go this route and figure out what works before we bake it into the library
20:30 <dansmith> because I'm skeptical :)
20:33 <inc0> dansmith, let me refine this idea and then we'll discuss how to proceed allright? I'm fairly sure that there is good solution to this problem which will satisfy both of us
20:33 <dansmith> heh, sure
20:33 <inc0> I'll submit bp before vancouver and we can have discussion there
20:34 <dansmith> okay
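For what it's worth, the "version pin" inc0 wants to prototype would presumably be an ordinary oslo.config option consulted before serializing; the sketch below is purely hypothetical and is not an existing oslo.versionedobjects feature. dansmith's objection still applies: a single pin papers over the fact that every object class has its own version history.

    # Hypothetical sketch only; nothing like this exists in the library yet.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('compatible_version',
                   help='If set, backlevel outgoing objects to this version '
                        'until every node has been upgraded.'),
    ], group='versioned_objects')

    def serialize(obj):
        # during a rolling upgrade the pin forces the older wire format;
        # when unset (None), obj_to_primitive() sends the current version
        return obj.obj_to_primitive(
            target_version=CONF.versioned_objects.compatible_version)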
20:37 <dhellmann> Kiall: normally you should, but we needed to release all of the libs to uncap requirements in master
20:37 <Kiall> Yep, just spotted the email where you explained that :)
20:37 <Kiall> Thanks!
20:39 *** inc0 has quit IRC
20:45 *** cdent has joined #openstack-oslo
20:46 <krotscheck> check-grenade-dvsm seems to fail on cliff.
20:47 <dhellmann> krotscheck: there's some discussion in #openstack-qa
20:47 <krotscheck> dhellmann: Thanks, joining
20:48 *** shardy_ has joined #openstack-oslo
20:51 *** jamesllondon has joined #openstack-oslo
20:51 *** alexpilotti has joined #openstack-oslo
20:51 *** shardy has quit IRC
20:52 *** openstackgerrit has quit IRC
20:52 *** openstackgerrit has joined #openstack-oslo
21:01 *** stevemar2 has joined #openstack-oslo
21:01 *** stevemar has quit IRC
21:02 *** stevemar2 has quit IRC
21:04 *** stevemar has joined #openstack-oslo
21:07 *** jecarey has quit IRC
21:07 *** exploreshaifali has joined #openstack-oslo
21:20 *** jgrimm is now known as zz_jgrimm
21:21 *** shardy_ has quit IRC
21:29 *** jungleboyj has quit IRC
21:31 *** e0ne_ has quit IRC
21:31 *** sdake_ has joined #openstack-oslo
21:34 *** sdake has quit IRC
21:35 *** sigmavirus24 is now known as sigmavirus24_awa
21:35 *** sigmavirus24_awa is now known as sigmavirus24
21:40 *** zz_jgrimm is now known as jgrimm
21:44 *** jecarey has joined #openstack-oslo
21:44 *** mriedem is now known as mriedem_away
21:50 *** stpierre has quit IRC
21:52 *** mtanino has quit IRC
21:53 *** sdake has joined #openstack-oslo
21:55 *** mtanino has joined #openstack-oslo
21:57 *** sdake_ has quit IRC
21:58 *** yamahata has joined #openstack-oslo
22:00 *** stevemar has quit IRC
22:00 *** harlowja is now known as harlowja_away
22:05 *** harlowja_away is now known as harlowja
22:13 *** yamahata has quit IRC
22:14 *** andreykurilin__ has quit IRC
22:14 <openstackgerrit> Joshua Harlow proposed openstack/taskflow: Use a lru cache to limit the size of the internal file cache  https://review.openstack.org/176104
22:14 *** jungleboyj has joined #openstack-oslo
22:22 *** jaosorior has quit IRC
22:23 *** gordc has quit IRC
22:35 *** cdent has quit IRC
22:39 *** joesavak has quit IRC
22:39 *** jecarey has quit IRC
22:41 *** bknudson has quit IRC
22:46 *** sigmavirus24 is now known as sigmavirus24_awa
23:35 *** sdake_ has joined #openstack-oslo
23:38 *** sdake has quit IRC
23:44 *** exploreshaifali has quit IRC
23:46 *** sdake_ has quit IRC
23:54 *** bknudson has joined #openstack-oslo
