Monday, 2016-10-03

[00:03] <jmccrory> mriedem : i'm seeing that in openstack-ansible's testing as well. http://logs.openstack.org/64/380464/2/check/gate-openstack-ansible-os_cinder-ansible-func-ubuntu-trusty/09951c6/console.html#_2016-10-02_23_51_42_469350
[00:03] <mriedem> hmm, i don't see any recent new packages that would blow this up
[00:03] <mriedem> no new releases of things
[00:04] <mriedem> now it does look like pycparser was updated on pypi today https://pypi.python.org/pypi/pycparser
[00:04] <mriedem> for whatever reason
[00:17] <jmccrory> mriedem: looks like that's it. i just tried installing pycparser 2.14 from source and the db sync succeeded
[00:33] <mriedem> hmm maybe the new version was recompiled against something different
[00:34] <mriedem> i don't know how to even open an issue against pycparser
[00:34] <mriedem> oh i see
[00:34] <mriedem> yup
[00:34] <mriedem> https://github.com/eliben/pycparser/issues/148
[00:38] <mriedem> tracking in openstack with this bug https://bugs.launchpad.net/cinder/+bug/1629726
[00:38] <openstack> Launchpad bug 1629726 in Cinder "cinder-manage db sync fails with recompiled pycparser 2.14" [Undecided,Confirmed]
[00:38] <mriedem> i'll get something in e-r
[00:38] <mriedem> it's at least reported upstream
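For anyone hitting the failure above before an upstream fix lands, a possible local workaround (matching what jmccrory tried, and assuming a pip recent enough to support --no-binary) is to reinstall pycparser 2.14 from sdist instead of the freshly recompiled wheel:

    pip uninstall -y pycparser
    pip install --no-binary pycparser pycparser==2.14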
[03:51] <jungleboyj> The diablo is here!
[03:51] <diablo_rojo> jungleboyj, patrickeast So... Ibiza?
[03:51] <jungleboyj> Ahh, Ibiza.
[03:52] <jungleboyj> patrickeast: So, I looked and flights there are like 35 to 54 bucks.  So, not a big deal.
[03:52] <jungleboyj> It is the off season but it looks like Pacha is still pretty active.
[03:53] <patrickeast> I'm game
[03:53] <jungleboyj> There is hiking and good food over there.
[03:53] <diablo_rojo> The hitch would be we would have to stay up all night and get a flight back to BCN at like 7 AM
[03:54] <jungleboyj> diablo_rojo: There is that.
[03:54] <patrickeast> Haha
[03:54] <jungleboyj> Or leave at 11 .
[03:54] <jungleboyj> Pm.
[03:54] <patrickeast> Well, there are worse things than staying up all night
[03:54] <diablo_rojo> Which is like before the party even starts
[03:54] <jungleboyj> diablo_rojo: Yeah.
[03:54] <diablo_rojo> Almost did it last night lol
[03:55] <patrickeast> lol
[03:55] <diablo_rojo> With proper caffination it should be easy.
[03:55] <patrickeast> What day were you guys thinking to go?
[03:55] <jungleboyj> A sweet balance of alcohol and caffeine.
[03:57] <diablo_rojo> Thats still up for a little debate
[03:57] <diablo_rojo> kinda depends on what your other plans are and what we are interested in going to see
[03:57] <jungleboyj> So, Sankey's has Unusual Suspects on Thursday night: http://www.ibiza-spotlight.com/night/promoters/unusual-suspects
[03:58] <jungleboyj> Minimal House and Techno.
[03:59] <jungleboyj> Or Sankeys has Tribal Sessions on Saturday but that is less likely to be up our alley Music wise.
[04:00] <jungleboyj> Otherwise we could just head there Friday, do some hiking, etc and find clubs at night.
[04:00] <jungleboyj> See where it takes us.
[04:01] <patrickeast> I'm pretty open for those any of those dates, I had booked a tour on Saturday but can move that around whenever
[04:02] <jungleboyj> diablo_rojo: What do you think?
[04:03] <diablo_rojo> Friday or Saturday would be fine by me
[04:04] <jungleboyj> diablo_rojo patrickeast Ok, I will do a little more looking at what is available Friday/Saturday and come up with a plan.
[04:05] <diablo_rojo> Sounds good to me
[04:05] <jungleboyj> patrickeast: I can book whatever once we agree and you can shoot me more money/
[04:06] <patrickeast> jungleboyj: sounds good
[04:07] <diablo_rojo> patrickeast, I am doing some introductory looking into going to Marrakesh (I was inspired by that episode of house hunters we watched) and its not looking cheap..
[04:08] <patrickeast> diablo_rojo: dang, like how not cheap?
[04:10] <diablo_rojo> patrickeast, Basically it will take a day to get there if you dont fly and flying is like 500ish
[04:11] <diablo_rojo> patrickeast, I can do some more research tomorrow and see if I can find something better, but I'm not holding my breath. Guess the foundation should hold an OSD event there so we can go lol
[04:11] <patrickeast> Ouch
[04:11] <diablo_rojo> Yeah.
[04:11] <patrickeast> diablo_rojo: haha yea, probably the best strategy
[04:12] <jungleboyj> :-)
[04:12] <patrickeast> OpenStack days Casablanca
[04:12] <diablo_rojo> YES
[04:12] <diablo_rojo> Can they do one in Egypt too?
[04:12] <diablo_rojo> So many places I want to travel.
[04:12] <jungleboyj> diablo_rojo: That would be awesome!
[04:23] <openstackgerrit> Tuan Luong-Anh proposed openstack/cinder-specs: Fix typo: remove redundant 'the'  https://review.openstack.org/380906
[05:38] <openstackgerrit> Karthik Prabhu Vinod proposed openstack/cinder: Switch service capabilities to ovo  https://review.openstack.org/319040
[06:32] <openstackgerrit> Tuan Luong-Anh proposed openstack/cinder: Fix typo: remove redundant 'that'  https://review.openstack.org/380941
[07:09] <openstackgerrit> OpenStack Proposal Bot proposed openstack/cinder: Imported Translations from Zanata  https://review.openstack.org/380812
[07:40] <openstackgerrit> Sharat Sharma proposed openstack/cinder: Stop adding ServiceAvailable group option  https://review.openstack.org/380960
[08:05] <openstackgerrit> Sharat Sharma proposed openstack/cinder: Stop adding ServiceAvailable group option  https://review.openstack.org/380960
[09:00] <GB21> Hi, I am writing a Searchlight plugin for consistency groups using the python client, and I need to index the project and user id associated with each consistency group.
[09:01] <GB21> apparently the python-cinderclient does not provide the user id or project id in the to_dict() method for consistency groups
[09:02] <GB21> can anyone tell me how I can extract user_id/project_id? thanks in advance
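A quick way to see which fields the client actually exposes is to dump the raw resource dict; a minimal sketch, where the keystoneauth session ('sess') and the consistency group id ('cg_id') are assumed to exist already, and whether project/user ids show up depends on what the Cinder API returns rather than on the client:

    # sketch: inspect the fields python-cinderclient returns for a consistency group
    from cinderclient import client

    cinder = client.Client('2', session=sess)  # 'sess': an existing keystoneauth1 session (assumed)
    cg = cinder.consistencygroups.get(cg_id)   # 'cg_id': an existing consistency group id (assumed)
    print(cg.to_dict())                        # shows exactly which attributes the API returned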
[09:54] <openstackgerrit> Sharat Sharma proposed openstack/cinder: Stop adding ServiceAvailable group option  https://review.openstack.org/380960
[10:47] <odyssey4me> anyone around who's seeing this error with a new install of stable/newton? http://logs.openstack.org/99/380999/1/check/gate-openstack-ansible-os_cinder-ansible-func-ubuntu-trusty/618d3d8/console.html#_2016-10-03_09_36_24_429481
[10:47] <odyssey4me> ie cinder-manage db sync is erroring out with 'sorry, but this version only supports 100 named groups'
[10:49] <odyssey4me> it may be related to https://github.com/eliben/pycparser/issues/147 - busy confirming
[10:50] <e0ne> odyssey4me: here is a reported issue https://bugs.launchpad.net/cinder/+bug/1629726
[10:50] <openstack> Launchpad bug 1629726 in OpenStack Compute (nova) "recompiled pycparser 2.14 breaks Cinder db sync and Nova UTs" [Critical,Confirmed]
[10:51] <odyssey4me> ah, thanks e0ne
[11:07] <nikeshm> hi
[11:07] <nikeshm> facing this issue in CI
[11:07] <nikeshm> from morning
[11:07] <nikeshm> http://paste.openstack.org/show/583974/
[11:07] <nikeshm> any idea?
[11:08] <dulek> nikeshm: https://bugs.launchpad.net/cinder/+bug/1629726
[11:08] <openstack> Launchpad bug 1629726 in OpenStack Compute (nova) "recompiled pycparser 2.14 breaks Cinder db sync and Nova UTs" [Critical,Confirmed]
[11:08] <dulek> nikeshm: And http://lists.openstack.org/pipermail/openstack-dev/2016-October/104909.html
[11:09] <nikeshm> dulek: thnks
[11:16] <nikeshm> dulek: so we have to stop our CI
[11:16] <nikeshm> right
[11:16] <nikeshm> until the issue is resolved
[11:17] <nikeshm> or is there any work around
[11:25] <dulek> nikeshm: A possible patch is here: https://review.openstack.org/#/c/381011/
[11:30] <openstackgerrit> Bartek Żurawski proposed openstack/cinder: Add backup notification to cinder-volume-usage-audit  https://review.openstack.org/379546
[11:32] <nikeshm> dulek: thnks
[12:50] <nikeshm> jgriffith: hi
[13:48] <nikeshm> jgriffith: can you review these https://review.openstack.org/#/c/380737/ https://review.openstack.org/#/c/380694
[15:02] <openstackgerrit> Eric Harney proposed openstack/cinder: Hacking: Remove C305 contextlib.nested check  https://review.openstack.org/381173
[15:12] <openstackgerrit> Merged openstack/cinder: Speed up kaminario's drivers tests  https://review.openstack.org/376329
[15:14] <openstackgerrit> Merged openstack/cinder: Disable API v1 by default  https://review.openstack.org/379657
[15:17] <e0ne> smcginnis: hi. just to confirm: did you verify that nobody uses api v1?
[15:17] <smcginnis> e0ne: Absolutely not
[15:17] <jgriffith> e0ne: they don't now :)
[15:18] <e0ne> smcginnis, jgriffith: I remember how painful it was to remove it last year
[15:18] <jgriffith> unless they survive the arduous task of configuring it :)
[15:18] <smcginnis> e0ne: Right, that's why I didn't try to remove it again. ;)
[15:18] <jgriffith> e0ne: yeah, that was mostly because Nova was using it
[15:18] <jgriffith> e0ne: it's not removed here though, just not the default
[15:18] <jgriffith> smcginnis: :)
[15:18] <e0ne> I hope, we won't re-enable it
[15:19] <jgriffith> e0ne: people can configure and run it forever AFAIC
[15:19] <jgriffith> e0ne: but we don't have to support it :)
[15:19] <smcginnis> This at least makes it a little more obvious that it's not supported.
[15:19] <jgriffith> e0ne: or better yet, we don't have to promote it any further by leaving it as default
[15:19] <smcginnis> Or at least not the preferred API.
[15:19] <smcginnis> jgriffith: +1
[15:19] <jgriffith> smcginnis: +1 nice work
[15:20] <e0ne> I'm all for disabling it by default
[15:20] <smcginnis> jgriffith: Now if we can do that to v2 and just get down to only one API enabled by default.
[15:20] <smcginnis> Might be a little premature for that though.
[15:20] <e0ne> smcginnis: did you change devstack https://github.com/openstack-dev/devstack/blob/master/lib/cinder#L378 part too?
[15:20] <jgriffith> smcginnis: yeah, I was actually *thinking* about that while reviewing your patch
[15:21] <smcginnis> e0ne: No, I left that part. Do you think we should pull that out too?
[15:21] <smcginnis> It actually doesn't do any good now unless you enable it on the cinder side.
[15:21] <smcginnis> But you could.
[15:22] <e0ne> smcginnis: let's see what happens with your patch first
[15:22] <e0ne> smcginnis: if nobody uses v1 now - will be OK to disable it via devstack too
[15:22] <jgriffith> smcginnis: not sure but there is likely value in making sure we don't break creation of the endpoint
[15:22] <smcginnis> jgriffith: True
[15:23] <jgriffith> smcginnis: e0ne depends on if anything in dsvm full uses v1 in the grenade test maybe... I dunno
[15:23] <jgriffith> smcginnis: e0ne remove it and find out who yells :)
[15:23] <e0ne> :)
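For deployments that still have v1 clients, the patch above only changes the default; re-enabling it should just be a cinder.conf toggle (a sketch, assuming the option kept its long-standing name):

    [DEFAULT]
    # v1 is deprecated; this only re-enables it for legacy clients
    enable_v1_api = true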
[15:24] <jgriffith> smcginnis: so I tried something interesting last night....  regarding the attach/detach stuff
[15:24] <e0ne> jgriffith, smcginnis: if you both are in TC, you could vote for removing deprecated APIs :)
[15:24] <smcginnis> e0ne: Hah!
[15:24] <smcginnis> jgriffith: What was that?
[15:24] <jgriffith> smcginnis: I've been messing with johnthetubaguy 's suggestions about an ack after making the connection call out to os-brick on the Nova side
[15:25] <smcginnis> Been meaning to update my test deployment but haven't had the chance yet.
[15:25] <jgriffith> smcginnis: that in some ways pushes us back to the model we have already with initialize--->attach
[15:25] <openstackgerrit> Merged openstack/cinder: Hacking: remove check for "tests/unit/integrated"  https://review.openstack.org/376542
[15:25] <jgriffith> smcginnis: so I rewrote those last night, but refactored everything to simplify them and make them use attachment ID's
[15:26] <smcginnis> jgriffith: So same basic calls, but simpler than what we have now?
[15:26] <jgriffith> smcginnis: so now you call initialize_connection (I renamed it for now) and it either does the refresh or creates a connection just like today, but it creates a place-holder attachment
[15:26] <jgriffith> smcginnis: yes
[15:26] <jgriffith> smcginnis: I'm not sure if there's advantages to one over the other or not
[15:27] <jgriffith> smcginnis: but I am taking my time and looking at varying options because I want to get this *right*
[15:27] <smcginnis> jgriffith: I really hate the naming if initialize_connection doesn't actually initialize anything.
[15:27] <smcginnis> jgriffith: But that could be fixed.
[15:27] <jgriffith> smcginnis: I changed it to "connection_create_or_update"
[15:27] <smcginnis> jgriffith: Good call on taking time and getting it right!
[15:27] <smcginnis> jgriffith: OK, I like that. That makes it obvious.
[15:27] <jgriffith> smcginnis: and I made the "attach" "finalize_attachment"
[15:27] <jgriffith> or something like that
[15:27] <smcginnis> +1
[15:28] <jgriffith> anyway, I'm wondering if I should flush out both options a little bit and post them for people to compare?
[15:28] <smcginnis> I've noticed we've had issues in various areas just because of naming and then false assumptions down the road based on those unclear names.
[15:28] <jgriffith> smcginnis: indeed!
[15:28] <jgriffith> smcginnis: it turns out things like terminate and detach right now are even worse :(
[15:29] <jgriffith> smcginnis: there's some goofy stuff that has been hacked in there, and some of it doesn't actually even work I don't think
[15:29] <smcginnis> jgriffith: Which is your take as a better approach? The only problem I have with doing both (other than the extra time spent) is then we don't have one clear path.
[15:29] <jgriffith> smcginnis: anyway... wanted to get your opinion on if there's value in posting two alternatives or not?
[15:29] <smcginnis> But maybe that's fine.
[15:29] <jgriffith> smcginnis: yep, that's the problem :(
[15:30] <smcginnis> I'd kind of rather all get behind one "this is what we think is right" approach, TBH.
[15:30] <jgriffith> smcginnis: I'm mixed honestly.  So I don't like the create_attachment only because it still calls the shitty methods we already have, but I do plan to fix that in follow up work
[15:31] <jgriffith> smcginnis: the other thing is that adding in the ack for the connection kind of forces it back to looking like what the other approach anyway
[15:31] <jgriffith> smcginnis: the only difference is I made that optional so Nova could use it, but others could ignore that piece if they wanted
[15:31] <jgriffith> I don't know... there's pros/cons to each of them
[15:31] <smcginnis> jgriffith: I'm actually worried about what would happen if the ack gets lost.
[15:32] <jgriffith> hmmm...
[15:32] <smcginnis> But I guess at the same time we have issues because we have fire and forget calls that just assume things worked out.
[15:32] <jgriffith> smcginnis: then we're back to the infamous "hung in attaching"
[15:32] <jgriffith> smcginnis: well, we used to have all sorts of weird cases where we would get "stuck" there
[15:32] <jgriffith> smcginnis: I don't want to repeat that :(
[15:33] <smcginnis> Yeah, that'd be bad.
[15:33] <smcginnis> Distributed systems are hard. Let's go shopping.
[15:33] <jgriffith> :)
[15:34] <jgriffith> I'll work out a few more things on the alternate approach this morning and let people vote on things
[15:34] <smcginnis> jgriffith: Don't have a good answer for you. If you think putting up both will help get the input to find the best path, then maybe that is the right answer.
[15:34] <jgriffith> the one good thing is I don't have to carry around the mess that's in the existing attach and terminate; it's also probably less churn on the Caller side
[15:34] <smcginnis> +1
[15:34] <jgriffith> smcginnis: alright... I'll give it a go and see
[15:35] <smcginnis> jgriffith: Thanks for working on that. Really hope we can end up with something cleaner than what we have now.
[15:35] <jgriffith> smcginnis: +10000000000000......................00000000
[15:35] <smcginnis> :)
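A rough sketch of the flow being discussed, purely illustrative: the method names come from the conversation above, but the signatures, the attachment structure, and the client object are assumptions, not the eventual Cinder API:

    # hypothetical two-step attach flow using explicit attachment ids (illustrative only)
    def attach_volume(volume_client, volume_id, connector):
        # step 1: create (or refresh) the connection and get a place-holder attachment
        attachment = volume_client.connection_create_or_update(volume_id, connector)

        # ... the caller (e.g. Nova via os-brick) actually connects to the target here ...

        # step 2: optional ack so backends with shared targets know the connect succeeded
        volume_client.finalize_attachment(attachment['id'])
        return attachment['id']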
[15:40] <hemna> mornin
[15:40] <smcginnis> hemna: Howdy
[15:40] <jgriffith> hola
[15:41] <hemna> jgriffith, is there any merit to creating a new volume state
[15:41] <hemna> and then after initialize_connection, set the volume state to exported
[15:41] <hemna> if nova/anyone wants to ack that it's attached, they can
[15:41] <jgriffith> hemna: :)
[15:41] <hemna> Cinder doesn't really care
[15:41] <hemna> I dunno if this helps anything or makes it more complicated
[15:41] <jgriffith> hemna: so yes, I was putting that in to the attachment
[15:42] <jgriffith> hemna: but the problem is it leads us back to the problem of "if I have to receive an ack back from Nova, there's not a ton of value in condensing the calls"
[15:42] <jgriffith> hemna: so that's why I started messing with a new approach again
[15:42] <hemna> so, I guess my point was
[15:42] <hemna> does nova really even have to ack it?
[15:43] <hemna> I dunno
[15:43] <hemna> I see what you are saying
[15:43] <jgriffith> hemna: my opinion was no, but folks that share connections do have a potential problem.  I don't and LVM doesn't so... I wasn't worried about it.  But for those that do, there is a race there
[15:43] <jgriffith> and a potential issue
[15:43] <hemna> if we have to have the same workflow, then what are we accomplishing by this rewrite
[15:44] <jgriffith> hemna: multi-attach and simplicity
[15:44] <hemna> so for multi-attach we already have the flag for drivers to prevent it
[15:44] <hemna> if that backend can't do it
[15:45] <jgriffith> hemna: well, sure but that's not what I think needs solved
[15:45] <jgriffith> hemna: the problem IMO is that what's there is packed in and a bit brittle in terms of the logic.  And there's still a big issue with disconnects (shared targets)
[15:46] <jgriffith> hemna: but if you think it's good like it is then maybe I don't need to do anything on this
[15:46] <hemna> the shared targets is definitely an issue
[15:46] <jgriffith> hemna: I might be missing something
[15:46] <jgriffith> ok... good :)
[15:46] <openstackgerrit> Merged openstack/python-cinderclient: Updated from global requirements  https://review.openstack.org/376131
[15:46] <hemna> it's starting to seem like we are coming full circle on the existing api
[15:47] <jgriffith> hemna: maybe... thanks to shared targets :(
[15:47] <jgriffith> hemna: although I still argue that that sort of thing should be handled by the caller
[15:47] <jgriffith> hemna: targets should always be "dumb"
[15:47] <jgriffith> just do what you ask them to do, and if you asked them to do something stupid... well then you were wrong
[15:48] <hemna> it is possible for nova to find out if it's shared or not
[15:48] <jgriffith> hemna: wait... now you're talking about the terminate?
[15:48] <jgriffith> hemna: or are you talking about acking the connection piece?
[15:48] <hemna> sorry I was thinking about the problem on the nova side of calling brick or not to disconnect_volume
[15:49] <hemna> re: multi-attach and a shared target vs individual target
[15:49] <jgriffith> hemna: yeah, you'd have thought that was an easy thing to figure out
[15:49] <jgriffith> hemna: but apparently it's not
[15:50] <hemna> it is.  nova has that data in its bdm's
[15:50] * smcginnis grumbles about nova keeping track of attachments...
[15:50] <jgriffith> hemna: yeah, so why has this been such a cluster *&^$ all this time then?
[15:50] <hemna> I dunno.  I raised this at the beginning.
[15:51] <jgriffith> I'm confused
[15:51] <hemna> I think there are 2 issues wrt to shared vs non shared volumes
[15:51] <hemna> 1) calling brick on the compute host or not
[15:51] <hemna> 2) calling terminate_connection or not
[15:52] <hemna> #2 is easy to me.  if you call it, cinder terminates.   if you don't mean to....don't.
[15:52] <hemna> #1 the data is there in the nova db.   it should make sure it does the right thing....I guess it's just like #2 in that respect
[15:53] <jgriffith> hemna: well, I've been saying #2 for a couple years now but that doesn't seem to resonate
[15:53] <jgriffith> hemna: and in fact you submitted a patch to Cinder that puts that in Cinder's control
[15:53] <jgriffith> hemna: so I'm really kinda confused now
[15:54] <hemna> ok crap, I didn't mean to confuse anyone
[15:54] <jgriffith> haha
[15:54] <hemna> :(
[15:54] <jgriffith> hemna: no, don't worry about that
[15:54] <jgriffith> better to talk through this stuff
[15:54] <hemna> I do like the new API though
[15:55] <hemna> I think we should be saving the connector in the attachments
[15:55] <hemna> that's good stuffs
[15:55] <jgriffith> It's just I'm not getting the impression from you that this all a big waste of time :)
[15:55] <jgriffith> hemna: oh, well yeah for sure!
[15:55] <jgriffith> hemna: I'm doing that no matter what
[15:55] <hemna> and I do like being explicit and passing around the attachment id.
[15:56] <hemna> but honestly, if all we end up doing is saving the connector out of all of this
[15:56] <hemna> it wasn't a waste of time
[15:56] <jgriffith> hemna: so if nothing else then we'll just call this a "cleanup"
[15:56] <jgriffith> nothing more, nothing less :)
[15:56] <hemna> even an exercise that leads us to the same api....isn't a waste of time.
[15:57] <hemna> but I think the new API is a better approach IMO
[15:58] <jgriffith> hemna: let me finish coding up this other option a bit and post it.  I'd like to get your thoughts on it
[15:59] <hemna> ok coolio
[16:26] <openstackgerrit> Mike Perez proposed openstack/cinder: Removing deprecated Dell EqualLogic config options  https://review.openstack.org/380869
[17:01] <gaurangt> jgriffith: hi
[17:01] <jgriffith> gaurangt: hey
[17:02] <ildikov> smcginnis: hemna: johnthetubaguy: meeting time if you're around :)
[17:03] <gaurangt> When a particular host is down (cinder service-list shows the c-vol service as down), one cannot run the failover-host command for that host. It expects c-vol service to be running, right?
[17:04] <gaurangt> The reason I'm asking is - GPFS runs c-vol service on multiple hosts which may be failover targets for each other. When one of them is down, c-vol will also be down. But with current implementation, we would not be able to failover the volumes to the other host.
[17:05] <jgriffith> gaurangt: correct
[17:05] <jgriffith> gaurangt: well, that seems right to me
[17:05] <jgriffith> gaurangt: I mean, how can you fail-over to a service that is "down" ?
[17:06] <jgriffith> gaurangt: if you want a new state I guess that could be considered, but I don't see that it's particularly valuable
[17:06] <gaurangt> jgriffith: no, I want to failover to the host which is running.
[17:06] <jgriffith> gaurangt: oh
[17:06] <jgriffith> gaurangt: sure you can
[17:07] <jgriffith> gaurangt: at least you used to be able to, and you *should* be able to
[17:07] <jgriffith> gaurangt: the whole premise was that the initial source died in a fire
[17:07] <gaurangt> Let me explain with an example. Consider two sites (site A and site B) with the same GPFS filesystem across both, and cinder configured with GPFS as the storage backend.
[17:08] <gaurangt> Here, the cinder volume service is running on one of the nodes on each site.
[17:08] <jgriffith> gaurangt: so you're not talking about a DR Replication scheme, you're doing a geo-rep
[17:08] <gaurangt> jgriffith: cinder service-list shows the c-vol on both the sites.
[17:09] <gaurangt> yeah. kind of. GPFS takes care of the replication here.
[17:10] <jgriffith> gaurangt: go on... I'm listening :)
[17:10] <gaurangt> jgriffith: now, when one of the sites fails, c-vol on that particular site would also be down. The volumes which are managed by that host would no longer be manageable.
[17:11] <jgriffith> yep
[17:11] <gaurangt> jgriffith: with failover, we want to change the host parameter for those volumes which are tagged with host on the failed site so that they become manageable.
[17:11] <jgriffith> gaurangt: yep
[17:12] <gaurangt> jgriffith: but current implementation is such that we cannot run failover-host command if the service is down. It would just wait for the c-vol service to come up until driver runs the failover_host method code.
[17:12] <jgriffith> gaurangt: I don't think you're correct
[17:12] <jgriffith> gaurangt: if there's been something added that does that it's a blatant bug
[17:13] <jgriffith> gaurangt: the failover command should not require any interaction with the primary what so ever
[17:13] <jgriffith> gaurangt: it's all just db changes/updates
[17:14] <gaurangt> jgriffith: Last time when I talked with Patrick East on this, he mentioned that we would need c-vol service running always
[17:14] <gaurangt> I've tested it with GPFS. It doesn't work for me unless the host comes back up.
[17:15] <jgriffith> gaurangt: then that's a bug IMO
[17:15] <jgriffith> gaurangt: I'd have to test it out and make sure I follow exactly, unless I can see something specific in the code
[17:15] <gaurangt> jgriffith: yeah sure.
[17:15] <jgriffith> gaurangt: do you by chance know why/where it's trying to get to the primary driver and failing?
[17:16] <gaurangt> if I understand the code correctly, the failover_host method in cinder/volume/api.py makes a call self.volume_rpcapi.failover_host(ctxt, host, secondary_id)
[17:17] <gaurangt> jgriffith: where the host is the actual host which is down and needs to be failed over, right?
[17:17] <jgriffith> gaurangt: hmm
[17:18] <jgriffith> gaurangt: well it's evident this is one of those *features* that has never actually been used apparently :)
[17:18] <gaurangt> jgriffith: now, in cinder/volume/rpcapi.py it has a call "cctxt = self._get_cctxt(host, version)" in failover_host method.
[17:18] <jgriffith> gaurangt: I'll have a look at it here shortly and file a bug etc
[17:19] <gaurangt> jgriffith: this will only be run on the host which is already down
[17:19] <jgriffith> gaurangt: yeah, I see exactly what you're talking about, but that should pass through because we don't do the require_initialized check
[17:19] <jgriffith> gaurangt: but I'll have to look.  We could/should route it to the secondary we are failing over to, that's how I thought I wrote it originally
[17:19] <jgriffith> but apparently not
[17:20] <gaurangt> jgriffith: yeah.. It doesn't work with GPFS at least. I'm not sure if it's the case with other shared filesystem drivers.
[17:20] <jgriffith> gaurangt: not sure that it would work with anything :)
[17:20] <gaurangt> jgriffith: I've anyways put this as an agenda item in upcoming cinder meeting. We can discuss more in that.
[17:20] <jgriffith> gaurangt: sounds good, thanks!
[17:22] <gaurangt> jgriffith: thanks too.
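For reference, the command being discussed looks roughly like this (a sketch; the host@backend name and the backend_id value are placeholders that would have to match the replication_device entries in cinder.conf):

    cinder failover-host cinder-host@gpfs --backend_id secondary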
[17:22] <smcginnis> ildikov: Sorry, just got back from another meeting.
[17:27] <GB21> Hi everyone, I am working on searchlight and we are implementing cinder plugins, there are some attributes that are missing from certain objects. As requested by the searchlight team earlier, we really need them implemented so that we can advance our project; some blueprints were initiated but they were never finished
[17:28] <GB21> this one is one of them
[17:28] <GB21> https://review.openstack.org/#/c/289658/
[17:28] <mriedem> does anyone know how conditional_updates in the cinder db api are translated to http exceptions in the cinder-api wsgi layer?
[17:29] <GB21> I request someone to please take a look, or maybe guide me to work them out, thanks in advance
[17:31] <ildikov> smcginnis: no worries, we're still eagerly in it :)
[17:33] <eharney> mriedem: in the code i've seen, failure of conditional_update is manually checked by the caller which then usually raises some type of Invalid* exception which has an HTTP 400 attached to it, so nothing really special for that case
[17:34] <mriedem> eharney: ok so unreserve wouldn't actually fail if the update didn't happen https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L632
[17:34] <mriedem> hemna: scottda: ^
[17:34] <eharney> mriedem: right
[17:35] <eharney> mriedem: that code should check for the result before logging that info message though
[17:49] <hemna> eharney, +1
[17:54] <openstackgerrit> Eric Harney proposed openstack/cinder: Unreserve volume: fix log message for failure  https://review.openstack.org/381262
[17:54] <eharney> fixed there ^
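The pattern eharney describes looks roughly like this in the volume API code (an illustrative sketch of the conditional_update idiom, not the actual patch above; the status values are assumptions):

    # sketch: conditional_update returns a falsy value when no row matched the expected state
    from cinder import exception
    from cinder.i18n import _

    result = volume.conditional_update({'status': 'available'},    # new value
                                       {'status': 'attaching'})    # expected current value
    if not result:
        # nothing was updated, so don't log success; surface it to the caller instead
        raise exception.InvalidVolume(reason=_('Volume status must be attaching to unreserve.'))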
[18:08] <jgriffith> mriedem: eharney but the unreserve *is* that update, that's all it is.  So IIRC that means that it will just leave it at whatever state it's at no?
[18:09] <jgriffith> that was the whole point of a conditional-update, only update under conditions x, y, z
[18:09] <mriedem> jgriffith: yeah
[18:09] <mriedem> it's not an error
[18:09] <mriedem> which is why i -1ed
[18:09] <mriedem> to log it as an error
[18:09] <jgriffith> well, it is an error though because it's not doing what you asked :)
[18:10] <eharney> oh, yeah, i agree it shouldn't be logged at error level
[18:10] <jgriffith> mriedem: but I get your point
[18:10] <mriedem> it's not an error that an operator is going to do anything about though
[18:10] <mriedem> if it were an error, you'd return a 400 or 409 to the client
[18:11] <jgriffith> mriedem: Like I said, I get your point, not arguing
[18:11] <mriedem> i'm sorry i can't accept not arguing
[18:11] * hemna gets out the popcorn.
[18:12] <jgriffith> mriedem: yes you can!!!
[18:13] <jgriffith> mriedem: WRONG... WRONG... that's WRONG
[18:13] <hemna> you are both wrong
[18:13] <eharney> hemna: disagree, i wanted pretzels for this
[18:13] <jgriffith> hemna: right, but you're wrong
[18:14] <hemna> you are right, but it doesn't mean you aren't wrong
[18:16] <gaurangt> jgriffith: did you get a chance to test replication code in your setup? Any findings?
[18:16] <jgriffith> hemna: you're wrong, and that is certainly right
[18:16] <jgriffith> gaurangt: LOL
[18:16] <jgriffith> gaurangt: you don't know me very well do you :)
[18:16] <smcginnis> Quote of the day: "you are right, but it doesn't mean you aren't wrong" :D
[18:17] <gaurangt> jgriffith: :)
[18:17] <jgriffith> gaurangt: firing up a fresh stack now
[18:17] <gaurangt> jgriffith: okay
[18:20] <patrickeast> jgriffith: gaurangt: i'm just catching up on irc backlog from this morning, but from the description i think that is the expected behavior, we send the failover call to the c-vol service that is being failed over
[18:21] <patrickeast> since cinder doesn't keep a like mapping from replication device -> cinder backend
[18:21] <jgriffith> patrickeast: we shouldn't require that though
[18:21] <patrickeast> yea sure
[18:21] <patrickeast> but we don't support it today
[18:21] <jgriffith> patrickeast: well then I should fix that :)
[18:21] <patrickeast> i think right now its built on the assumption that you run HA c-vols (hopefully active-passive)
[18:22] <jgriffith> patrickeast: right, but the reason you *need* replication in our scenario is that if your backend takes a crap
[18:22] <jgriffith> patrickeast: that was kinda the whole point, and the entire use case
[18:23] <jgriffith> patrickeast: there's nothing to keep a driver from acting on a call when a service is down
[18:23] <jgriffith> it's
[18:23] <jgriffith> patrickeast: the actual work in the driver (in my case at least) has nothing to do with that actual service actually (or the primary backend)
[18:24] <jgriffith> patrickeast: it just updates the driver to point to another backend, then the failover completes and the service should be back UP again
[18:25] <patrickeast> jgriffith: right
[18:25] <patrickeast> jgriffith: sorry got side tracked with someone else at my desk
[18:25] <patrickeast> jgriffith: so like the model we have now is for failing over/replicating remote backends with remote c-vol services managing them
[18:26] <patrickeast> jgriffith: we just didn't add in code for the scenario of c-vol being the cinder service *and* storage backend you might want to failover
[18:27] <patrickeast> jgriffith: everyone who has implemented replication v2.x works on the assumption that c-vol is highly available and the driver for that cinder backend can reach both replicated devices *somehow*
[18:28] <jgriffith> patrickeast: right so what I was getting at with gaurangt is that you just make your config different
[18:28] <jgriffith> patrickeast: so don't say "failover to another cinder", just failover to another backend
[18:28] <jgriffith> patrickeast: ignore the fact that it just so happens to be a cinder-service
[18:29] <openstackgerrit> Matan Sabag proposed openstack/cinder: Removing deprecated QoS specs.  https://review.openstack.org/370816
[18:30] <patrickeast> jgriffith: not sure i follow that part
[18:30] <patrickeast> jgriffith: you mean failover to another backend that is the same cinder backend? like ala A-P HA?
[18:31] <patrickeast> thats part of this that confuses me too, since iiuc that backend is basically an NFS share mounted on the c-vol host, so why you cant just mount it elsewhere and keep going when the cinder controller dies doesn't make sense to me
[18:32] <jgriffith> patrickeast: yeah, I don't know
[18:32] <jgriffith> patrickeast: the whole gpfs/nfs model is completely unique and foreign to me
[18:32] <patrickeast> jgriffith: ditto
[18:32] <patrickeast> but i wanna help them replicate if they want to :D
[18:32] <jgriffith> patrickeast: in his/her case I don't even know what it means for the "backend to fail"
[18:33] <jgriffith> patrickeast: LOL
[18:33] <patrickeast> hah
[18:33] <hemna> patrickeast, but failover in that case isn't the same model as replication that we've been thinking of for Cinder
[18:33] <hemna> where the storage backend itself pukes
[18:33] <patrickeast> hemna: exactly
[18:33] <jgriffith> hemna: wrong!
[18:34] <jgriffith> hemna: no wait... *right*
[18:34] <hemna> damn
[18:34] <hemna> :P
[18:38] <jgriffith> patrickeast: oh well... I just tested it on my side and it still works as I thought it did so I'm good at least :)
[18:38] <jgriffith> patrickeast: so I just "unplug" the primary backend, wait a while, then issue the failover command
[18:39] <gaurangt> jgriffith: sorry, I got disconnected.
[18:39] <jgriffith> gaurangt: hey.. there you are
[18:39] <jgriffith> gaurangt: so it's quite possible I just don't understand what you're trying to achieve
[18:40] <jgriffith> gaurangt: I simulated my setup by setting things up, then going and shutting down the primary backend
[18:40] <jgriffith> gaurangt: waited a bit, then issued the failover cmd
[18:40] <jgriffith> gaurangt: it worked on my side
[18:40] <gaurangt> jgriffith: is c-vol service common for both the backends?
[18:41] <jgriffith> gaurangt: I don't know what that means
[18:41] <jgriffith> gaurangt: that's an a/a setup
[18:41] <jgriffith> gaurangt: completely different deal
[18:42] <gaurangt> jgriffith: yeah..I think for any clustered filesystem, it would be the same case (a/a setup).
[18:42] <jgriffith> gaurangt: I have a single backend and a config that looks like this:  http://paste.openstack.org/show/584066/
[18:42] <jgriffith> gaurangt: sure, but then what are you doing replication for?
[18:43] <gaurangt> the data is being replicated on both the sites from GPFS perspective.
[18:43] <jgriffith> gaurangt: well... what I mean is, you have an Active/Active setup doing replication between your GPFS clusters right?
[18:44] <jgriffith> actually, I guess I can look at your driver... just a sec
[18:44] <gaurangt> for GPFS, cinder volumes are nothing but files on the filesystem.
[18:44] <jgriffith> gaurangt: understood
[18:45] <jgriffith> gaurangt: oh, you don't have any code in github for that
[18:45] <jgriffith> never mind, I can't look there :)
[18:45] <gaurangt> jgriffith: GPFS takes care of replication if filesystem is set accordingly.
[18:46] <jgriffith> gaurangt: so why do you need cinder's replication then?
[18:46] <smcginnis> gaurangt: So is there even a need to use Cinder replication?
[18:46] <smcginnis> jgriffith: jynx
[18:46] <jgriffith> smcginnis: :)
[18:46] <jgriffith> gaurangt: are you trying to solve the problem of the c-vol node blowing up?
[18:47] <gaurangt> jgriffith: yep
[18:47] <jgriffith> gaurangt: ok that's different
[18:48] <jgriffith> gaurangt: cinder's replication doesn't do anything for you
[18:48] <gaurangt> jgriffith: ok. So you mean the current replication use case is different then.
[18:48] <jgriffith> gaurangt: I guess maybe that's what patrickeast 's comment was and that he already knew what you were doing
[18:48] <jgriffith> gaurangt: completely different
[18:48] <jgriffith> gaurangt: it's replicating a backend device
[18:48] <jgriffith> gaurangt: not the volume service
[18:49] <gaurangt> jgriffith: what might be the correct way to address the problem in my case then?
[18:49] <jgriffith> gaurangt: you can make that work as is today though, just don't use replication
[18:49] <jgriffith> gaurangt: just run another cvol-service with the same name/topic etc
[18:49] <gaurangt> same host name?
[18:49] <jgriffith> gaurangt: then whichever one is up/available will pull requests from the queue
[18:49] <jgriffith> gaurangt: yes, same host-name
[18:49] <jgriffith> gaurangt: spoof it in your c-vol settings
[18:50] <jgriffith> gaurangt: you basically just lie
[18:50] <jgriffith> gaurangt: make the controller think there's only one c-vol service out there
[18:50] <gaurangt> yeah.. have same host entry in both cinder.conf
[18:50] <jgriffith> gaurangt: yes
[18:50] <gaurangt> jgriffith: got you.
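In other words, both c-vol nodes advertise the same identity so they consume the same message queue topic; a minimal sketch of the relevant cinder.conf bit (the host value is a placeholder):

    [DEFAULT]
    # identical on both c-vol nodes so either one can service the same "host"
    host = cinder-gpfs-cluster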
[18:51] <jgriffith> gaurangt: You will have some different challenges than I do or my customers do probably
[18:51] <gaurangt> jgriffith: challenges like?
[18:51] <jgriffith> gaurangt: in my case it sort of relies on making sure people aren't *trying* to break things
[18:51] <jgriffith> gaurangt: given you're in a public cloud (at least I assume) you have to be a bit more diligent
[18:51] <gaurangt> jgriffith: can it lead to db corruption or so?
[18:52] <jgriffith> gaurangt: but I don't know that it even matters with gpfs
[18:52] <jgriffith> gaurangt: so there are some races
[18:53] <jgriffith> gaurangt: I don't know your device well enough to really advise you on any of that
[18:53] <jgriffith> gaurangt: you would have to think about scenarios where different backends respond to a request and how quickly they are synced up
[18:54] <gaurangt> jgriffith: well, I've tested gpfs driver with same host entry with multiple c-vol services. It had worked earlier without any problems.
[18:54] <jgriffith> gaurangt: yeah, I've talked to quite a few people using that trick over the last few years
[18:55] <jgriffith> gaurangt: for some devices it "just works"
[18:55] <jgriffith> gaurangt: it all depends though on how the backend works
[18:55] <gaurangt> jgriffith: what I was trying to propose is - have a backup for each c-vol service configured in cinder.conf and when we issue cinder failover_host, we change the host entry for the volumes on the failed host.
[18:56] <gaurangt> so, it changes the host attribute for the volumes which are tied to failed host.
[18:56] <gaurangt> jgriffith: that would do it gracefully, IMO.
[18:56] <gaurangt> do you agree?
[18:58] <jgriffith> gaurangt: well that's active/passive which there are already common solutions for using pacemaker, zmq etc
[18:58] <jgriffith> no?
[18:58] <gaurangt> jgriffith: hmm
[19:00] <gaurangt> jgriffith: consider a case where we have two controller nodes across two sites (FYI, we run all cinder services on controller node) accessing the same common database.
[19:00] <jgriffith> gaurangt: If you want to talk A/A you should hit up geguileo, he's the expert there and in the process of doing a ton of work to make that more robust
[19:00] <gaurangt> so, users on each site would access cinder endpoint on their site respectively.
[19:00] <jgriffith> geguileo: yes, I have customers that do that today
[19:01] <jgriffith> geguileo: but I do it by replicating in the background, without Cinder or any OpenStack knowledge/involvement
[19:01] <jgriffith> errr.. sorry geguileo
[19:01] <geguileo> jgriffith: np
[19:01] <jgriffith> I meant that ^^ for gaurangt
[19:02] <gaurangt> jgriffith: if site A goes down, then the volumes created from Site A host would not be manageable, right?
[19:02] <jgriffith> gaurangt: frankly it's MUCH better that way
[19:02] <openstackgerrit> Eric Harney proposed openstack/cinder: Error message for image conversion failure  https://review.openstack.org/381272
[19:02] <jgriffith> gaurangt: incorrect
[19:02] <gaurangt> jgriffith: how?
[19:02] <jgriffith> gaurangt: site A creates a volume.... in the background that volume is duplicated/presented/replicated at site B
[19:03] <jgriffith> gaurangt: you can't access site-b from site-a's cinder though if that's what you mean
[19:03] <gaurangt> jgriffith: can I fire a command from site B and delete the volume then?
[19:03] <jgriffith> gaurangt: yes
[19:03] <jgriffith> gaurangt: it's deleted at site-b, then the backend syncing deletes it at site-a
[19:04] <jgriffith> gaurangt: BUT
[19:04] <gaurangt> jgriffith: I don't think so. It waits till the c-vol at site A comes back up.
[19:04] <jgriffith> gaurangt: you don't think what?
[19:04] <gaurangt> jgriffith: cinder list will show the volume state as deleting.
[19:05] <jgriffith> gaurangt: and?
[19:05] <jgriffith> gaurangt: I'm honestly unclear what you're looking for here, sorry.
[19:05] <gaurangt> jgriffith: unless c-vol service comes back up on site A, the volume doesn't get deleted.
[19:05] <jgriffith> gaurangt: I'm just saying I've helped people build mirrored sites
[19:06] <jgriffith> gaurangt: they run a daemon that detects changes in cinder and issues replication commands to the remote site
[19:06] <gaurangt> jgriffith: I'm talking from GPFS point of view here.
[19:06] <jgriffith> gaurangt: I don't know shit about GPFS
[19:06] <jgriffith> gaurangt: so I'm not really going to be very helpful in that context probably
[19:07] <jgriffith> gaurangt: but I don't think the principles are that much different
[19:07] <jgriffith> anyway
[19:07] <jgriffith> sounds like you have a unique use case, that we don't currently cover
[19:07] <jgriffith> gaurangt: you should write up a spec and propose it perhaps
[19:08] <gaurangt> jgriffith: yeah, but I think this might be the case with all shared filesystems.
[19:08] <jgriffith> gaurangt: or figure out how to write your own custom daemon if it's not something that will translate to anything but gpfs
[19:08] <jgriffith> gaurangt: you should use manila then :)
[19:08] <jgriffith> gaurangt: sorry... that was a joke
[19:08] <jgriffith> gaurangt: I realize that's a different problem to solve
[19:08] <gaurangt> jgriffith: :)
[19:08] <jgriffith> gouthamr: or I should say "solves a different problem"
[19:08] <jgriffith> grrrrr
[19:09] <jgriffith> too many people with nics starting with 'g'
[19:09] <gaurangt> jgriffith: haha
[19:09] <openstackgerrit> Eric Harney proposed openstack/cinder: Unreserve volume: fix log message for failure  https://review.openstack.org/381262
[19:10] <gaurangt> jgriffith: I think for now faking host identity with same hostname solves the problem for GPFS. We can improve it a bit with what I am thinking.
[19:15] <jgriffith> gaurangt: yeah... FWIW in the replication work we did initially have an idea/approach to try and do the failover to another service that would work well in your case
[19:15] <jgriffith> gaurangt: we decided to start simple though, and cut that out.  I believe it was part of rep V2 (not V2.1) if you wanted to dig it up and have a look
[19:16] <jgriffith> and on that note, I'm going to lunch, pretty much blew my morning :(
[19:16] <jgriffith> too many phone calls and meetings, and I talk too much :)
[19:16] <gaurangt> jgriffith: sorry about that !!
[19:17] <jgriffith> gaurangt: not at all... wasn't referring to you
[19:17] <jgriffith> I've had people at my desk, asked to go to meeting and had a couple phone calls
[19:17] <jgriffith> gaurangt: at least this is a topic I enjoy :)
[19:17] <jgriffith> gaurangt: the other ones... not so much
[19:18] <gaurangt> jgriffith: Oh ok.. But it was good discussion ..at least I won't too much time supporting current replication spec with GPFS :)
[19:18] <gaurangt> spend*
*** catintheroof has joined #openstack-cinder19:22
*** merooney has joined #openstack-cinder19:24
*** kfarr has joined #openstack-cinder19:25
*** gaurangt has left #openstack-cinder19:26
*** hoonetorg has joined #openstack-cinder19:27
*** Guest92 has quit IRC19:27
*** Guest92 has joined #openstack-cinder19:27
*** Guest92 has quit IRC19:27
*** salv-orlando has joined #openstack-cinder19:30
* gouthamr thinks jgriffith thought of manila and replication and tagged the right guy19:41
*** salv-orl_ has joined #openstack-cinder19:44
*** Lee1092 has quit IRC19:45
*** david-lyle has joined #openstack-cinder19:47
*** salv-orlando has quit IRC19:47
jgriffithgouthamr: ha... lucky coincidence :)19:47
gouthamrjgriffith: :)19:48
*** jungleboyj has quit IRC19:50
*** bjolo has quit IRC19:51
openstackgerritEric Harney proposed openstack/cinder: WIP: New test for long volume names  https://review.openstack.org/38128719:57
*** raunak has joined #openstack-cinder19:58
*** enriquetaso has quit IRC20:01
*** Cibo has joined #openstack-cinder20:03
*** e0ne has quit IRC20:05
*** sdake has joined #openstack-cinder20:05
openstackgerritMerged openstack/cinder: Fix pep8 E501 line too long  https://review.openstack.org/37524820:05
*** Suyi has joined #openstack-cinder20:05
*** kaisers_ has joined #openstack-cinder20:11
*** Cibo has quit IRC20:15
openstackgerritSzymon Wróblewski proposed openstack/cinder: Cleanup RCP API versioning  https://review.openstack.org/37513720:15
*** Apoorva has quit IRC20:15
*** kaisers_ has quit IRC20:15
openstackgerritMerged openstack/cinder: Fix a typo in manager.py,test_common.py and emc_vmax_utils.py  https://review.openstack.org/37904820:20
openstackgerritMerged openstack/cinder: Fix typo: remove redundant 'that'  https://review.openstack.org/38094120:21
*** Yogi1 has quit IRC20:22
*** Apoorva has joined #openstack-cinder20:23
*** lpetrut has quit IRC20:23
openstackgerritMerged openstack/cinder: Fix backup NFS share mount with default backup_mount_options  https://review.openstack.org/35189020:26
*** Apoorva has quit IRC20:28
*** catintheroof has quit IRC20:29
*** david-lyle has quit IRC20:30
*** asselin__ has joined #openstack-cinder20:34
*** david-lyle has joined #openstack-cinder20:35
*** asselin_ has quit IRC20:35
*** alonmarx has joined #openstack-cinder20:36
*** alonma has joined #openstack-cinder20:40
*** alonma has quit IRC20:44
*** david-lyle has quit IRC20:44
*** pcaruana has quit IRC20:45
*** alonma has joined #openstack-cinder20:47
*** ccesario has quit IRC20:48
*** alonma has quit IRC20:51
*** jwcroppe has quit IRC20:58
*** jwcroppe has joined #openstack-cinder20:59
*** jwcroppe has quit IRC20:59
jgriffith:q20:59
*** jwcroppe has joined #openstack-cinder20:59
hemnahas anyone converted their driver to use the ovo volume object?21:04
hemnaI'm having a problem with unit tests for fake volumes21:04
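On the fake-volume question, in-tree driver unit tests generally build OVO volumes through cinder's fake_volume helpers instead of plain dicts; a minimal sketch, where the driver class is purely hypothetical:

    # Sketch only: FooDriver is a placeholder for whichever driver is being converted.
    from cinder import context
    from cinder import test
    from cinder.tests.unit import fake_volume


    class FooDriverTestCase(test.TestCase):
        def test_delete_volume_takes_ovo(self):
            ctxt = context.get_admin_context()
            # Returns a cinder.objects.Volume, not a dict.
            volume = fake_volume.fake_volume_obj(ctxt, size=1,
                                                 display_name='fake-vol')
            self.assertEqual(1, volume.size)        # attribute access, not ['size']
            # driver = FooDriver(configuration=...) # hypothetical driver under test
            # driver.delete_volume(volume)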
smcginnisjgriffith: You can't quit.21:04
jgriffithsmcginnis: apparently not :)21:05
jgriffithsmcginnis: at least not in IRC window21:05
smcginnisjgriffith: Trust me, I've tried many times.21:05
jgriffithsmcginnis: haha21:05
smcginnisjgriffith: And chromium tabs don't work either.21:05
jgriffithsmcginnis: pro-tip of the day!21:05
* smcginnis wonders how long it would take to write an extension...21:05
jgriffithsmcginnis: if you want to write a really super useful extension, make the gerrit review page searchable again for me :)21:06
jgriffithsmcginnis: I'll pay you!21:06
smcginnisHah!21:06
smcginnisNow that would be funny, an extension that scrapes the page and shows it in the old format! :)21:06
jgriffithsmcginnis: well I don't care how it's implemented :)  I just want to be able to search again21:07
smcginnisjgriffith: You mean in the side by side diff window?21:07
jgriffithsmcginnis: it's rather annoying that to do a proper review I have to download the patch21:07
jgriffithsmcginnis: yeah... is there a trick maybe that I'm missing?21:07
smcginnisjgriffith: You might like this :)21:07
smcginnisjgriffith: Click on the gear in the  top left corner.21:08
jgriffithdon't say gertty, don't say gertty....21:08
smcginnisOn the Diff Preferences window, change Render to Slow.21:08
smcginnisNo! :D21:08
jgriffithsmcginnis: get the heck out... changing render to slow fixes it?21:08
eharneydoesn't hitting ctrl-F twice do it too?21:08
smcginnisjgriffith: Cuz why wouldn't you want to render something fast.21:08
jgriffitheharney: I dunno but I'm about to find out21:08
smcginniseharney: Haven't tried that, but I think it only loads portions of the page at a time so it might not.21:09
smcginnisIn fast mode that is.21:09
smcginnisSlow mode also gets rid of the annoying jumping around that you get sometimes when you scroll through the diff.21:09
jgriffitheharney: smcginnis you both win a prize from me21:09
smcginnisWoot21:09
eharneyi wrote this thing just so i could read the reviewers list: https://userstyles.org/styles/125355/21:09
jgriffitheharney: oh nice!21:10
jgriffitheharney: you've been holding out all this time!21:10
smcginniseharney: I still have your spec one installed. Not sure if that even does anything now.21:11
eharneysmcginnis: i think not21:11
smcginniseharney: Ooo, I like21:12
smcginnisIt's at least greying out some of the CIs. :)21:12
eharneysmcginnis: yeah, i was hesitant to blacklist people who used their own email address for them, but i'm probably going to eventually21:12
smcginniseharney: Does green mean core?21:13
eharneysmcginnis: well, it's supposed to mean they have +W rights (that's what the CSS checks)21:13
smcginnisAh, OK.21:13
eharneysmcginnis: but it occasionally turns other random people green for reasons that make no sense to me (silly web nonsense)21:13
*** edmondsw has quit IRC21:14
eharneyit's like 95% correct though, good enough :P21:14
jgriffitheharney: smcginnis well you've both made me very happy today FWIW21:14
jgriffitheharney: smcginnis I know it's not much, but still21:14
jgriffithgood for me anyway21:14
smcginnisjgriffith: It's the little things...21:14
jgriffithsmcginnis: it truly is21:15
jgriffithhemna: so it looks like brick is doing its own magic to determine if/when to delete an entry for an attachment on its own already?21:15
jgriffithhemna: or am I just not finding the right code somewhere?21:16
jgriffithhemna: ahh... wait, I see nova is checking the lun numbers for the shared target case21:16
eharneyjgriffith: good :)21:17
hemnabrick doesn't know anything about shared vs non shared21:18
*** lpetrut has joined #openstack-cinder21:18
hemnait just tries to remove whatever is passed into it21:18
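To illustrate hemna's point, os-brick's connector only tears down whatever connection it is handed, so any shared-target accounting (like nova counting remaining LUNs) has to happen before disconnect is called; a hedged sketch with placeholder connection properties:

    # Sketch only: connection_properties/device_info values are placeholders.
    from os_brick.initiator import connector

    conn = connector.InitiatorConnector.factory('ISCSI', 'sudo',
                                                use_multipath=False)

    connection_properties = {
        'target_portal': '10.0.0.1:3260',                 # placeholder
        'target_iqn': 'iqn.2016-10.org.example:vol-1',    # placeholder
        'target_lun': 1,
    }

    device_info = conn.connect_volume(connection_properties)

    # brick tears down exactly what it is told to tear down; deciding whether
    # other attachments still share the target is the caller's job (nova/cinder).
    conn.disconnect_volume(connection_properties, device_info)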
*** openstackgerrit has quit IRC21:19
*** david-lyle has joined #openstack-cinder21:19
*** openstackgerrit has joined #openstack-cinder21:20
openstackgerritMatan Sabag proposed openstack/cinder: Removing deprecated QoS specs.  https://review.openstack.org/37081621:20
*** auggy has joined #openstack-cinder21:23
*** eharney has quit IRC21:25
*** Apoorva has joined #openstack-cinder21:25
*** hoonetorg has quit IRC21:26
*** Apoorva has quit IRC21:30
*** diogogmt has quit IRC21:32
*** hoonetorg has joined #openstack-cinder21:32
*** mvk has joined #openstack-cinder21:32
*** Apoorva has joined #openstack-cinder21:34
*** akshai has quit IRC21:36
*** cknight has quit IRC21:39
*** lprice1 has quit IRC21:42
*** dave-mccowan has quit IRC21:45
*** alonmarx has quit IRC21:48
*** ducttape_ has quit IRC21:49
*** gouthamr has quit IRC21:49
*** leeantho has quit IRC21:56
*** xyang1 has quit IRC22:00
*** kaisers_ has joined #openstack-cinder22:00
*** kaisers_ has quit IRC22:05
*** lpetrut has quit IRC22:05
*** david-lyle has quit IRC22:09
*** jwcroppe has quit IRC22:10
*** akshai has joined #openstack-cinder22:12
*** mriedem has quit IRC22:13
*** jwcroppe has joined #openstack-cinder22:14
*** sdake has quit IRC22:16
*** jwcroppe has quit IRC22:19
*** rcernin has quit IRC22:25
*** gcb has joined #openstack-cinder22:27
*** salv-orlando has joined #openstack-cinder22:28
*** salv-orl_ has quit IRC22:28
*** oak has quit IRC22:31
*** oak has joined #openstack-cinder22:32
*** kfarr has quit IRC22:33
*** sdake has joined #openstack-cinder22:34
*** akshai has quit IRC22:36
*** david-lyle has joined #openstack-cinder22:36
*** asselin__ has quit IRC22:42
*** lprice has joined #openstack-cinder22:46
*** lprice has quit IRC22:51
*** salv-orlando has quit IRC22:55
*** alonma has joined #openstack-cinder22:55
*** tongli has quit IRC22:57
*** alyson_ has quit IRC22:59
*** alonma has quit IRC23:00
*** Suyi has quit IRC23:01
*** asselin__ has joined #openstack-cinder23:02
*** lprice has joined #openstack-cinder23:14
*** tpsilva has quit IRC23:14
*** diablo_rojo has quit IRC23:19
*** mtanino has quit IRC23:22
*** rajinir has quit IRC23:25
*** laughterwym has joined #openstack-cinder23:27
*** kaisers1 has quit IRC23:29
*** kaisers has quit IRC23:29
*** markvoelker has quit IRC23:29
*** kaisers has joined #openstack-cinder23:29
*** kaisers1 has joined #openstack-cinder23:30
*** laughterwym has quit IRC23:31
openstackgerritAbhilash Divakaran proposed openstack/cinder: Fix for Tegile driver failing to establish volume connection  https://review.openstack.org/38134123:34
*** jamielennox is now known as jamielennox|away23:35
*** merooney has quit IRC23:41
*** david-lyle has quit IRC23:44
*** cj has joined #openstack-cinder23:46
cjo/23:46
*** jamielennox|away is now known as jamielennox23:48
*** kaisers_ has joined #openstack-cinder23:49
*** kaisers_ has quit IRC23:54
