Wednesday, 2018-10-03

KeithMnemonicsmcginnis: jungleboyj: thanks i think those ScaleIO are ready for +A00:08
*** markvoelker has quit IRC00:17
*** markvoelker has joined #openstack-cinder00:17
*** markvoelker has quit IRC00:21
*** erlon has quit IRC00:35
*** raunak12 has quit IRC00:58
*** mhen has quit IRC01:05
*** mhen has joined #openstack-cinder01:07
*** ianychoi has quit IRC01:17
*** moshele has joined #openstack-cinder01:26
*** ianychoi has joined #openstack-cinder01:28
*** Dinesh_Bhor has joined #openstack-cinder01:31
*** Dinesh_Bhor has quit IRC01:38
*** ianychoi has quit IRC01:42
*** Dinesh_Bhor has joined #openstack-cinder01:47
openstackgerritXianjin Shao proposed openstack/cinder master: This adds the missing 'is_public' parameter for volume type details  https://review.openstack.org/60711501:54
*** dave-mccowan has joined #openstack-cinder01:56
*** markvoelker has joined #openstack-cinder02:18
*** ianychoi has joined #openstack-cinder02:19
*** moshele has quit IRC02:30
*** markvoelker has quit IRC02:51
*** Dinesh_Bhor has quit IRC03:00
*** Dinesh_Bhor has joined #openstack-cinder03:01
*** dave-mccowan has quit IRC03:08
*** raunak12 has joined #openstack-cinder03:27
openstackgerritMerged openstack/os-brick master: Remove meanless debug log  https://review.openstack.org/60538003:43
*** markvoelker has joined #openstack-cinder03:48
*** penick has quit IRC03:52
*** Dinesh_Bhor has quit IRC04:01
*** Dinesh_Bhor has joined #openstack-cinder04:05
*** Dinesh_Bhor has quit IRC04:20
*** markvoelker has quit IRC04:21
openstackgerritRajat Dhasmana proposed openstack/cinder master: Fix: storage_pools key in Huawei Driver  https://review.openstack.org/60729904:23
*** gkadam has joined #openstack-cinder04:32
*** gkadam has quit IRC04:32
*** raunak12 has quit IRC04:49
*** Dinesh_Bhor has joined #openstack-cinder04:58
*** Bhujay has joined #openstack-cinder05:01
*** Bhujay has quit IRC05:01
*** Bhujay has joined #openstack-cinder05:02
*** markvoelker has joined #openstack-cinder05:18
*** markvoelker has quit IRC05:52
*** moshele has joined #openstack-cinder06:00
*** vivsoni has joined #openstack-cinder06:10
*** Dinesh_Bhor has quit IRC06:11
*** dims has quit IRC06:24
*** e0ne has joined #openstack-cinder06:24
*** dims has joined #openstack-cinder06:26
*** raghavendrat has joined #openstack-cinder06:29
raghavendrathello06:29
*** dims has quit IRC06:34
*** Dinesh_Bhor has joined #openstack-cinder06:35
*** vivsoni has quit IRC06:35
*** dims has joined #openstack-cinder06:35
*** dpawlik has joined #openstack-cinder06:39
*** pcaruana has joined #openstack-cinder06:51
whoami-rajatraghavendrat: Hi, any queries?06:53
*** vivsoni has joined #openstack-cinder06:53
*** sdin has joined #openstack-cinder07:10
*** raghavendrat has left #openstack-cinder07:10
*** Dinesh_Bhor has quit IRC07:19
*** jroll has quit IRC07:19
*** logan- has quit IRC07:19
*** sdinescu has quit IRC07:19
*** pocketprotector has quit IRC07:19
*** odyssey4me has quit IRC07:19
*** amoralej|off is now known as amoralej07:27
*** jroll has joined #openstack-cinder07:32
*** logan- has joined #openstack-cinder07:32
*** odyssey4me has joined #openstack-cinder07:32
*** gkadam has joined #openstack-cinder07:44
*** rcernin has quit IRC07:55
*** vivsoni has quit IRC08:07
*** jroll has quit IRC08:13
*** logan- has quit IRC08:13
*** odyssey4me has quit IRC08:13
*** markvoelker has joined #openstack-cinder08:18
*** vivsoni has joined #openstack-cinder08:18
*** jroll has joined #openstack-cinder08:27
*** logan- has joined #openstack-cinder08:27
*** odyssey4me has joined #openstack-cinder08:27
*** Emine has joined #openstack-cinder08:41
openstackgerritIvaylo Mitev proposed openstack/cinder master: Implement volume capacity stats for VMware  https://review.openstack.org/60576208:41
*** jroll has quit IRC08:51
*** logan- has quit IRC08:51
*** odyssey4me has quit IRC08:51
*** markvoelker has quit IRC08:51
*** jroll has joined #openstack-cinder09:05
*** logan- has joined #openstack-cinder09:05
*** odyssey4me has joined #openstack-cinder09:05
*** Emine has quit IRC09:12
*** Emine has joined #openstack-cinder09:25
*** sdin has quit IRC09:52
*** sdin has joined #openstack-cinder09:52
*** sdin has quit IRC09:52
*** sdinescu has joined #openstack-cinder09:53
*** Emine has quit IRC10:07
*** kukacz has quit IRC10:12
*** kukacz has joined #openstack-cinder10:13
*** gkadam has quit IRC10:24
*** luizbag has joined #openstack-cinder10:31
*** luizbag has quit IRC10:35
*** luizbag has joined #openstack-cinder10:36
*** mvkr has quit IRC10:48
*** markvoelker has joined #openstack-cinder10:49
*** ganso has joined #openstack-cinder10:52
*** mvkr has joined #openstack-cinder11:02
*** erlon has joined #openstack-cinder11:03
*** moshele has quit IRC11:03
*** markvoelker has quit IRC11:22
openstackgerritHelen Walsh proposed openstack/cinder master: VMAX Driver - Failover Unisphere Support  https://review.openstack.org/57040111:23
*** gary_perkins has joined #openstack-cinder11:30
openstackgerritRajat Dhasmana proposed openstack/cinder master: API-REF:os-quota-sets v2 API reference has the wrong parameters  https://review.openstack.org/60753311:33
*** lemko has joined #openstack-cinder11:35
lemkoHi. I'm using queens and iscsi with multipath (IBM storwize is the backend). Creating volumes from images is extremely slow. Same as detaching volumes. Any idea what might be wrong? I'm talking about something like 10 mins to detach a volume from an instance.11:43
*** abishop_ has joined #openstack-cinder11:45
lemkoCould it be due to old open-iscsi package?11:46
*** e0ne has quit IRC12:04
*** Emine has joined #openstack-cinder12:14
*** moshele has joined #openstack-cinder12:15
*** e0ne has joined #openstack-cinder12:16
*** amoralej is now known as amoralej|lunch12:18
*** dave-mccowan has joined #openstack-cinder12:19
smcginnislemko: There should be something in the logs at least showing the timestamps of different operations. You might be able to tell from there where the delay is.12:36
smcginnislemko: It definitely should not take that long.12:36
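[A quick way to do what smcginnis suggests is to follow a single request ID through the logs and compare timestamps; for example (log paths are deployment-specific, and the request ID is a placeholder):

    grep 'req-<request-id>' /var/log/cinder/cinder-volume.log
    grep 'req-<request-id>' /var/log/nova/nova-compute.log

The gaps between consecutive timestamps for the same request usually point at the slow step.]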
*** rosmaita has joined #openstack-cinder12:38
*** rosmaita has quit IRC12:45
openstackgerritSean McGinnis proposed openstack/cinder master: Add missing 'is_public' volume type parameter  https://review.openstack.org/60711512:46
*** rosmaita has joined #openstack-cinder12:46
*** moshele has quit IRC12:53
openstackgerritSean McGinnis proposed openstack/cinder master: Add missing 'is_public' volume type parameter  https://review.openstack.org/60711513:09
*** mvkr has quit IRC13:23
*** eharney has joined #openstack-cinder13:23
*** dustins has joined #openstack-cinder13:32
*** amoralej|lunch is now known as aoralej13:32
*** mriedem has joined #openstack-cinder13:37
*** aoralej is now known as amoralej13:41
*** mvkr has joined #openstack-cinder13:50
openstackgerritJay Rubenstein proposed openstack/cinder master: During volume extend scaling io by volume size did not occur  https://review.openstack.org/60695913:52
*** ThomasWhite has quit IRC14:03
*** woojay has quit IRC14:04
jungleboyjsmcginnis:  If you are bored do you want to take a look at this:  https://review.openstack.org/60705814:05
jungleboyjI am going to have to rebase the second one, I realized.  Probably should have chained them.14:05
jungleboyjOnce this goes in I can propose the third:  https://review.openstack.org/60706214:06
*** Bhujay has quit IRC14:06
smcginnisjungleboyj: Was there a follow up to my comments?14:06
*** hoangcx has quit IRC14:06
*** Bhujay has joined #openstack-cinder14:07
jungleboyjsmcginnis:  Doh, did I miss comments?14:07
smcginnisApparently. :)14:07
jungleboyj:-p14:08
jungleboyjSorry about that.  Will address those now.14:08
jungleboyjI had been watching the patch and totally missed the -1.14:09
smcginnisJust minor things, then I think we should get those through right away.14:09
openstackgerritJay Bryant proposed openstack/cinder master: Remove the ITRI DISCO driver  https://review.openstack.org/60705814:11
openstackgerritJay Bryant proposed openstack/cinder master: Remove the HGST Flash Storage Driver  https://review.openstack.org/60706214:14
jungleboyjCool.  Thanks.14:14
*** Bhujay has quit IRC14:16
lemkothanks for your answer smcginnis14:18
lemkoHere's the log : http://paste.openstack.org/show/731376/14:18
smcginnislemko: That doesn't appear to be a detach. Looks like it's creating a volume from image, and the image conversion and writing process is taking some time to complete.14:19
lemkoImage download and conversion have a normal speed. But it took 11 minutes to do the rest. the log is about creating a volume from cirrus.14:20
lemkoand with detach it's basically the same14:20
lemkoit seems to take ages to detach the volume14:20
lemkocirros* :)14:21
smcginnislemko: It would be interesting to take a look at what is happening with the detach, but for the create from image, after it converts the image it needs to actually write that data out to the new volume. That is somewhat dependent on your data path speed.14:21
smcginnisOne thing that can be done to help mitigate this is to use image caching.14:21
*** hoangcx has joined #openstack-cinder14:22
smcginnislemko: Might want to try setting this up - https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html14:22
lemkoI'm a bit surprised that writing an 11 MB image to a 1 or 2 GB volume would take that much time14:22
lemkobtw I'm using openstack-helm that I tunnel to support iscsi storage.14:22
smcginnisMe too, that's a really long time for such a small amount of data.14:22
smcginnisNot sure I understand that last statement.14:22
lemkoMy openstack is containerized. But I think the container layer doesn't matter with iscsi14:23
smcginnisDepending how the networking is set up, it could cause that iSCSI traffic to have very poor performance.14:24
smcginnisBut all of that should not affect detach, only that initial volume create.14:24
lemkoI've done some tests like unmapping the volume from the host with my storage interface after a short time (1 min) and then checking the volume manually.14:24
lemkothe volume is fine14:24
lemkoSo to me the volume has the good data14:25
lemkoand this from the beginning14:25
lemkothe rest of the time is cinder-volume doing nothing14:25
lemkoand finally unmapping it14:25
lemkoAlso I've tried mounting iscsi volumes in my container, and it's fine.14:26
lemkoSo I'm really wondering what I'm missing14:26
smcginnisDetach has nothing to do with your data on the volume. Other than whatever nova needs to do to prepare for the device to go away.14:26
smcginnisAnd mount/unmount usually isn't a very long operation. Are you seeing delays there too?14:26
openstackgerritJay Rubenstein proposed openstack/cinder master: SF: Handle qos values on extend volume  https://review.openstack.org/60695914:27
*** dpawlik has quit IRC14:27
smcginnislemko: Anyway, I would first set up image caching (linked to above) and see if that improves things for you. Then if the initial create is faster, we can see what other delays are left.14:29
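[For anyone following along, the image-volume cache smcginnis links above is enabled per backend in cinder.conf; a minimal sketch (the backend section name, project/user IDs, and size limits here are illustrative, not from this discussion):

    [DEFAULT]
    cinder_internal_tenant_project_id = <project-uuid>
    cinder_internal_tenant_user_id = <user-uuid>

    [storwize-backend]
    image_volume_cache_enabled = True
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50

With this in place the first create-from-image still pays the full download/convert/write cost, but later creates from the same image can be cloned from the cached image volume on the backend.]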
lemkoAttachment of volume is fast14:31
lemkoand my problem is typically the same when I attach a volume to an instance and then detach it.14:32
lemkoI'll try to find some log14:32
*** woojay has joined #openstack-cinder14:33
openstackgerritMiriam Yumi proposed openstack/cinder stable/rocky: NetApp SolidFire: Fix force_detach  https://review.openstack.org/60759514:35
*** raunak12 has joined #openstack-cinder14:40
lemkosmcginnis http://paste.openstack.org/show/731378/ and then it took ten more minutes to have it detached14:45
lemkohttp://paste.openstack.org/show/731381/14:47
smcginnisWeird it's double logging everything.14:48
smcginnisBut that message is that it's starting the detach. Nova then needs to do several steps after that to remove the volume, flush IO, and tear down the local device path.14:49
smcginnisSo some or all of those steps are taking awhile to complete.14:49
lemkoCould it be linked to multipath.conf?14:50
*** KeithMnemonic has quit IRC14:51
lemkoor iscsi package version?14:51
smcginnislemko: Hmm, potentially. Multipath handling can cause some more work to happen.14:52
smcginnislemko: What version of the services are you running?14:52
lemkoopen-iscsi and multipath?14:52
lemkoor you mean openstack services?14:52
smcginnisSorry, OpenStack.14:52
lemkoQueens14:52
smcginnisI know there have been various improvements for multipath, device scanning, and other optimizations in the os-brick library. But I think most of those came in rocky.14:55
lemkobecause open-iscsi package for ubuntu is kinda old14:55
smcginnislemko: You could look in syslog to see if there are any device removal or iscsi messages during that period after the nova message that it's detaching the volume and when you see things actually call into Cinder to do the detach.14:55
smcginnislemko: Yeah, seeing if you can upgrade open-iscsi *shouldn't* hurt and might bring some improvements.14:56
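[As a rough illustration of the syslog check suggested above (file paths and daemon names vary by distro), something like:

    # look for iscsid / multipathd / device-removal chatter during the slow detach window
    grep -E 'iscsid|multipathd|sd[a-z]+' /var/log/syslog | less
    # or, on systemd hosts
    journalctl -u multipathd -u iscsid --since "10 minutes ago"

can show whether the time is going into path teardown rather than into Cinder itself.]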
*** Emine has quit IRC14:58
*** eharney has quit IRC15:09
*** raunak12 has quit IRC15:10
*** dpawlik has joined #openstack-cinder15:19
openstackgerritHelen Walsh proposed openstack/cinder master: VMAX docs - Restructure of content  https://review.openstack.org/60761515:22
openstackgerritHelen Walsh proposed openstack/cinder master: VMAX docs - Replace serial_number  https://review.openstack.org/60761515:24
*** dpawlik has quit IRC15:24
*** eharney has joined #openstack-cinder15:24
*** pcaruana has quit IRC15:33
*** Luzi has joined #openstack-cinder15:47
smcginnisimacdonn: I saw you pinged me last night. Need something yet?15:48
*** raunak12 has joined #openstack-cinder15:48
openstackgerritMerged openstack/os-brick stable/rocky: Succeed on iSCSI detach when path just went down  https://review.openstack.org/60704115:50
openstackgerritMerged openstack/python-cinderclient master: refactor the getid method base.py  https://review.openstack.org/58969215:51
*** penick has joined #openstack-cinder15:56
*** _erlon_ has joined #openstack-cinder15:59
*** tpsilva has joined #openstack-cinder16:08
openstackgerritSean McGinnis proposed openstack/cinder master: Update docs landing page to follow guideline  https://review.openstack.org/60762716:12
*** hoangcx has quit IRC16:12
*** hoangcx has joined #openstack-cinder16:13
smcginnistbarron: Any interest/availability to participate in something like this if it comes together? https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning16:20
smcginnistbarron: Mostly just making sure you are aware of it. Maybe a joint cinder/manila event?16:20
tbarronsmcginnis: I'd likely participate and maybe some of the other local manila folks too.  We have enough remotees in manila that we'll probably go on and do a virtual mid-cycle as well.16:22
smcginnistbarron: Makes sense.16:22
tbarronsmcginnis: you'll get eric to go if you have it in colorado or next to a pool hall16:24
smcginnis:)16:27
openstackgerritElise Gafford proposed openstack/os-brick stable/queens: Succeed on iSCSI detach when path just went down  https://review.openstack.org/60763216:28
*** dims has quit IRC16:28
openstackgerritMerged openstack/python-brick-cinderclient-ext master: Removed older version of python added 3.5  https://review.openstack.org/60639116:32
*** dims_ has joined #openstack-cinder16:35
*** e0ne has quit IRC16:47
*** asbishop has joined #openstack-cinder16:55
*** abishop_ has quit IRC16:58
*** Luzi has quit IRC17:02
imacdonnsmcginnis: re. ping last night ... I've been looking into online data migrations ... I got past the immediate hurdle, but there are some issues here ... looks like cinder uses the same mechanism as nova, and has a couple of the same bugs17:02
imacdonnis there a designated database guru in the cinder team?17:04
smcginnisHmm, not sure really.17:04
imacdonnk, well, I guess I'll just open bugs then ... one of the issues is a bit tricky, and it seems that nova hasn't quite figured out how to address it yet17:06
imacdonnissue is that, when a migration fails for some reason, the exception gets swallowed, a fairly useless warning is logged, and it moves on (the user may or may not even be aware) - https://review.openstack.org/#/c/600085/17:08
smcginnisThat's not good.17:11
imacdonnmriedem: any update on that from the nova side?17:14
jungleboyjsmcginnis:  I was just wondering about a DB guru too given the issues we were talking about during the meeting.17:17
jungleboyjimacdonn:  You should sit in on our meetings if you can.17:17
imacdonnjungleboyj: I was in "fly on the wall mode" for the last couple ;)17:17
jungleboyjimacdonn:  That is good.17:31
openstackgerritRajat Dhasmana proposed openstack/cinder master: API-REF:os-quota-sets v2 API reference has the wrong parameters  https://review.openstack.org/60753317:36
*** mvkr has quit IRC17:39
openstackgerritMerged openstack/python-cinderclient master: Fix encoding of query parameters  https://review.openstack.org/60248617:54
openstackgerritMerged openstack/os-brick stable/rocky: Ignore volume disconnect if it is not connected  https://review.openstack.org/60457817:55
*** asbishop has quit IRC17:56
openstackgerritErlon R. Cruz proposed openstack/cinder stable/queens: NetApp SolidFire: Fix CG snapshot deletion  https://review.openstack.org/60766217:58
*** asbishop has joined #openstack-cinder17:58
*** e0ne has joined #openstack-cinder18:02
*** raunak12_ has joined #openstack-cinder18:05
*** raunak12 has quit IRC18:07
*** raunak12_ is now known as raunak1218:07
openstackgerritErlon R. Cruz proposed openstack/cinder stable/pike: NetApp SolidFire: Fix CG snapshot deletion  https://review.openstack.org/60766518:08
mriedemimacdonn: on my patch? no, i haven't updated it yet.18:18
imacdonnmriedem: On the approach, I guess ... I hacked a change into cinder's version to hopefully log the exception (but still move on) ... so at least I have a chance of being able to diagnose the failure18:20
imacdonnmriedem: but that's maybe only half of the problem ... other half is how to catch the situation that "something went a bit wrong"18:20
*** imacdonn has quit IRC18:21
*** imacdonn has joined #openstack-cinder18:21
imacdonnguh, http proxy ditched my connection ... did I miss anything ?18:22
jungleboyjimacdonn:  Every cell phone in the US just simultaneously blew up with a Presidential Text.18:24
*** pcaruana has joined #openstack-cinder18:24
*** itlinux has joined #openstack-cinder18:24
jungleboyj#TimeToSolveRealProblems18:24
imacdonnjungleboyj: hah, yeah, I knew that was happening .. but I don't think it's related .. this stupid proxy dumps me a couple of times every day ... suspect they have some sort of max session lifetime, to try to manage load18:25
jungleboyj:-)  I didn't think it was related.  You asked what you missed though.  ;-)18:25
*** mvkr has joined #openstack-cinder18:34
*** e0ne has quit IRC18:45
*** e0ne has joined #openstack-cinder18:48
*** pcaruana has quit IRC18:50
*** lemko has quit IRC18:58
*** amoralej is now known as amoralej|off19:03
*** abishop_ has joined #openstack-cinder19:08
*** e0ne has quit IRC19:10
*** asbishop has quit IRC19:10
openstackgerritMerged openstack/cinder master: Add missing 'is_public' volume type parameter  https://review.openstack.org/60711519:20
openstackgerritMerged openstack/cinder-tempest-plugin master: Fix consistency groups test credentials  https://review.openstack.org/59972019:20
*** spartakos has joined #openstack-cinder19:24
imacdonnquestion for anyone, in lieu of a DB guru :P .... is it expected/required for cinder services to be running when online_data_migrations are applied? That does not seem to be the case for nova, but I'm attempting to do it on cinder, and getting a MessagingTimeout (as if it's trying to talk to something)19:36
smcginnisI thought so. Hence the name "online_".19:37
imacdonnwell, I was led to interpret that as "these *CAN* be done online", not that they HAVE TO be19:37
imacdonn(for nova, at least)19:38
imacdonnthis is where it seems to need services online: https://github.com/openstack/cinder/blob/master/cinder/cmd/manage.py#L103-L10919:50
imacdonnnow I wonder if it needs the cinder-volume services to be online .. it's going to make life interesting, if so19:51
smcginnisAh, right. There was this because of that - https://review.openstack.org/#/c/601684/19:52
imacdonnaha!19:52
imacdonnso .. that attempts to prevent the problem in the future, but is there a fix for the one that already snuck in?19:54
imacdonnthat is the one I tripped on, FWIW: ERROR cinder.cmd.manage [req-e335c38e-83ca-4e09-9318-4f9c9a7f93b9 - - - - -] Error attempting to run shared_targets_online_data_migration: MessagingTimeout: Timed out waiting for a reply to message ID fb8a5301e73d499bb9c0e706107dffbb19:56
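[For context, the migrations being discussed are the ones driven by cinder-manage after the schema upgrade; a typical offline-style upgrade sequence looks roughly like this (flag spellings vary a bit between releases, so treat this as a sketch):

    cinder-manage db sync                      # schema migrations
    cinder-manage db online_data_migrations    # data migrations, repeat until nothing is left to do

It is the second step that fails here with the MessagingTimeout, because shared_targets_online_data_migration needs to reach a running cinder-volume service over RPC.]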
*** luizbag has quit IRC20:01
openstackgerritSean McGinnis proposed openstack/cinder master: Update unit test debug instructions  https://review.openstack.org/60770020:07
smcginnisimacdonn: No, no fix that I am aware of.20:07
smcginnisimacdonn: But you're welcome to try to tackle that. ;)20:07
imacdonnyeah. hmmm. I either tackle it, or tackle restructuring my deployment/upgrade process :/20:08
smcginnisSounds like a win-win.20:09
smcginnis:)20:09
imacdonnhah20:09
imacdonnfirst I'd have to understand what the migration is trying to accomplish ... seems like an "after lunch" sort of thing20:09
smcginnisOr after morning coffee.20:10
imacdonnthat may have a better chance of success20:10
mriedemi have a semi interesting question....20:15
imacdonnuh oh20:15
mriedemlet's say i have a volume attached to a server with an attachment record created for that volume,20:15
jungleboyjGo on.20:15
mriedemthen i logically detach the volume from the server and replace it with a new volume, but the old attachment is still there,20:15
mriedemthen when i update the attachment record with the host connector, that attachment record is for the old volume, not the new one20:16
mriedemthat would be....bad and stuff yeah?20:16
jungleboyjWhat do you mean 'Logically detach' ?20:16
*** spartakos has quit IRC20:17
mriedemsec20:18
mriedemhttps://review.openstack.org/#/c/600628/5/specs/stein/approved/detach-boot-volume.rst@10320:21
mriedemthis is the spec to allow swapping root volume on a server while it's shelved offloaded20:21
mriedemwhen we shelve offload an instance, we create an empty attachment record for the volume(s) attached to the server so they remain 'reserved' while the instance is shelved,20:22
mriedemand then on unshelve, we spawn the guest on a new compute host and update all of those attachments with new host connectors20:22
mriedemin this spec, they propose being able to detach the root volume on the instance and then attach a new root volume - but we have to do something with those attachment records20:22
smcginnisIn that case, there's no need to reserve the volume since you probably should just delete the original boot volume. Then when the new boot volume is created, just attach like normal.20:23
mriedemright i'm thinking if you detach the reserved root volume while the instance is shelved, we delete the attachment record for that volume as well,20:24
mriedemthen if you attach a new root volume to the shelved instance, we create a new empty attachment to reserve the volume before you unshelve it20:24
jungleboyjmriedem:  I am thinking the same thing.20:24
smcginnismriedem: Yeah, that sounds logical.20:24
jungleboyjYeah.  That makes sense.20:24
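[For readers not familiar with the attachment records being discussed: since the 3.27 microversion Cinder exposes them directly, and the flow mriedem describes maps roughly onto these calls (IDs and connector values are placeholders; nova normally makes these calls itself):

    # reserve: create an attachment with no connector (what shelve offload leaves behind)
    cinder --os-volume-api-version 3.27 attachment-create <volume-id> <instance-uuid>
    # unshelve: update the reserved attachment with the new host's connector details
    cinder --os-volume-api-version 3.27 attachment-update <attachment-id> --initiator <iqn> --ip <ip> --host <host>
    # the proposal above: detaching the root volume of a shelved server just deletes its attachment
    cinder --os-volume-api-version 3.27 attachment-delete <attachment-id>

So "detach while shelved" becomes attachment-delete on the old root volume and a fresh attachment-create on the new one.]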
mriedemthanks spock :)20:25
smcginnismriedem: Live long and prosper.20:26
jungleboyjhttps://gph.is/1DzWYFR20:26
jungleboyjsmcginnis:  jinx20:26
smcginnis\/20:26
jungleboyjThis one is good too:  https://gph.is/1DzWYFR20:27
jungleboyjHmm, the other unsupported driver right now is the datacore and it looks like they have been running.20:29
jungleboyjTheir FC job still hasn't been successful in like 225 days.  Maybe need to e-mail them again.20:30
smcginnisMaybe remove the one and not the other?20:30
jungleboyjI suppose.20:30
jungleboyjLet me ping them first and see if there is a reason for it.20:31
jungleboyjHuawei hasn't run in 113 days.20:42
jungleboyjAnd looks like there might be something wrong with the Storewize jobs.20:43
smcginnisUnsupported!20:46
jungleboyjSent a note to Isaac to see what he says.  What is up with Huawei though?  Should I mark that unsupported?20:47
smcginnisI've pinged the internal team a few times now. Might be about time to do that. They are definitely past our policy on CI.20:47
mriedemhuawei is best way20:48
*** _erlon_ has quit IRC20:48
smcginnisThe best ever. We're hooje.20:48
jungleboyjOk.  Will push up that patch and see what happens.20:49
* jungleboyj giggles20:50
jungleboyjjgriffith:  No Solidfire CI in 95 days.20:50
smcginnisHe don't care20:50
jungleboyjOh, that is right.  He is at RedHat now.20:51
smcginnisHmm, how many days ago did jgriffith leave NetApp? 95?20:51
jungleboyjerlon:  You have any visibility into that at NetApp for Solidfire?20:51
smcginnisOh, there's a different CI running it now. I remember being confused until I noticed that.20:51
jungleboyjThat sounds suspiciously right.20:51
smcginnisNetApp Solidfire CI or something.20:51
gouthamrI think they forgot to update the wiki :P20:51
gouthamri reset the wiki to jgriffith's CI at some point?20:52
smcginnisjungleboyj: I've updated my report to include NetApp SolidFire CI20:53
jungleboyjsmcginnis:  Ok.  So that was the old data?20:53
smcginnisIf it still just said "Solidfire CI"20:53
erlonjungleboyj, we actually have changed the CI name20:53
jungleboyjsmcginnis:  Yeah.  It did.  There are a few others in here that I think need to be removed.20:53
smcginnisProbably. I haven't touched it for awhile.20:54
erlonjungleboyj, solidfire CI has been reporting consistently in the last 1/2 months20:54
* jungleboyj sighs Both our FCZM CIs are in a bad state.20:54
jungleboyjerlon:  Ok.   Sounds like I just need sean to update the report.20:54
*** spartakos has joined #openstack-cinder20:54
*** eharney has quit IRC20:57
openstackgerritMerged openstack/cinder stable/queens: nimble storage: retype support  https://review.openstack.org/60733120:57
openstackgerritMerged openstack/cinder master: VMAX docs - Replace serial_number  https://review.openstack.org/60761520:57
jungleboyjsmcginnis:  IBM GPFS CI, Infortrend CI, SolidFire CI20:58
jungleboyjThose look like they should be removed from the report.20:58
smcginnisCool, gone.20:58
*** asbishop has joined #openstack-cinder20:58
jungleboyjGoing to send nasty grams to Cisco and Brocade again and then propose a few 'Unsupported' patches.20:58
smcginnisWe should probably talk about what we want to do with FCZM in the event we no longer have any drivers for it.20:59
* jungleboyj shudders21:00
jungleboyjIt is a notable selling point with a number of users I have talked to.21:00
smcginnisI love removing code, but that would suck to lose that.21:00
jungleboyjWonder what a FC Switch costs on ebay?21:00
jungleboyjI have a server now.  Just need a couple of storage backends and switches.21:01
*** abishop_ has quit IRC21:01
* jungleboyj probably doesn't make enough to pay the electric bill.21:01
smcginnisWhen I went through with Angela, there were only like 12 tests that actually needed to be run to test FC connectivity. Theoretically, it should be easy to set something up if you have a switch, HBAs, and cables.21:01
jungleboyjhttps://www.ebay.com/itm/Brocade-Silkworm-3850-16-Port-Fibre-Channel-Switch-works/253766054512?hash=item3b15a2b270:g:wBcAAOSwf0hbUlHp:sc:FedExHomeDelivery!55904!US!-121:03
jungleboyjDon't need storage behind it?21:03
jungleboyjhttps://www.ebay.com/itm/Cisco-Fibre-Channel-Multilayer-Fabric-Switch-with-24-SFPs/283092392212?hash=item41e99f1914:g:mg8AAOSwtbBbamAr21:04
jungleboyjWould have to put HBAs in the x3650 though.21:05
* jungleboyj can't believe I am googling this.21:05
*** erlon has quit IRC21:06
jungleboyjHmm, the HBAs are pretty cheap.21:07
* jungleboyj should be e-mailing Cisco and Brocade instead.21:07
*** spartakos has quit IRC21:08
jungleboyjThe topic is on the MidCycle agenda21:08
openstackgerritMiriam Yumi proposed openstack/cinder stable/pike: NetApp SolidFire: Fix force_detach  https://review.openstack.org/59982121:13
imacdonnsmcginnis jungleboyj: (now that it's after lunch) ... it seems like the implementation doesn't match the spec here: https://specs.openstack.org/openstack/cinder-specs/specs/queens/add-shared-targets-to-volume-ref.html21:25
*** spartakos has joined #openstack-cinder21:26
jungleboyjimacdonn:  That wouldn't surprise me.  We have that problem in a number of places.21:26
smcginnisHmm, could be changes were made over the course of reviewing the implementation that didn't get put back into the spec once it was finalized.21:26
imacdonneither way, it seems bad21:27
imacdonnif the spec was implemented, we wouldn't need the RPC dependency in the migration21:27
imacdonnis jgriffith around these days ?21:32
*** asbishop has quit IRC21:34
*** rosmaita has left #openstack-cinder21:35
jungleboyjimacdonn:  If you say his name 3 times he will show up.  ;-)21:36
* imacdonn is almost gullible enough21:37
jungleboyjimacdonn:  Only almost gullible?  :-(21:42
smcginnisjgriffith, jgriffith, jgriffith21:43
* imacdonn watches intently21:43
* jungleboyj holds my breath21:44
smcginnisLettin' me down man.21:44
jgriffithsmcginnis: :)21:47
imacdonnooooh21:47
smcginnis\o/21:47
jgriffithI was composing myself :)21:47
jungleboyj@!j21:47
smcginnis:)21:48
jungleboyj(._.) ~ ︵ ┻━┻21:48
jungleboyjimacdonn:  See.  You doubted me.21:48
jgriffithimacdonn: what specifically are you having issues with21:48
imacdonnjungleboyj: :)21:49
* jungleboyj makes note that pinging jgriffith on Twitter helps.21:49
jgriffithjungleboyj: twitter?21:49
jgriffithOh.. lookie there21:49
imacdonnjgriffith: I'm trying to do an offline upgrade, and the shared_targets_online_data_migration is failing because the cinder-volume service is not running21:49
jgriffithgeesh21:49
imacdonnjgriffith: per https://review.openstack.org/#/c/601684/ , that's a naughty migration21:49
imacdonnjgriffith: also, it doesn't match the spec21:49
* smcginnis pictures jgriffith running away21:50
jgriffithimacdonn: yeah... sigh; let me pull up the review comment that got me a -1 until I modified it21:50
jgriffithOh, no need it's in that link you just sent21:50
jungleboyjjgriffith: OMG, so it really did work.  Ha ha ha!21:51
jungleboyjk979250h21:51
jungleboyjAhhhh! Please ignore that.21:51
jgriffithI glance at IRC every once in a while thank you very much :)21:51
jgriffithlooks like a password to me21:52
jungleboyjNothing to see there.21:52
* imacdonn wonders what NSFW site it's for21:52
jungleboyjHey now!21:53
jgriffithimacdonn: so anyway... I'm not sure what I should do here on this21:53
imacdonnjgriffith: I'm not either ... was it determined that having the volume service handle it on startup (per the spec) was not suitable after all ?21:54
jgriffithWell, reviewers didn't seem to like that approach21:55
jgriffithI *should* have updated the spec, but that's kinda the bogus nature of our specs IMHO21:55
smcginnisWhat do reviewers know.21:55
*** dustins has quit IRC21:55
jgriffithYou can try and design everything under the sun, but when it comes time to actually review and merge, things tend to change21:56
*** dustins has joined #openstack-cinder21:56
jgriffithimacdonn: so to make sure I understand completely, the issue is that you want to be able to perform this upgrade/migration when the services are "down"21:56
imacdonnjgriffith: yes... especially since it's the cinder-volume service, which I kindof think of like an agent, not a control service21:57
jgriffithI believe we should be able to handle "both" ways, online DM requiring the services and some way to stash it until the service is available21:57
jgriffithimacdonn: yeah, the problem is you have the other camp that says "never take anything down to upgrade" "we must have live upgrades"21:58
jgriffithSatisfying both isn't trivial21:58
jgriffithIMO at least21:58
imacdonnjgriffith: I think the key message should be "your migrations should not REQUIRE services to be down"21:58
jgriffithI guess we could add another migration that handles this offline?21:58
jgriffithwait, now I'm confused21:59
jgriffithimacdonn: ahh.. you mean that's the key, not one or the other21:59
jgriffithsigh21:59
imacdonnright :)21:59
jgriffithdo you have an example of such a unicorn :)22:00
jgriffithI might be able to work on an update if I have something to work off of22:01
jgriffithbut given that until this migration we didn't have a live migration at all, I got to be first and make the first mistakes :)22:01
imacdonnany migration that only looks at the database should work whether the services are up or not22:02
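[A minimal sketch of the kind of DB-only migration imacdonn means, assuming the nova-style convention of a callable taking (context, max_count) and returning (rows still needing work, rows updated). The model and session helpers here are illustrative, not actual cinder code:

    def example_online_data_migration(context, max_count):
        # Touches only the database, so it runs the same whether or not any
        # cinder services are up.
        session = get_session()  # hypothetical sqlalchemy session helper
        with session.begin():
            query = session.query(Volume).filter(Volume.shared_targets.is_(None))
            found = query.count()
            done = 0
            for volume in query.limit(max_count):
                volume.shared_targets = True  # illustrative; the point above is the column already has a default
                done += 1
        return found, done

The actual shared_targets migration, by contrast, needs a running cinder-volume service to answer an RPC call, which is exactly the dependency being discussed.]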
openstackgerritJay Bryant proposed openstack/cinder master: Mark Veritas HyperScale Driver Unsupported  https://review.openstack.org/60772822:03
imacdonnproblem here, as you said (I think), is that the DB doesn't record the info we need for this one22:03
imacdonnas a side-note; doesn't cinder support multiple backends per host now? How would it handle one drive that uses shared targets and another that doesn't ?22:04
imacdonndriver*22:04
openstackgerritMerged openstack/cinder-tempest-plugin master: fix tox python3 overrides  https://review.openstack.org/60570622:05
jgriffithimacdonn: it's driver based, not node based22:06
imacdonnoh, I guess "host" includes the backend name22:07
jgriffithaffirm22:07
imacdonnbut wait ... how does RPC handle that ?22:07
jgriffithhost@driver22:07
jgriffithwe have some funky biz in there22:08
imacdonnheh22:08
jgriffithThat's super dooper old by the way (multi-backend)22:08
imacdonnanother side-note ... this migration is in cmd/manage.py ... all the others, and all of the nova ones, are under db, not cmd22:08
jgriffithBecause it's "mine" and it's "special" :)22:09
imacdonnmmmkay22:09
*** itlinux has quit IRC22:09
jgriffithseriously though, I'm unclear on the issue there22:10
jgriffithonline migrations seemed like a "manage" thing so that's where I stuffed it22:10
jgriffithNova has the same pattern FWIW22:11
imacdonnI guess "cmd" is the user-interface (CLI) part22:11
imacdonnthe nova migrations are under db22:11
jgriffithhttps://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L38722:11
imacdonnoh, I mean, the actual code ... the list is there, yeah22:12
jgriffithUmmmm so is the one in Cinder22:12
jgriffithhttps://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/migrate_repo/versions/115_add_shared_targets_to_volumes.py22:13
imacdonnthat's the schema migration22:13
imacdonnI'm talking about the online data migrations22:13
jgriffithimacdonn: I get the impression that regardless of what I say you'll have an unsatisfied response :)22:14
imacdonnNot trying to be argumentative ... just sharing my observations .... there does seem to be a lot of confusion around this whole online migrations topic, in nova too22:15
jgriffithimacdonn: yeah...  objects, rpc-versions, online-migrations and now placement22:15
jgriffithit's sort of how new things grow22:15
jgriffithSo I'm certainly not claiming (nor did I ever claim) to be an expert on online-migrations22:16
smcginnisWe did realize placement wouldn't work for our needs, so at least that's one less. For now.22:16
jgriffithsmcginnis: for now :)22:16
jgriffithIt'll be  back next year :)22:16
*** spartakos has quit IRC22:17
jungleboyjjgriffith:  I don't know.  I think it is gone.  At least for a while.22:17
jgriffithimacdonn: I see what you're saying WRT to the impl though22:20
imacdonnjgriffith: OK, so long as I'm right ... :-)22:20
*** Zer0Byte_ has joined #openstack-cinder22:20
Zer0Byte_hi22:20
Zer0Byte_i have a issue22:20
Zer0Byte_only when i migrate22:20
jgriffithZer0Byte_: me too, but it's all the time22:21
Zer0Byte_trying to migrate a volume from one backend storage to another22:21
imacdonnkinda strikes me that, if the volume service could handle this on startup, we wouldn't need a migration at all, since the column has a default value22:21
jungleboyjZer0Byte_:  Ok.22:21
Zer0Byte_BadRequest: Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-ed02c1c3-8905-4c4f-bee1-4789596bf6a5)22:21
* imacdonn has 99 issues22:21
Zer0Byte_i check the openrc file22:21
jgriffithimacdonn: TBF there are nova migrations implemented in cmd/manage.py as well.. but anyway22:21
Zer0Byte_and everything is right, i can do any other command on cinder22:22
Zer0Byte_except migrate; even from horizon it fails22:22
jgriffithor at least not implemented under db.xxxx; but regardless.22:22
jgriffithI guess I should try to figure out a way to fix this22:22
Zer0Byte_what do you mean with db.xxx?22:23
imacdonnjgriffith: OK .. that was really just a side-note ... but yeah, I think we need to figure something out about the RPC/service dependency22:23
jgriffithimacdonn: kind of a broader topic in general I think; might be worth raising at next week's meeting, and perhaps even follow up and coordinate via an ML posting.22:24
jgriffithsee if everybody is on the same page or not22:24
Zer0Byte_https://pastebin.com/YHWd7v8a22:25
Zer0Byte_this is the full stacktrace22:25
*** rcernin has joined #openstack-cinder22:25
imacdonnjgriffith: OK, yeah. Maybe jungleboyj can help facilitate the discussion22:26
*** ganso has quit IRC22:27
jungleboyjimacdonn:  Sure.  Can you add it to the meeting info?22:27
imacdonnjungleboyj: ok, will do22:28
Zer0Byte_i guess i have a idea22:29
Zer0Byte_if you see the client22:30
Zer0Byte_sorry the stacktrace you see is using novaclientv222:30
jungleboyjSomething weird in your environment there.22:30
Zer0Byte_i guess novaclient don't use domain_name right22:30
Zer0Byte_?22:30
Zer0Byte_why?22:30
jungleboyjIt more appears to be something wrong with communicating with keystone given where it is failing.22:31
imacdonnZer0Byte_: do you have a keystone_authtoken section in your cinder.conf ? Does it have project_domain_name and user_domain_name in it ?22:34
Zer0Byte_yeah have the default that juju set22:35
Zer0Byte_all of them have service domain set up22:35
Zer0Byte_project_domain_name = service_domain22:35
Zer0Byte_user_domain_name = service_domain22:35
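[For reference, the domain-aware settings imacdonn is asking about live in cinder.conf's [keystone_authtoken] section (identity v3); the domain values below mirror what Zer0Byte_ reports, the rest are illustrative:

    [keystone_authtoken]
    auth_type = password
    auth_url = http://<keystone-host>:5000/v3
    project_domain_name = service_domain
    user_domain_name = service_domain
    project_name = services          # illustrative
    username = cinder
    password = <secret>

The "Expecting to find domain in project" error generally means some client in the chain is still authenticating without domain information (e.g. via the v2.0 API), which matches the keystone v2 behaviour noted further down.]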
*** dustins has quit IRC22:36
openstackgerritMerged openstack/cinder master: Remove cinder-tox-compliance job  https://review.openstack.org/59319222:39
Zer0Byte_keystone show this error only22:40
Zer0Byte_(keystone.common.wsgi): 2018-10-03 22:39:41,268 WARNING Expecting to find domain in project. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error.22:40
Zer0Byte_if the22:42
Zer0Byte_volume is attached to nova instance send the error22:42
Zer0Byte_volume unattached22:43
Zer0Byte_works great22:43
Zer0Byte_looks like something in the nova client22:51
Zer0Byte_of cinder22:51
*** spartakos has joined #openstack-cinder23:01
Zer0Byte_well guys23:03
Zer0Byte_question23:03
Zer0Byte_is that23:03
Zer0Byte_when im trying to use the nova api with keystone v2 it doesn't work, throwing the same error23:04
Zer0Byte_now the question is23:04
Zer0Byte_is there a way to use keystone v2 if i have a domain23:04
*** Zer0Byte_ has quit IRC23:37
*** itlinux has joined #openstack-cinder23:48
*** mchlumsky has quit IRC23:49
*** mriedem has quit IRC23:51
