Friday, 2016-01-22

*** crose has quit IRC00:00
*** haomaiwang has quit IRC00:01
*** haomaiwang has joined #openstack-cinder00:01
*** martyturner has quit IRC00:01
*** angela-s has quit IRC00:04
*** daneyon_ has joined #openstack-cinder00:06
*** daneyon has quit IRC00:09
*** markvoelker has quit IRC00:11
*** daneyon_ has quit IRC00:11
*** laughterwym has joined #openstack-cinder00:12
*** alonma has joined #openstack-cinder00:13
*** laughter_ has joined #openstack-cinder00:16
*** zhangjn has joined #openstack-cinder00:17
*** laughterwym has quit IRC00:17
*** laughter_ has quit IRC00:18
*** markvoelker has joined #openstack-cinder00:18
*** alonma has quit IRC00:18
*** laughterwym has joined #openstack-cinder00:18
*** alonma has joined #openstack-cinder00:19
*** sombrafam has quit IRC00:19
*** sombrafam has joined #openstack-cinder00:20
*** alonma has quit IRC00:24
*** merooney has quit IRC00:24
*** laughterwym has quit IRC00:24
*** laughterwym has joined #openstack-cinder00:28
*** sombrafam has quit IRC00:30
*** sombrafam has joined #openstack-cinder00:31
<openstackgerrit> Merged openstack/os-brick: Add reno for release notes management  https://review.openstack.org/253207  00:31
*** zhangjn has quit IRC00:39
<openstackgerrit> XinXiaohui proposed openstack/cinder: Calculate virtual free capacity and notify  https://review.openstack.org/206923  00:40
*** zhangjn has joined #openstack-cinder00:41
*** zhangjn has quit IRC00:41
*** qeelee has joined #openstack-cinder00:43
*** salv-orlando has quit IRC00:43
*** Julien-zte has joined #openstack-cinder00:52
*** lixiaoy1 has joined #openstack-cinder00:54
*** winston-1 has quit IRC00:58
*** akerr has joined #openstack-cinder00:59
*** jidar has quit IRC01:00
*** qeelee has left #openstack-cinder01:00
*** haomaiwang has quit IRC01:01
*** haomaiwa_ has joined #openstack-cinder01:01
*** akerr_ has joined #openstack-cinder01:02
*** yhayashi has quit IRC01:02
*** daneyon has joined #openstack-cinder01:02
*** jidar has joined #openstack-cinder01:04
*** salv-orlando has joined #openstack-cinder01:04
*** akerr has quit IRC01:05
*** winston-d has joined #openstack-cinder01:06
*** cheneydc has joined #openstack-cinder01:06
*** akerr has joined #openstack-cinder01:06
*** akerr__ has joined #openstack-cinder01:08
*** markvoelker has quit IRC01:09
*** davechen has joined #openstack-cinder01:09
*** salv-orlando has quit IRC01:09
*** akerr_ has quit IRC01:09
*** yhayashi has joined #openstack-cinder01:10
*** leeantho has quit IRC01:10
*** mriedem has joined #openstack-cinder01:11
*** akerr has quit IRC01:11
*** pratap has quit IRC01:11
*** pratap__ has quit IRC01:11
*** wilson has joined #openstack-cinder01:16
*** wilson is now known as Guest4091801:16
*** challenge has joined #openstack-cinder01:16
<challenge> Hi guys  01:16
*** haomaiwa_ has quit IRC01:17
<challenge> I would like to use an NFS backend with Cinder to store my VMs  01:17
*** Guest40918 has quit IRC01:17
<challenge> I'm getting this message: "Failed to created Cinder secure environment indicator file: [Errno 13] Permission denied: '/var/lib/cinder/mnt/2ef7a6e750a5e5ba5a3aa6623bc8a01f/.cinderSecureEnvIndicator'"  01:17
*** alonma has joined #openstack-cinder01:20
*** wilson-liu has joined #openstack-cinder01:21
<wilson-liu> hi  01:22
<wilson-liu> The Jenkins gate always fails  01:22
<wilson-liu> My patch has failed the gate twice  01:22
<wilson-liu> I have checked that the failure has no relation to my patch  01:23
<wilson-liu> Should I just 'recheck no bug' again?  01:23
*** alonma has quit IRC01:24
*** cheneydc has quit IRC01:25
*** Lee1092 has joined #openstack-cinder01:25
*** cheneydc has joined #openstack-cinder01:25
*** davechen1 has joined #openstack-cinder01:28
*** davechen has quit IRC01:30
*** dslev has joined #openstack-cinder01:30
*** zhonghua-lee has joined #openstack-cinder01:30
<lixiaoy1> challenge: the NFS permissions are not set correctly  01:30
*** dslev_ has joined #openstack-cinder01:32
*** cheneydc has quit IRC01:34
*** hunters1094 has joined #openstack-cinder01:35
*** dslev has quit IRC01:35
<lixiaoy1> challenge: in a test env, you can set nas_secure_file_permissions to false, and nas_secure_file_operations to false  01:38
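For reference, the settings lixiaoy1 suggests live in the NFS backend's own section of cinder.conf; a minimal sketch, assuming a backend section named [nfs1] (both options are string-valued and default to 'auto'):

    [nfs1]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/nfs_shares
    # Disable the secure-environment checks (test environments only).
    # The default 'auto' probes the mount and writes the
    # .cinderSecureEnvIndicator file, which is what failed above.
    nas_secure_file_operations = false
    nas_secure_file_permissions = false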
*** apoorvad has quit IRC01:46
<challenge> lixiaoy1, thank you  01:49
<openstackgerrit> Mitsuhiro Tanino proposed openstack/cinder: Support cinder_img_volume_type in image metadata  https://review.openstack.org/258649  01:55
*** dslev_ has quit IRC01:55
*** chhavi has quit IRC01:59
*** dims__ has joined #openstack-cinder02:04
*** haomaiwang has joined #openstack-cinder02:05
*** dims_ has quit IRC02:06
*** mtanino has quit IRC02:07
*** mriedem has quit IRC02:12
*** cheneydc has joined #openstack-cinder02:15
*** alonma has joined #openstack-cinder02:20
*** davechen1 has quit IRC02:23
*** alonma has quit IRC02:24
*** alonma has joined #openstack-cinder02:26
*** laughterwym has quit IRC02:29
*** laughterwym has joined #openstack-cinder02:30
*** alonma has quit IRC02:30
*** laughter_ has joined #openstack-cinder02:31
*** alonma has joined #openstack-cinder02:34
*** [1]Thelo has joined #openstack-cinder02:34
*** laughterwym has quit IRC02:34
*** laughterwym has joined #openstack-cinder02:36
*** Thelo has quit IRC02:36
*** [1]Thelo is now known as Thelo02:36
*** alonma has quit IRC02:38
<openstackgerrit> XinXiaohui proposed openstack/cinder: Calculate virtual free capacity and notify  https://review.openstack.org/206923  02:39
*** laughter_ has quit IRC02:39
*** alonma has joined #openstack-cinder02:41
*** sombrafam has quit IRC02:41
*** sombrafam has joined #openstack-cinder02:42
*** alonma has quit IRC02:46
<openstackgerrit> Ryan McNair proposed openstack/cinder: Rework Storwize/SVC protocol to fix add_vdisk_copy  https://review.openstack.org/269895  02:46
*** sombrafam has quit IRC02:47
*** sombrafam has joined #openstack-cinder02:48
*** alonma has joined #openstack-cinder02:49
*** sheel has joined #openstack-cinder02:51
*** mudassirlatif has quit IRC02:51
*** sheel has quit IRC02:51
*** gcb has quit IRC02:53
*** alonma has quit IRC02:53
*** alonma has joined #openstack-cinder02:56
*** except has joined #openstack-cinder02:59
*** laughterwym has quit IRC02:59
*** except is now known as Guest376503:00
*** laughterwym has joined #openstack-cinder03:00
*** haomaiwang has quit IRC03:01
*** laughterwym has quit IRC03:01
*** alonma has quit IRC03:01
*** 77CAAARUP has joined #openstack-cinder03:01
*** laughterwym has joined #openstack-cinder03:01
*** challenge has quit IRC03:02
*** laughter_ has joined #openstack-cinder03:02
*** bardia has quit IRC03:03
*** laughterwym has quit IRC03:06
*** gcb has joined #openstack-cinder03:07
*** cknight has joined #openstack-cinder03:14
*** gcb has quit IRC03:18
*** links has joined #openstack-cinder03:26
*** wN has quit IRC03:40
*** alonma has joined #openstack-cinder03:40
*** links has quit IRC03:40
*** gcb has joined #openstack-cinder03:42
*** wN has joined #openstack-cinder03:44
*** wN has joined #openstack-cinder03:44
*** alonma has quit IRC03:45
*** bill_az has quit IRC03:46
*** Guest3765 has quit IRC03:46
*** alonma has joined #openstack-cinder03:47
*** links has joined #openstack-cinder03:48
*** alonma has quit IRC03:51
*** alonma has joined #openstack-cinder03:54
*** alonma has quit IRC03:59
*** cknight has quit IRC03:59
*** 77CAAARUP has quit IRC04:01
*** daneyon has quit IRC04:01
*** haomaiwa_ has joined #openstack-cinder04:01
<openstackgerrit> OpenStack Proposal Bot proposed openstack/cinder: Updated from global requirements  https://review.openstack.org/269316  04:02
*** links has quit IRC04:04
*** bardia has joined #openstack-cinder04:04
*** huanan_L has quit IRC04:05
*** laughterwym has joined #openstack-cinder04:06
*** dims__ has quit IRC04:08
*** vgridnev has joined #openstack-cinder04:08
*** bardia has quit IRC04:08
*** laughter_ has quit IRC04:10
*** shyama has joined #openstack-cinder04:13
*** akshai has joined #openstack-cinder04:13
*** links has joined #openstack-cinder04:16
*** laughterwym has quit IRC04:16
*** laughterwym has joined #openstack-cinder04:17
*** laughterwym has quit IRC04:18
*** gouthamr has joined #openstack-cinder04:18
*** laughterwym has joined #openstack-cinder04:18
*** akshai has quit IRC04:18
*** laughterwym has quit IRC04:18
*** gouthamr_ has joined #openstack-cinder04:19
*** coolsvap|away has joined #openstack-cinder04:21
*** coolsvap|away is now known as coolsvap04:22
*** gouthamr has quit IRC04:22
*** links has quit IRC04:27
*** links has joined #openstack-cinder04:28
*** bardia has joined #openstack-cinder04:39
*** links has quit IRC04:50
*** alonma has joined #openstack-cinder04:55
*** links has joined #openstack-cinder04:57
*** alonma has quit IRC04:59
*** haomaiwa_ has quit IRC05:01
*** hunters1094 has quit IRC05:01
*** esker has joined #openstack-cinder05:01
*** haomaiwang has joined #openstack-cinder05:01
*** alonma has joined #openstack-cinder05:02
*** alonma has quit IRC05:07
*** alonma has joined #openstack-cinder05:10
*** links has quit IRC05:13
*** haomaiwang has quit IRC05:13
*** akshai has joined #openstack-cinder05:14
*** alonma has quit IRC05:14
*** akshai has quit IRC05:19
*** links has joined #openstack-cinder05:21
*** sgotliv has joined #openstack-cinder05:23
*** vgridnev has quit IRC05:24
*** zhangjn has joined #openstack-cinder05:27
*** zhangjn has quit IRC05:28
*** chhavi has joined #openstack-cinder05:31
*** esker has quit IRC05:32
*** winston-d_ has joined #openstack-cinder05:34
*** alonma has joined #openstack-cinder05:35
*** links has quit IRC05:36
<openstackgerrit> Peter Wang proposed openstack/cinder: Enhance migration start verification  https://review.openstack.org/271143  05:37
*** links has joined #openstack-cinder05:37
*** gouthamr_ has quit IRC05:40
*** markvoelker has joined #openstack-cinder05:44
*** vgridnev has joined #openstack-cinder05:47
<openstackgerrit> Merged openstack/cinder: Check min config requirements for rbd driver  https://review.openstack.org/258910  05:47
*** alonma has quit IRC05:56
*** alonma has joined #openstack-cinder05:57
*** links has quit IRC05:59
*** links has joined #openstack-cinder06:01
*** alonma has quit IRC06:01
*** markvoelker_ has joined #openstack-cinder06:04
*** alonma has joined #openstack-cinder06:05
*** markvoelker has quit IRC06:07
*** alonma has quit IRC06:10
*** alonma has joined #openstack-cinder06:11
*** bardia has quit IRC06:14
*** akshai has joined #openstack-cinder06:15
*** alonma has quit IRC06:16
*** erez has joined #openstack-cinder06:16
*** akshai has quit IRC06:20
*** links has quit IRC06:22
*** alonma has joined #openstack-cinder06:22
*** links has joined #openstack-cinder06:23
*** erez has quit IRC06:25
*** ChubYann has quit IRC06:26
*** coolsvap is now known as coolsvap|away06:26
*** alonma has quit IRC06:27
*** alonma has joined #openstack-cinder06:29
*** alonma has quit IRC06:33
*** vgridnev has quit IRC06:33
*** bardia has joined #openstack-cinder06:33
*** hunters1094 has joined #openstack-cinder06:33
*** alonma has joined #openstack-cinder06:36
<chhavi> ildikov: Please review this and provide comments, if any  06:38
<chhavi> https://review.openstack.org/#/c/255595  06:38
*** akerr__ is now known as akerr_away06:38
*** nkrinner has joined #openstack-cinder06:38
<chhavi> this is with respect to the issue faced with shared volumes  06:38
*** alonma has quit IRC06:40
*** salv-orlando has joined #openstack-cinder06:44
*** links has quit IRC06:46
<openstackgerrit> Chhavi Agarwal proposed openstack/cinder: Don't call remove_export with attachments left  https://review.openstack.org/255595  06:49
*** links has joined #openstack-cinder06:50
*** bardia has quit IRC06:51
*** vgridnev has joined #openstack-cinder06:54
*** anshul has joined #openstack-cinder06:58
*** links has quit IRC07:09
*** links has joined #openstack-cinder07:09
*** salv-orlando has quit IRC07:09
*** lpetrut has joined #openstack-cinder07:16
*** akshai has joined #openstack-cinder07:16
*** akshai has quit IRC07:21
*** EinstCrazy has quit IRC07:21
<openstackgerrit> yifan403 proposed openstack/cinder: Add ZTE Block Storage Driver  https://review.openstack.org/258880  07:22
*** links has quit IRC07:31
*** rcernin has joined #openstack-cinder07:31
*** belmoreira has joined #openstack-cinder07:34
*** alonma has joined #openstack-cinder07:36
*** sombrafam has quit IRC07:39
*** sombrafam has joined #openstack-cinder07:40
*** alonma has quit IRC07:41
*** houming has joined #openstack-cinder07:42
*** ociuhandu has joined #openstack-cinder07:43
*** tpatzig has joined #openstack-cinder07:43
*** links has joined #openstack-cinder07:48
*** sombrafam has quit IRC07:48
*** sombrafam has joined #openstack-cinder07:48
<openstackgerrit> XinXiaohui proposed openstack/cinder: Calculate virtual free capacity and notify  https://review.openstack.org/206923  07:50
*** links has quit IRC07:54
*** lpetrut has quit IRC07:55
*** markus_z has joined #openstack-cinder08:04
*** alonma has joined #openstack-cinder08:06
*** alonma has quit IRC08:10
*** alonma has joined #openstack-cinder08:12
*** boris-42 has quit IRC08:13
*** alonma has quit IRC08:16
*** akshai has joined #openstack-cinder08:17
*** andymaier has joined #openstack-cinder08:19
*** vgridnev has quit IRC08:20
*** akshai has quit IRC08:22
*** links has joined #openstack-cinder08:22
*** laughterwym has joined #openstack-cinder08:23
*** davechen has joined #openstack-cinder08:26
*** laughterwym has quit IRC08:27
*** alonma has joined #openstack-cinder08:28
*** alonma has quit IRC08:32
*** alonma has joined #openstack-cinder08:35
*** links has quit IRC08:35
*** markvoelker_ has quit IRC08:38
*** alonma has quit IRC08:39
*** alonma has joined #openstack-cinder08:41
*** hunters1094 has quit IRC08:43
*** baojg has joined #openstack-cinder08:44
*** laughterwym has joined #openstack-cinder08:45
*** alonma has quit IRC08:45
*** alonma has joined #openstack-cinder08:47
*** suresh has joined #openstack-cinder08:48
<suresh> hi, I want the ScaleIO driver for the OpenStack Kilo version. Please can someone help?  08:51
*** alonma has quit IRC08:51
*** alonma has joined #openstack-cinder08:53
*** alonma has quit IRC08:57
*** gcb has quit IRC08:59
*** alonma has joined #openstack-cinder09:00
*** ndipanov has joined #openstack-cinder09:00
*** jordanP has joined #openstack-cinder09:04
*** alonma has quit IRC09:05
*** yhayashi has quit IRC09:05
*** sahu has joined #openstack-cinder09:07
*** alonma has joined #openstack-cinder09:07
<sahu> Hi all... How can I get the ScaleIO driver for the OpenStack Kilo version on Ubuntu? Please, someone help...  09:08
<dulek> suresh, sahu: The ScaleIO driver got merged in Liberty, so there's no Kilo version I'm aware of. You can try to backport it on your own.  09:11
<dulek> suresh, sahu: https://wiki.openstack.org/wiki/CinderSupportMatrix  09:11
*** alonma has quit IRC09:12
<suresh> dulek: Please tell me how to backport  09:12
*** gcb has joined #openstack-cinder09:13
*** alonma has joined #openstack-cinder09:14
*** jistr has joined #openstack-cinder09:15
<dulek> suresh: You take the driver patch and apply it to stable/kilo; use git blame to find the patch. Same goes for the Nova connector. But that's probably not a trivial task, so if you don't have someone experienced in Cinder you may have a hard time doing that.  09:16
*** akshai has joined #openstack-cinder09:18
*** laughterwym has quit IRC09:18
*** alonma has quit IRC09:19
<suresh> dulek: please explain this in more detail  09:23
*** akshai has quit IRC09:23
*** chhavi has quit IRC09:23
*** alonma has joined #openstack-cinder09:23
<openstackgerrit> Liucheng Jiang proposed openstack/cinder: Correct capabilities match method '<is>'  https://review.openstack.org/271197  09:24
*** e0ne has joined #openstack-cinder09:24
*** vgridnev has joined #openstack-cinder09:25
<dulek> suresh: You need to identify the patch that added the driver to the Liberty release. Then you need to apply it to the stable/kilo git branch and work out any merge conflicts that arise.  09:25
<dulek> suresh: Same goes for the Nova patch adding the ScaleIO connector.  09:26
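For reference, a rough sketch of the backport workflow dulek describes; the driver path and <commit-sha> are assumptions to be filled in from git log, and merge conflicts are expected:

    # Find the commit that added the ScaleIO driver on master, then
    # cherry-pick it onto a stable/kilo branch (sketch only).
    git clone https://git.openstack.org/openstack/cinder && cd cinder
    git log --oneline --follow -- cinder/volume/drivers/emc/scaleio.py
    git checkout -b scaleio-kilo origin/stable/kilo
    git cherry-pick <commit-sha>   # resolve conflicts, then: git cherry-pick --continue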
*** e0ne has quit IRC09:27
<dulek> suresh: Sorry, but any deeper detail would really be me elaborating on how to use git.  09:27
*** alonma has quit IRC09:27
<suresh> dulek: Thank you  09:28
<dulek> suresh: But have you tried to google "scaleio openstack kilo"?  09:28
<dulek> suresh: Seems like EMC provided patches in their git repo.  09:29
<dulek> suresh: https://community.emc.com/community/connect/everything-openstack/blog/2015/10/23/scaleio-and-openstack  09:29
<dulek> suresh: I guess for more details you need to contact EMC.  09:29
*** boris-42 has joined #openstack-cinder09:33
<suresh> dulek: I googled it but I didn't find a patch for the ScaleIO driver Kilo version  09:30
*** Julien-zte has quit IRC09:30
<dulek> suresh: Sure, the site doesn't seem to provide a link. I guess your best shot is to contact someone from EMC. xyang is normally in this channel around 18:00 UTC (she's in one of the US timezones).  09:32
*** boris-42 has joined #openstack-cinder09:33
*** markvoelker has joined #openstack-cinder09:34
*** alonma has quit IRC09:34
*** chhavi has joined #openstack-cinder09:35
*** markvoelker has quit IRC09:39
*** markvoelker has joined #openstack-cinder09:39
*** smoriya_ has quit IRC09:40
<suresh> dulek: Thank you for your answers and your patience!  09:43
*** tpsilva has joined #openstack-cinder09:44
*** markvoelker has quit IRC09:44
*** markvoelker has joined #openstack-cinder09:45
*** markvoelker has quit IRC09:50
*** baojg has quit IRC09:52
*** baojg has joined #openstack-cinder09:54
*** Fdaisuke has quit IRC09:58
*** hyakuhei_ has joined #openstack-cinder09:58
*** haomaiwang has joined #openstack-cinder09:59
*** haomaiwang has quit IRC10:01
*** haomaiwa_ has joined #openstack-cinder10:01
*** cheneydc has quit IRC10:01
*** zhonghua-lee has quit IRC10:02
<openstackgerrit> Szymon Borkowski proposed openstack/cinder: Update create_snapshot to use volume object  https://review.openstack.org/260618  10:03
*** Julien-zte has joined #openstack-cinder10:16
*** akshai has joined #openstack-cinder10:18
*** akshai has quit IRC10:23
*** aix has joined #openstack-cinder10:23
*** alonma has joined #openstack-cinder10:30
*** haomaiwa_ has quit IRC10:32
*** esker has joined #openstack-cinder10:33
*** sombrafam has quit IRC10:34
*** alonma has quit IRC10:35
*** sombrafam has joined #openstack-cinder10:35
*** markvoelker has joined #openstack-cinder10:35
*** sgotliv has quit IRC10:37
*** alonma has joined #openstack-cinder10:37
*** lpetrut has joined #openstack-cinder10:38
*** esker has quit IRC10:38
*** markvoelker has quit IRC10:39
*** lpetrut has quit IRC10:40
*** e0ne has joined #openstack-cinder10:40
*** lpetrut has joined #openstack-cinder10:40
<openstackgerrit> Merged openstack/cinder: NexentaStor 5 NFS backend driver.  https://review.openstack.org/190273  10:40
*** sombrafam has quit IRC10:40
*** winston-d_ has quit IRC10:42
*** alonma has quit IRC10:42
*** sombrafam has joined #openstack-cinder10:42
*** hyakuhei_ has quit IRC10:44
*** vgridnev has quit IRC10:44
*** vgridnev has joined #openstack-cinder10:48
*** baojg has quit IRC10:58
*** dims has joined #openstack-cinder11:00
<dulek> geguileo: Hi, I was looking through create volume and manage existing stuff to identify API race conditions.  11:07
<dulek> geguileo: Aaaand… I wasn't able to find one. Looks like the volume is created more or less unconditionally.  11:07
<dulek> geguileo: At least, quotas aside, it's created unconditionally with respect to the DB state.  11:08
<geguileo> dulek: I'm in a meeting  11:08
<geguileo> dulek: I'll ping you in 10-20 mins  11:08
<dulek> geguileo: Great!  11:08
<openstackgerrit> Szymon Borkowski proposed openstack/cinder: Update restore_backup to use volume object  https://review.openstack.org/262024  11:12
*** markvoelker has joined #openstack-cinder11:12
*** markvoelker has quit IRC11:18
<openstackgerrit> Merged openstack/cinder: Replace assertEqual(*, None) with assertIsNone in tests  https://review.openstack.org/263997  11:19
<openstackgerrit> Merged openstack/cinder: EMC VMAX - Incorrect SG selected on an VMAX3 attach  https://review.openstack.org/250443  11:19
*** sombrafam has quit IRC11:21
*** sborkows has joined #openstack-cinder11:21
*** aix has quit IRC11:23
*** ociuhandu has quit IRC11:25
*** fthiagogv has joined #openstack-cinder11:26
<openstackgerrit> Tom Barron proposed openstack/cinder: Fix xtremio slow unit tests  https://review.openstack.org/271036  11:30
<openstackgerrit> Tom Barron proposed openstack/cinder: Fix torpid coordinator unit tests  https://review.openstack.org/270967  11:33
*** sombrafam has joined #openstack-cinder11:35
*** houming has quit IRC11:36
*** ndipanov has quit IRC11:36
<openstackgerrit> Tom Barron proposed openstack/cinder: Fix sluggish rbd unit tests  https://review.openstack.org/270925  11:36
*** aix has joined #openstack-cinder11:37
*** alonma has joined #openstack-cinder11:37
*** alonma has quit IRC11:42
*** andymaier has quit IRC11:43
*** alonma has joined #openstack-cinder11:44
<openstackgerrit> Tom Barron proposed openstack/cinder: Fix laggard cisco FC zone client unit tests  https://review.openstack.org/270589  11:46
*** openstackgerrit has quit IRC11:47
*** timcl has joined #openstack-cinder11:48
*** openstackgerrit has joined #openstack-cinder11:48
*** alonma has quit IRC11:49
*** lprice has joined #openstack-cinder11:51
*** lprice1 has quit IRC11:51
*** timcl1 has joined #openstack-cinder11:52
*** alonma has joined #openstack-cinder11:52
*** timcl has quit IRC11:52
*** boris-42 has quit IRC11:53
*** haomaiwang has joined #openstack-cinder11:54
*** alonma has quit IRC11:57
*** wanghao has quit IRC11:57
*** haomaiwang has quit IRC11:58
<geguileo> dulek: I'm back  11:59
*** alonma has joined #openstack-cinder11:59
*** ociuhandu has joined #openstack-cinder12:02
*** alonma has quit IRC12:04
<dulek> geguileo: So I was looking through how create volume and manage existing work, with sborkows. We weren't able to find any place where a conditional update is needed.  12:06
<dulek> geguileo: What are we missing? ;>  12:06
<geguileo> dulek: Maybe nothing  12:06
<geguileo> dulek: Because when the creation fails the DB entry is still created and then changed to error, right?  12:07
*** alonma has joined #openstack-cinder12:07
<dulek> geguileo: Yup, the entry is in "creating" state until c-vol succeeds.  12:07
*** markvoelker has joined #openstack-cinder12:07
<geguileo> dulek: And changed to error if it fails, right?  12:08
<dulek> geguileo: That's right.  12:08
<geguileo> dulek: Maybe in the API it could make the conditional update  12:08
<geguileo> So it isn't created if the source is not available  12:08
<geguileo> And just returns the error  12:08
<geguileo> So first you acquire the lock  12:09
<geguileo> Oh, no, the lock is acquired in c-vol  12:09
<dulek> geguileo: That's right! ;)  12:09
<geguileo> dulek: So the problem is basically that we don't have any way of locking the source in the API  12:09
<dulek> geguileo: We're not changing the state of source resources.  12:10
<dulek> So create is just inserting a resource into the DB in "creating" state.  12:10
<geguileo> And it can be removed/changed when we start working on it in c-vol  12:10
*** andymaier has joined #openstack-cinder12:10
<geguileo> But I think we were checking in the API and in c-vol that the source exists  12:10
<dulek> geguileo: Ah, yeah, this is true.  12:10
*** erlon has joined #openstack-cinder12:10
*** JoseMello has joined #openstack-cinder12:11
<geguileo> But there's no possibility of conditional inserts (not supported by all DBs)  12:11
*** alonma has quit IRC12:11
<geguileo> So we would need to use a transaction for the checks and the insertion  12:11
<geguileo> That's how I would do it  12:11
<geguileo> if it isn't being done like that already  12:11
<dulek> geguileo: It isn't. A possible race condition is, for example: create a volume from a volume type, and the volume type gets removed before we add the entry to the DB.  12:12
*** markvoelker has quit IRC12:12
<geguileo> dulek: It won't if you set the right transaction isolation  12:13
<dulek> geguileo: Yeah, same with sources.  12:13
<geguileo> level  12:13
<dulek> geguileo: Yes, yes.  12:13
<dulek> geguileo: But right now we fetch vol_type in api.v2.volumes  12:13
<dulek> But create the volume waaaay after, inside the create_volume flow.  12:13
<geguileo> That's not a problem  12:13
<geguileo> You just need to check that the type still exists when you are doing the create flow  12:14
<geguileo> With a check query in the conditional update  12:14
<geguileo> Sorry, it's an insert  12:14
*** alonma has joined #openstack-cinder12:14
<geguileo> So doing a transaction that checks it again  12:14
<dulek> geguileo: That would work. But should we remove the checks from api.v2.volumes then?  12:15
<geguileo> Or just move it to the flow  12:15
<geguileo> Yes, I think that would be the best thing to do  12:15
<geguileo> Or use a transaction in api.v2.volumes  12:15
<geguileo> And store it in the context  12:15
<geguileo> And then use it in the flow with subtransaction=True  12:15
<geguileo> But I wouldn't do that  12:16
<dulek> geguileo: Yeah, this would also work, but I would prefer to move the checks.  12:16
<geguileo> Because the transaction would be longer than needed  12:16
<geguileo> dulek: I agree, it's better to move the check  12:16
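For reference, a minimal sketch of the approach being agreed on here: re-check the volume type inside the same transaction that inserts the volume row. Model and helper names (Volume, VolumeType, get_session, VolumeTypeNotFound) are placeholders, not Cinder's real DB API:

    # Sketch only: re-validate the volume type inside the insert transaction
    # so a concurrent type deletion cannot slip in between check and insert.
    def volume_create_checked(values, volume_type_id):
        session = get_session()  # placeholder session factory
        with session.begin():    # one transaction covers check + insert
            vt = (session.query(VolumeType)
                  .filter_by(id=volume_type_id, deleted=False)
                  .first())
            if vt is None:
                raise VolumeTypeNotFound(volume_type_id)  # placeholder exception
            volume = Volume(status='creating', **values)  # row starts as 'creating'
            session.add(volume)
        return volume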
<dulek> geguileo: Okay, we'll look into that  12:18
<geguileo> dulek: Thanks!  12:18
<dulek> geguileo: Is this blocking c-vol A/A, actually? I think we can go forward with that in parallel.  12:18
<geguileo> dulek: No, it's not blocking it  12:19
*** alonma has quit IRC12:19
<geguileo> I'm currently working on the job distribution patches  12:19
*** hyakuhei_ has joined #openstack-cinder12:19
<dulek> Hurray! :)  12:19
<geguileo> dulek: And I've decided not to rename host to cluster, like you suggested  ;-)  12:19
<dulek> Do you plan to join the mid-cycle hangout and talk a little on the status?  12:19
*** akshai has joined #openstack-cinder12:20
<geguileo> dulek: I have updated the specs, just need to push them for review  12:20
<dulek> geguileo: So what's the new name for host-member?  12:20
<geguileo> node  12:20
<geguileo> XD XD  12:20
<dulek> So a host consists of several nodes. We'll need to document it to be clear, but I think it's fine.  12:21
<geguileo> I added a little more detail to the host configuration option to explain it, but indeed this needs proper documentation  12:22
<dulek> (like everything :D)  12:22
<dulek> So, about the mid-cycle?  12:22
<geguileo> I'll try to join as many meetings as I can  12:22
<geguileo> A good time for me to give the status report would be Mon-Wed  12:23
<dulek> We can try to schedule something on A/A on Tuesday or Wednesday morning.  12:23
<dulek> That would be ~15 your time.  12:24
<dulek> 3 PM I mean. ;)  12:24
*** akshai has quit IRC12:24
<dulek> Okay, on Wednesday morning we're talking with Nova's meetup, so maybe Tuesday just after the start of the meetup?  12:25
* dulek going to grab lunch, I'll be there in an hour or so  12:27
<geguileo> dulek: Sounds good to me  12:27
<geguileo> dulek: Enjoy lunch  12:27
*** openstackgerrit has quit IRC12:33
*** openstackgerrit has joined #openstack-cinder12:34
*** [1]Thelo has joined #openstack-cinder12:34
*** Thelo has quit IRC12:36
*** [1]Thelo is now known as Thelo12:36
*** ildikov has quit IRC12:37
*** jordanP has quit IRC12:40
*** markvoelker has joined #openstack-cinder12:42
<chhavi> hemna, johnthetubaguy: Please review: https://review.openstack.org/#/c/255595/  12:43
*** ildikov has joined #openstack-cinder12:49
*** markvoelker has quit IRC12:54
*** haomaiwang has joined #openstack-cinder12:55
*** bill_az has joined #openstack-cinder12:59
*** haomaiwang has quit IRC13:00
*** dims is now known as dimsum__13:00
*** vgridnev has quit IRC13:01
*** julim has joined #openstack-cinder13:08
*** suresh has quit IRC13:12
*** timcl1 has quit IRC13:12
*** esker has joined #openstack-cinder13:15
*** alonma has joined #openstack-cinder13:15
*** sahu has quit IRC13:16
*** vgridnev has joined #openstack-cinder13:16
*** e0ne has quit IRC13:16
*** hyakuhei_ has quit IRC13:16
*** e0ne has joined #openstack-cinder13:17
*** vgridnev has quit IRC13:19
*** alonma has quit IRC13:20
*** edmondsw has joined #openstack-cinder13:20
*** esker has quit IRC13:20
*** laughterwym has joined #openstack-cinder13:21
<openstackgerrit> Merged openstack/cinder: Return BadRequest for invalid Unicode names  https://review.openstack.org/266036  13:23
*** vgridnev has joined #openstack-cinder13:23
*** timcl has joined #openstack-cinder13:32
*** diogogmt has quit IRC13:35
*** jordanP has joined #openstack-cinder13:35
*** diogogmt has joined #openstack-cinder13:36
*** akerr_away is now known as akerr__13:37
*** markvoelker has joined #openstack-cinder13:38
*** akerr__ is now known as akerr13:39
*** markvoelker has quit IRC13:43
*** ildikov has quit IRC13:44
*** diogogmt has quit IRC13:47
*** rlrossit has joined #openstack-cinder13:52
*** rlrossit_ has joined #openstack-cinder13:56
*** esker has joined #openstack-cinder13:56
*** diablo_rojo has joined #openstack-cinder13:57
*** sombrafam has quit IRC13:58
*** markvoelker has joined #openstack-cinder13:59
*** rlrossit has quit IRC13:59
*** belmoreira has quit IRC14:00
*** winston-d_ has joined #openstack-cinder14:01
*** markvoelker_ has joined #openstack-cinder14:01
*** anshul has quit IRC14:01
*** nkrinner has quit IRC14:01
*** crose has joined #openstack-cinder14:02
*** esker has quit IRC14:03
*** markvoelker has quit IRC14:04
*** esker has joined #openstack-cinder14:06
*** dslev_ has joined #openstack-cinder14:06
*** hyakuhei_ has joined #openstack-cinder14:12
*** haomaiwang has joined #openstack-cinder14:14
*** alonma has joined #openstack-cinder14:15
*** porrua has joined #openstack-cinder14:17
*** haomaiwang has quit IRC14:18
*** alonma has quit IRC14:20
*** akshai has joined #openstack-cinder14:21
*** Julien-zte has quit IRC14:22
*** mriedem has joined #openstack-cinder14:22
*** porrua has quit IRC14:22
*** shyama_ has joined #openstack-cinder14:23
*** gouthamr has joined #openstack-cinder14:24
*** shyama has quit IRC14:24
*** shyama_ is now known as shyama14:24
*** gouthamr_ has joined #openstack-cinder14:25
*** porrua has joined #openstack-cinder14:25
*** kaisers1 has quit IRC14:27
*** kaisers has joined #openstack-cinder14:27
*** gouthamr has quit IRC14:28
*** esker has quit IRC14:31
*** akshai has quit IRC14:31
*** cknight has joined #openstack-cinder14:33
*** jgregor has joined #openstack-cinder14:34
*** haomaiwang has joined #openstack-cinder14:35
*** xyang1 has joined #openstack-cinder14:35
*** dustins has joined #openstack-cinder14:35
*** lcurtis has joined #openstack-cinder14:38
*** dslev_ has quit IRC14:39
*** haomaiwang has quit IRC14:39
*** jungleboyj has joined #openstack-cinder14:39
*** xyang has joined #openstack-cinder14:41
<openstackgerrit> Ivan Kolodyazhny proposed openstack/cinder: Duplicate code in volume manager and base driver  https://review.openstack.org/271331  14:43
<openstackgerrit> xiaoqin proposed openstack/cinder: Clone volume between different volume size  https://review.openstack.org/266743  14:44
*** gcb has quit IRC14:45
*** gcb has joined #openstack-cinder14:46
*** davechen has left #openstack-cinder14:46
*** ildikov has joined #openstack-cinder14:47
*** rlrossit_ has quit IRC14:52
*** dslev_ has joined #openstack-cinder14:52
*** obutenko has quit IRC14:55
*** haomaiwang has joined #openstack-cinder14:56
*** markvoelker has joined #openstack-cinder14:57
<bswartz> smcginnis: ping  14:57
<smcginnis> bswartz: Morning!  14:57
<bswartz> good morning!  14:57
*** erhudy has joined #openstack-cinder14:57
<bswartz> I wanted to make sure you knew that the projectors in the NetApp conference rooms only take VGA inputs  14:58
<bswartz> so people who plan to present on their notebooks next week need to make sure they have VGA outputs  14:58
<smcginnis> bswartz: Ah, thanks for the heads up!  14:58
<smcginnis> I have adapters so I should be good.  14:58
<smcginnis> Hopefully we have enough for whoever needs to present.  14:59
<smcginnis> bswartz: Thanks for the warning.  14:59
<bswartz> I know there was a conversation about the wireless too  14:59
<bswartz> some outgoing ports, including IRC, are blocked  14:59
<bswartz> but it's easy to route IRC through an SSH tunnel or other form of VPN  15:00
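For reference, one common form of that workaround, with assumed hostnames; the IRC client then connects to localhost:6667:

    # Tunnel IRC through any SSH host that can reach freenode
    # (user@some-reachable-host is an assumption, adjust to taste).
    ssh -f -N -L 6667:chat.freenode.net:6667 user@some-reachable-host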
*** haomaiwang has quit IRC15:00
<smcginnis> bswartz: Ruh roh. IRC could be a bigger issue.  15:00
<akerr> bswartz: ssl port 7070 to freenode isn't blocked  15:00
<smcginnis> I SSH to a host I always have up, but I know not everyone can do that.  15:01
<bswartz> akerr: really? on the netapp SSID?  15:01
<akerr> bswartz: yes, it's what i use  15:01
*** markvoelker_ has quit IRC15:01
<smcginnis> Whew  15:01
<bswartz> akerr: you should be using corp ssid!  15:01
<akerr> bswartz: oh right, not sure about netapp ssid, but why would it have more blocked ports?  15:02
*** amoturi has joined #openstack-cinder15:02
<bswartz> akerr: because it's open to the whole world  15:02
<bswartz> corp ssid requires a password to login so they trust it more  15:02
<bswartz> I'm not claiming it makes sense, but the security guys have their rules  15:02
<smcginnis> ;)  15:02
*** timcl has quit IRC15:03
<smcginnis> bswartz: How did things go with the manila midcycle? Were there issues from this?  15:03
<bswartz> it would not surprise me if the netapp ssid blocked all ports related to IRC -- but you can still SSH out, VPN out, and access the web so a variety of workarounds are possible  15:03
<bswartz> smcginnis: it was pretty good -- the audio in the room was fantastic compared to other midcycles  15:04
<bswartz> people on the phone could actually hear us and vice versa  15:04
*** esker has joined #openstack-cinder15:04
<bswartz> the projector situation was a little less great, because the presenter has to sit over in the corner  15:04
<bswartz> and use VGA  15:04
<bswartz> but it's not a big deal  15:04
<akerr> tbarron: did you have issues with irc on the netapp ssid?  15:04
<akerr> dustins: ^  15:05
<dustins> akerr: Yeah, I couldn't connect to freenode from that ssid  15:06
<tbarron> akerr: I couldn't connect to freenode directly at whatever port I normally use.  15:06
<dustins> I had to use the webclient/ssh to an internal server to use Freenode  15:06
<tbarron> akerr: dustins: so I did ssh to a machine that could.  15:06
<akerr> dustins: so the webclient worked?  15:07
<dustins> akerr: It did for me, but I was connected to my VPN as well, so YMMV  15:07
<dustins> Might want to do a quick test  15:07
<tbarron> akerr: you can do http direct to freenode http service, I just don't like using a web client  15:08
<akerr> dustins: well i would, but there's ice on the road  15:08
*** edtubill has joined #openstack-cinder15:08
*** timcl1 has joined #openstack-cinder15:08
<tbarron> akerr: i'm pretty sure i did freenode webclient without tunneling through vpn  15:08
<smcginnis> I'm sure we'll figure it out and get by.  15:08
<dustins> akerr: Yeah, same reason why I'm operating from the apartment office today :D  15:08
<tbarron> akerr: guest network has to allow http  15:09
<smcginnis> There's always jungleboyj's cell phone. :P  15:09
<tbarron> smcginnis: I think the workarounds are pretty straightforward.  15:09
<akerr> smcginnis: if it comes to that we can use mine as a hotspot :)  15:09
<dustins> smcginnis: I'm sure he won't mind :P  15:09
<timcl1> akerr: bswartz: port 7070 works fine even with corp SSID  15:10
<jungleboyj> smcginnis: Yeah, I have 15 Gig left for the month and I roll into a new month while there, so we will be good.  15:10
*** cknight has quit IRC15:10
<smcginnis> haha, we're golden then. :)  15:10
<timcl1> bottom line is 7070 should work for everyone without issue  15:11
<smcginnis> OK, timcl1 just signed up to use his cellphone if that doesn't work. :D  15:11
<timcl1> smcginnis: sure no worries, have hotspot and unlimited data so we should be good :)  15:12
*** diogogmt has joined #openstack-cinder15:12
*** markvoelker has quit IRC15:12
<bswartz> timcl1: guests can't use corp ssid -- they have to use netapp  15:12
<smcginnis> timcl1: Hah, thanks. I'm sure we'll work things out.  15:12
*** rlrossit has joined #openstack-cinder15:13
*** cknight has joined #openstack-cinder15:13
<timcl1> bswartz: that is true but the ports for IRC will still work  15:14
*** alonma has joined #openstack-cinder15:16
<openstackgerrit> Ivan Kolodyazhny proposed openstack/cinder: Duplicate code in volume manager and base driver  https://review.openstack.org/271331  15:16
<e0ne> erlon: hi! how can I start your CI for my patch ^^?  15:16
*** haomaiwang has joined #openstack-cinder15:17
*** crose has quit IRC15:18
*** jistr has quit IRC15:18
*** alonma has quit IRC15:20
*** haomaiwang has quit IRC15:22
*** diablo_rojo has quit IRC15:30
*** mragupat has joined #openstack-cinder15:30
*** arch-nemesis has joined #openstack-cinder15:31
*** cebruns_ has quit IRC15:32
*** baumann has joined #openstack-cinder15:33
*** cebruns has joined #openstack-cinder15:33
*** jgregor has quit IRC15:34
*** kfarr has joined #openstack-cinder15:34
*** jgregor has joined #openstack-cinder15:35
<openstackgerrit> Szymon Wróblewski proposed openstack/cinder: Replace locks in volume manager  https://review.openstack.org/185646  15:35
*** shyama has quit IRC15:37
*** haomaiwang has joined #openstack-cinder15:38
*** haomaiwang has quit IRC15:43
*** markvoelker has joined #openstack-cinder15:44
<cfriesen> when using lvm/iscsi, if I set lvm_type=thin, should that change Cinder accounting at all?  The reason I ask is that I have a 105GB thin pool, and Cinder let me create 20 1GB volumes, a 100GB volume, and a snapshot of the 100GB volume.  Shouldn't it have cut me off due to lack of space?  15:47
<cfriesen> (this is on stable/kilo)  15:47
<openstackgerrit> Ivan Kolodyazhny proposed openstack/cinder: Duplicate code in volume manager and base driver  https://review.openstack.org/271331  15:49
*** laughterwym has quit IRC15:50
<guitarzan> cfriesen: thin lets you overprovision, is that what you're asking?  15:51
*** e0ne has quit IRC15:52
*** e0ne has joined #openstack-cinder15:53
<openstackgerrit> Eric Harney proposed openstack/cinder: Tests: Strengthen assertFalse assertions  https://review.openstack.org/263404  15:54
*** dslev_ has quit IRC15:57
*** haomaiwang has joined #openstack-cinder15:58
*** timcl1 has quit IRC15:59
*** sombrafam has joined #openstack-cinder15:59
*** lprice1 has joined #openstack-cinder15:59
*** haomaiwang has quit IRC16:01
*** crose has joined #openstack-cinder16:01
*** lprice has quit IRC16:02
*** thangp has joined #openstack-cinder16:03
*** sborkows has quit IRC16:04
*** timcl has joined #openstack-cinder16:06
*** vgridnev has quit IRC16:07
*** crose has quit IRC16:07
*** salv-orlando has joined #openstack-cinder16:08
<openstackgerrit> Dmitry Guryanov proposed openstack/os-brick: Add vzstorage protocol for remotefs connections  https://review.openstack.org/271411  16:08
*** cknight1 has joined #openstack-cinder16:11
*** salv-orlando has quit IRC16:12
*** mragupat has quit IRC16:12
*** cknight has quit IRC16:14
*** mragupat has joined #openstack-cinder16:15
*** pratap has joined #openstack-cinder16:15
*** rcernin has quit IRC16:15
*** wilson-1 has joined #openstack-cinder16:16
*** ociuhandu has quit IRC16:19
*** wilson-liu has quit IRC16:20
*** diablo_rojo has joined #openstack-cinder16:20
*** alonma has joined #openstack-cinder16:21
*** sheel has joined #openstack-cinder16:21
*** pratap_ has joined #openstack-cinder16:25
*** alonma has quit IRC16:26
<kmartin_> heck, if we can run a midcycle with no wifi we can run one without IRC, google hangout?  16:27
<kmartin_> jungleboyj, ^^ :)  16:27
*** diablo_rojo has quit IRC16:28
*** alonma has joined #openstack-cinder16:28
<Swanson> kmartin_, don't try to go under that bar.  This isn't limbo.  16:28
<jungleboyj> kmartin_: :-)  We can do anything!  16:28
*** pratap has quit IRC16:28
*** alonma has quit IRC16:32
*** yuriy_n17 has quit IRC16:33
*** daneyon has joined #openstack-cinder16:38
*** simondodsley has joined #openstack-cinder16:39
<hemna> mornin  16:39
*** arch-nemesis has quit IRC16:40
<openstackgerrit> Mitsuhiro Tanino proposed openstack/cinder: WIP:Support restore target configration during driver initialization  https://review.openstack.org/271424  16:41
<simondodsley> Question regarding QoS. If we attach a volume that has a QoS association to a running instance, is the QoS meant to apply dynamically, or is a hard reboot of the instance required to update the libvirt.xml? Looks like it needs a reboot at the moment - not optimal if that is the case. Also, if we change a QoS spec, will that get dynamically applied to the volume, or will it too need a hard reboot?  16:41
*** jdurgin1 has joined #openstack-cinder16:41
<simondodsley> to clarify - I'm talking about front-end QoS here...  16:42
*** jistr has joined #openstack-cinder16:43
*** arch-nemesis has joined #openstack-cinder16:43
*** mtanino has joined #openstack-cinder16:43
*** timcl has quit IRC16:45
<hemna> https://www.united.com/CMS/en-US/travel/news/Pages/travelnotices.aspx  16:46
<tbarron> simondodsley: front-end QoS is a question for #openstack-nova, no?  Though I'm also curious what the answer will be here :-)  16:46
*** salv-orlando has joined #openstack-cinder16:47
<openstackgerrit> Nate Potter proposed openstack/cinder: Remove access_mode 'rw' setting in drivers  https://review.openstack.org/265443  16:48
<kmartin_> simondodsley, I haven't tested it but I believe it's applied by the hypervisor when the volume is attached; as tbarron mentioned, it's really a Nova question  16:49
<cfriesen> guitarzan: sorry, got pulled away.  yeah, I was wondering about overprovisioning.  I have max_over_subscription_ratio=1.0 in cinder.conf, but the cinder-volume startup logs show lvm.max_over_subscription_ratio = 20.0.  I guess I need to set the ratio for the lvm backend separately?  16:50
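For reference, this matches what cfriesen's logs show: with enabled_backends, per-backend options are read from the backend's own section, so a [DEFAULT]-level max_over_subscription_ratio does not reach the LVM backend, which then reports the option's default of 20.0. A sketch, assuming the backend section is named [lvm]:

    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    lvm_type = thin
    # Must be set here in the backend section; setting it only in
    # [DEFAULT] leaves the backend at the option's default (20.0).
    max_over_subscription_ratio = 1.0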
<kmartin_> and the different hypervisors may act differently  16:50
<simondodsley> ok - i'll go ask nova - i thought I would ask here first  16:50
<simondodsley> i know it only applies to KVM at the moment  16:51
*** laughterwym has joined #openstack-cinder16:51
<simondodsley> tbarron: I'll let you know what they say  16:53
<tbarron> simondodsley: ty!  16:53
*** rcernin has joined #openstack-cinder16:55
*** e0ne has quit IRC16:55
*** dslevin has joined #openstack-cinder16:57
*** esker has quit IRC16:59
<cfriesen> okay, I've specified lvm.max_over_subscription_ratio = 1 and max_over_subscription_ratio = 1 and restarted cinder-volume and cinder-scheduler, but it's still letting me create cinder volumes totalling more space than the size of my thin pool.  Any ideas?  16:59
*** hyakuhei_ has quit IRC17:00
<eharney> cfriesen: can you find the debug line in the scheduler that shows the free_capacity_gb and max_over_subscription_ratio reported for that backend?  17:02
<simondodsley> tbarron: Is the mid-cycle still going ahead? It should have cleared up in RTP by Monday.  17:02
<cfriesen> eharney: free_capacity_gb is showing the currently free space (i.e. the amount of unused space in the thin pool).  max_over_subscription_ratio is 1.  17:03
<cfriesen> eharney: the capacity filter only looks at max_over_subscription_ratio if it's > 1  17:04
*** bardia has joined #openstack-cinder17:04
*** ildikov has quit IRC17:04
cfrieseneharney: so as long as I ask to create a new volume that is smaller than the remaining space in the thin pool it'll let me, even though allocated_capacity_gb ends up way bigger than total_capacity_gb17:06
kmartin_simondodsley, yeah it's still on...just bring your ice skates, Sunday and Monday look clear.17:06
simondodsleycool - let's hope LGA has recovered - assuming its as epic as they say...unlike last time when it was a complete bust :)17:07
*** crose has joined #openstack-cinder17:07
<tbarron> kmartin_: we should have a cinder hockey game, or kickball at least :-)  17:08
*** jistr has quit IRC17:08
*** laughterwym has quit IRC17:09
<kmartin_> yep, just make sure you have rental car insurance  17:09
*** jistr has joined #openstack-cinder17:09
*** jdurgin1 has quit IRC17:10
<hemna> tbarron, I'll bring my skates  17:11
*** markus_z has quit IRC17:13
*** jistr has quit IRC17:14
<cfriesen> In the capacity filter, why do we only consider max_over_subscription_ratio if it's greater than 1?  Looks like that logic should work fine for less than 1 as well.  17:14
<bswartz> DuncanT: ping  17:17
*** jistr has joined #openstack-cinder17:17
<bswartz> DuncanT: do you know how much longer HP cloud will continue to donate resources to openstack-infra for zuul job nodes?  17:17
<hemna> bswartz, I'm guessing not very much longer  17:20
*** timcl has joined #openstack-cinder17:21
*** haomaiwa_ has joined #openstack-cinder17:21
<xyang> cfriesen: if the ratio < 1, you can't provision more than the physical capacity, so it does not really allow over subscription  17:23
<cfriesen> xyang: my ratio is 1 exactly, so it fails the "host_state.max_over_subscription_ratio > 1" check in the capacity filter, and falls through to the "if free < volume_size" check  17:24
<bswartz> hemna: well, what will happen to the node pool? right now it seems to have a capacity of 1000 jobs  17:24
<hemna> I really have no idea :(  17:24
<bswartz> if that number goes down then the gate will go from bad to worse...  17:24
<cfriesen> xyang: however "free" is the currently free value, which doesn't factor in the provisioned size  17:25
*** timcl has left #openstack-cinder17:25
*** leeantho has joined #openstack-cinder17:25
<xyang> cfriesen: 1 means you can only provision exactly the same amount of capacity as the physical capacity.  17:25
<cfriesen> xyang: yes  17:25
<cfriesen> xyang: that is my intent  17:25
<xyang> cfriesen: that's why it goes the other path  17:25
<cfriesen> xyang: but the "free" value is reporting the actual free amount, which in the thin-provisioned case doesn't account for provisioned-but-not-yet-allocated space  17:26
*** haomaiwa_ has quit IRC17:26
<cfriesen> xyang: so it lets you massively overcommit  17:26
*** rlrossit has quit IRC17:26
<cfriesen> xyang: at a minimum, the check should be  17:26
<cfriesen> if (host_state.thin_provisioning_support and  17:27
<cfriesen>                 host_state.max_over_subscription_ratio > 1):  17:27
<cfriesen> bah, it should be host_state.max_over_subscription_ratio >= 1  17:27
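For reference, a simplified sketch of the branch being discussed, paraphrasing the Kilo-era capacity filter rather than quoting it, with cfriesen's proposed >= in place of >:

    # Simplified paraphrase of the capacity filter logic under discussion.
    # With '>= 1', a ratio of exactly 1 is judged against *provisioned*
    # capacity instead of falling through to the raw free-space check.
    def backend_passes(host_state, volume_size_gb):
        if (host_state.thin_provisioning_support and
                host_state.max_over_subscription_ratio >= 1):  # was: > 1
            provisioned_ratio = (
                (host_state.provisioned_capacity_gb + volume_size_gb) /
                host_state.total_capacity_gb)
            return provisioned_ratio <= host_state.max_over_subscription_ratio
        # Fallback path: compares against actual free space only, which is
        # what lets a ratio of exactly 1.0 massively overcommit a thin pool.
        return host_state.free_capacity_gb >= volume_size_gb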
<hemna> bswartz, yup.  You can thank Meg for that :(  17:27
<xyang> cfriesen: isn't that not there?  let me check  17:27
*** esker has joined #openstack-cinder17:27
<eharney> cfriesen: so it's sounding like the issue is that the LVM driver reports usage differently depending on whether you are using thin or thick LVM  17:27
*** timcl1 has joined #openstack-cinder17:27
<cfriesen> eharney: yes, but also that the filter behaves differently depending on whether the allocation ratio is greater than 1 or not  17:28
<eharney> which made sense before we introduced this ratio; now, maybe not so much  17:28
<tbarron> hemna: w.r.t. https://review.openstack.org/271331 it looks like the driver method copy_volume_data is no longer called from anywhere but test code.  17:29
<tbarron> hemna: prove us wrong :-) ^^  17:29
<xyang> cfriesen: There was feedback on >1 vs >=1, going back and forth.  I have to check exactly what we have changed.  17:30
<cfriesen> eharney: xyang: I don't see any reason why we couldn't use a common capacity filter code path for "if (host_state.thin_provisioning_support)" regardless of what the over subscription ratio is.  A ratio less than 1 would just be equivalent to reserving some space.  17:30
<xyang> cfriesen: I can consider 1 a valid ratio, but <1 is a misconfiguration  17:31
<xyang> cfriesen: that's why it is not considered a correct ratio  17:31
<hemna> tbarron, that seems 'scary' because those 2 drivers need those calls to be made  17:31
<hemna> not sure who approved that  17:32
<tbarron> cfriesen: i'm with xyang on this one; <1 could work but would lead to having to explain too much too often  17:32
<cfriesen> eharney: xyang: on a slightly different note, in volume/driver.py the help text for max_over_subscription_ratio says "A ratio lower than 1.0 will be ignored and the default value will be used instead", but I don't see the default value of 20 being used.  17:32
<cfriesen> tbarron: okay, fair enough  17:32
<tbarron> hemna: don't disagree, but I think they are never called  17:32
<tbarron> hemna: presumably they used to be  17:32
<hemna> :(  17:32
<hemna> that feels bad mmmkay  17:32
*** mudassirlatif has joined #openstack-cinder17:33
*** fthiagogv has quit IRC17:33
<hemna> not sure the XP CI is up at the moment  17:33
<hemna> it might be failing already due to this  17:33
<xyang> cfriesen: automatically changing something < 1 to 20 would be too dramatic  17:33
<hemna> :(  17:33
<xyang> cfriesen: I'll take a look at the text  17:33
<cfriesen> xyang: agreed, but we should then update the help text to reflect reality  17:33
<xyang> cfriesen: sure  17:33
<eharney> xyang: on a related note, i have this patch up: https://review.openstack.org/#/c/266986/  17:34
<xyang> eharney: OK, I'll take a look at that  17:34
<cfriesen> eharney: I like it  17:35
<eharney> i don't really like it, but it's all i could get folks to agree to :)  17:35
<sheel> smcginnis: hi there  17:37
<xyang> eharney: right, patches on this subject tend to get stuck in controversies :)  17:37
<openstackgerrit> xiaoqin proposed openstack/cinder: Clone volume between different size for Storwize  https://review.openstack.org/266743  17:38
*** sombrafam has quit IRC17:39
*** andymaier has quit IRC17:39
*** haomaiwang has joined #openstack-cinder17:42
*** jordanP has quit IRC17:42
*** hyakuhei_ has joined #openstack-cinder17:42
<cfriesen> xyang: looks like your commit 7595375 changed it to " > 1", but the commit message talks about allowing oversubscription if the ratio is "greater than or equal to 1", which doesn't seem to match the code.  17:44
*** esker has quit IRC17:44
<xyang> cfriesen: I'll take a look.  as I said, we went back and forth on this one  17:46
*** baumann has quit IRC17:46
*** asselin has quit IRC17:46
*** baumann has joined #openstack-cinder17:46
*** haomaiwang has quit IRC17:47
<cfriesen> xyang: well, if we don't change it to >=1 then we need to change how lvm reports free space when the ratio is 1.0  17:49
-openstackstatus- NOTICE: Restarting zuul due to a memory leak  17:49
<eharney> i'm inclined to think we need to change the LVM behavior anyway, but i need to look closer  17:50
*** mdavidson has quit IRC17:50
<xyang> cfriesen: I changed from >=1 to >1 based on review comments on that patch.  So I don't want to keep changing back and forth  17:50
<erlon> e0ne: hey, I will  17:52
*** esker has joined #openstack-cinder17:52
<eharney> xyang: i didn't review that one, so i reserve the right to have new comments :)  17:52
*** asselin has joined #openstack-cinder17:53
<eharney> i am curious about the reasoning though  17:53
<erlon> e0ne: do you know if the migration tests Vincent did are approved?  17:53
<xyang> eharney: I know you didn't :)  17:53
<cfriesen> eharney: https://review.openstack.org/#/c/185764/  at the bottom of the page  17:54
<xyang> eharney: I actually made some adjustments to the similar logic in Manila.  I'll see if I can bring some back here  17:54
<eharney> xyang: i suspect that if we fixed the LVM driver (and any other drivers doing odd things with these stats), it shouldn't matter much  17:54
<xyang> eharney: I hope so :)  17:55
<cfriesen> can I leave this in your hands, or did you want a bug report for tracking?  17:55
*** alonma has joined #openstack-cinder17:56
*** jistr has quit IRC17:57
<xyang> cfriesen: feel free to open a bug so we can keep track of it  17:57
<sheel> cfriesen: I think it's better to raise a bug; if it's not OK we can mark it invalid later on  17:57
*** ildikov has joined #openstack-cinder17:58
*** mragupat has quit IRC17:58
*** sombrafam has joined #openstack-cinder18:00
*** alonma has quit IRC18:00
*** mudassirlatif has quit IRC18:00
*** apoorvad has joined #openstack-cinder18:02
*** 14WAAR75L has joined #openstack-cinder18:03
*** leeantho has quit IRC18:04
*** esker has quit IRC18:05
<cfriesen> bug opened: https://bugs.launchpad.net/cinder/+bug/1537162  18:06
<openstack> Launchpad bug 1537162 in Cinder "accounting bug for lvm with thin provisioning and max_over_subscription_ratio=1" [Undecided,New]  18:06
*** boris-42 has joined #openstack-cinder18:07
*** 14WAAR75L has quit IRC18:08
<cfriesen> xyang: eharney: on a related note, it seems like I'm allowed to create a snapshot of a very large volume when there is very little actual space remaining (once accounting for the over subscription ratio).  Shouldn't we only allow snapshotting if there is enough room for the snapshot?  18:08
<xyang> cfriesen: creating a snapshot does not go thru the scheduler  18:09
<xyang> cfriesen: so that is a different issue  18:09
<xyang> cfriesen: if the host is known, a request such as snapshot or clone will be sent to the manager directly  18:09
<cfriesen> xyang: shouldn't we do a size check though?  otherwise we run the risk of filling up the LVM thin pool and causing major problems  18:10
<sheel> cfriesen, xyang: sorry to cut in  18:10
<sheel> cfriesen, xyang: we can implement this kind of check in the driver itself  18:10
*** hyakuhei_ has quit IRC18:11
<xyang> cfriesen, sheel: there should be such checks.  if not, they can be added  18:11
<sheel> cfriesen, xyang: I saw some time ago in LVM that we run snapshot create directly without checking the available backend size  18:11
<eharney> how are you going to decide how much space you need for an LVM snapshot?  18:11
<cfriesen> eharney: worst case, the full size of the original volume  18:11
<sheel> cfriesen, xyang, eharney: it used to be equivalent to the size of the volume  18:11
*** Lee1092 has quit IRC18:12
<sheel> in lvm  18:12
<cfriesen> I actually don't care about overcommit at all; I'm using thin provisioning to avoid the need to zero out the cinder volumes on deletion.  18:12
<eharney> sheel: not in thin lvm  18:12
<eharney> cfriesen: yeah, snapshots perform much better in thin lvm, which is why we're aiming to move away from thick altogether  18:13
<cfriesen> so I'd rather have an operation be rejected than risk filling up the thin pool  18:13
<sheel> eharney: ok  18:13
<eharney> cfriesen: in theory we have a snapshot_gb_quota, but i've never heard of anyone using it  18:14
<cfriesen> I'm not worried about the quota, I just want to make sure that if someone does a snapshot and then completely rewrites the original disk with new data that we don't fill up the thin pool  18:14
<eharney> cfriesen: makes sense  18:15
*** mudassirlatif has joined #openstack-cinder18:16
*** angela-s has joined #openstack-cinder18:16
<sheel> cfriesen: are you raising a bug for this? Please share the ID if so  18:17
<cfriesen> so are we looking at a check in volume.drivers.lvm.LVMVolumeDriver.create_snapshot()?  18:17
<cfriesen> sheel: yeah, I'll raise a bug, one sec  18:17
*** mudassirlatif has quit IRC18:17
*** martyturner has joined #openstack-cinder18:17
*** mudassirlatif has joined #openstack-cinder18:18
<sheel> cfriesen: I think yes... just before self.vg.create_lv_snapshot(...  18:20
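For reference, a sketch of the driver-level guard sheel is pointing at: worst case, a thin snapshot can grow to the full size of the origin volume. Attribute and exception names (self.vg.vg_free_space, VolumeBackendAPIException) follow the LVM driver loosely and should be treated as assumptions:

    # Sketch: refuse the snapshot unless the pool could absorb a full
    # rewrite of the origin volume (the worst case discussed above).
    def create_snapshot(self, snapshot):
        if self.configuration.lvm_type == 'thin':
            if self.vg.vg_free_space < snapshot['volume_size']:  # GiB vs GiB
                raise exception.VolumeBackendAPIException(
                    data='not enough free space in thin pool for snapshot')
        self.vg.create_lv_snapshot(self._escape_snapshot(snapshot['name']),
                                   snapshot['volume_name'],
                                   self.configuration.lvm_type)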
<cfriesen> do we check free size in general before making a snapshot?  is this something that should be done for other backends too?  18:20
<sheel> cfriesen: actually these kinds of things are checked in scheduling tasks  18:21
<sheel> in the scheduler  18:21
<sheel> cfriesen: either we should include the scheduler in between, to check gb_quota only for the host where the volume exists  18:21
*** alonma has joined #openstack-cinder18:22
<sheel> this may not require checking the same in backends  18:22
<sheel> but if we are thinking about adding the same at the driver level, then yes, we have to handle it in all backends if not handled already  18:22
*** haomaiwang has joined #openstack-cinder18:24
<sheel> eharney: what do you think about it?  18:25
<sheel> eharney: just as expert advice..  18:25
<eharney> the check probably shouldn't be in the driver  18:25
<eharney> the driver should gather enough info about space left in the VG for it to be determined by the scheduler, right?  18:25
<cfriesen> bug created: https://bugs.launchpad.net/cinder/+bug/1537166  18:25
<openstack> Launchpad bug 1537166 in Cinder "snapshot of volume should check free space first" [Undecided,New]  18:25
<eharney> also, does it not do that now?  18:25
*** diablo_rojo has joined #openstack-cinder18:26
<cfriesen> couldn't we call the existing scheduler code, just on the known host and with the size of the volume being snapshotted?  18:26
*** alonma has quit IRC18:26
<cfriesen> that'd automatically take into account the overcommit ratio  18:26
<eharney> the problem is, on some backends snapshots are probably considered "free", and the scheduler isn't going to know how to calculate differently for different backends  18:27
*** porrua has quit IRC18:27
<eharney> so i'm not sure there's a very straightforward remedy  18:27
<openstackgerrit> Kedar Vidvans proposed openstack/cinder: Volume manage/unmanage support to ZFSSA drivers  https://review.openstack.org/271462  18:27
*** haomaiwang has quit IRC18:29
*** chhavi has quit IRC18:29
<cfriesen> eharney: add a per-backend function to query the snapshot overcommit ratio?  18:30
<cfriesen> or report it the same way as the normal overcommit ratio  18:30
<eharney> i'm not sure  18:31
*** leeantho has joined #openstack-cinder18:31
<ildikov> hemna: hi  18:32
<hemna> ildikov, hey  18:32
<hemna> what's up  18:32
*** crose has quit IRC18:33
<ildikov> I'm just checking mriedem's tempest test for multiattach  18:33
<ildikov> we have a detach failure  18:34
<ildikov> I assume that's because of the lvm driver, but I don't really know how to confirm the real cause of the problem :)  18:34
<ildikov> log: http://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47/logs/screen-c-api.txt.gz#_2016-01-22_01_01_51_904  18:35
*** jordanP has joined #openstack-cinder18:36
mriedemildikov: so i'm wondering if the problem is we're doing 2 detaches right in a row18:37
mriedemif ^ is the 2nd attachment, then that could be coming in while the first detach is still happening, and i'm assuming the volume status would be something like 'detaching'?18:38
ildikovmriedem: I think it's the lvm driver18:38
ildikovmriedem: it's the flow here: http://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47/logs/screen-c-vol.txt.gz#_2016-01-22_01_01_49_78018:38
ildikovyou have the attach, then the first detach and then there's no iscsi target anymore18:39
mriedemb/c of the bug ndipanov was talking about?18:39
ildikovhttp://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47/logs/screen-c-vol.txt.gz#_2016-01-22_01_01_59_41618:39
ildikovhttp://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47/logs/screen-c-vol.txt.gz#_2016-01-22_01_01_59_57418:40
ildikovthat I'm not 100% sure about18:40
ildikovI don't know where os-brick and Cinder detach meet, if you know what I mean18:40
openstackgerritAlex Meade proposed openstack/cinder: WIP: user notifications with zaqar  https://review.openstack.org/27147518:42
sheelcfriesen: there is some handling present for volume resize in volume/api.py for checking quota at least in the db18:43
sheeldef extend(self, context, volume, new_size) ... raise exception.VolumeSizeExceedsAvailableQuota(18:43
ildikovmriedem: BTW there's also a log in Nova that there's no attachment_id: http://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47/logs/screen-n-cpu.txt.gz?level=WARNING#_2016-01-22_01_01_58_91718:44
sheelat least this kind of check must be there for snapshots to confirm quota...in case there is some snapshot gb quota available in the db18:44
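A hedged sketch of the db-side check sheel points at, modeled on the extend() quota pattern he quotes: reserve snapshot quota up front and fail fast. QUOTAS.reserve and the exception classes exist in cinder, but the exact kwargs here are an approximation, not the final design:

```python
from cinder import exception
from cinder import quota

QUOTAS = quota.QUOTAS


def reserve_snapshot_quota(context, volume):
    try:
        return QUOTAS.reserve(context, snapshots=1,
                              gigabytes=volume['size'])
    except exception.OverQuota as e:
        quotas = e.kwargs['quotas']
        usages = e.kwargs['usages']
        if 'gigabytes' in e.kwargs['overs']:
            # the new snapshot's gigabytes would exceed the project quota
            raise exception.VolumeSizeExceedsAvailableQuota(
                requested=volume['size'],
                quota=quotas['gigabytes'],
                consumed=usages['gigabytes']['in_use'])
        raise exception.SnapshotLimitExceeded(allowed=quotas['snapshots'])
```

As cfriesen notes just below, this only bounds the db-side quota; it cannot guarantee the backend actually has the space.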
*** haomaiwang has joined #openstack-cinder18:45
cfriesensheel: yes, I see the quota checking.  but that doesn't really help if the user is within their quota but ends up being the straw that breaks the camel's back18:45
*** harlowja has quit IRC18:46
*** jordanP has quit IRC18:46
*** harlowja has joined #openstack-cinder18:46
sheelcfriesen: yup... :) you can't guarantee backend availability from db quota ...  :)18:46
ildikovmriedem: what I don't know though is what happens when the two detach requests are close together, I mean the volume might still be in detaching state, which I guess could block detaching the second attachment18:46
sheelbut at least a first level of check must be added to reduce this issue to some extent...18:47
cfriesensheel: it'd be nice if these "you don't have space" errors would cause the request to be rejected back to the client rather than accepting the request and creating a volume in the error state.18:49
*** martyturner has quit IRC18:49
sheelcfriesen : yes, exactly18:49
*** haomaiwang has quit IRC18:49
*** sombrafam has quit IRC18:51
ildikovhemna: does a detach request fail when the volume is in detaching state?18:51
eharneycfriesen: we need to implement async error reporting to be able to report back that kind of error18:52
cfrieseneharney: no, you could just do the request to the scheduler as an RPC call rather than cast18:52
*** e0ne has joined #openstack-cinder18:52
*** sombrafam has joined #openstack-cinder18:53
hemnaback sorry18:53
*** ChubYann has joined #openstack-cinder18:54
*** alonma has joined #openstack-cinder18:55
ildikovhemna: np :)18:56
hemnaildikov, so you are talking about os-detach right ?18:57
*** cfriesen is now known as cfriesen_away18:57
ildikovhemna: I assume yes :)18:57
ildikovhemna: the tempest test we have is failing, which can have multiple reasons right now18:58
*** sombrafam has quit IRC18:59
hemnahttps://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L59818:59
hemnaso begin_detaching is what's raising that18:59
hemnanot os-detach18:59
hemnaFYI18:59
*** sombrafam has joined #openstack-cinder19:00
hemnaso the volume must be 'in-use'19:00
ildikova-ha, ok19:00
hemnaand attach_status must be 'attached'19:00
hemnaotherwise begin_detaching will bail19:00
*** alonma has quit IRC19:00
ildikovis that attach_status that goes to detaching?19:00
*** PsionTheory has joined #openstack-cinder19:01
hemnastatus will go to detaching19:01
hemnahttps://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L59519:01
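A paraphrase (not the verbatim cinder code) of the begin_detaching() gate behind hemna's two links: the request only proceeds when status is 'in-use' and attach_status is 'attached', otherwise it raises the InvalidVolume error seen in the tempest run:

```python
from cinder import exception
from cinder.i18n import _


def begin_detaching(volume_api, context, volume):
    if (volume['status'] != 'in-use' or
            volume['attach_status'] != 'attached'):
        msg = _("Unable to detach volume. Volume status must be 'in-use' "
                "and attach_status must be 'attached' to detach.")
        raise exception.InvalidVolume(reason=msg)
    # gates actions on the volume to one at a time:
    # 'in-use' -> 'detaching' until the detach completes
    volume_api.update(context, volume, {'status': 'detaching'})
```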
sheelcfriesen, eharney: I agree with eharney on this... but if a solution is possible other than "going through the last stage and then reporting the error", it would be better to report the error to the user early on... (though not by converting the cast to a call)19:02
ildikovhemna: a-ha, ok19:02
eharneysheel: cfriesen: the answer for reporting the error to the user is to have the client poll for the error after the failure, not change the call to synchronous19:02
ildikovhemna: so in tempest even if we have the happy scenario working, if we have the two detach requests too close to each other, that will fail19:02
hemnayah most likely19:03
*** alonma has joined #openstack-cinder19:03
ildikovok, cool19:03
ildikovthen mriedem is right19:03
sheeleharney, cfriesen: right19:03
hemnathat could happen even w/o multiattach19:03
*** lpetrut has quit IRC19:04
ildikovbut then the error is valid as someone tries to detach the volume twice19:04
ildikov... I would assume19:04
mriedemildikov: so what do we have to wait for?19:04
mriedemin tempest i mean19:05
hemnaildikov, correct19:05
hemnait's really the same problem that exists today19:05
ildikovmriedem: I assume the volume status should go back to attached, when the detach request is finished19:05
hemnaif you try and call begin_detaching when it's already detaching19:05
*** laughterwym has joined #openstack-cinder19:05
hemnadon't do that19:05
hemna:P19:05
mriedemildikov: you mean in-use?19:05
hemnait's a valid error19:05
mriedemhemna: it's == the volume?19:05
ildikovmriedem: bah, too many options..., yeah, I meant that19:05
hemnathe exception the API is raising is valid19:06
hemnabecause the volume isn't 'in-use'19:06
hemnait's in some other state.19:06
mriedemso detaching is like a task_state right?19:06
*** haomaiwang has joined #openstack-cinder19:06
ildikovhemna: but in the multiattach case the volume can still be in-use19:06
hemnalike if you happen to issue begin_detaching for the same volume back to back19:06
hemnaildikov, in the multi-attach case19:06
hemnathe volume will still get put into detaching19:06
ildikovbut for multiattach it should be the same attachment as opposed to the same volume19:06
hemnauntil that detach is complete19:07
hemnathen it will go back to 'in-use' if there is an attachment for that volume still outstanding.19:07
hemnaor 'available' if that detach was the last one.19:07
*** alonma has quit IRC19:07
hemnathis gates actions on the volumes basically19:07
hemnato 1 at a time.19:07
ildikovok, makes sense, so we will need to check when it gets back to in-use19:08
mriedemyeah so in _detach here https://review.openstack.org/#/c/266605/4/tempest/api/compute/volumes/test_attach_volume.py19:08
mriedemon the first detach i'm not waiting for a state change19:08
mriedemonly the last one19:08
mriedemwhere is the 'attach_status'?19:09
mriedemis that on the volume?19:09
*** laughterwym has quit IRC19:10
mriedembasically i'm not seeing anything in the rest api logs in that tempest failure that indicates a volume status change while it's detaching the first attachment19:10
ildikovI see only status: http://developer.openstack.org/api-ref-blockstorage-v2.html19:10
mriedemmaybe i need to query the volume until it's (1) in-use and (2) its number of attachments has decremented19:10
*** haomaiwang has quit IRC19:11
hemnayes attach_status is on the volume19:11
ildikovin Nova we set the attach_status by ourselves, it does not come from Cinder19:11
hemnaas is status19:11
ildikovit's sooo confusing19:12
hemnaattach_status is just attached or not basically19:12
hemnastatus has many states for different workflows19:12
*** rcernin has quit IRC19:12
hemnafwiw19:12
mriedemblarg19:12
mriedemnova has vm_state and task_state19:13
hemnayou can be attached and do things to a volume that has nothing to do with attaching or detaching19:13
*** esker has joined #openstack-cinder19:13
ildikovok, this is what I remembered19:13
ildikovmriedem: those are more readable19:13
mriedemhttps://github.com/openstack/cinder/blob/master/cinder/api/v2/views/volumes.py#L5619:13
mriedemso yeah, i guess status is the only thing available there19:14
mriedemthere is no 'attach_status' field in the response19:14
mriedemit must be on the volume in the cinder db though b/c19:14
mriedemhttps://github.com/openstack/cinder/blob/master/cinder/api/v2/views/volumes.py#L9519:14
mriedemit's just not available in the API19:14
mriedem:(19:14
mriedemso tempest can't use it19:14
ildikovno, it's returned under the name 'status'19:14
mriedembut status is 'in-use'19:15
mriedemlook here http://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47//console.html19:15
*** IlyaG has joined #openstack-cinder19:15
*** IlyaG has quit IRC19:15
mriedemthere is no volume status of 'attaching' or 'detaching'19:15
ildikov<hemna> yes attach_status is on the volume19:15
ildikov<hemna> as is status19:15
mriedemyeah, in the DB19:15
mriedemnot in the API response of the volume19:15
ameadesheel, eharney, smcginnis: https://etherpad.openstack.org/p/mitaka-cinder-midcycle-user-notifications19:15
mriedemthis is what the client sees https://github.com/openstack/cinder/blob/master/cinder/api/v2/views/volumes.py#L58-L7919:16
ameadeand i pushed up some working code that pushes said messages to the user19:16
ameadeon a zaqar queue19:16
ildikovmriedem: I saw detaching somewhere19:16
mriedemildikov: the db has this https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L14719:16
hemnamriedem, which API call ?19:16
hemnaget volume ?19:16
mriedemyeah19:16
hemnawtf19:16
ildikovmriedem: in the logs I mean, although that might have been a log line and not an API response19:16
ameadei don't think implementation is hard for this at all, but defining a common message format across projects and the message content is key19:17
hemnathose have to be there, or there would be much bigger problems19:17
ameadeparallels with standard notifications19:17
mriedemildikov: hemna: this is the volume response we have http://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47//console.html#_2016-01-22_01_08_51_81519:17
e0nehemna: hi. I've got few questions according your comment to my patch19:17
hemna"status": "in-use"19:17
*** xyang has quit IRC19:17
mriedemhemna: yeah, right :)19:17
mriedembut19:17
* hemna is confused19:18
mriedemit's always in-use as long as you have an attachment19:18
mriedembut you're not allowed to make a 2nd detach request while a first attachment is being detached19:18
eharneyameade: i'm still of the opinion that this should be handled in Cinder and its API interface19:18
mriedemInvalid input received: Invalid volume: Unable to detach volume. Volume status must be 'in-use' and attach_status must be 'attached' to detach. (HTTP 400) (Request-ID: req-989e9a96-cdab-4b90-94dd-85db1f714e7c)"19:18
hemnamriedem, well it should get set to detaching if begin_detaching is called and it's 'in-use'19:18
mriedemspecifically19:18
mriedem"and attach_status must be 'attached' to detach. "19:18
hemnahttps://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L59519:18
sheelameade: yes i was looking into https://review.openstack.org/#/c/169591/2/specs/liberty/messaging-the-user.rst19:18
smcginnissheel: Hey, saw your ping from earlier. Still need something?19:19
ameadeeharney: i think that's a valid option19:19
mriedemhemna: ok, and we hit https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L59819:19
*** mragupat has joined #openstack-cinder19:19
mriedemb/c attach_status is already 'detaching'19:19
mriedemb/c of the first detach request that's still going on19:20
mriedemwhen the 2nd detach request happens19:20
sheelsmcginnis: i raised one BP : https://blueprints.launchpad.net/cinder/+spec/serviceerrorreporting19:20
ildikovmriedem: yeah, it was the cinder api log I saw: http://logs.openstack.org/33/266633/2/check/gate-tempest-dsvm-full/93b2e47/logs/screen-c-api.txt.gz#_2016-01-22_01_01_51_90019:20
sheelsmcginnis: please have a look19:20
ameadeeharney, sheel: Heat has been spinning wheels with me from the beginning of this discussion. One thing I was to avoid is users having to hit every single endpoint to see all of their messages19:20
hemnayah, you can't do 2 simultaneous19:20
ameadewant*19:20
mriedemgah, i really just with attach_status was in the volume get response19:20
mriedem*wish19:20
smcginnissheel: Will do, thanks.19:21
eharneyameade: i dunno, i think if you want a message for why a Cinder operation failed, Cinder would be the right place to ask19:21
sheelameade: hey please have a look at my BP - https://blueprints.launchpad.net/cinder/+spec/summarymessage19:21
sheelameade: just refer html tagging19:21
mriedemb/c right now, as far as i understand, tempest will need to (1) make the detach request on the first attachment, (2) wait for the volume status to go to 'detaching', then (3) wait for it to go back to in-use (if there is another pending attachment)19:21
mriedemthere is a race in (2) ^19:21
sheelsmcginnis: thank you19:21
eharneyameade: but i was also thinking of a pull model rather than push19:21
ameadesheel: will do19:21
sheelameade, eharney : we can have combination of both19:21
sheelpush and pull19:22
sheelnotifications can just be stored and then shown accordingly when the user requests them19:22
mriedemildikov: hemna: it seems like the best thing tempest can do is check (1) volume status is 'available' or (2) volume status is in-use and the attachment that we just detached is no longer part of the volume's 'attachments' list in the response19:22
hemnamriedem, hrmm  attach_status is on the volume table19:22
ildikovmriedem: we can check the status + the number of attachments19:22
mriedemhemna: right, but it's not in the api response19:22
sheelameade, eharney: I was thinking of implementing things like ceilometer19:22
ildikovthat's stable19:22
hemnawhy isn't that in the get_volume call19:22
mriedemildikov: yeah19:22
hemnawtf19:22
mriedemhemna: it's not in the view19:22
eharneysheel: i was thinking of something rather different19:23
mriedemhemna: https://github.com/openstack/cinder/blob/master/cinder/api/v2/views/volumes.py#L5619:23
hemnawell the status should be correct19:23
mriedemtempest can't rely on the status19:23
mriedemit's too racy19:23
mriedemthe client will have to rely on the volume status + the attachments list19:23
hemnawait19:23
hemnastatus can't be racy19:23
hemnaor cinder itself wouldn't work19:23
sheelameade, eharney: notifications-> notificationAgent->collector->db(mongo)->api to fetch details to show to user19:23
mriedemhemna: no cinder19:23
mriedemthe client that's polling cinder19:24
openstackgerritMitsuhiro Tanino proposed openstack/cinder: Support cinder_img_volume_type in image metadata  https://review.openstack.org/25864919:24
sheelameade, eharney: this way we can let the user decide if he/she wants to see some status explicitly through some horizon tab19:24
mriedemanyway, as ildikov said, i think tempest - and any client - is going to have to check the volume status + the attachments list to know when the detach is really done19:24
eharneysheel: that also means that the user can't get messages unless they can access services other than cinder19:24
hemnamriedem, correct19:24
ildikovyeah, the status itself is not enough here :(19:24
mriedemhemna: at some point cinder should add the attach_status to the volume response in a microversion19:24
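A hypothetical tempest-style waiter implementing the approach being converged on here: poll the volume until it is 'available', or 'in-use' with the detached attachment gone from the 'attachments' list, instead of racing to observe the transient 'detaching' status. The client method and field names mirror tempest's volumes client loosely and are assumptions:

```python
import time


def wait_for_volume_detach(volumes_client, volume_id, attachment_id,
                           timeout=300, interval=2):
    """Wait until attachment_id is fully detached from volume_id."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        volume = volumes_client.show_volume(volume_id)['volume']
        current = [a.get('attachment_id') for a in volume['attachments']]
        if volume['status'] == 'available' or (
                volume['status'] == 'in-use'
                and attachment_id not in current):
            return volume  # detach is really done
        time.sleep(interval)
    raise RuntimeError('Timed out waiting for attachment %s of volume %s '
                       'to detach' % (attachment_id, volume_id))
```

Polling the attachments list sidesteps the in-use -> detaching -> in-use race discussed below, where a fast transition can be missed entirely.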
sheelameade, eharney: this will be easy to have for all projects19:24
hemnaend users will have to try again19:25
sheelameade, eharney: no, i am thinking of writing one separate service19:25
hemnaas the error is correct19:25
sheelameade, eharney: which will be accessible to all components19:25
sheelameade, eharney: only notification part will be lying in cinder19:25
sheelameade, eharney: everything else will be handled in the separate service19:25
eharneysheel: i really do not like that model19:25
ameadesheel: that's what i'm thinking with zaqar19:25
ameadeeharney: yeah i do want the capability with cinder as stand alone as well19:26
sheelameade, eharney: I am just drafting BP, may be you get idea about what exactly i am thinking19:26
bswartzsheel: s/lying/laying/19:26
ameadei was thinking that there would be the ability to dump these messages to different/many different places19:26
hemnamriedem, yah that can't hurt I suppose19:26
eharneyameade: yeah, there's no reason not to keep it in a Cinder database table w/ a reference you can ID it by, and then we can shove it out to zaqar or whatever after that part19:26
ildikovhemna: yeah, the error itself is fine, but the client cannot reliably follow the flow here without checking the number of attachments19:26
ameadelike zaqar + db table19:26
hemnaalthough 'in-use' is the same thing as attached.19:26
hemnain this case19:26
ameadeeharney: yeah, they are just two different places to shove to19:26
eharneyameade: but if we can't just query Cinder to get the info in the simple case then things are being made too complicated imo19:27
eharneyameade: right19:27
hemnaildikov, the client needs to make sure the status is 'in-use'19:27
ildikovhemna: is attach_status 'attached' -> 'detaching' -> 'attached' if there are multiple attachments?19:27
hemnaor cope with the exception and then wait19:27
*** haomaiwang has joined #openstack-cinder19:27
sheelameade, eharney: actually we can query specific service/node to get specific details19:27
hemnaildikov, correct19:27
*** markvoelker has quit IRC19:27
sheelameade, eharney: it won't be like fetching all details just to know the details of one component19:27
hemnaor status  'in-use' -> 'detaching' -> 'in-use'19:27
ildikovhemna: then adding that to the response does not help either19:27
sheelameade, eharney: we will have filters to select messages specific to a single component, all, or some19:27
eharneysheel: i want to query the service that i called 2 seconds earlier w/ the API that generated the results i'm interested in19:28
mriedemhemna: sure, but "status  'in-use' -> 'detaching' -> 'in-use'" is not exposed to the user19:28
hemnathe status field values of 'in-use' and 'available' are the same as attach_status 'attached' and 'detached'19:28
*** lpetrut has joined #openstack-cinder19:28
ildikovmriedem: I think checking the attachments or retrying is what we can do currently19:28
mriedemb/c the user doesn't have the attach-status in the response19:28
ameadesheel: you coming to the midcycle?19:28
hemnamriedem, not true19:28
ildikovmriedem: obviously in tempest checking the attachments would be better19:28
mriedem"the status field values of 'in-use' and 'available' are the same as attach_status 'attached' and 'detached'"19:28
hemnaif you call get volume at the right time19:28
sheelameade: sorry, i wont be able to join19:28
hemnathe api response will show 'detaching'19:28
hemnaif you get it at the right time19:28
mriedemhemna: oh right, yeah, 'at the right time'19:29
mriedem== race19:29
hemnano19:29
hemnathat's not a race19:29
hemnathat's a valid condition for the volume to be in19:29
ameadesheel, eharney: the last day is basically a hackathon right?19:29
hemnathe volume is in the process of detaching19:29
eharneyameade: i think so19:29
mriedemhemna: i know, but the client could totally miss the time that the volume is in 'detaching' state if it's fast19:29
ameadesheel: maybe you could virtually join us for some time then to discuss?19:29
mriedemit could go from in-use > detaching > in-use while the client is polling19:29
mriedemand if the client misses the 'detaching' status, they could wait indefinitely19:30
sheelameade, eharney: sure, I will try to join19:30
mriedemb/c they are waiting for something that already happened and they missed it19:30
sheelwhen ?19:30
mriedemwhich was the 'right time'19:30
sheelmeans date?19:30
ameadesheel: what is your timezone?19:30
sheelUTC+5:3019:30
hemnaI suppose if you called get volume and got 'in-use'19:30
ildikovmriedem: yeah, checking only status is not reliable19:30
ameadesheel: Jan 29th is the hackday for the midcycle19:31
mriedemi've seen exactly these types of races in the gate before19:31
hemnaand then the volume went 'detaching' from someone else19:31
hemnaand then called begin_detaching19:31
hemnait will fail19:31
hemnabut that's still valid19:31
mriedemb/c computers are fast19:31
ameadesheel: i am sure other folks will want to discuss earlier in the week, perhaps you could be virtual then as well?19:31
mriedemanyway, i've updated https://review.openstack.org/#/c/266605/4/tempest/api/compute/volumes/test_attach_volume.py with comments19:31
mriedemjust needs code19:31
sheelameade: give me some time to draft the BP...by the 27th...maybe we will be quite in sync after that...19:31
sheelameade: then it will be quite easy to explain19:31
ildikovhemna: I think we all agree that the error is correct19:32
sheelameade: so its ok to discuss after that as you suggested19:32
ameadesheel: sounds good, feel free to ping me anytime19:32
*** haomaiwang has quit IRC19:32
ameadewe definitely need to understand all approaches and lay out the pros and cons of each19:32
ildikovhemna: we are just trying to figure out how to do sequential detaching of attachments of the same volume without hitting the error19:32
sheelameade: sure, thanks19:32
ameadesometimes i feel we don't do that explicitly enough as a community19:32
mriedemildikov: hemna: yup, all on the same team still :) <319:33
mriedembff's and all19:33
sheelameade, eharney: i would need some help from you guys during design19:33
ildikovhemna: I mean we were investigating the possibility of checking status as opposed to retry19:33
sheeleharney, ameade: so i am quite inclined to discuss at every step19:33
ildikovmriedem: hemna: BFFs lol <3 :)19:34
ameadesheel: yeah definitely19:34
sheelameade : will try to share implementation items and plan as well...with BP draft19:35
sheelameade: will also require 2 more BPs to implement along with this19:35
*** bardia has quit IRC19:35
sheelameade: please ping me the time of discussion for the 29th19:36
sheelso that i could plan accordingly19:36
ildikovhemna: I think it will be worth a line of docs somewhere that if the volume has multiple attachments the user can still detach only one at a time19:36
ameadesheel, eharney: lets nail that down now19:36
*** lpetrut has quit IRC19:37
ildikovhemna: it is the same for attaching too, right?19:37
sheelameade, eharney: yes sure, share your availability19:37
ildikovhemna: I mean a volume can be attached when it's 'available' or 'in-use' but not in 'attaching'19:38
sheelameade, eharney: I am quite eager to discuss this, so ok with any time..19:38
ameadesheel, eharney: 1500 UTC on the 29th sound good? Thats 10am EST and 8:30pm your time sheel19:38
sheelameade, eharney: I am ok19:38
ameadesheel: if we nail down time before then to discuss during the midcycle sessions I will let you know19:38
eharneyameade: sheel: works for me but we'll have to make sure we don't have things from the week overflowing into that time19:38
ameadeeharney: true, lets just say that time is tentative19:39
sheelameade, eharney: ok, either on the 29th or later as per availability19:39
*** krtaylor has quit IRC19:41
ameadesheel, eharney : i see 4 Pieces of a solution:19:41
ameade    1. Format of messages the user sees19:41
ameade    2. How to create the content of said messages19:41
ameade    3. Where to put messages19:41
ameade    4. How to expose them to users19:41
ameadereally 4 different problems19:42
sheelameade, eharney: all are mentioned in the BP implicitly19:42
sheelameade, eharney: let me answer as per current thinking19:42
sheel1. format of message - Tenant | EventID | NodeName | NodeIP | Operation | Message | Level | EventTime | Details19:43
ildikovmriedem: I've just realized that we're using two gerrit topics for the BP... :)19:43
*** xyang has joined #openstack-cinder19:43
sheel2. How to create the content of said messages : through notifications... maybe i am missing the question here19:44
mriedemildikov: i noticed that over a week ago19:44
mriedemildikov: i adjusted my tempest and devstack patches to use the same topic as your nova series19:44
ildikovmriedem: I didn't understand first why I don't see both of your patches for tempest19:44
sheel3. Where to put messages : In database or zaqar - will discuss on this19:44
ameadesheel: yeah, 2 is really about how we keep out sensitive information19:44
mriedemhttps://review.openstack.org/#/q/topic:bp/volume-multi-attach19:44
ildikovmriedem: thanks :)19:44
sheel4. How to expose them to users :  through API on horizon tab19:44
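Purely illustrative: one way the per-event record sheel sketches could be modeled. The field names come straight from his format line above; the function itself is hypothetical:

```python
import datetime
import uuid


def build_user_message(tenant, node_name, node_ip, operation,
                       message, level, details=None):
    """Assemble one user-facing event record in sheel's proposed format."""
    return {
        'Tenant': tenant,
        'EventID': str(uuid.uuid4()),
        'NodeName': node_name,
        'NodeIP': node_ip,
        'Operation': operation,
        'Message': message,
        'Level': level,
        'EventTime': datetime.datetime.utcnow().isoformat(),
        'Details': details or {},
    }
```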
eharneysheel: ameade: i think 2 is, manually, by constructing them in the code where they are needed, as far as i've been thinking19:44
ildikovmriedem: your devstack patch has the other topic19:45
eharneysheel: ameade: #4 needs something more low level -- like, i sent a request with req-id abcd-efgh, and want to look up an error that was associated with that request19:45
ameadeeharney: i have an idea there, because we don't want any mistakes, we construct all message content as constants in a single file so it's easy to see them all19:45
sheelameade : same as eharney mentioned for #219:45
mriedemildikov: so it does19:45
mriedemildikov: i couldn't update that one w/o changes19:45
mriedemildikov: and it was perfect so no changes :)19:46
ildikovmriedem: don't bother yourself with it19:46
*** martyturner has joined #openstack-cinder19:46
ildikovmriedem: lol :)19:46
ameadeeharney, sheel: for #2 the messages also have to be well formed and make sense to a user, so a single place of definition makes it easier to review19:46
ildikovmriedem: I'm sorry, it's my bad ;)19:46
openstackgerritKendall Nelson proposed openstack/os-brick: Remove multipath -l logic from ISCSI connector  https://review.openstack.org/26708519:46
sheelameade, eharney: yup we will have event categorizations19:46
sheelameade, eharney: for each type of events19:46
sheelameade, eharney: volume, snapshot, qos19:46
ameadesheel, eharney: for my etherpad, i will change it to break it into these 4 problems and we can document the options for each one19:47
ameadesheel: agreed19:47
openstackgerritKendall Nelson proposed openstack/os-brick: Remove multipath -l logic from ISCSI connector  https://review.openstack.org/26708519:47
sheelameade: thank you19:47
eharneyameade: sheel: makes sense but i think i'm more concerned at this point about building the infrastructure for this than what the messages actually look like exactly19:47
ameadeeharney: sure19:48
openstackgerritKendall Nelson proposed openstack/os-brick: Remove multipath -l logic from ISCSI connector  https://review.openstack.org/26708519:48
*** haomaiwang has joined #openstack-cinder19:48
hemnasorry guys, I have 2 other fires going on ...19:48
hemna<ildikov> hemna: I mean a volume can be attached when it's 'available' or 'in-use' but not in 'attaching'19:48
hemnano19:48
sheeleharney: I am sorry to ask again, but could you please refer to the html tagging by opening it in a browser.. it will give some more details of my views19:48
hemnathe volume can't be in attached mode if the status is 'available'19:48
sheelameade, eharney: specifically about what the message will look like19:49
eharneysheel: not sure i follow, Cinder can't generate HTML for this stuff19:51
ameadesheel: yeah i like the idea19:51
ameadeeharney: just a mockup to imagine what it could look like in horizon19:51
sheeleharney: in horizon19:51
sheeleharney: it's just a depiction of what i want to use to show the message to the user19:52
eharneysheel: ameade: ok but that's probably outside of the scope of Cinder's part, since this has to be usable from a REST interface, CLIs, etc19:52
sheelameade, eharney: notifications-> notificationAgent->collector->db(mongo)->api to fetch details to show to user19:53
*** haomaiwang has quit IRC19:53
sheelameade, eharney : this will be usable from cli as well19:53
ildikovhemna: sorry, so I meant that the attach request will go through if the volume is 'available' or if it's multiattach then it can be 'in-use' too19:54
hemnayes19:54
ildikovmriedem: in the tests basically the cleanup that fails, right?19:54
ildikovhemna: and if it's 'attaching' the request will fail19:54
hemnacorrect19:54
sheelameade, eharney: I will discuss this in the horizon meeting as well19:54
hemnajust as it would w/o multiattach19:54
hemnaand even for other volume related workflows19:55
sheelameade, eharney: just after our discussion get finalized19:55
hemnacan't call delete volume on a volume where status isn't available19:55
hemnaetc19:55
*** sombrafam has quit IRC19:55
ildikovhemna: cool, I got a question on my Nova patch why I don't allow attaching state and I wrote the guy the exact same thing that we should not allow parrallel attaching for the same volume19:55
ildikov*parallel19:56
sheelameade, eharney: notifications-> notificationAgent->collector->db(mongo)->api to fetch details to show to user -> through CLI or Horizon tab19:56
ildikovhemna: all good, thanks19:57
*** dims has joined #openstack-cinder19:57
*** dimsum__ has quit IRC19:57
eharneysheel: ameade: i'm much more interested in the part before the notifications19:58
eharneysheel: ameade: for now anyway19:58
*** geguileo has quit IRC19:58
mriedemildikov: yeah20:00
ameadewe'll figure this out, i've seen contention at all 4 points so it'll be tough to find consensus without compromise20:00
ameade(reasons it's not done yet) lol20:00
sheeleharney: please refer to my other BP as well, which caters to your "before notification" interest to some extent20:01
sheelhttps://blueprints.launchpad.net/cinder/+spec/serviceerrorreporting - where i am planning to report service errors if they stop abnormally20:01
sheelI will be taking care of one more point - call/cast combination in cinder operations for better error tracking - going to raise a separate BP for that20:02
*** raildo is now known as raildo-afk20:02
eharneysheel: i don't think the call/cast thing is going to fly20:03
*** alonma has joined #openstack-cinder20:03
sheeleharney: I will give details, I think this will interest you20:03
sheelthis will be of your interest20:03
hemnaildikov, ok sounds good20:04
hemnaphew20:04
sheeleharney: It will serve a lot of purposes,20:04
sheeleharney: without performance issues20:04
*** hyakuhei_ has joined #openstack-cinder20:04
*** lpetrut has joined #openstack-cinder20:05
*** mudassirlatif has quit IRC20:05
sheeleharney: we will discuss this part later ..20:06
sheelotherwise this is going to be huge to discuss all things at once20:06
ildikovmriedem: ok, then I understand the flow correctly20:06
ildikovhemna: :)20:06
sheel:)20:06
eharneyok20:06
sheeleharney: please keep on giving your views, i am more interested in taking constructive inputs20:07
sheeleharney: thank you!!20:07
*** e0ne has quit IRC20:08
*** alonma has quit IRC20:08
*** haomaiwang has joined #openstack-cinder20:09
sheelsmcginnis: for "https://blueprints.launchpad.net/cinder/+spec/serviceerrorreporting" -> I think the "Definition" field was updated instead of the "Direction:" field20:11
sheel:)20:11
*** alonma has joined #openstack-cinder20:12
smcginnissheel: For now... :)20:12
sheelsmcginnis: ok, i thought it was just because of human error...20:13
smcginnis;)20:13
sheel:)20:13
*** haomaiwang has quit IRC20:13
*** martyturner has quit IRC20:16
*** alonma has quit IRC20:16
*** martyturner has joined #openstack-cinder20:16
*** esker has quit IRC20:16
*** esker has joined #openstack-cinder20:17
*** alonma has joined #openstack-cinder20:18
sheelameade: Thanks for creating "https://etherpad.openstack.org/p/mitaka-cinder-midcycle-user-notifications" - I have updated it with some points..20:19
*** timcl1 has quit IRC20:20
ameadesheel: awesome20:21
sheelameade: :)20:21
*** alonma has quit IRC20:23
*** e0ne has joined #openstack-cinder20:24
*** alonma has joined #openstack-cinder20:24
*** timcl1 has joined #openstack-cinder20:25
*** apoorvad has quit IRC20:28
*** alonma has quit IRC20:29
*** salv-orlando has quit IRC20:29
*** haomaiwang has joined #openstack-cinder20:30
*** martyturner has quit IRC20:32
*** haomaiwang has quit IRC20:34
Swansonasselin, CI question.  I can't seem to build a node today as nodepool chokes on vahana with a "fatal: Couldn't find remote ref master".  I'm trying to figure out where this is referenced so I can remove it.20:41
asselinSwanson, is this a job or image build? link to logs?20:42
Swansonasselin, image build.20:43
asselinthe full command used should be logged...try it again to make sure it's not intermittent20:44
Swansonasselin, I've tried it twice.  I've also tried it by hand.20:44
Swansonasselin, git --git-dir=/opt/dib_cache/source-repositories/vahana_16d63c569d46b7301825deb73789dae51c725c69/.git fetch --prune --update-head-ok git://git.openstack.org/openstack/vahana.git +master:master20:45
Swansonasselin, running a git clone of git://git.openstack.org/openstack/vahana.git seems to clone an empty repo.20:46
*** hyakuhei_ has quit IRC20:46
asselinSwanson, there's a file somewhere that defines all the projects...but you might want to ask in openstack-infra in case they can fix it on the backend20:47
mriedemildikov: hemna: i took the easy route for now on checking the status https://review.openstack.org/26660520:48
mriedemif that doesn't work, i'll switch to checking the volume attachments20:48
*** pratap_ has quit IRC20:49
asselinSwanson, I just ran into it now...will ask20:49
Swansonasselin, Thanks!  It happens about 4GB in, so it kinda sucks.20:50
ildikovmriedem: I don't know how long the volume is in 'detaching' state, but it looks good for now, from the logs I believe it should work20:50
*** pratap has joined #openstack-cinder20:51
ildikovmriedem: for attach it works fine to wait for 'in-use'20:52
ildikovmriedem: and after the first attachment it's the same situation as detach20:52
*** lpetrut has quit IRC20:53
*** mtanino has quit IRC20:55
hemnamriedem, so servers_client.detach_volume, calls begin_detaching, then detach ?20:58
hemnais that the novaclient object ?20:58
hemnasorry trying to follow20:58
*** e0ne has quit IRC20:58
*** timcl1 has left #openstack-cinder20:59
*** edtubill has quit IRC21:01
*** edtubill has joined #openstack-cinder21:01
mriedemhemna: the compute API calls cinderclient.begin_detaching before casting off to the compute service21:03
mriedemat which point the REST API response with the 202 to the client21:03
mriedem*responds21:03
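A rough sketch (names abbreviated, not the exact nova code) of the ordering mriedem describes: the compute API flips the cinder volume to 'detaching' synchronously, then casts to the compute host and returns 202 while the real detach work is still in flight:

```python
def detach_volume(self, context, instance, volume):
    # synchronous: cinder moves the volume 'in-use' -> 'detaching'
    self.volume_api.begin_detaching(context, volume['id'])
    # asynchronous cast: the compute host does the actual detach later
    self.compute_rpcapi.detach_volume(context, instance=instance,
                                      volume_id=volume['id'])
    # the REST layer responds 202 to the caller as soon as this returns
```

Since the status update happens before the cast, a client that sees the 202 can safely start waiting for the 'detaching' state, which is the point hemna confirms next.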
hemnaok, so it's safe to wait for detaching then21:05
jgriffithhemna: it's just like attach, just for detach21:07
jgriffithhemna: there's an  analogous call on either side.21:07
jgriffithhemna: it's *just* a db state update21:08
hemnaI just wanted to make sure that it was safe to wait for the detaching state21:08
*** thangp has left #openstack-cinder21:08
hemnaotherwise it sits waiting and fails21:08
jgriffithhemna: yeah.. in that call you best not be waiting long :)21:08
hemna:)21:08
*** akerr_ has joined #openstack-cinder21:09
jgriffithnot avishay did throw a new case in to that logic with migration though :(21:09
jgriffitherr... s/not/note/21:09
mriedempatrickeast: heh https://review.openstack.org/#/c/229152/21:09
mriedemsoftlayer is thanking you for that21:09
mriedemthey have cinder liberty + nova kilo21:09
mriedemand iscsi multipath isn't cleaning up after itself...21:09
openstackgerritKendall Nelson proposed openstack/os-brick: Remove multipath -l logic from ISCSI connector  https://review.openstack.org/26708521:10
jgriffithmriedem: multi-path is broken anyway21:10
mriedemjgriffith: woot!21:10
jgriffithmriedem: or more correctly, on Nova it breaks everybody else :(21:10
mriedemjgriffith: in general? in kilo? in liberty?21:10
*** apoorvad has joined #openstack-cinder21:10
mriedemalright, well, idk? i know nova has a buttload of volume + multipath open bugs21:11
jgriffithmriedem: I'm being dramatic :)  But... I'm annoyed because the multi-path flag setting in nova.conf is global.  So it breaks devices that don't use multi-path21:11
mriedemthat probably no one is working on fixing21:11
jgriffithmriedem: I'm working on it on and off :)21:11
*** akerr has quit IRC21:11
jgriffithanywho...21:12
patrickeastmriedem: haha nice, glad others are getting some use from those21:12
* patrickeast still has to apply them manually on top of Kilo deployments :(21:12
openstackgerritKendall Nelson proposed openstack/os-brick: Remove multipath -l logic from ISCSI connector  https://review.openstack.org/26708521:13
*** sheel has quit IRC21:14
mriedempatrickeast: yeah someone told me they had everything on liberty, i guess not21:15
mriedemso i'm just pushing them to upgrade21:15
mriedemi mean, it's just upgrading nova, how hard could it be?21:15
patrickeastmriedem: definitely the best approach... stuff works much better on liberty21:15
patrickeastlol21:15
patrickeasteasy as pie21:15
mriedemhence http://www.danplanet.com/blog/2015/10/05/upgrades-in-nova-the-details/21:15
*** marcusvrn_ has quit IRC21:17
*** salv-orlando has joined #openstack-cinder21:18
openstackgerritKendall Nelson proposed openstack/os-brick: Remove multipath -l logic from ISCSI connector  https://review.openstack.org/26708521:20
*** alonma has joined #openstack-cinder21:25
*** jungleboyj has quit IRC21:25
*** jwcroppe has quit IRC21:27
*** salv-orlando has quit IRC21:28
*** alonma has quit IRC21:29
openstackgerritKendall Nelson proposed openstack/os-brick: Remove multipath -l logic from ISCSI connector  https://review.openstack.org/26708521:29
hemnammm pie21:30
*** winston-d_ has quit IRC21:32
*** alonma has joined #openstack-cinder21:32
*** eharney has quit IRC21:33
*** eharney has joined #openstack-cinder21:35
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - VMAX driver failing to remove zones  https://review.openstack.org/24493321:35
*** alonma has quit IRC21:36
*** eharney_ has joined #openstack-cinder21:38
*** markvoelker has joined #openstack-cinder21:40
*** markvoelker has quit IRC21:40
*** markvoelker has joined #openstack-cinder21:41
*** eharney has quit IRC21:42
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Add volume fail-back capabilities  https://review.openstack.org/26539921:43
*** mtanino has joined #openstack-cinder21:43
*** cebruns has quit IRC21:44
*** cebruns has joined #openstack-cinder21:46
openstackgerritAlex O'Rourke proposed openstack/cinder: LeftHand: Add volume fail-back capabilities  https://review.openstack.org/27153421:47
*** mriedem has quit IRC21:51
*** baumann has left #openstack-cinder21:52
*** rcernin has joined #openstack-cinder21:53
*** cebruns has quit IRC21:53
*** haomaiwang has joined #openstack-cinder21:53
*** cebruns has joined #openstack-cinder21:55
openstackgerritAlexey Khodos proposed openstack/cinder: NexentaStor5 iSCSI driver unit tests  https://review.openstack.org/27153721:56
*** haomaiwang has quit IRC21:57
alkhodostbarron: Hi Tom, you might want to take a look :) https://review.openstack.org/27153721:58
tbarronalkhodos: looking ...21:58
*** mragupat has quit IRC21:59
*** jgregor has quit IRC21:59
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Changing PercentSynced to CopyState in isSynched  https://review.openstack.org/24699222:01
*** hyakuhei_ has joined #openstack-cinder22:01
*** diablo_rojo has left #openstack-cinder22:02
*** eharney_ has quit IRC22:05
*** erlon has quit IRC22:06
*** salv-orlando has joined #openstack-cinder22:10
*** tpatzig has left #openstack-cinder22:12
*** dustins has quit IRC22:14
openstackgerritAlex O'Rourke proposed openstack/cinder: LeftHand: Add volume fail-back capabilities  https://review.openstack.org/27153422:16
*** amoturi has left #openstack-cinder22:16
*** tpsilva has quit IRC22:16
*** xyang has quit IRC22:17
*** eharney_ has joined #openstack-cinder22:18
*** JoseMello has quit IRC22:18
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - get iscsi ip from port in existing MV  https://review.openstack.org/24599722:20
*** xyang1 has quit IRC22:21
*** bardia has joined #openstack-cinder22:29
openstackgerritAlex O'Rourke proposed openstack/cinder: LeftHand: Implement un/manage snapshot support  https://review.openstack.org/25501522:30
openstackgerritAlex O'Rourke proposed openstack/cinder: LeftHand: Implement v2 replication (unmanaged)  https://review.openstack.org/25554422:30
openstackgerritAlex O'Rourke proposed openstack/cinder: LeftHand: Add volume fail-back capabilities  https://review.openstack.org/27153422:30
openstackgerritAlex O'Rourke proposed openstack/cinder: LeftHand: Updating minimum client version  https://review.openstack.org/26778022:30
tbarronalkhodos: your coverage *for ns5* looks good now!22:30
angela-ssmcginnis: could you have a look at this review when you have a chance?  this is for friendly zone names changes. https://review.openstack.org/#/c/180518/22:31
*** alonma has joined #openstack-cinder22:32
*** markvoelker has quit IRC22:32
alkhodostbarron: glad to hear!22:33
*** haomaiwang has joined #openstack-cinder22:33
tbarronalkhodos: I think it's nice to have a separate test case for each assertRaises, with that as the final line.22:34
tbarronalkhodos: but some people think I'm pedantic on that, so not an issue for this review22:34
tbarronalkhodos: I like the ddt module for being able to do that with lots of different input data and expected results.22:35
tbarronalkhodos: so that you don't have to write a million test cases22:35
*** [1]Thelo has joined #openstack-cinder22:35
tbarronalkhodos: might be giving the impression that I think I'm an expert on this stuff, but far from it.22:35
tbarronalkhodos: just sharing some observations from fumbling around with unit tests over the last year or so22:36
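An illustrative example of the ddt pattern tbarron describes: one test method, many input cases, each run ending in assertRaises. The volume-size validator here is made up for the example:

```python
import unittest

import ddt


def validate_size(size):
    """Toy validator standing in for real driver input checking."""
    if not isinstance(size, int) or size < 1:
        raise ValueError('invalid size: %r' % (size,))
    return size


@ddt.ddt
class ValidateSizeTestCase(unittest.TestCase):

    # each datum becomes its own test case, so one method covers
    # many bad inputs without writing a million test methods
    @ddt.data(-1, 0, 'ten', None, 1.5)
    def test_invalid_sizes_raise(self, bad_size):
        self.assertRaises(ValueError, validate_size, bad_size)


if __name__ == '__main__':
    unittest.main()
```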
*** alonma has quit IRC22:37
*** haomaiwang has quit IRC22:38
*** PsionTheory has quit IRC22:38
*** Thelo has quit IRC22:38
*** [1]Thelo is now known as Thelo22:38
jgriffithanybody else seeing this mess on volume attach today:  http://paste.openstack.org/show/484748/22:45
hemnaew22:46
*** hyakuhei_ has quit IRC22:46
hemnajgriffith, so a few things22:46
hemnathere was a recent cinder patch that allows passing in instance_uuid and host at the same time22:47
hemnaand Nova added host passing to os-attach22:47
hemnahttps://review.openstack.org/#/c/256273/22:47
jgriffithhemna: sigh22:47
jgriffithhemna: i saw the aio patch, but hadn't come across the others yet22:47
hemnahttps://review.openstack.org/#/c/266006/22:48
*** rajinir has joined #openstack-cinder22:50
openstackgerritMitsuhiro Tanino proposed openstack/cinder: WIP:Support restore target configration during driver initialization  https://review.openstack.org/27142422:50
openstackgerritMitsuhiro Tanino proposed openstack/cinder: WIP:[LVM] Restore target config during driver initialization  https://review.openstack.org/27142422:51
*** alonma has joined #openstack-cinder22:51
jgriffithhemna: confusing, because the error is thrown by the client it would appear22:52
jgriffithhemna: the call never even seems to go to Cinder... which makes me unclear why this isn't hitting other drivers/CIs22:52
hemnahrmm22:53
openstackgerritMerged openstack/cinder: Huawei: Refactor driver for the second time  https://review.openstack.org/25692022:53
jgriffithhemna: unless I just haven't found it on the other side yet :)22:53
hemnaraise exceptions.from_response(resp, body)22:53
hemnathat smells like it's processing the response from the API though22:53
*** rcernin has quit IRC22:53
hemnait's probably buried in your API log somewhere22:54
jgriffithhemna: well... looking at c-api and so far searching on that volume-id I don't see an attach request for it at all22:54
*** krtaylor has joined #openstack-cinder22:54
jgriffithhemna: I see create get and delete and that's it22:54
jgriffithhemna: http://54.164.167.86/solidfire-ci-logs/refs-changes-43-266743-4/logs/c-api.log.txt22:55
hemnahrmm22:55
*** dims_ has joined #openstack-cinder22:56
openstackgerritMark Sturdevant proposed openstack/cinder: W-I-P Fix 3PAR to work with nova evacuate  https://review.openstack.org/26609822:56
*** alonma has quit IRC22:56
jgriffithhemna: hmm... maybe it is buried in there deeper22:57
jgriffithsigh22:57
hemnaI'm searching for 40022:57
hemna....22:57
hemnalots of em though...22:57
jgriffithhemna: stupid negative tests!22:57
hemnaheh yah22:58
hemnacreate a volume with -1Gb size.....22:58
*** dims has quit IRC22:58
jgriffithhemna: ok, found it22:59
hemna2016-01-22 22:11:37.80222:59
jgriffithhemna: 2016-01-22 22:11:34.27922:59
hemna:P22:59
jgriffith802?22:59
*** alonma has joined #openstack-cinder22:59
hemnaah yours is first22:59
hemnabut yah23:00
hemnaanother one at :37.80223:00
jgriffithhemna: I LOOOOVE the fact that we log that as Info and not Error23:00
smcginnishemna: REgarding testing -1Gb: https://twitter.com/sempf/status/514473420277694465?lang=en23:00
jgriffithhemna: so that confirms my concern though...23:00
jgriffithhemna: this appears to fail at the API layer, nothing to do with the driver that I can see here?23:01
smcginnisjgriffith: Are there unintended consequences of the instance_id/host_name patch?23:01
jgriffithsmcginnis: yeah, from my view it breaks shit :)23:01
*** edtubill has quit IRC23:01
hemnaso, I just did a grep on master for "Invalid request to attach volume to an instance"23:01
hemnaand it's not there23:01
hemnaso I'm confused23:01
smcginnisI wonder why my CI is fine.23:02
jgriffithhemna: api/contrib/volume_actions.py23:02
hemnahttps://review.openstack.org/#/c/266006/1/cinder/api/contrib/volume_actions.py23:02
hemnathat patch has landed23:02
smcginnisBTW, here is a better thread of the good negative tests I linked to: https://www.sempf.net/post/On-Testing1.aspx23:02
hemnathat check and message are gone23:02
jgriffithoh... wait23:02
smcginnisOf completely no importance or relevance to the current conversation.23:03
jgriffithsmcginnis: sure it is... proves my point that negative tests are stooopid :)23:03
smcginnisjgriffith: But much more funny in a bar! :)23:03
jgriffithsmcginnis: yes indeed!!!23:03
*** alonma has quit IRC23:04
*** akerr_ is now known as akerr_away23:04
jgriffithsmcginnis: hemna hmmmm.... https://github.com/openstack-dev/devstack/commit/b9a33191bbeec118a6643961278dfba73a38911c23:05
jgriffithsmcginnis: hemna looks like I may have been bit by another devstack change again ?23:05
jgriffithwait....23:06
hemnauhh wtf is that23:06
jgriffithhell if I know... just the usual churn of crap23:06
hemnais this testing a latest nova with liberty ?23:06
*** akerr_away is now known as akerr_23:06
*** akerr_ is now known as akerr_away23:06
*** akerr_away is now known as akerr_23:06
jgriffithhemna: but it seems related to your host info changes23:06
jgriffithhemna: err s/your/the/23:06
hemnayah it sure seems like it23:06
*** akerr_ is now known as akerr_away23:06
smcginnisI don't get what that's doing, but oh well.23:06
*** laughterwym has joined #openstack-cinder23:07
hemnanothing like breaking the world23:07
*** akerr_away is now known as akerr_23:07
* hemna hides under a rock23:07
*** salv-orlando has quit IRC23:07
smcginnisI've been trying to get through a change that just updates a text file for 4 days now. World == broken23:08
*** akerr_ is now known as akerr_away23:08
*** akerr_away is now known as akerr_23:08
openstackgerritHelen Walsh proposed openstack/cinder: VMAX-Replacing deprecated API EMCGetTargetEndpoints  https://review.openstack.org/24432823:08
hemnahttps://review.openstack.org/#/c/264223/23:08
*** akerr_ is now known as akerr_away23:08
*** akerr_away is now known as akerr_23:08
smcginnisHmm23:09
hemnanot sure that's related23:09
hemnadunno though23:09
jgriffithhemna: yeah, so it looks like the change breaks stable23:09
*** akerr_ is now known as akerr_away23:09
jgriffithhemna: so they put this in to make sure it's not checked/tested on stable I'm thinking23:09
jgriffithbut not sure23:09
hemnahrmm23:09
hemnaso Nova(master) -> stable/* ?23:10
jgriffithI dunno.. the bug doesn't align with that23:10
hemnaso Nova(master) -> Cinder(stable/*) ?23:10
jgriffithI think that context is infact different23:10
smcginnisThat makes it sound like it was disabled in master (and stable/*) up until now.23:11
*** laughterwym has quit IRC23:11
*** wN has quit IRC23:13
jgriffithit sure would be nice to know where that exception is coming from exactly :)23:13
jgriffithOhhh23:13
jgriffithtempest directly perhaps23:13
hemnathat has to be from Cinder !master23:13
*** dims_ has quit IRC23:13
jgriffithhemna: oh?  Ya think so?23:14
hemnaor my grep isn't correct23:14
hemnaack-grep "Invalid request to attach volume to an instance"23:14
jgriffithhemna: oh.. yeah, I hear ya... same here23:14
hemnathat returned nothing (from Master)23:14
*** salv-orlando has joined #openstack-cinder23:14
*** eharney_ is now known as eharney23:15
*** jamielennox is now known as jamielennox|away23:15
jgriffithhemna: well this is the change that it failed on, which certainly appears to be master :)23:16
jgriffithoops23:17
jgriffithhttps://review.openstack.org/#/c/264223/23:17
jgriffithbahh23:17
jgriffiththis one23:17
*** akerr_away is now known as akerr_23:18
jgriffithsmcginnis: BTW, you fail too :)23:18
*** wN has joined #openstack-cinder23:19
*** wN has joined #openstack-cinder23:19
jgriffithsmcginnis: looks like the same errors23:19
jgriffithMy guess is that once all the node-pool systems get an update they're going to start failing23:19
hemna:(23:19
smcginnisjgriffith: Bah!23:20
jgriffithone thing that's nice about sos is we've seen that a few times now... canary in the coal mine23:20
smcginnisYeah, lucky us.23:20
*** shakamunyi has quit IRC23:22
smcginnisOf course this started failing right as I left the office.23:23
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Fix for last volume in VMAX3 storage group  https://review.openstack.org/24433123:24
hemnasmcginnis, perfect timing on a Friday23:24
hemnabeer o'clock!23:24
hemnawait......23:24
*** xyang has joined #openstack-cinder23:24
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Add volume fail-back capabilities  https://review.openstack.org/26539923:25
smcginnishemna: On the plus side, looks like infra servers are updated now, so the os-brick 0.8.0 patch finally passed again.23:26
hemnaphew!23:27
hemnaman that took for-freaking-ever23:27
smcginnisNo kidding. Should have been a pretty trivial thing.23:27
smcginnisBad timing for sure.23:27
smcginnisNow to get a +A again...23:27
angela-s+A, what's that? endangered species i think23:29
smcginnisangela-s: Workflow+1 :)23:30
smcginnis+A == "Approved"23:31
angela-ssmcginnis: i know, was being sarcastic. :)23:31
jgriffithangela-s: I got it, and I personally thought it was kinda funny :)23:32
smcginnisangela-s: Ah, now I get you. Sorry, brain is half turned off.23:32
smcginnisangela-s: It was funny. :]23:32
angela-sjgriffith, smcginnis: Friday humor, is it time to go home yet?23:32
hemna+A everything...it's Friday.23:33
angela-si'm done...23:33
hemnawhat could go wrong....23:33
angela-shell ya23:33
angela-ssounds good23:33
*** bill_az has quit IRC23:34
smcginnisThen we can just spend the rest of the cycle fixing bugs. Pretty simple plan. :D23:35
jgriffithangela-s: I'm +A on that patch when it passes23:36
smcginnishemna: 32 and sunny in Chicago tomorrow. You'll have a good day to see all the sights. ;)23:37
hemna:)23:37
hemnaI'll make it to Chi town23:37
hemnaand then sleep in the terminal23:37
hemnawah wah wahhh23:37
angela-sjgriffith: which one?23:40
jgriffithangela-s: https://review.openstack.org/#/c/245364/23:40
jgriffithjust updated23:40
jgriffithit failed again :(23:40
smcginnisangela-s: You are already running a stack.sh session. << Something went wrong.23:41
*** edmondsw has quit IRC23:41
angela-sargh!23:42
smcginnisCall me captain obvious. :)23:42
*** daneyon has quit IRC23:43
smcginnisjgriffith: Where were you seeing those failure messages again? They were info and not error?23:43
*** akerr_ has quit IRC23:44
jgriffithsmcginnis: correct23:45
jgriffithbut your CI failed for the same tests as mine on that patch23:45
*** lcurtis has quit IRC23:45
smcginnisjgriffith: Yeah, trying to find it now.23:46
jgriffithsmcginnis: https://review.openstack.org/#/c/266743/23:46
*** simondodsley has quit IRC23:48
*** akerr has joined #openstack-cinder23:50
smcginnishemna: Did you say "Invalid request to attach volume to an instance" is coming from Cinder?23:54
smcginnishemna: Wasn't that what you removed?23:54
*** edmondsw has joined #openstack-cinder23:54
*** pratap has quit IRC23:55
*** pratap has joined #openstack-cinder23:56
*** dims has joined #openstack-cinder23:56
*** martyturner has joined #openstack-cinder23:58
smcginnisOh, these are just older patches that need to be rebased on master it looks like.23:59
smcginnisSo the patch to allow host and instance_id went through.23:59
smcginnisAnd nova started sending both.23:59
*** alonma has joined #openstack-cinder23:59
jgriffithsmcginnis: +123:59
jgriffithsmcginnis: so they just need rebased23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!