Tuesday, 2015-05-26

*** _cjones_ has quit IRC00:12
*** jwcroppe_ has joined #openstack-cinder00:13
*** chlong has joined #openstack-cinder00:15
*** jwcroppe has quit IRC00:16
*** takedakn has joined #openstack-cinder00:27
*** takedakn has quit IRC00:32
*** takedakn has joined #openstack-cinder00:34
*** takedakn has quit IRC00:35
*** heyun has joined #openstack-cinder00:42
*** tobe has joined #openstack-cinder00:44
*** ebalduf has joined #openstack-cinder00:44
*** ebalduf has quit IRC00:49
*** marzif has quit IRC01:00
*** krtaylor has quit IRC01:02
*** dimsum__ has joined #openstack-cinder01:12
*** krtaylor has joined #openstack-cinder01:15
*** dimsum__ has quit IRC01:15
*** dimsum__ has joined #openstack-cinder01:15
*** julim has joined #openstack-cinder01:21
*** julim has quit IRC01:21
*** logan2 has joined #openstack-cinder01:25
*** Lee1092 has joined #openstack-cinder01:29
*** MentalRay has quit IRC01:35
*** dimsum__ has quit IRC01:37
*** dimsum__ has joined #openstack-cinder01:43
*** avishay_ has quit IRC01:47
*** mtanino has quit IRC02:02
*** bkopilov_wfh has quit IRC02:18
*** Arkady_Kanevsky has quit IRC02:21
*** _cjones_ has joined #openstack-cinder02:30
*** tobe has quit IRC02:31
*** tobe has joined #openstack-cinder02:41
*** dimsum__ has quit IRC02:45
*** dimsum__ has joined #openstack-cinder02:46
*** dimsum__ has quit IRC02:51
*** asselin has quit IRC02:56
*** _cjones_ has quit IRC02:57
openstackgerritwanghao proposed openstack/cinder: Notification with volume and snapshot metadata  https://review.openstack.org/18040003:04
wanghaoping xing-yang03:05
wanghaohi~ I had to rebase this patch (Notification with volume and snapshot metadata  https://review.openstack.org/180400) before it merged. Would you help +2 it again? Thank you for that.03:06
wanghaoDuncanT: Need your help to +2 too :)03:08
*** elonden has joined #openstack-cinder03:12
openstackgerritwanghao proposed openstack/cinder: Support for force-delete backups  https://review.openstack.org/16612703:14
*** ociuhandu has joined #openstack-cinder03:18
*** avishay_ has joined #openstack-cinder03:32
*** tobe has quit IRC03:34
openstackgerritxing-yang proposed openstack/cinder: Fix issues with extra specs in VMAX driver  https://review.openstack.org/16999903:35
*** avishay_ has quit IRC03:38
openstackgerritxing-yang proposed openstack/cinder: EMC VMAX Manage/Unmanage Volume  https://review.openstack.org/18246003:41
openstackgerritwanghao proposed openstack/cinder: query volume detail support volume_glance_metadata  https://review.openstack.org/14773803:42
*** fanyaohong has joined #openstack-cinder03:46
*** anuragpalsule has joined #openstack-cinder03:47
*** tobe has joined #openstack-cinder03:47
openstackgerritxing-yang proposed openstack/cinder: EMC VMAX Manage/Unmanage Volume  https://review.openstack.org/18246003:57
*** alexpilotti has joined #openstack-cinder04:06
openstackgerritLiu Xinguo proposed openstack/cinder: Brocade driver not parsing zone data correctly  https://review.openstack.org/18445804:08
*** anuragpalsule has quit IRC04:10
*** bkopilov_wfh has joined #openstack-cinder04:13
*** links has joined #openstack-cinder04:27
*** sks has joined #openstack-cinder04:36
*** deepakcs has joined #openstack-cinder04:40
*** alexpilotti has quit IRC04:41
*** pradipta has joined #openstack-cinder04:48
*** nkrinner has joined #openstack-cinder04:49
openstackgerritRick Chen proposed openstack/cinder: Prophetstor driver needs to return snapshot objects for create_cgsnapshot and delete_cgsnapshot  https://review.openstack.org/18549504:49
*** anuragpalsule has joined #openstack-cinder04:51
*** tshefi has joined #openstack-cinder04:52
*** tobe has quit IRC04:56
*** aswadr has joined #openstack-cinder05:05
*** logan2 has quit IRC05:08
openstackgerritKazumasa Nomura proposed openstack/cinder: Volume driver for HP XP storage  https://review.openstack.org/18477405:09
*** BharatK has joined #openstack-cinder05:10
*** tobe has joined #openstack-cinder05:13
*** Maike has joined #openstack-cinder05:14
*** IanGovett has joined #openstack-cinder05:17
*** smoriya has quit IRC05:20
*** gardenshed has joined #openstack-cinder05:29
*** gardensh_ has joined #openstack-cinder05:33
*** dimsum__ has joined #openstack-cinder05:36
*** gardenshed has quit IRC05:36
*** dimsum__ has quit IRC05:41
*** gardensh_ has quit IRC05:41
*** nihilifer has joined #openstack-cinder05:45
*** e0ne has joined #openstack-cinder05:46
*** ebalduf has joined #openstack-cinder05:48
*** ebalduf has quit IRC05:52
*** sgotliv has quit IRC05:53
*** e0ne has quit IRC05:53
*** IanGovett has quit IRC05:58
*** lpetrut has joined #openstack-cinder06:02
*** fanyaohong has quit IRC06:03
*** tobe has quit IRC06:04
*** dulek has joined #openstack-cinder06:15
openstackgerritSean Chen proposed openstack/cinder: Tintri Cinder Volume driver  https://review.openstack.org/18514806:15
openstackgerritRick Chen proposed openstack/cinder: Prophetstor driver needs to return snapshot objects for create_cgsnapshot and delete_cgsnapshot  https://review.openstack.org/18549506:16
openstackgerritwanghao proposed openstack/cinder: Notification with volume and snapshot metadata  https://review.openstack.org/18040006:17
*** tobe has joined #openstack-cinder06:23
*** ociuhandu has quit IRC06:23
*** chlong has quit IRC06:30
*** mdbooth has quit IRC06:35
*** mdbooth has joined #openstack-cinder06:41
openstackgerritLiu Xinguo proposed openstack/cinder: Brocade driver not parsing zone data correctly  https://review.openstack.org/18445806:43
*** avishay_ has joined #openstack-cinder06:45
*** gardenshed has joined #openstack-cinder06:51
*** alecv has joined #openstack-cinder06:53
*** gardenshed has quit IRC06:56
*** avishay_ has quit IRC07:00
*** anshul has quit IRC07:02
*** avishay_ has joined #openstack-cinder07:05
*** logan2 has joined #openstack-cinder07:06
*** jordanP has joined #openstack-cinder07:09
*** avishay_ has quit IRC07:09
*** avishay_ has joined #openstack-cinder07:09
*** elonden has quit IRC07:13
*** avishay_ has quit IRC07:14
*** avishay_ has joined #openstack-cinder07:16
*** logan2 has quit IRC07:17
*** avishay_ has quit IRC07:21
*** avishay__ has joined #openstack-cinder07:21
*** bkopilov_wfh is now known as bkopilov07:24
*** IanGovett has joined #openstack-cinder07:26
*** ronis has joined #openstack-cinder07:31
*** ndipanov has quit IRC07:35
*** jistr has joined #openstack-cinder07:39
*** e0ne has joined #openstack-cinder07:39
*** e0ne is now known as e0ne_07:40
*** ndipanov has joined #openstack-cinder07:42
*** sgotliv has joined #openstack-cinder07:43
*** markus_z has joined #openstack-cinder07:44
*** belmoreira has joined #openstack-cinder07:49
*** e0ne_ is now known as e0ne07:49
*** anshul has joined #openstack-cinder07:49
openstackgerritwanghao proposed openstack/cinder: Notification with volume and snapshot metadata  https://review.openstack.org/18040007:50
*** lpetrut has quit IRC07:51
*** avishay__ has quit IRC07:51
*** alonmarx has joined #openstack-cinder07:58
*** alonmarx_ has quit IRC08:01
*** tobe has quit IRC08:10
*** gardenshed has joined #openstack-cinder08:18
openstackgerritwanghao proposed openstack/cinder: Notify the transfer volume action in cinder  https://review.openstack.org/18553108:25
openstackgerritMichal Dulko proposed openstack/cinder: Service object  https://review.openstack.org/16041708:33
openstackgerritMichal Dulko proposed openstack/cinder: Backup object  https://review.openstack.org/15708508:33
*** IanGovett has quit IRC08:40
*** e0ne is now known as e0ne_08:42
*** e0ne_ is now known as e0ne08:43
openstackgerritMichal Dulko proposed openstack/cinder: Backup object  https://review.openstack.org/15708508:44
openstackgerritMichal Dulko proposed openstack/cinder: Service object  https://review.openstack.org/16041708:44
*** agarciam has joined #openstack-cinder08:44
*** aswadr has quit IRC08:48
*** agarciam has quit IRC08:49
*** agarciam has joined #openstack-cinder08:49
*** aswadr has joined #openstack-cinder08:50
*** lpetrut has joined #openstack-cinder08:54
*** turul has joined #openstack-cinder08:56
*** dalgaaf has joined #openstack-cinder08:56
*** turul is now known as afazekas08:56
*** jwcroppe has joined #openstack-cinder08:58
*** pradipta has quit IRC09:01
*** jwcroppe_ has quit IRC09:01
*** logan2 has joined #openstack-cinder09:02
openstackgerritLiu Xinguo proposed openstack/cinder: Fix the ranges in RestURL with Huawei drivers  https://review.openstack.org/17315709:02
*** leopoldj has joined #openstack-cinder09:04
*** aix has joined #openstack-cinder09:08
*** dulek_ has joined #openstack-cinder09:09
*** dulek has quit IRC09:10
*** dulek_ has quit IRC09:11
*** dulek has joined #openstack-cinder09:11
*** gcivitella has joined #openstack-cinder09:16
openstackgerritLiu Xinguo proposed openstack/cinder: Fix the ranges in RestURL with Huawei drivers  https://review.openstack.org/17315709:17
*** haomaiwa_ has quit IRC09:17
*** haypo has joined #openstack-cinder09:22
gcivitellaHi all, I've got a question about QoS definition. I have these QoS on a Cinder installation: http://paste.openstack.org/show/237432/ At the moment only the volumes with Archive QoS are actually rate limited. All the volumes associated with the other QoS values allow very high I/O rates. The only difference I can see is the consumer definition: front-end for the working QoS, back-end for the QoS not rate limited. Can someone explain the09:31
gcivitelladifference between the two consumers?09:31
openstackgerritwanghao proposed openstack/cinder: Fix weird change of volume status in re-scheduling  https://review.openstack.org/18554509:33
openstackgerritRick Chen proposed openstack/cinder: Prophetstor driver needs to return snapshot objects for create_cgsnapshot and delete_cgsnapshot  https://review.openstack.org/18549509:34
*** jwcroppe has quit IRC09:46
*** jwcroppe has joined #openstack-cinder09:47
*** ankit_ag has joined #openstack-cinder09:49
*** dulek has quit IRC10:03
*** dulek has joined #openstack-cinder10:03
*** haomaiwang has joined #openstack-cinder10:06
*** e0ne is now known as e0ne_10:08
*** dulek_ has joined #openstack-cinder10:09
*** dulek has quit IRC10:12
*** e0ne_ has quit IRC10:19
*** afazekas_ has joined #openstack-cinder10:21
*** nikeshm has quit IRC10:23
*** e0ne has joined #openstack-cinder10:28
*** e0ne is now known as e0ne_10:39
*** marzif has joined #openstack-cinder10:39
*** e0ne_ has quit IRC10:49
*** boris-42 has joined #openstack-cinder10:49
*** IanGovett has joined #openstack-cinder10:51
*** ebalduf has joined #openstack-cinder10:52
*** ebalduf has quit IRC10:57
*** heyun has quit IRC10:57
openstackgerritwanghao proposed openstack/cinder: Notify the transfer volume action in cinder  https://review.openstack.org/18553111:02
*** marcusvrn has joined #openstack-cinder11:08
*** aix has quit IRC11:14
*** aix has joined #openstack-cinder11:15
*** nlevinki has joined #openstack-cinder11:19
*** gardenshed has quit IRC11:32
*** jistr is now known as jistr|class11:33
openstackgerritGorka Eguileor proposed openstack/cinder: Fix backup metadata import missing fields  https://review.openstack.org/18322211:40
*** haomaiwang has quit IRC11:40
*** ctina__ has joined #openstack-cinder11:43
*** gardenshed has joined #openstack-cinder11:47
*** haypo has quit IRC11:50
*** IanGovett has quit IRC11:53
*** IanGovett has joined #openstack-cinder11:56
openstackgerritwanghao proposed openstack/cinder: Fix weird change of volume status in re-scheduling  https://review.openstack.org/18554511:58
*** avishay__ has joined #openstack-cinder11:59
*** e0ne has joined #openstack-cinder12:00
*** logan2 has quit IRC12:12
*** agarciam has quit IRC12:14
*** agarciam has joined #openstack-cinder12:14
*** ekarlso has quit IRC12:16
*** ekarlso has joined #openstack-cinder12:16
DuncanTgcivitella: Backend rate limit only works on a few backends12:19
gcivitellaDuncanT: is there any documentation about this?12:20
DuncanTgcivitella: No idea, sorry12:21
DuncanTgcivitella: Frontend rate limiting only works on some hypervisors (KVM is the only one I know for sure works)12:22
*** bswartz has quit IRC12:22
DuncanTgcivitella: Backend only works on some backends, solidfire for sure12:22
*** akerr has joined #openstack-cinder12:23
gcivitellaDuncanT: my backend is Ceph. I guess I just need to change back-end with front-end in my QoS definition. Thanks a lot12:24
DuncanTgcivitella: I can't see any evidence that Ceph can consume QoS, so I don't think it can work12:27
DuncanTgcivitella: Front-end will work if you're using KVM though12:27
*** gardenshed has quit IRC12:29
openstackgerritwangxiyuan proposed openstack/cinder: Fix wrong response with version details  https://review.openstack.org/18557012:30
gcivitellaDuncanT: in fact Ceph does not implement any rate limiting, so it is not suitable as a back-end consumer. And yes, I use KVM, so moving the QoS to a front-end consumer should let me implement rate limiting on the hypervisor's side12:31
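A minimal sketch of the fix gcivitella lands on, using python-cinderclient; the spec name, limit values, and credentials below are illustrative assumptions, not taken from the paste:

    from cinderclient.v2 import client

    # placeholder credentials/endpoint
    cinder = client.Client('admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')

    # 'consumer' decides where the limit is enforced: 'front-end' means
    # the hypervisor (KVM/libvirt), 'back-end' means the storage driver,
    # and only some backends actually honor back-end QoS.
    qos = cinder.qos_specs.create('archive-frontend', {
        'consumer': 'front-end',
        'read_iops_sec': '200',    # keys libvirt applies as iotune limits
        'write_iops_sec': '200',
    })
    cinder.qos_specs.associate(qos, volume_type_id)  # existing type, assumed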
*** rasoto1 has joined #openstack-cinder12:32
*** bkopilov has quit IRC12:34
*** crose has joined #openstack-cinder12:36
*** xyang1 has joined #openstack-cinder12:41
wanghaoDuncanT & gcivitella: If we can query the backend's  support for QoS, this issue may be easy to figure out.12:45
*** juzuluag has joined #openstack-cinder12:46
*** Arkady_Kanevsky has joined #openstack-cinder12:48
*** dimsum__ has joined #openstack-cinder12:49
*** deepakcs has quit IRC12:50
DuncanTwanghao: Agreed. It is non-trivial though, since a type can map to multiple backends12:51
DuncanTwanghao: Well-defined capabilities include info on QoS support, so that part is on the way12:52
*** ganso_ has joined #openstack-cinder12:56
*** dimsum__ is now known as dims12:56
*** avishay__ has quit IRC12:59
*** jistr|class is now known as jistr13:00
*** gardenshed has joined #openstack-cinder13:02
*** sks has quit IRC13:02
*** bswartz has joined #openstack-cinder13:02
*** gardenshed has quit IRC13:03
*** gardenshed has joined #openstack-cinder13:04
*** e0ne is now known as e0ne_13:10
*** changbl has quit IRC13:11
*** julim has joined #openstack-cinder13:12
*** rushiagr_away is now known as rushiagr13:13
*** sks has joined #openstack-cinder13:16
*** avishay__ has joined #openstack-cinder13:16
*** primechuck has joined #openstack-cinder13:20
*** e0ne_ has quit IRC13:21
*** logan2 has joined #openstack-cinder13:21
*** primechuck has quit IRC13:25
*** primechuck has joined #openstack-cinder13:25
*** e0ne has joined #openstack-cinder13:26
*** lpetrut has quit IRC13:26
*** links has quit IRC13:29
*** akshai has joined #openstack-cinder13:29
*** jungleboyj has quit IRC13:36
*** crose has quit IRC13:39
*** simondodsley has joined #openstack-cinder13:40
*** lpetrut has joined #openstack-cinder13:41
*** mriedem has joined #openstack-cinder13:41
*** gardenshed has quit IRC13:41
*** gardenshed has joined #openstack-cinder13:44
*** dims has quit IRC13:45
*** Lee1092 has quit IRC13:51
*** logan2 has quit IRC13:54
*** ebalduf has joined #openstack-cinder13:54
*** Maike has quit IRC13:56
*** bnemec has joined #openstack-cinder13:57
*** annegentle has joined #openstack-cinder13:59
*** anuragpalsule has quit IRC14:02
*** rushil has joined #openstack-cinder14:02
*** ebalduf has quit IRC14:03
*** BharatK has quit IRC14:05
*** sks has quit IRC14:05
*** IanGovett has quit IRC14:07
*** MentalRay has joined #openstack-cinder14:07
*** rmesta has joined #openstack-cinder14:08
*** MentalRay has quit IRC14:09
*** avishay__ has quit IRC14:11
*** nkrinner has quit IRC14:12
*** deepakcs has joined #openstack-cinder14:14
*** ronis_ has joined #openstack-cinder14:16
*** rushil has quit IRC14:16
*** ronis has quit IRC14:17
*** sks has joined #openstack-cinder14:17
*** bkopilov has joined #openstack-cinder14:18
*** rushil has joined #openstack-cinder14:18
*** annegentle has quit IRC14:19
*** annegentle has joined #openstack-cinder14:19
*** rushiagr is now known as rushiagr_away14:19
*** breitz has joined #openstack-cinder14:20
*** jungleboyj has joined #openstack-cinder14:20
*** IanGovett has joined #openstack-cinder14:21
*** nlevinki_ has joined #openstack-cinder14:22
*** ankit_ag has quit IRC14:22
*** sks has quit IRC14:23
*** nlevinki has quit IRC14:25
*** tshefi has quit IRC14:28
openstackgerritAnton Arefiev proposed openstack/cinder: Add entry create and cast tasks to manage workflow  https://review.openstack.org/13907114:29
openstackgerritAnton Arefiev proposed openstack/cinder: Fix lvm manage existing volume  https://review.openstack.org/15693914:29
*** dims has joined #openstack-cinder14:35
*** annegentle has quit IRC14:38
*** ronis_ has quit IRC14:40
*** emagana has joined #openstack-cinder14:41
openstackgerritGorka Eguileor proposed openstack/cinder: Fix backup metadata import missing fields  https://review.openstack.org/18322214:41
*** e0ne is now known as e0ne_14:41
*** e0ne_ has quit IRC14:41
*** tshefi has joined #openstack-cinder14:42
openstackgerritSzymon Wróblewski proposed openstack/cinder: POC: Tooz locks  https://review.openstack.org/18353714:44
*** dulek___ has joined #openstack-cinder14:46
*** nihilifer has quit IRC14:47
*** ebalduf has joined #openstack-cinder14:48
*** dulek_ has quit IRC14:49
*** annegentle has joined #openstack-cinder14:50
*** ronis_ has joined #openstack-cinder14:51
*** deepakcs has quit IRC14:51
*** thangp has joined #openstack-cinder14:54
*** dulek___ has quit IRC14:55
openstackgerritGorka Eguileor proposed openstack/cinder: Display NOTIFICATIONS on assert failure  https://review.openstack.org/18563514:55
*** Yogi1 has joined #openstack-cinder14:57
*** Yogi1 has quit IRC14:58
*** e0ne has joined #openstack-cinder14:58
*** Yogi1 has joined #openstack-cinder14:59
*** hemnafk is now known as hemna15:01
*** mtanino has joined #openstack-cinder15:01
*** gardensh_ has joined #openstack-cinder15:02
*** haomaiwang has joined #openstack-cinder15:04
*** tbarron1 has joined #openstack-cinder15:04
*** gardenshed has quit IRC15:05
*** anuragpalsule has joined #openstack-cinder15:06
*** anshul has quit IRC15:06
*** ronis_ has quit IRC15:06
*** rushil has quit IRC15:07
*** Lee1092 has joined #openstack-cinder15:08
*** anuragpalsule1 has joined #openstack-cinder15:11
*** annegentle has quit IRC15:11
*** rushil has joined #openstack-cinder15:13
*** anuragpalsule has quit IRC15:13
*** alejandrito has joined #openstack-cinder15:14
*** tshefi has quit IRC15:16
jgriffithmtanino: yeah, was there a problem on that?15:17
mtaninojgriffith: hi15:18
jgriffithmtanino: hey there15:18
mtaninojgriffith: so, about this one, right? https://review.openstack.org/#/c/183633/15:19
jgriffithmtanino: yeah15:19
mtaninojgriffith: In my understanding, this option always enables conv=sparse for volume migration using the dd command.15:20
hemnamornin15:21
jgriffithmtanino: actually I think it sets it to False unless a driver overrides it15:21
jgriffithmtanino: the base driver class sets it False15:21
jgriffithmtanino: the LVM driver (and others on the iscsi side) don't know anything about the option yet15:21
mtaninojgriffith: If both source and dest volumes are on an NFS backend, I think it's OK because both volumes are zero-cleared beforehand, right?15:21
*** dannywilson has joined #openstack-cinder15:22
mtaninojgriffith: But if the source is NFS and the dest is another backend, such as thick LVM, which doesn't initialize volumes beforehand,15:22
openstackgerritxing-yang proposed openstack/cinder: EMC ScaleIO Cinder Driver  https://review.openstack.org/18376215:22
*** krtaylor has quit IRC15:23
*** changbl has joined #openstack-cinder15:23
*** leeantho has joined #openstack-cinder15:23
mtaninowe have to do a full block copy from NFS to thick LVM (or any other backend that doesn't initialize a volume beforehand)15:23
jgriffithmtanino: hmm, not sure15:25
jgriffithmtanino: I thought that the call to copy for the actual destination was the only applicable point there15:25
jgriffithmtanino: and we don't pass that in on the other items that I am aware of15:26
jgriffithmtanino: but to be honest, I don't know a lot about the NFS based drivers15:26
jgriffith:(15:26
*** dannywilson has quit IRC15:26
jgriffithmtanino: let alone migrating between them15:26
openstackgerritSzymon Wróblewski proposed openstack/cinder: Tooz locks  https://review.openstack.org/18353715:26
mtaninojgriffith: so, for example,15:26
mtaninowe can use the sparse option from source thin LVM to dest thin LVM, right?15:27
mtaninoif we have such an option in the LVM backend.15:27
*** patrickeast has joined #openstack-cinder15:27
jgriffithmtanino: yeah, but currently we don't IIRC15:28
mtaninoyes15:28
mtaninobut if we migrate from source thin LVM to thick LVM, we can't use the sparse option because the thick LVM volume might not be initialized15:28
jgriffithmtanino: sure15:29
mtaninoso, DuncanT said we should ask the destination driver for its copy strategy before volume migration15:30
mtaninoat the summit session.15:30
*** jdurgin has joined #openstack-cinder15:30
*** rhagarty has joined #openstack-cinder15:30
*** rhagarty_ has joined #openstack-cinder15:30
jgriffithok.. sorry, I may have missed that conversation15:30
jgriffiththink I was late for that session15:30
mtaninoI think so :)15:30
mtaninojgriffith: https://etherpad.openstack.org/p/volume-migration-improvement15:31
mtaninoDuncan: Driver shall report the suggested strategy. That's a MUST: If I have a file that includes a lot of data, and it gets sparsely copied, I might get back wrong data after the migration!15:31
mtaninoBack to the discussion of https://review.openstack.org/#/c/183633/ ,15:32
*** dannywilson has joined #openstack-cinder15:32
mtaninothis patch decides copy strategy by SOURCE DRIVER, not asking DESTINATION DRIVER.15:32
jgriffithmtanino: :)15:32
mtaninothis is my concern :(15:33
mtaninomake sense?15:33
*** markvoelker has joined #openstack-cinder15:34
*** aswadr has quit IRC15:34
jgriffithmtanino: yeah, kinda15:34
mtaninoso, I'm proposing BP :) https://blueprints.launchpad.net/cinder/+spec/efficient-volume-copy-for-cinder-assisted-migration15:35
mtaninoasking the destination driver for its volume pre-initialization (zero-clear) capability before volume migration :)15:36
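A rough sketch of the hazard mtanino is describing, not the patch under review; the helper name and flag argument are hypothetical:

    def _dd_extra_flags(dest_is_zero_initialized):
        # dd's conv=sparse skips writing blocks that are all zeros in
        # the source. If the destination was NOT zeroed beforehand
        # (e.g. thick LVM), those skipped ranges keep stale data, so
        # the migrated volume would read back wrong contents.
        return ['conv=sparse'] if dest_is_zero_initialized else []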
*** haomaiwang has quit IRC15:37
*** haomaiwang has joined #openstack-cinder15:37
openstackgerritTom Swanson proposed openstack/cinder: Dell: Added verify cert option for REST calls  https://review.openstack.org/18239615:39
*** nlevinki_ has quit IRC15:40
*** jistr has quit IRC15:40
openstackgerritSzymon Wróblewski proposed openstack/cinder: Tooz locks  https://review.openstack.org/18353715:40
openstackgerritSzymon Wróblewski proposed openstack/cinder: POC: Tooz locks demo  https://review.openstack.org/18564615:40
*** e0ne is now known as e0ne_15:43
*** leopoldj has quit IRC15:45
*** krtaylor has joined #openstack-cinder15:46
*** e0ne_ is now known as e0ne15:46
*** jdurgin has quit IRC15:48
*** Apoorva has joined #openstack-cinder15:50
*** Maike has joined #openstack-cinder15:50
*** eharney has joined #openstack-cinder15:53
*** afazekas_ has quit IRC15:54
*** rushil has quit IRC15:57
*** ociuhandu has joined #openstack-cinder15:59
*** belmoreira has quit IRC15:59
*** anuragpalsule has joined #openstack-cinder16:01
*** Yogi1 has quit IRC16:01
*** rwsu has joined #openstack-cinder16:02
*** anuragpalsule1 has quit IRC16:03
*** e0ne is now known as e0ne_16:04
*** e0ne_ is now known as e0ne16:06
*** uberjay has quit IRC16:07
*** krtaylor has quit IRC16:08
*** lpetrut has quit IRC16:09
*** alexpilotti has joined #openstack-cinder16:10
*** uberjay has joined #openstack-cinder16:10
*** ctina__ has quit IRC16:12
jgriffithmtanino: BTW, I'm slightly confused by the proposal there16:13
*** gardenshed has joined #openstack-cinder16:13
jgriffithmtanino: so there's already currently a call to the source... but you're saying you want a call to the destination as well?16:13
jgriffithmtanino: it seems like that's kinda cumbersome...16:13
jgriffithmtanino: also it seems like that's the sort of thing the scheduler could determine based on capability reporting16:14
mtaninojgriffith: Just ask the destination driver for the copy option, and then start the copy from the source driver.16:14
mtaninojgriffith: Yes, so I think we need to add the capability to the scheduler.16:14
jgriffithmtanino: yeah, I get that... makes sense; but wonder if that should be determined by scheduler/capa16:14
jgriffithmtanino: :)  never mind, we're on same page :)16:15
*** gardensh_ has quit IRC16:15
jgriffithmtanino: thanks16:15
mtaninoI got it!16:15
mtaninoThanks!16:15
mtaninojgriffith: so we need to discuss the specification at Cinder spec.16:16
mtaninoUsing scheduler/capa is ok or not.16:16
jgriffithmtanino: IMO that's where it belongs, with a default set by base driver16:17
*** gardenshed has quit IRC16:17
jgriffithmtanino: I don't know if that should be folded into the existing spec for capability reporting16:17
mtaninoI agree on that16:17
jgriffithmtanino: I think it probably should16:17
jgriffithmtanino: so hopefully thingee is going to hammer that out this week, and we can add this to it as well16:18
mtaninojgriffith: I need to ask thingee about adding this as an additional capability.16:19
mtaninofor well-defined capability16:19
*** annegentle has joined #openstack-cinder16:20
*** Yogi1 has joined #openstack-cinder16:21
*** gardenshed has joined #openstack-cinder16:23
*** ociuhandu has quit IRC16:24
jgriffithmtanino: yep, that sounds perfect16:25
mtaninojgriffith: :)16:25
*** _cjones_ has joined #openstack-cinder16:26
mtaninoand then add the feature to LVM as a reference and reimplement the NFS driver using it.16:27
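A sketch of how such a capability could surface in the stats a destination backend already reports, so the scheduler/migration flow can pick the copy strategy; the 'zero_initialized_volumes' key is a hypothetical name, not an agreed well-defined capability:

    # hypothetical addition to what a backend returns from
    # get_volume_stats(); thick LVM would report False here
    dest_stats = {
        'volume_backend_name': 'lvm-thick',
        'thin_provisioning_support': False,
        'zero_initialized_volumes': False,
    }

    def choose_copy_strategy(dest_stats):
        # per DuncanT's point from the summit: ask the *destination*,
        # not the source, before deciding on a sparse copy
        if dest_stats.get('zero_initialized_volumes'):
            return 'sparse'
        return 'full'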
*** _cjones_ has quit IRC16:28
openstackgerritAnton Arefiev proposed openstack/python-cinderclient: Fix functional readonly_cli tests  https://review.openstack.org/18566016:29
*** _cjones_ has joined #openstack-cinder16:29
*** haomaiwang has quit IRC16:31
aarefievjgriffith: hi16:31
*** gardenshed has quit IRC16:35
Swansonjgriffith: Was thinking about replication.  Specifically what infrastructure I need to build to support it.  I'm assuming something to start it, pause it, delete it and some form of createvolumefromreplication function.16:35
*** asselin has joined #openstack-cinder16:36
*** haomaiwang has joined #openstack-cinder16:38
hemnathingee, ping16:42
hemnathingee, can you remove the -2 on these 2 for me?  https://review.openstack.org/#/c/144389/   https://review.openstack.org/#/c/144384/16:43
thingeehemna: I'll be around in an hour.16:44
hemnathingee, no hurry.  thanks man.16:44
*** jdurgin has joined #openstack-cinder16:44
*** tbarron1 has quit IRC16:44
*** e0ne has quit IRC16:45
Swansonhemna: any further thoughts on this one?  https://review.openstack.org/#/c/182760/16:45
*** tbarron1 has joined #openstack-cinder16:46
*** jwang_ has quit IRC16:49
*** ndipanov has quit IRC16:49
*** ronis_ has joined #openstack-cinder16:49
*** thangp has quit IRC16:49
hemnaSwanson, so sure.16:49
hemnathe default should be populating target_portal16:49
hemnaif your array sends back multiple targets in an iscsi sendtargets query, then you'll be ok and you'll still get multipath working.16:50
hemnabut I think this patch is for alternate portals no?16:50
hemnaso if that's the case, then populating target_portal as well as target_portals should work.16:50
hemnanova will ignore target_portals for now.16:50
hemnauntil nova is upgraded to use os-brick.16:51
hemnayou just won't get the fallback alternate portals working until nova uses target_portals (os-brick)16:51
hemnaI just didn't want you to think the driver can rely only on target_portals and still work in nova.   :)16:51
openstackgerritWalter A. Boring IV (hemna) proposed openstack/cinder: Dell SC: Added support for alternate iscsi portals  https://review.openstack.org/18276016:52
*** lpetrut has joined #openstack-cinder16:54
*** cbader has joined #openstack-cinder16:54
Swansonhemna: I think that caught a valid bug.  But returning the world (target_portal and target_portals) should satisfy all requests.  (I think.)  Certainly simplifies the return.16:54
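A sketch of the "return the world" connection info Swanson and hemna settle on; the portal addresses, IQN, and LUNs are made up:

    def initialize_connection(self, volume, connector):
        # the singular keys keep today's Nova working; the plural keys
        # are only consumed once Nova attaches via os-brick
        return {
            'driver_volume_type': 'iscsi',
            'data': {
                'target_portal': '10.0.0.1:3260',
                'target_portals': ['10.0.0.1:3260', '10.0.0.2:3260'],
                'target_iqn': 'iqn.2015-05.com.example:vol-1',
                'target_iqns': ['iqn.2015-05.com.example:vol-1'] * 2,
                'target_lun': 1,
                'target_luns': [1, 1],
            },
        }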
SwansonAny idea when nova should be upgraded?  You're working that one, right?16:55
hemnaok, I'll check the patch again.   I just don't have much time today.16:55
hemnaI'm actively working on the nova os-brick patch16:55
hemnaI have the nova-spec up (https://review.openstack.org/#/c/184360/)  and a WIP (https://review.openstack.org/#/c/175569/)16:55
Swansoncool.  Thanks!17:00
*** markus_z has quit IRC17:00
*** yuriy_n17 has quit IRC17:01
*** mtanino has quit IRC17:02
*** IanGovett has quit IRC17:03
*** jwang_ has joined #openstack-cinder17:03
*** rushil has joined #openstack-cinder17:03
*** sgotliv has quit IRC17:04
openstackgerritSean Chen proposed openstack/cinder: Tintri Cinder Volume driver  https://review.openstack.org/18514817:06
*** IanGovett has joined #openstack-cinder17:06
*** harlowja has joined #openstack-cinder17:07
*** jordanP has quit IRC17:07
*** garthb has joined #openstack-cinder17:09
*** garthb_ has joined #openstack-cinder17:09
*** annegentle has quit IRC17:09
*** ebalduf has quit IRC17:09
*** vokt has joined #openstack-cinder17:10
*** ebalduf has joined #openstack-cinder17:11
*** vilobhmm1 has joined #openstack-cinder17:13
*** Maike has quit IRC17:16
*** thangp has joined #openstack-cinder17:17
*** gcivitella has quit IRC17:18
*** ctina__ has joined #openstack-cinder17:25
patrickeastjgriffith: hey, i saw your comment on https://blueprints.launchpad.net/cinder/+spec/image-volume-cache let me know if you have a few min to sync up on it17:28
*** IanGovett has quit IRC17:28
*** juzuluag has quit IRC17:32
*** avishay__ has joined #openstack-cinder17:32
*** annegentle has joined #openstack-cinder17:33
*** alecv has quit IRC17:33
*** Yogi11 has joined #openstack-cinder17:35
*** Yogi1 has quit IRC17:38
hemnathis kinda solves the ironic normal volume attach/detach issue.17:38
hemnaerr17:38
*** HenryG has quit IRC17:42
*** alexpilotti has quit IRC17:42
openstackgerrithadi esiely proposed openstack/cinder: Store volume encryption metadata on each volume  https://review.openstack.org/15228417:42
*** mtanino has joined #openstack-cinder17:44
*** HenryG has joined #openstack-cinder17:46
*** fthiagogv has joined #openstack-cinder17:50
*** cloudm2 has joined #openstack-cinder17:50
vilobhmm1hemna : ping17:51
*** haomaiw__ has joined #openstack-cinder17:51
hemnavilobhmm1, hey17:52
vilobhmm1regarding https://bugs.launchpad.net/cinder/+bug/1446750, I have updated the comments with the reason for this problem17:52
openstackLaunchpad bug 1446750 in Cinder "cinder-manage service list shows happy for uninitialized driver" [Low,In progress] - Assigned to Vilobh Meshram (vilobhmm)17:52
*** haomaiwang has quit IRC17:52
*** anuragpalsule has quit IRC17:52
vilobhmm1hope you had a good trip back home...17:52
hemnavilobhmm1, ok that looks good :)  thanks for that.17:52
vilobhmm1so my point is that along with checking service_is_up, we should also check whether we were able to successfully initialize the driver.17:53
vilobhmm1ok cool17:53
*** rushil has quit IRC17:53
*** jms has joined #openstack-cinder17:57
jmsIs there a way to figure out why a cinder service on a host is going to a down state?17:58
*** akerr is now known as akerr_away17:58
*** annegentle has quit IRC17:58
*** annegentle has joined #openstack-cinder17:59
*** rushil has joined #openstack-cinder18:00
patrickeastjms: there should be something in the log for the cinder service18:00
*** akerr_away is now known as akerr18:00
*** HenryG has quit IRC18:01
*** xek has quit IRC18:01
*** HenryG has joined #openstack-cinder18:01
*** xek has joined #openstack-cinder18:03
*** gardenshed has joined #openstack-cinder18:04
*** harlowja_ has joined #openstack-cinder18:06
*** harlowja has quit IRC18:06
*** mtanino is now known as mtanino_away18:08
jmspatrickeast: That's what I was hoping ... but I haven't seen anything. Service shows as 'up' for a short time (cinder service-list), then cinder-volume goes down, and a short time later cinder-scheduler does. It _seems_ to be still running on the host. And even though it's showing as down, it's getting sched updates from the controller (in scheduler.log)... i.e. on the 'down' host I'm seeing all storage that's defined either on the controller, or the se18:13
*** rushil has quit IRC18:14
*** rushil has joined #openstack-cinder18:15
*** rushil has quit IRC18:16
patrickeastjms: hmm that's strange, if there aren’t any errors in the log files for the volume services i’m not sure where to look next18:19
openstackgerritRushil Chugh proposed openstack/cinder: Avoid LUN ID collisions in NetApp iSCSI drivers  https://review.openstack.org/17923918:22
vilobhmm1you can check /var/log/messages if that helps18:22
*** rushil has joined #openstack-cinder18:22
vilobhmm1but as patrickeast mentioned, service logs should have some details... also, if you have access to the db, you can check the cinder.services table and see if something is logged in the description column18:22
vilobhmm1jms : ^^18:23
thingeehemna: I can remove the -2 once we see a CI from brocade.18:23
thingeehemna: not even reviewing those until I see stuff.18:23
jmsthanks, I'll check the DB18:25
*** aix has quit IRC18:25
hemnathingee, ok I'll add that in the review then as well, to make it clear for them.18:26
hemnathingee, thanks18:26
thingeehemna: I'll start drafting an email to send out about CI for zone manager drivers in general like we talked18:26
thingeecan you give me the email addresses to cc from cisco?18:26
thingeehemna: ^18:26
hemnathingee, alau2@cisco.com  and jmmetz@cisco.com18:28
thingeehemna: thanks18:28
hemnathat's Al Lau and J Metz respectively18:28
*** rmesta has quit IRC18:29
*** rmesta has joined #openstack-cinder18:30
*** gardenshed has quit IRC18:30
*** rmesta has quit IRC18:30
*** gardenshed has joined #openstack-cinder18:31
*** rmesta has joined #openstack-cinder18:31
*** fthiagogv has quit IRC18:32
jmsHrmph ... DB is clean. Shows as enabled (doesn't seem to hold state), and nothing in disabled_reason as I would expect. :/18:32
*** fthiagogv has joined #openstack-cinder18:33
*** vilobhmm1 has quit IRC18:33
*** daneyon has joined #openstack-cinder18:33
*** krtaylor has joined #openstack-cinder18:38
*** alejandrito has quit IRC18:38
*** vilobhmm1 has joined #openstack-cinder18:38
*** rmesta has quit IRC18:39
*** zhithuang has joined #openstack-cinder18:39
*** zhithuang is now known as winston-d_18:39
*** rmesta has joined #openstack-cinder18:40
*** angela-s has joined #openstack-cinder18:40
*** madskier has joined #openstack-cinder18:41
*** avishay__ has quit IRC18:41
*** gardenshed has quit IRC18:42
*** vilobhmm1 has quit IRC18:42
*** vokt has quit IRC18:44
*** vokt has joined #openstack-cinder18:44
*** ebalduf has quit IRC18:44
jbernardhemna: for an rbd connector in os-brick, you'd like to first see a spec? or it's okay to submit what I've got?18:44
*** ebalduf has joined #openstack-cinder18:45
*** madskier has quit IRC18:46
*** logan2 has joined #openstack-cinder18:47
*** vilobhmm has joined #openstack-cinder18:47
*** ebalduf has quit IRC18:49
*** rushil has quit IRC18:50
hemnaso, we discussed this idea in the summit and decided we don't need specs for new connector objects.18:52
hemnaa BP is sufficient.18:52
*** akerr is now known as akerr_away18:52
*** e0ne has joined #openstack-cinder18:53
jbernardhemna: ok, great18:54
jbernardhemna: if it were required, is there an os-brick specs repo?18:54
jbernardhemna: or you're using cinder-specs?18:54
jgriffithpatrickeast: ping18:56
patrickeastjgriffith: hey18:56
patrickeastjgriffith: so for your comment, i was curious what you meant by reconsidering the special tenant18:58
patrickeastjgriffith: reconsider as in we shouldn’t use it18:58
patrickeastjgriffith: or reconsider as in we should use it for everything?18:58
jgriffithpatrickeast: kinda18:59
jgriffithpatrickeast: that second18:59
jgriffithpatrickeast: so rather than all of us having "hidden" tenants18:59
jgriffithpatrickeast: go full on frontal assault18:59
*** akerr_away is now known as akerr18:59
patrickeastjgriffith: not sure if you were there for all of the sessions, but i floated the idea around for solving a few things where people brought up hidden volumes and stuff18:59
jgriffithpatrickeast: a glance-tenant that has/owns volumes and creates public snapshots18:59
patrickeastjgriffith: gotcha19:00
patrickeastjgriffith: yea i’m all for it19:00
jgriffithpatrickeast: Just a thought, I was kinda down on the public snapsssssss19:00
jbernardhemna: it appears blueprints for LP os-brick are disabled?19:00
jgriffithpatrickeast: but the more I thought about it, there might be a number of use cases19:00
openstackgerritVilobh Meshram proposed openstack/cinder: Nested Quota : Create allocated column in cinder.quotas  https://review.openstack.org/18570419:00
patrickeastjgriffith: yea, i think for stuff like the temp volumes while migrating or doing backups of attached volumes, we're now looking to maybe do something similar19:00
patrickeastjgriffith: kind of like any cinder object that needs an owner19:01
jmsHrmm... okay, working this backward. What can change the host state? I'm guessing it's HostState in scheduler/host_manager.py (or at least in host_manager somewhere) ... but what determines a 'down' state?19:01
patrickeastand we dont really want the actual user to see them19:01
*** _cjones_ has quit IRC19:01
jgriffithpatrickeast: right19:01
jgriffithpatrickeast: and it's a bit *easier* to manage than the hidden stuff19:02
jgriffithpatrickeast: and debug :)19:02
patrickeastjgriffith: yea, it would be a sad day when one of us has to go add an if (hidden) check to all api methods19:02
jgriffith:(19:02
patrickeastand new ‘—hidden’ params to commands19:02
patrickeastgross19:02
jgriffithpatrickeast: indeed19:02
jgriffithpatrickeast: so  I don't know ws think19:03
*** rushil has joined #openstack-cinder19:03
jgriffithbahh19:03
jgriffith"what others"19:03
jgriffithpatrickeast: but a special tenant might help alleviate some of the quota concerns I had19:03
patrickeastwhen i brought it up at the summit it seemed to have pretty positive feedback19:03
jgriffithpatrickeast: cool19:03
patrickeasti don’t remember any objections19:03
jgriffithpatrickeast: I say we proceed on that  path and exploit public snapshots19:04
jgriffithpattttttbut that assumes we can convince DuncanT19:04
patrickeastjgriffith: iirc he was on board… or asleep?19:04
jgriffithpatrickeast: LOL19:04
winston-d_jms: the 'down' state is calculated at runtime19:05
openstackgerritVilobh Meshram proposed openstack/cinder: Nested Quota : Create allocated column in cinder.quotas  https://review.openstack.org/18570419:05
tbarron1patrickeast: he was OK with special tenant, but less so w/ public snaps19:05
patrickeastjgriffith: i’ll add something to the next meeting agenda so we can get some feedback on it and make sure no one has strong objections19:05
jgriffithpatrickeast: I'm     certain there will be strong     objecttttttionss19:05
jgriffithpatrickeast: buttttttttt I'd likkkkkkee moree than "it's hmmm19:05
jgriffithsomethign seeeerriiiiiiiiously wrong  with my keyboard19:06
patrickeastlol19:06
winston-d_jgriffith: you have turned on sticky key feature? looks nice anyway19:06
jgriffithwinston-d_: apprantly I had19:06
jgriffithwinston-d_: now to see if I can disalbe it19:06
jgriffithlooks like I got it :)19:06
openstackgerritVilobh Meshram proposed openstack/cinder: Nested Quota : Create allocated column in cinder.quotas  https://review.openstack.org/18570419:07
patrickeastjgriffith: so one thing i am still not sure on is how we get the cinder tenant, we talked about it a little bit at the summit but as I think about it I’m not sure exactly what the plan is, i posted a comment on https://review.openstack.org/#/c/182520/ kind of outlining my thoughts in response to scottda asking about it19:09
*** thangp has quit IRC19:10
patrickeastjgriffith: imo the easiest solution is just the cinder user that is already created… but i don’t know if thats ok or not since its a special service one19:11
jmswinston-d_: The state shows 'up' for a while, before switching to down.... So, what's causing the switch?19:11
*** daneyon has quit IRC19:11
*** mtanino_away is now known as mtanino19:12
winston-d_jms: that service isn't updating its state19:14
*** _cjones_ has joined #openstack-cinder19:14
winston-d_jms: you can verify by checking out the 'updated_at' timestamp for that particular service in cinder service DB19:14
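For reference, a condensed sketch of the liveness check being described here (approximating cinder's utils.service_is_up; the 60-second grace period is cinder's service_down_time default):

    import datetime

    SERVICE_DOWN_TIME = 60  # seconds

    def service_is_up(updated_at):
        # a service shows 'up' in 'cinder service-list' only while its
        # periodic report keeps refreshing updated_at in the services
        # table; once the heartbeat goes stale, it flips to 'down'
        elapsed = (datetime.datetime.utcnow() - updated_at).total_seconds()
        return abs(elapsed) <= SERVICE_DOWN_TIME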
*** annegentle has quit IRC19:15
jmswinston-d_: It's updated when it starts (i.e. I restart it)... it goes from down -> up, stays there for a short time, then goes back down.19:16
*** thangp has joined #openstack-cinder19:16
*** barra204 has joined #openstack-cinder19:16
jmswinston-d_: ... ... and now it's up again and I didn't even restart it :/19:17
e0nethingee: hi. should i abandon this patch https://review.openstack.org/#/c/146541/ (Set default OS_VOLUME_API_VERSION to '2') for cinderclient, or will we merge it?19:19
winston-d_jms: are your c-vol and c-sch services in sync in terms of time? (is ntp running?)19:19
jmswinston-d_: vol/sched are on the same server. The controller and the cinder host are both synced to the local NTP server19:21
jmswinston-d_: The controller also has a cinder LVM volume, which it's still getting updates about even when the 2nd host is 'down'.19:22
*** vokt has quit IRC19:22
*** annegentle has joined #openstack-cinder19:23
winston-d_jms: so that means the service(s) is/are able to update the 'updated_at' timestamp in the DB, right?19:24
*** rushil has quit IRC19:24
harlowja_DuncanT https://review.openstack.org/#/c/184814/ if u don't mind19:24
harlowja_that will reduce log noise19:25
jgriffithDuncanT: if you have an example of your solution for splitting up log files that would be awesome as well19:26
jgriffithDuncanT: I took a look and turns out don't see how that works19:27
jmswinston-d_: That is correct19:27
jgriffithDuncanT: as we init the log with the service on startup19:27
winston-d_jms: hmm, so the 'down' state you are seeing is the result of 'cinder service-list' CLI?19:29
jmswinston-d_: Yes19:33
jmswinston-d_: It's been fluctuating today between down, then going up for a while, then down... cycle and repeat. Previously it was just staying down, and that was it.19:34
*** sparr has quit IRC19:37
*** sparr has joined #openstack-cinder19:37
*** sparr is now known as spar19:37
*** spar is now known as sparr19:38
*** patrickeast_ has joined #openstack-cinder19:40
*** akerr is now known as akerr_away19:40
*** Lee1092 has quit IRC19:41
*** agarciam has quit IRC19:44
*** patrickeast__ has joined #openstack-cinder19:45
*** akerr_away is now known as akerr19:46
*** tswanson_ has joined #openstack-cinder19:46
*** patrickeast_ has quit IRC19:48
*** tswanson_ has quit IRC19:48
*** Swanson has quit IRC19:48
*** Swanson has joined #openstack-cinder19:51
*** patrickeast__ has quit IRC19:52
*** rushil has joined #openstack-cinder19:53
*** e0ne has quit IRC19:55
*** ebalduf has joined #openstack-cinder19:59
*** marcusvrn has quit IRC19:59
*** e0ne has joined #openstack-cinder20:00
*** Swanson has quit IRC20:01
*** Arkady_Kanevsky_ has joined #openstack-cinder20:02
openstackgerritVictor Stinner proposed openstack/cinder: Use six to fix imports on Python 3  https://review.openstack.org/18541720:05
*** ctina__ has quit IRC20:05
*** Arkady_Kanevsky has quit IRC20:06
*** ebalduf has quit IRC20:09
*** mtanino has quit IRC20:11
*** fthiagogv has quit IRC20:13
*** IanGovett has joined #openstack-cinder20:14
*** mtanino has joined #openstack-cinder20:14
*** setmason has joined #openstack-cinder20:20
harlowja_jgriffith DuncanT  also, https://review.openstack.org/#/c/185116/ that might make things a little more sane (and can make it easier to read/follow?)20:20
harlowja_there will hopefully be a spec from a taskflow core on making that better20:20
setmasonis there a way to dedicate a cinder backend to a particular project/tenant?20:20
harlowja_*hopefully makes it easier to use in the common case*20:21
harlowja_*see example in that review20:21
openstackgerritVilobh Meshram proposed openstack/cinder: Cinder-manage service list shows happy for uninitialized driver  https://review.openstack.org/18573020:21
jgriffithharlowja_: oh yay!  Deocrator magic :)20:21
jgriffithharlowja_: thanks for putting it together, I'll take a look for sure20:22
harlowja_jgriffith well sorta magic, ha20:22
harlowja_only a sprinkle of magic20:22
harlowja_not to much now20:22
* harlowja_ was watching something something CNN marijuana recently and thought of u20:22
harlowja_lol20:22
harlowja_*all that magic in breckenridge20:22
harlowja_lol20:22
*** nikeshm has joined #openstack-cinder20:24
winston-d_setmason: yes there are ways to achieve that but not out of the box20:25
openstackgerritVilobh Meshram proposed openstack/cinder: Nested Quota : Create allocated column in cinder.quotas  https://review.openstack.org/18570420:25
*** anshul has joined #openstack-cinder20:27
nikeshmasselin : hi20:27
*** julim has quit IRC20:27
asselinhow's it going? back home?20:27
nikeshmnow in san jose20:27
nikeshmits going good20:28
nikeshmactully i forgot to ask FC passthrogh20:28
nikeshmi am able to acheive manually20:28
asselinI didn't know you had a fc driver20:28
nikeshmand able to get internet in nodepool images20:29
asselinnikeshm, great! glad you figured it out20:29
*** ronis_ has quit IRC20:29
asselinnikeshm, you have fc and iscsi or just fc driver?20:29
*** vokt has joined #openstack-cinder20:30
nikeshmboth20:30
asselinnikeshm, sorry...have to go. But focus on iscsi first. Once that works, we can do fc.20:30
nikeshmasselin : https://review.openstack.org/#/c/177665/20:31
nikeshmok20:31
nikeshmi am close in FC too20:31
nikeshmi m following https://review.openstack.org/#/c/182091/20:31
*** Arkady_Kanevsky_ has quit IRC20:32
*** Swanson has joined #openstack-cinder20:33
*** IlyaG has joined #openstack-cinder20:35
*** IlyaG has quit IRC20:38
patrickeastcould someone explain to me what the correct behavior is for a volume type that specifies both thin and thick provisioning, like in https://github.com/openstack/cinder/blob/master/cinder/tests/unit/scheduler/test_host_filters.py#L256 ?20:38
patrickeasti put together a fix for https://bugs.launchpad.net/cinder/+bug/1458976 and am looking into a couple of unit tests it breaks20:38
openstackLaunchpad bug 1458976 in Cinder "cannot create thin provisioned volume larger than free space" [Undecided,New] - Assigned to Patrick East (patrick-east)20:38
patrickeastbut i’m not sure how to actually fix these ones…20:39
patrickeastxyang1: ^20:39
thingeejgriffith: why are we considering public snaps now?20:40
winston-d_patrickeast: i think a backend reporting both thick and thin support as True is valid20:40
patrickeastwinston-d_: yea, but if the backend supports both, and the volume type is specifying both, what is supposed to happen?20:41
patrickeastwinston-d_: should the scheduler assume it *could* do thin and take the virtual free space into account? or assume it might be thick and ignore the oversubscription stuff?20:41
winston-d_ok, in that case, i'd treat thick higher priority than thin20:43
winston-d_s/treat/consider/20:44
patrickeastok, that makes sense20:44
xyang1patrickeast: Hi20:45
patrickeastxyang1: hey, do you agree with what winston-d_ recommended?20:46
jgriffiththingee: additional consideration and thoughts regarding the AWS compat as well as what we can do with public snaps in other places20:46
jgriffiththingee: and after looking at the code for myself rather than taking peoples words for it20:47
jgriffiththingee: it's not as complex as I think it was initially made out to be20:47
patrickeastxyang1: or for that matter, agree that this is a bug in the first place?20:47
patrickeastxyang1: i wasn’t sure if this was intentional or not20:47
jgriffiththingee: but if your decision is "no", fine by me20:47
xyang1patrickeast: It is intentional at design time, but I have heard complaints like yours after it is merged20:48
xyang1patrickeast: I am thinking about introducing a flag20:48
xyang1patrickeast: Let driver choose whether to allow it or not20:49
xyang1patrickeast: Because I have heard arguments from both sides20:49
patrickeastxyang1: ah ok, was the problem that not all backends can do it?20:49
patrickeastxyang1: i would be ok with a flag20:50
xyang1patrickeast: this is only called if the backend supports thin20:50
xyang1So the backend already says it supports it20:50
patrickeastxyang1: yea, so in what case would we not allow volumes larger than free space with thin provisioning?20:51
patrickeastxyang1: if someone wants one that is guaranteed to have space, it should be thick, right?20:51
xyang1patrickeast: The conservative approach is to assume everything will be consumed; the other approach is to assume it will not be until it is written20:52
xyang1patrickeast: So a flag will satisfy both20:52
patrickeastxyang1: i’m not sure i agree… the ‘conservative’ approach would let me make 100x99GB volumes if I report 100GB of free space, and the non-conservative lets me make 1x101GB volume?20:53
patrickeastxyang1: i don’t see much of a difference20:53
xyang1patrickeast: Maybe I did not understand your problem completely20:54
xyang1patrickeast: If you do not have any free space, everything is written, you should not be allowed to provision any more20:55
xyang1patrickeast: That was why we introduced this; otherwise, you can always report 'infinite'20:56
*** Swanson has quit IRC20:56
patrickeastxyang1: well wait, you can oversubscribe up to the ratio that is configured, right? so it's not infinite20:56
xyang1patrickeast: We introduced two limits20:57
xyang1patrickeast: Oversubscription is one, but used ratio is another20:57
patrickeastxyang1: right, both protect against over use20:57
xyang1patrickeast: Used ratio is also used to limit thin provisioning20:58
patrickeastxyang1: but i still think users should be allowed to create thin volumes larger than the free space on the backend20:58
patrickeastxyang1: maybe limited by max size of the backend or something20:58
*** jungleboyj has quit IRC20:58
xyang1patrickeast: Once your space is filled up, we should not allow any more20:59
jmsWhen creating a new volume in the web interface (or cli), it errors with "No valid host was found. No weighed hosts available" *only* when run from the controller host. If run from the actual cinder host, it works. I'm assuming I'm missing _something_ that makes it talk back and forth... any suggestions on where to look to make the scheduler on the controller send data to the scheduler on the 2nd cinder host?20:59
patrickeastxyang1: i agree, once its out of space, but not when it still has space available20:59
patrickeastxyang1: we allow so much oversubscription of volumes that you can easily fill the backend *before* the volumes are full21:00
winston-d_jms: what does 'ran from' mean?21:00
patrickeastxyang1: the part i’m arguing for is that we shouldn’t stop allowing that just because someone wants a volume larger than the free space21:00
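For reference, the oversubscription math the two are debating, condensed and approximated from the Kilo-era CapacityFilter (not a verbatim excerpt):

    def thin_volume_fits(size, free, provisioned, total,
                         max_over_subscription_ratio):
        # cap total provisioned capacity at the configured ratio...
        if (provisioned + size) / float(total) > max_over_subscription_ratio:
            return False
        # ...and compare the request against *virtual* free space,
        # i.e. physical free space scaled by the ratio
        return free * max_over_subscription_ratio >= size

    # patrickeast's point: with 100GB free and a 2.0 ratio, one 101GB
    # thin volume fits the virtual free space just as a pile of small
    # volumes adding up to the same total would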
thingeejgriffith: my opinion on it was no from the summit, but definitely willing to hear things out. Initially we said no, not because of complexity, but because of having another case to think about, and maintenance. In the end, is the feature worth all that?21:00
*** daneyon has joined #openstack-cinder21:00
*** Swanson has joined #openstack-cinder21:00
*** bswartz has quit IRC21:01
jgriffiththingee: well, yeah... the answer IMHO now is "maybe"21:01
*** dannywilson has quit IRC21:01
xyang1patrickeast: So for the case when you still have space, that is what the flag I mentioned earlier is for21:01
jgriffiththingee: I think it warrants some investigation, and possibly a POC21:01
jgriffiththingee: one thing that changes my opinion is a "special" tenant as an owner21:01
winston-d_xyang1: which flag?21:02
jgriffiththingee: I reserve the right to run away screaming after getting into it though :)21:02
jmswinston-d_: The scheduler on the controller runs the flows.create_volume (cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask)21:02
*** dannywilson has joined #openstack-cinder21:02
asselinnikeshm, you can try those fc scripts. Honestly they work, but I think they're more complicated than necessary. Looking for a better way... perhaps a v2. But as I said, get your iscsi working first. There's lots of fun stuff as you deal with load, etc.21:02
thingeejgriffith: definitely open to that.21:02
xyang1patrickeast: I think I'll update that bug with more details, there is a place where we always deduct the volume size21:02
jgriffiththingee: Honestly my main motivation is around the image creation optimization stuff21:03
asselinyou can work on adding fc afterwards. Just my opinion, though21:03
jgriffiththingee: the AWS compat thing is "less" interesting to me, but I see the point made by others21:03
jmswinston-d_: So, if I do it through the cinder cli, or web interface, from the controller it fails. From the cinder cli on the 2nd cinder host (where it should be created), it works and creates the volume.21:04
*** patrickeast_ has joined #openstack-cinder21:04
*** HenryG has quit IRC21:04
smcginnispatrickeast: Your point is: what's the difference between provisioning 200TB of space on 100TB physical, whether it is 100 2TB volumes or 1 200TB volume, right?21:04
thingeejgriffith: yeah I thought you originally came up with it? :)21:05
thingeeI agree21:05
*** patrickeast has quit IRC21:05
*** patrickeast_ is now known as patrickeast21:05
jgriffiththingee: coolio21:05
*** daneyon has quit IRC21:05
*** akerr has quit IRC21:05
nikeshmasselin: ok21:05
winston-d_jms: hmm, did you check the OS env for both places (controller, 2nd cinder host) for cinder CLI? maybe you have multiple cinder-api, cinder-scheduler?21:06
winston-d_jms: otherwise, where the API is issuing shouldn't affect the result.21:06
jmswinston-d_: Err... is there not supposed to be a scheduler/api on each?21:07
*** HenryG has joined #openstack-cinder21:07
*** Swanson has quit IRC21:08
*** Yogi11 has quit IRC21:08
*** rasoto1 has quit IRC21:08
winston-d_jms: scheduler, maybe; if you point all schedulers to the same rabbit, it actually works in a round-robin active-active manner. but for api, you need to have a load balancer sitting in front of them.21:10
*** jungleboyj has joined #openstack-cinder21:10
jmswinston-d_: And I've verified the conf file... for my knowledge of what's needed anyway. Hopefully the guy here that "knows" OS more should be up tomorrow or the next day and I can try getting info from him, and hope the answer isn't messing around with stuff till he finds something that semi-works.21:10
jmswinston-d_: Hrmm... okay. For now I can kill them on the 2nd host and see if that changes anything. I'll probably need to remove the keystone endpoints I added then?21:11
winston-d_i wonder how that works out for you. i mean multiple endpoints for the same service in keystone.21:13
jmsI was just assuming that it needed that for communication to the specific host...21:14
*** IanGovett has quit IRC21:15
*** garthb__ has joined #openstack-cinder21:15
*** garthb has quit IRC21:15
*** garthb_ has quit IRC21:15
*** garthb has joined #openstack-cinder21:15
*** Swanson has joined #openstack-cinder21:16
*** rushil has quit IRC21:16
winston-d_jms: so to make debug easy, can you make sure there is only *1* cinder-api and *1* cinder-scheduler?21:16
*** markstur has joined #openstack-cinder21:19
openstackgerritVilobh Meshram proposed openstack/cinder: Cinder-manage service list shows happy for uninitialized driver  https://review.openstack.org/18573021:20
jmswinston-d_: You bet. Now I know there _should_ be one to make it simple. ;)21:20
openstackgerritVilobh Meshram proposed openstack/cinder: Nested Quota : Create allocated column in cinder.quotas  https://review.openstack.org/18570421:23
*** Swanson has quit IRC21:23
*** tswanson_ has joined #openstack-cinder21:23
*** tbarron1 has quit IRC21:24
*** tswanson_ has quit IRC21:27
*** Swanson has joined #openstack-cinder21:29
*** tbarron1 has joined #openstack-cinder21:29
*** Gomeler has joined #openstack-cinder21:33
jmswinston-d_: Okay, with cinder-{scheduler,api} stopped/disabled on the 2nd host, I get the error "No valid host was found. No weighed hosts available" ... from wherever I try creating it now. So, stopping sched on that host made it stop working there at least. ;)21:38
winston-d_jms: ok, we are making progress.21:40
winston-d_jms: what's the output of 'cinder service-list' now?21:41
*** IlyaG has joined #openstack-cinder21:43
*** lpetrut has quit IRC21:44
*** mtanino has quit IRC21:44
jmswinston-d_: the backend (volume) is enabled/up; scheduler is disabled/down, on the 2nd host. All up on the 1st host21:46
winston-d_jms: what's on 1st host? cinder-api and cinder-scheduler?21:47
jmsand cinder-volume (lvm type)21:48
*** IlyaG has quit IRC21:49
*** primechuck has quit IRC21:49
*** simondodsley has quit IRC21:50
jmswinston-d_: I can kill that for debugging if you think it'll help. I'm specifying the other type, which points at the 2nd host (type-key with volume_backend_name ... etc, etc).21:52
winston-d_jms: what extra specs do you have for the type that you were trying?21:53
jmswinston-d_: None. Just the backend name.21:54
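In other words, the only wiring between the type and the backend is a matching volume_backend_name, set up roughly like this (type and backend names here are hypothetical):

    cinder type-create zfs
    cinder type-key zfs set volume_backend_name=ZFS_BACKEND

    # ...which must match the backend stanza in cinder.conf on the volume node:
    [zfs-1]
    volume_backend_name = ZFS_BACKEND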
*** mriedem is now known as mriedem_away21:54
*** IlyaG has joined #openstack-cinder21:54
winston-d_jms: ok, can you show me the output of 'cinder extra-specs-list'?21:55
jmswinston-d_: Sure..21:56
jungleboyjI have a potentially silly question.  I have a user who is stuck with VMs powered off and volumes attached on an early version of Kilo.  They are hitting the chap authentication problem we previously saw in Kilo and can't power on their VMs as a result.21:57
jungleboyjAny ideas as to how to detach the volumes so they can restart the VMs?21:58
jmswinston-d_: I hope this works as I think it should... :/   https://gist.github.com/jmstover/274c45652caeb240443c21:59
winston-d_jungleboyj: so what are the VM state and power state right now?21:59
jungleboyjIt is active but powered off.22:00
jungleboyjThe volumes think they are in-use.22:00
winston-d_jms: it works, thx.  you can try paste.openstack.org next time if you want.22:01
jungleboyjThey can't detach the volumes because the control node was rebooted and the /dev/disk/by-path/<...> devices are gone after the reboot.22:01
*** dannywilson has quit IRC22:01
winston-d_jungleboyj: looks like a iscsi volume?22:01
winston-d_jms: can you paste your cinder.conf for 2nd c-vol service (zfs one)?22:02
jungleboyjYes, LVM-backed, winston-d_.22:02
winston-d_jungleboyj: is there any stale iscsi session left on the hypervisor? (check by 'iscsiadm -m session')22:03
*** changbl has quit IRC22:03
*** dannywilson has joined #openstack-cinder22:04
jungleboyjwinston-d_ having them check.22:04
jungleboyjI am thinking the source of the problem is that when it tries to reboot, it cannot log back in because they don't have the fix that reuses the old CHAP credentials.22:05
jungleboyjWhen they try to restart, they get an iSCSI authentication error in the logs on the node where LVM is running.22:05
jungleboyjNo active iscsi connections.22:06
winston-d_jungleboyj: ok, so they are safe to reset the volume status from 'in-use' to 'available', by whatever means.22:06
*** rhedlind has quit IRC22:06
jungleboyjYeah, they just want to be able to bring the VM back up.  Can reattach the volume later winston-d_22:07
jmswinston-d_: http://paste.openstack.org/show/238571/22:07
*** winston-1_ has joined #openstack-cinder22:10
jungleboyjwinston-d_ can they just do a cinder reset-state on the volume to available?  Will that change what Nova tries to do during reboot though?22:11
*** winston-d_ has quit IRC22:11
*** annegentle has quit IRC22:13
*** Gomeler has quit IRC22:17
*** Gomeler has joined #openstack-cinder22:17
winston-1_jungleboyj: that I don't know. you should try 'reset-state' first; worst case is to clean up the Cinder DB manually.22:18
winston-1_jungleboyj: that is for Cinder.22:18
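The least invasive of those means is the CLI; only if that fails does it come down to editing the volumes table directly (volume id is a placeholder, the SQL assumes the stock column names, and the DB should be backed up first):

    cinder reset-state --state available <volume-id>

    -- worst case, in the cinder DB:
    UPDATE volumes SET status='available', attach_status='detached'
     WHERE id='<volume-id>';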
jungleboyjwinston-1_: Right.  Then I am guessing they will also have to manually clean up Nova.22:19
*** garthb__ has quit IRC22:19
jungleboyjI am trying in a virtual box right now.22:19
*** garthb__ has joined #openstack-cinder22:19
smcginnisOne day to (barely) catch up and I'm off again. See you all Monday.22:19
SwansonWhere to now?22:20
scottdajungleboyj: They will have to remove the volume mapping from Nova's BlockDeviceMapping table.22:20
smcginnisSwanson: mini vacation time with no internet access (gasp!)22:21
jmswinston-1_: http://paste.openstack.org/show/238571/   (if you missed it)22:21
jungleboyjscottda: How do they do that?22:21
scottdaAnd possibly removing the <disk> ... </disk> device from /etc/libvirt/qemu/<instance_id>.xml22:21
jungleboyjscottda: I was expecting that part.22:21
scottdaUsing mysql commands on the DB. Very ugly solution, but it's all there is ATM22:22
scottdaWe're working on a Nova patch for this, but POC is not quite ready yet.22:22
Swansonsmcginnis: good luck entertaining the kids.22:23
scottdajungleboyj: Since this affects you, you might want to review the Cinder changes for this: https://review.openstack.org/#/c/172213/22:23
* scottda makes a shameless plug for his spec22:24
jungleboyjscottda: Well played!22:24
hemnaI don't see a CI for Violin22:24
scottdaThe Nova BP needs some cleanup; it was just un-abandoned after I spoke with the Nova team at the summit: https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova22:25
hemnaI just saw this patch, which changes the violin drivers.  https://review.openstack.org/#/c/185418/ Yet, I don't see CI for violin ?22:25
*** dims has quit IRC22:25
hemnahey scottda here is the bug we filed against nova for detach https://bugs.launchpad.net/nova/+bug/145895822:26
openstackLaunchpad bug 1458958 in OpenStack Compute (nova) "Exceptions from Cinder detach volume API not handled" [Undecided,New]22:26
asselinhemna, maybe it's not done yet22:26
hemnaasselin, I looked in one of my old patches that finished jenkins and CI last week....nothing obvious for "violin"22:26
asselinhemna, nevermind....confused am and pm ;)22:26
hemnaI don't mean to be a CI nazi, but just found it odd I didn't see anything in the CI reporting for it.22:27
scottdahemna: cool. I'll talk to my Nova partner about a patch for this22:27
*** dims has joined #openstack-cinder22:28
hemnascottda, that's what removing the file manager locks will give us, along with actually checking the volume 'ING' states and reporting VolumeIsBusy errors up through the API22:28
*** thangp has quit IRC22:28
scottdaYeah. If it's as straightforward as adding some try: blocks and logging it should be easy to write the code. Then we'll see about the Nova team's reaction....22:29
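Not the actual patch, just a sketch of the 'ING'-state check hemna describes; ensure_not_busy() is a hypothetical helper, and a real version would live in cinder's API layer and would also need to exclude non-transient states like error_deleting:

    from cinder import exception

    def ensure_not_busy(volume):
        # naive check: a status still ending in 'ing' (creating,
        # attaching, detaching, deleting, ...) is mid-transition, so
        # surface VolumeIsBusy instead of racing the operation
        if volume['status'].endswith('ing'):
            raise exception.VolumeIsBusy(volume_name=volume['display_name'])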
hemnascottda, ok coolio.22:29
*** akshai has quit IRC22:29
winston-1_jungleboyj: to sync vm state and power state for a VM, you can have them try 'virsh start' on the hypervisor.22:30
winston-1_jms: yes, i saw your config file, it looks sane.22:30
winston-1_jms: now, i have to ask for the cinder-scheduler.log for the failed create task.22:31
winston-1_jungleboyj: to delete BDM entry in Nova DB, they will have to find the DB entry and mark it as 'deleted' (soft-delete).22:32
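Combining scottda's and winston's advice, the soft-delete would look roughly like this against the nova DB, assuming the usual convention of setting the integer deleted column to the row id (uuids are placeholders; back up the DB first). After that, 'virsh start <domain>' on the hypervisor should bring the VM back up and re-sync its power state:

    UPDATE block_device_mapping
       SET deleted = id, deleted_at = NOW()
     WHERE instance_uuid = '<instance-uuid>'
       AND volume_id = '<volume-id>'
       AND deleted = 0;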
openstackgerritxing-yang proposed openstack/cinder: Add flag assume_virtual_capacity_consumed  https://review.openstack.org/18576422:35
*** lifeless has quit IRC22:35
*** annegentle has joined #openstack-cinder22:36
*** krtaylor has quit IRC22:37
xyang1patrickeast: Hi, see if this addressed your concern22:37
openstackgerritVilobh Meshram proposed openstack/cinder: Nested Quota : Create allocated column in cinder.quotas  https://review.openstack.org/18570422:38
jmswinston-1_: http://paste.openstack.org/show/238652/22:40
jmswinston-1_: Though, now that I was breaking the logging up, I see a (the?) issue. It's only trying to place the volume on a single cinder host, but not looking at the 2nd one.22:41
winston-1_jms: so, it is time to examine the cinder.conf on the 1st cinder node. can you paste it?22:46
*** anshul has quit IRC22:48
*** daneyon has joined #openstack-cinder22:49
*** e0ne has quit IRC22:53
*** ganso_ has quit IRC22:54
*** daneyon has quit IRC22:54
jmswinston-1_: ... I'll post it once I clean it up (so many comments), but I think I know where this might be leading....22:54
*** bswartz has joined #openstack-cinder22:55
jmswinston-1_: http://paste.openstack.org/show/238669/22:56
*** IlyaG has quit IRC22:58
winston-1_jms: just to make sure, this config file should be the same one the scheduler is using, right?23:02
jmswinston-1_: Yes23:02
winston-1_jms: basically, the two config files look mostly the same, especially the essential parts (using the same DB, same rabbit)23:04
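The "essential parts" boil down to a few lines that must agree across both nodes (values here are placeholders):

    [DEFAULT]
    rabbit_host = 10.0.0.10       # same rabbit on both nodes
    enabled_backends = lvm        # zfs-1 on the 2nd node

    [database]
    connection = mysql://cinder:<password>@10.0.0.10/cinder   # same DB on both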
winston-1_jms: i have a meeting now, have to run.23:04
winston-1_leave messages and i will check later.23:04
jmswinston-1_: Okay, thanks.23:05
*** jms has quit IRC23:08
*** chlong has joined #openstack-cinder23:09
*** annegentle has quit IRC23:11
vilobhmmhemna : hey23:13
hemnayough23:14
*** chlong has quit IRC23:15
*** markvoelker has quit IRC23:15
*** annegentle has joined #openstack-cinder23:17
*** winston-1_ has quit IRC23:19
*** akerr has joined #openstack-cinder23:20
*** akerr_ has joined #openstack-cinder23:22
patrickeastxyang1: sry, was away from my desk for a bit23:22
patrickeastxyang1: that looks like it would address the issue, although smcginnis is correct, i still don't see why it's treated any differently if you oversubscribe with a bunch of smaller volumes or a few larger ones23:23
*** vokt has quit IRC23:23
patrickeastxyang1: that being said, as long as it works i'm pretty happy with it :D23:23
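To put patrickeast's point in numbers: with Kilo's max_over_subscription_ratio = 2.0 on a 10 TB thin-provisioned backend, the scheduler caps provisioned capacity at 10 TB * 2.0 = 20 TB, and that cap is reached the same way whether it is carved into twenty 1 TB volumes or two 10 TB ones; the capacity filter only compares the provisioned total against the cap.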
*** akerr has quit IRC23:25
*** winston-d_ has joined #openstack-cinder23:26
vilobhmmhemna : sorry someone stopped by…regarding your comment on https://review.openstack.org/#/c/185730/2/cinder/cmd/manage.py "should probably add an RPC call into the host to see if the driver is initialized"….can you clarify a little bit on it23:28
*** annegentle has quit IRC23:29
hemnaso yah, the manager object is what comprises a running c-vol service23:29
hemnayou can't just call manager.init_host() and see if that 'works'23:29
hemnayou need to call the running c-vol instance to see if it's working correctly23:29
hemnaor more specifically, to ask the running volume manager if its driver instance has initialized.23:30
hemnawhat you are doing is trying to force a static call to a manager object that has no configuration and isn't running.23:30
hemnayou need to make an RPC call into a running volume manager23:30
*** winston-d_ has quit IRC23:31
hemnaI hope that makes sense23:31
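A rough sketch of the round trip hemna is describing; the is_driver_initialized method is hypothetical (no such RPC existed in Cinder at the time), and the running VolumeManager would need a matching handler that returns its driver's initialized flag:

    import oslo_messaging as messaging

    from cinder import rpc

    def driver_initialized(context, host):
        # target the live c-vol service on this specific host
        target = messaging.Target(topic='cinder-volume', server=host)
        client = rpc.get_client(target)
        # call() waits for a reply, so the answer comes from the actual
        # running manager's driver instance, not a freshly built object
        return client.call(context, 'is_driver_initialized')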
hemnaanyway, I have to run to get my kids from daycare....l8rs23:31
*** hemna is now known as hemnafk23:31
*** annegentle has joined #openstack-cinder23:33
vilobhmmi get that…maybe I will get the reference for that vol manager and check if it's there…if the reference to the vol mgr is there, make a call to init_host23:34
vilobhmmsure23:34
vilobhmmcya later23:34
vilobhmmthanks23:34
*** gardenshed has joined #openstack-cinder23:35
*** IlyaG has joined #openstack-cinder23:36
*** Tross1 has joined #openstack-cinder23:38
*** Tross has quit IRC23:38
*** rmesta has quit IRC23:39
*** Arkady_Kanevsky has joined #openstack-cinder23:40
*** jamielennox is now known as jamielennox|away23:42
*** chlong has joined #openstack-cinder23:44
*** annegentle has quit IRC23:45
*** annegentle has joined #openstack-cinder23:50
*** dannywilson has quit IRC23:53
*** lifeless has joined #openstack-cinder23:54
*** vilobhmm has quit IRC23:57
*** gardenshed has quit IRC23:57
*** leeantho has quit IRC23:59
