Wednesday, 2015-05-27

*** emagana has quit IRC00:01
*** ebalduf has joined #openstack-cinder00:03
*** annegentle has quit IRC00:04
*** annegentle has joined #openstack-cinder00:09
openstackgerritwangxiyuan proposed openstack/cinder: Fix wrong response with version details  https://review.openstack.org/18557000:15
*** markvoelker has joined #openstack-cinder00:16
*** winston-d_ has joined #openstack-cinder00:17
*** garthb__ has quit IRC00:19
*** garthb has quit IRC00:19
*** david-lyle has quit IRC00:20
*** annegentle has quit IRC00:21
*** markvoelker has quit IRC00:23
*** winston-d_ has quit IRC00:23
*** markvoelker has joined #openstack-cinder00:23
*** winston-d_ has joined #openstack-cinder00:23
*** annegentle has joined #openstack-cinder00:27
*** Apoorva has quit IRC00:27
*** Apoorva has joined #openstack-cinder00:28
openstackgerritxing-yang proposed openstack/cinder: Add flag assume_virtual_capacity_consumed  https://review.openstack.org/18576400:29
winston-dpatrickeast, xyang1: I'd echo what Patrick and Sean said: to me, 2 x 10TB and 10 x 2TB are identical in terms of capacity management00:29
winston-dxyang1: it'd help if you could provide an example demonstrating that they aren't.00:30
*** Apoorva has quit IRC00:31
openstackgerritPatrick East proposed openstack/cinder-specs: Generic image volume cache functionality  https://review.openstack.org/18252000:33
xyang1winston-d: Not sure what you guys are saying. I'm just trying to accommodate two requirements here. Have you had a chance to look at my patch?00:34
winston-dxyang1: "what's the difference between provisioning 200TB of space on 100TB physical whether it is 100 2TB volumes or 1 200 TB volume?"00:35
*** annegentle has quit IRC00:36
xyang1winston-d: I am not trying to make that distinction. It depends on the total capacity, free capacity, your usage and over-subscription ratio; I can't say whether you can provision them or not00:37
*** daneyon has joined #openstack-cinder00:38
patrickeastxyang1: i think we are all trying to get the same thing here… the problem i was looking to fix with that bug is that right now we *do* make that distinction00:39
patrickeastxyang1: we would allow 100 2TB volumes but not 1 200TB volumes00:39
winston-dexactly00:39
winston-dby we, that means current cinder CapacityFilter00:40
xyang1patrickeast: You'll have to lay out the complete formula00:40
*** heyun has joined #openstack-cinder00:41
patrickeastxyang1: so the formula you have already in there for the over subscription works exactly right, but if we go into that first ‘if free < volume_size’ check we return before ever doing any of the virtual capacity calculations00:41
*** daneyon has quit IRC00:43
xyang1patrickeast: Can you take a look at my latest patch set?00:44
patrickeastxyang1: i think the latest patchset you posted is doing the right thing now, it will skip the ‘free < volume_size’ check and make it to the virtual stuff00:44
patrickeastxyang1: yep was just looking at it00:44
xyang1patrickeast: The existing code was based on a review suggestion to be conservative00:45
xyang1patrickeast: It is hard to accommodate opposite comments00:46
xyang1patrickeast: I hope the flag can accommodate both00:46
*** zhenguo has joined #openstack-cinder00:46
patrickeastxyang1: yea i had assumed that was probably the case, no worries, i think like most things in cinder everyone has their own way of doing it00:46
patrickeastxyang1: i am curious what the argument is for it though, hopefully i’m not missing something and advocating for a bad change :o00:47
xyang1patrickeast: That comment came right after the Paris summit00:48
winston-dhttps://review.openstack.org/#/c/142171/ the change00:48
xyang1patrickeast: I don't even remember who made that comment.  I may be able to dig out that long email if I try hard enough:)00:49
xyang1patrickeast: After the code was merged though, all the comments I got was similar to yours.  That is why I thought about using a flag00:50
*** _cjones_ has quit IRC00:50
xyang1winston-d: I think that comment was from an email to me00:51
winston-dpatrickeast: maybe you can submit a fix _without_ the flag, and we have everybody look at the two versions of the fix and hopefully get some consensus at the next cinder weekly meeting?00:53
xyang1winston-d: Why not just have everyone look at my patch?00:54
winston-dpersonally, i think there is no difference between 100 2TB and 1 200TB00:57
winston-dso why do we need a flag?00:57
patrickeasthmm so i found where it came in https://review.openstack.org/#/c/142171/19..20/cinder/scheduler/filters/capacity_filter.py but i don’t see comments around patchset 18 or 19 for it00:58
xyang1winston-d: That came from a comment, as I said. If everyone thinks it is not right, maybe we need to think about it again00:59
*** smoriya has joined #openstack-cinder00:59
winston-dxyang1: i can leave the same comment on your fix, i'm fine with that.00:59
xyang1winston-d: patrickeast I'll try to dig out the email, the person who made the suggestion said that is what they do in practice01:00
patrickeastxyang1: winston-d: maybe let's leave xing’s patch up for now and see what others think01:00
patrickeastxyang1: at the meeting*01:00
patrickeastxyang1: oh interesting, so maybe some backends can’t support it01:00
xyang1winston-d: patrickeast sounds good.01:01
xyang1patrickeast: Not sure, it could just be that the operator is cautious01:01
winston-djust in case I miss the meeting this week (it always happens the 1st week when i start working on US timezone), I will vote for not adding a new flag.01:02
xyang1winston-d: Ok01:02
xyang1winston-d: That is fine with our array01:02
winston-dif it's backend specific limit, it shouldn't be solved by a global flag.01:02
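The CapacityFilter behavior debated above can be sketched roughly as follows. This is a hypothetical simplification (single pool, illustrative names and formula, not cinder's actual filter code): with thin provisioning enabled, the request is judged against virtual free capacity rather than rejected as soon as raw free space is exceeded, which is why 100 x 2TB and 1 x 200TB come out identical.

```python
# Illustrative sketch of the capacity check being discussed; names and
# the formula are simplified, not cinder's actual CapacityFilter code.

def backend_passes(volume_size, free, total, provisioned,
                   max_over_subscription_ratio, thin_provisioning):
    """Return True if the backend can accept a volume of volume_size."""
    if thin_provisioning and max_over_subscription_ratio >= 1:
        # Virtual free space: how much more may be provisioned before
        # the over-subscription ratio would be exceeded.
        virtual_free = total * max_over_subscription_ratio - provisioned
        return virtual_free >= volume_size
    # Thick provisioning: fall back to the conservative raw-free check
    # (the early "if free < volume_size" return patrickeast mentions).
    return free >= volume_size
```

With 100TB physical, a 20x over-subscription ratio and 100TB already provisioned, a single 200TB request and the last of 100 x 2TB requests see the same virtual free space, so they are treated identically.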
*** asselin_ has joined #openstack-cinder01:04
*** xiaohui has joined #openstack-cinder01:05
xyang1winston-d: Can you look at the rest of the patch? Line 291 of host_manager.py01:05
xyang1winston-d: We can add the flag to backend specific section01:05
*** tobe has joined #openstack-cinder01:09
*** dims has quit IRC01:12
*** patrickeast has quit IRC01:14
*** IlyaG has quit IRC01:16
*** jamielennox|away is now known as jamielennox01:23
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087301:24
*** annegentle has joined #openstack-cinder01:27
*** annegentle has quit IRC01:28
*** Apoorva has joined #openstack-cinder01:29
*** Lee1092 has joined #openstack-cinder01:30
*** setmason has quit IRC01:35
*** david-lyle has joined #openstack-cinder01:35
*** annegentle has joined #openstack-cinder01:49
*** rushil has joined #openstack-cinder02:07
*** Apoorva has quit IRC02:09
openstackgerritwanghao proposed openstack/cinder: Fix response when querying host detail by host name  https://review.openstack.org/16260102:09
*** rushil has quit IRC02:10
nikeshmasselin: hi, I wanted to extend my setup to multiple providers. nodepool image-build is working fine but image-upload is failing http://paste.openstack.org/show/238941/02:15
*** tbarron1 has quit IRC02:16
nikeshmasselin: this is my nodepool yaml file http://paste.openstack.org/show/238951/02:17
nikeshmpatrickeast: hi02:19
*** akerr_ has quit IRC02:20
*** daneyon has joined #openstack-cinder02:27
asselin_nikeshm, I haven't seen that error before02:28
asselin_nikeshm, looks like something related to 'shade'. This is new02:28
asselin_nikeshm, looks like it merged recently: https://review.openstack.org/#/c/168633/02:29
*** daneyon has quit IRC02:31
asselin_nikeshm, just asked in -infra. The issue is fixed in a recent release of shade02:32
*** david-lyle has quit IRC02:33
*** setmason has joined #openstack-cinder02:33
asselin_nikeshm, I don't have my env, but re-running ~./install_master.sh should pull in the latest02:34
asselin_nikeshm, if not, go to /opt/shade02:35
asselin_sudo git pull02:35
asselin_sudo pip install .02:35
asselin_and try again02:35
*** annegentle has quit IRC02:38
*** Yogi1 has joined #openstack-cinder02:39
*** annegentle has joined #openstack-cinder02:40
*** asselin_ has quit IRC02:54
nikeshmasselin: there is no /opt/shade03:02
nikeshmalso tried to rerun install master03:02
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087303:04
*** tobe has quit IRC03:08
*** david-lyle has joined #openstack-cinder03:11
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087303:17
*** dmz has joined #openstack-cinder03:18
dmzhowdy y'all, i thought i had cinder working but was just trying to attach a volume & got the following in the nova-api log:  ERROR nova.api.openstack [req-66ecda54-cddc-48f2-a58c-4ff49a7b056a None] Caught error: AmbiguousEndpoints, ; looks like it's using an authv1 token but not sure what this problem could be03:19
*** kaisers has quit IRC03:20
*** tobe has joined #openstack-cinder03:26
*** Yogi1 has quit IRC03:31
openstackgerritwangchy proposed openstack/cinder: Stop registering opt fatal_exception_format_errors in cinder.exp  https://review.openstack.org/18531403:32
*** annegentle has quit IRC03:32
*** changbl has joined #openstack-cinder03:32
*** annegentle has joined #openstack-cinder03:37
*** dalgaaf has quit IRC03:38
*** Apoorva has joined #openstack-cinder03:42
openstackgerritAlex Meade proposed openstack/cinder: NetApp E-Series driver: Remove caching logic  https://review.openstack.org/18583203:42
openstackgerritAlex Meade proposed openstack/cinder: NetApp E-Series: Refactor class structure for FC  https://review.openstack.org/18583303:42
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087303:44
*** diga has joined #openstack-cinder03:51
*** emagana has joined #openstack-cinder03:51
*** tobe has quit IRC03:58
openstackgerritxing-yang proposed openstack/os-brick: Add connector driver for the ScaleIO cinder driver  https://review.openstack.org/18583503:58
*** Apoorva has quit IRC03:59
*** krtaylor has joined #openstack-cinder04:04
*** daneyon has joined #openstack-cinder04:15
*** links has joined #openstack-cinder04:15
*** daneyon has quit IRC04:20
*** xyang1 has quit IRC04:20
*** annegentle has quit IRC04:26
*** markvoelker has quit IRC04:31
*** ebalduf has quit IRC04:33
*** tobe has joined #openstack-cinder04:34
*** avishay__ has joined #openstack-cinder04:35
*** avishay__ is now known as avishay04:36
*** sks has joined #openstack-cinder04:39
*** bkopilov is now known as bkopilov_wfh04:52
*** mdbooth has quit IRC04:55
*** pradipta has joined #openstack-cinder04:55
*** harlowja_at_home has joined #openstack-cinder04:57
*** mdbooth has joined #openstack-cinder04:59
dmz:(05:02
*** tobe has quit IRC05:04
*** nkrinner has joined #openstack-cinder05:05
openstackgerritwangchy proposed openstack/cinder: Stop registering opt fatal_exception_format_errors in cinder.exp  https://review.openstack.org/18531405:17
*** leopoldj has joined #openstack-cinder05:18
openstackgerritPh. Marek proposed openstack/cinder: Re-add DRBD driver.  https://review.openstack.org/17857305:23
*** belmoreira has joined #openstack-cinder05:23
*** harlowja_at_home has quit IRC05:24
*** avishay has quit IRC05:25
*** avishay has joined #openstack-cinder05:25
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087305:27
*** deepakcs has joined #openstack-cinder05:28
*** cloudm2 has quit IRC05:32
*** setmason has quit IRC05:34
*** setmason has joined #openstack-cinder05:35
*** IanGovett has joined #openstack-cinder05:44
*** smoriya has quit IRC05:46
*** tobe has joined #openstack-cinder05:50
*** IanGovett has quit IRC06:00
*** emagana has quit IRC06:00
*** emagana has joined #openstack-cinder06:01
*** kaisers has joined #openstack-cinder06:01
*** e0ne has joined #openstack-cinder06:01
*** kaisers has quit IRC06:02
*** daneyon has joined #openstack-cinder06:04
*** emagana has quit IRC06:05
*** kaisers has joined #openstack-cinder06:06
*** BharatK has joined #openstack-cinder06:08
*** daneyon has quit IRC06:09
*** e0ne has quit IRC06:11
openstackgerritXinXiaohui proposed openstack/cinder-specs: capacity-headroom  https://review.openstack.org/17038006:13
openstackgerritwanghao proposed openstack/cinder: Notify the transfer volume action in cinder  https://review.openstack.org/18553106:16
*** tobe has quit IRC06:20
*** ankit_ag has joined #openstack-cinder06:22
openstackgerritJulien Danjou proposed openstack/cinder: Stop using deprecated timeutils.isotime()  https://review.openstack.org/18433006:28
*** kaisers has quit IRC06:30
openstackgerritPh. Marek proposed openstack/cinder: Re-add DRBD driver.  https://review.openstack.org/17857306:31
*** tobe has joined #openstack-cinder06:38
*** avishay_ has joined #openstack-cinder06:51
*** avishay has quit IRC06:52
*** lpetrut has joined #openstack-cinder06:55
*** davechen1 has joined #openstack-cinder06:57
*** agarciam has joined #openstack-cinder07:00
*** setmason has quit IRC07:06
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585707:07
*** setmason has joined #openstack-cinder07:12
*** jordanP has joined #openstack-cinder07:14
*** IanGovett has joined #openstack-cinder07:17
*** markvoelker has joined #openstack-cinder07:17
*** alecv has joined #openstack-cinder07:19
*** bkopilov_wfh has quit IRC07:20
*** dulek has joined #openstack-cinder07:20
*** ronis has joined #openstack-cinder07:22
*** anshul has joined #openstack-cinder07:25
*** emagana has joined #openstack-cinder07:25
*** bkopilov has joined #openstack-cinder07:25
*** annegentle has joined #openstack-cinder07:27
openstackgerritAbhijeet Malawade proposed openstack/cinder: Remove unused context parameter  https://review.openstack.org/18586107:29
*** emagana has quit IRC07:30
*** annegentle has quit IRC07:32
*** markus_z has joined #openstack-cinder07:32
*** avishay_ has quit IRC07:32
*** avishay_ has joined #openstack-cinder07:33
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087307:35
*** sgotliv has joined #openstack-cinder07:36
*** tshefi has joined #openstack-cinder07:36
*** tobe has quit IRC07:37
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087307:37
*** dims has joined #openstack-cinder07:38
*** tobe has joined #openstack-cinder07:38
*** setmason has quit IRC07:41
*** dims has quit IRC07:44
*** chlong has quit IRC07:45
*** dalgaaf has joined #openstack-cinder07:48
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087307:51
*** jistr has joined #openstack-cinder07:52
*** daneyon has joined #openstack-cinder07:53
openstackgerritMichal Dulko proposed openstack/cinder: Backup object  https://review.openstack.org/15708507:54
openstackgerritMichal Dulko proposed openstack/cinder: Service object  https://review.openstack.org/16041707:55
*** daneyon has quit IRC07:58
*** emagana has joined #openstack-cinder08:04
*** emagana has quit IRC08:09
*** e0ne has joined #openstack-cinder08:15
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585708:15
*** e0ne has quit IRC08:19
openstackgerritLiu Xinguo proposed openstack/cinder: Fix the ranges in RestURL with Huawei drivers  https://review.openstack.org/17315708:23
*** ndipanov has joined #openstack-cinder08:36
*** heyun has quit IRC08:36
*** markus_z has quit IRC08:38
*** tobe has quit IRC08:42
*** tobe has joined #openstack-cinder08:43
*** lpetrut has quit IRC08:46
*** markus_z has joined #openstack-cinder08:51
openstackgerritAbhijeet Malawade proposed openstack/cinder: Remove unused context parameter  https://review.openstack.org/18586108:51
openstackgerritXi Yang proposed openstack/cinder: Provide snap copy feature in EMC VNX Cinder driver  https://review.openstack.org/18473308:54
*** gardenshed has joined #openstack-cinder08:54
openstackgerritGorka Eguileor proposed openstack/cinder: Create iSCSI lio portals with right IPs and port  https://review.openstack.org/16120908:55
*** ebalduf has joined #openstack-cinder08:57
*** emagana has joined #openstack-cinder08:59
*** avishay_ has quit IRC09:00
*** lifeless has quit IRC09:00
*** lifeless_ has joined #openstack-cinder09:00
*** gardenshed has quit IRC09:01
*** daneyon has joined #openstack-cinder09:01
*** ebalduf has quit IRC09:02
*** emagana has quit IRC09:03
*** avishay_ has joined #openstack-cinder09:03
*** daneyon has quit IRC09:06
*** belmoreira has quit IRC09:15
*** ekarlso has quit IRC09:16
*** rushiagr_away is now known as rushiagr09:18
openstackgerritLiu Xinguo proposed openstack/cinder: Fix the ranges in RestURL with Huawei drivers  https://review.openstack.org/17315709:22
wanghaoDuncanT: ping09:22
wanghaoDuncanT: Would you have a look at this bp: https://review.openstack.org/#/c/180400/. Is there something else that we need to fix?09:22
*** ekarlso has joined #openstack-cinder09:22
*** e0ne has joined #openstack-cinder09:23
*** e0ne is now known as e0ne_09:23
DuncanTwanghao: The two comments on https://review.openstack.org/#/c/180400/17/cinder/volume/utils.py could do with looking at... Sorry it is taking so long to get through, it happens sometimes. Both are reasonable comments09:24
wanghaoDuncanT: Well, maybe we need one more patch for this. I will do it soon. Thanks:)09:26
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585709:27
*** sgotliv has quit IRC09:27
DuncanTwanghao: Ping me when it's up and I should be able to get the first +2 on there quickly09:28
wanghaoDuncanT: Sure09:29
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585709:30
*** e0ne_ is now known as e0ne09:31
*** Liu has joined #openstack-cinder09:32
*** lpetrut has joined #openstack-cinder09:33
*** diga has quit IRC09:33
*** theanalyst has quit IRC09:34
*** dosaboy_ has quit IRC09:36
*** dosaboy has joined #openstack-cinder09:36
*** theanalyst has joined #openstack-cinder09:37
*** kaisers has joined #openstack-cinder09:39
openstackgerritwanghao proposed openstack/cinder: Notification with volume and snapshot metadata  https://review.openstack.org/18040009:42
openstackgerritrakesh mishra proposed openstack/cinder: cinder-quota-define-per-volume  https://review.openstack.org/18590609:46
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585709:48
*** nikeshm has quit IRC09:48
*** davechen1 has left #openstack-cinder09:48
*** gardenshed has joined #openstack-cinder09:51
*** emagana has joined #openstack-cinder09:53
*** Arkady_Kanevsky has quit IRC09:56
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585709:57
*** emagana has quit IRC09:57
*** theanalyst has quit IRC10:02
*** theanalyst has joined #openstack-cinder10:04
*** rushiagr has quit IRC10:08
*** theanalyst has quit IRC10:08
*** dims has joined #openstack-cinder10:09
wanghaoDuncanT: ping10:12
wanghaoDuncanT: The patch has been committed. You can have a look after jenkins passes. Thanks.10:12
*** theanalyst has joined #openstack-cinder10:12
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585710:15
openstackgerritVincent Hou proposed openstack/cinder: Add column "hidden" to volumes table  https://review.openstack.org/18585710:26
openstackgerritSzymon Wróblewski proposed openstack/cinder: Tooz locks  https://review.openstack.org/18353710:35
openstackgerritMichal Dulko proposed openstack/cinder: Service object  https://review.openstack.org/16041710:36
dulekDuncanT: Hi, do we want to add A/A cinder-volume to today's meeting agenda?10:38
*** nlevinki has joined #openstack-cinder10:42
*** emagana has joined #openstack-cinder10:47
*** emagana has quit IRC10:48
*** emagana has joined #openstack-cinder10:48
*** pradipta has quit IRC10:49
*** daneyon has joined #openstack-cinder10:50
openstackgerritMichal Dulko proposed openstack/cinder: POC: Tooz locks demo  https://review.openstack.org/18564610:51
*** emagana has quit IRC10:53
*** daneyon has quit IRC10:55
*** dulek_ has joined #openstack-cinder10:57
*** dulek has quit IRC10:59
*** rushiagr_away has joined #openstack-cinder11:00
openstackgerritPavel Boldin proposed openstack/cinder: Add iscsi_target_flags configuration option  https://review.openstack.org/18287111:00
*** ebalduf has joined #openstack-cinder11:09
*** ebalduf has quit IRC11:13
openstackgerritIvan Kolodyazhny proposed openstack/cinder-specs: Volume get and list should be able get detailed views  https://review.openstack.org/15629211:15
*** sgotliv has joined #openstack-cinder11:15
*** e0ne is now known as e0ne_11:26
*** openstack has joined #openstack-cinder11:38
*** emagana has joined #openstack-cinder11:43
*** aix has joined #openstack-cinder11:47
*** emagana has quit IRC11:48
*** tobe has joined #openstack-cinder11:52
*** deepakcs has quit IRC11:56
*** tobe has quit IRC11:56
*** tobe has joined #openstack-cinder11:57
*** dulek___ has quit IRC11:58
*** dulek___ has joined #openstack-cinder11:58
*** BharatK has quit IRC12:01
openstackgerritGorka Eguileor proposed openstack/cinder: Create iSCSI lio portals with right IPs and port  https://review.openstack.org/16120912:01
*** markvoelker has quit IRC12:04
*** markvoelker has joined #openstack-cinder12:04
*** dulek_ has joined #openstack-cinder12:06
*** bkopilov is now known as bkopilov_wfh12:08
*** akerr has joined #openstack-cinder12:09
*** e0ne is now known as e0ne_12:09
*** dulek___ has quit IRC12:09
*** e0ne_ is now known as e0ne12:14
*** Yogi1 has joined #openstack-cinder12:14
*** BharatK has joined #openstack-cinder12:15
*** avishay_ is now known as avishay12:15
*** bswartz has quit IRC12:17
*** BharatK has quit IRC12:19
*** ebalduf has joined #openstack-cinder12:29
*** dims has quit IRC12:31
*** dims has joined #openstack-cinder12:32
*** dulek_ has quit IRC12:32
*** ebalduf has quit IRC12:33
*** emagana has joined #openstack-cinder12:37
*** daneyon has joined #openstack-cinder12:39
*** akshai has joined #openstack-cinder12:40
*** eharney has quit IRC12:40
*** emagana has quit IRC12:41
*** annegentle has joined #openstack-cinder12:41
*** daneyon has quit IRC12:44
*** kaisers has quit IRC12:45
*** Yogi1 has quit IRC12:45
*** julim has joined #openstack-cinder12:48
*** marcusvrn has joined #openstack-cinder12:52
*** ekarlso has quit IRC12:53
*** ekarlso has joined #openstack-cinder12:53
*** Yogi1 has joined #openstack-cinder12:54
*** xyang1 has joined #openstack-cinder12:56
*** sks has quit IRC12:56
*** dulek has joined #openstack-cinder12:58
*** Yogi1 has quit IRC12:58
*** nlevinki_ has joined #openstack-cinder13:03
*** nlevinki has quit IRC13:05
*** annegentle has quit IRC13:06
*** simondodsley has joined #openstack-cinder13:09
*** lpetrut has quit IRC13:10
*** sks has joined #openstack-cinder13:10
*** mriedem_away is now known as mriedem13:17
*** juzuluag has joined #openstack-cinder13:17
*** rasoto_ has joined #openstack-cinder13:20
*** lpetrut has joined #openstack-cinder13:23
*** alonmarx_ has joined #openstack-cinder13:23
*** Yogi1 has joined #openstack-cinder13:25
*** dustins has joined #openstack-cinder13:26
*** alonmarx has quit IRC13:26
*** annegentle has joined #openstack-cinder13:28
*** eharney has joined #openstack-cinder13:30
*** abhiram_moturi has joined #openstack-cinder13:30
*** annegentle has quit IRC13:30
*** emagana has joined #openstack-cinder13:31
*** annegentle has joined #openstack-cinder13:33
*** timcl has joined #openstack-cinder13:34
*** emagana has quit IRC13:35
*** abhiram_moturi has quit IRC13:38
*** abhiram_moturi has joined #openstack-cinder13:39
*** jungleboyj has quit IRC13:39
openstackgerritGorka Eguileor proposed openstack/cinder: Create iSCSI lio portals with right IPs and port  https://review.openstack.org/16120913:40
*** links has quit IRC13:41
*** cloudm2 has joined #openstack-cinder13:42
*** abhiram_moturi has quit IRC13:43
*** abhiram_moturi has joined #openstack-cinder13:43
*** abhiram_moturi has quit IRC13:48
*** abhiram_moturi has joined #openstack-cinder13:48
*** abhiram_moturi has quit IRC13:50
*** abhiram_moturi has joined #openstack-cinder13:50
*** annegentle has quit IRC13:52
*** sgotliv has quit IRC13:52
*** annegentle has joined #openstack-cinder13:53
*** rushil has joined #openstack-cinder13:55
*** dustins_ has joined #openstack-cinder13:56
*** cbits has joined #openstack-cinder13:56
*** cbits has left #openstack-cinder13:57
*** sks has quit IRC13:58
*** Yogi11 has joined #openstack-cinder13:58
*** pradipta has joined #openstack-cinder13:59
*** dustins has quit IRC14:00
*** rushil_ has joined #openstack-cinder14:00
*** daneyon has joined #openstack-cinder14:00
*** rushil has quit IRC14:00
*** timcl1 has joined #openstack-cinder14:02
*** Yogi1 has quit IRC14:02
*** timcl has quit IRC14:02
*** sgotliv has joined #openstack-cinder14:03
Liuhi14:04
LiuOur FC CI always fails with some test cases, like test_minimum_basic_scenario and test_volume_boot_pattern14:04
Liubut iSCSI CI is ok for these test cases14:05
*** chlong has joined #openstack-cinder14:05
Liuwho can give me some pointers?14:05
Liulog link: http://182.138.104.27:8088/huawei-18000-fc-dsvm-tempest-full/423/console.html14:05
LiuThanks very much14:06
*** merooney has joined #openstack-cinder14:06
*** dalgaaf has quit IRC14:08
*** dustins_ is now known as dustins14:10
*** dustins is now known as dustins_away14:10
*** bswartz has joined #openstack-cinder14:10
*** Liu is now known as liuxg14:11
*** merooney has quit IRC14:13
*** merooney has joined #openstack-cinder14:13
*** merooney is now known as sdafasdfdsafasdf14:14
*** sdafasdfdsafasdf is now known as merooney14:14
*** tobe has quit IRC14:15
*** annegentle has quit IRC14:17
*** leopoldj has quit IRC14:17
*** merooney has left #openstack-cinder14:18
*** merooney has joined #openstack-cinder14:19
*** breitz has quit IRC14:19
*** breitz has joined #openstack-cinder14:20
*** abhiram_moturi has quit IRC14:24
*** abhiram_moturi has joined #openstack-cinder14:25
*** nlevinki_ has quit IRC14:27
*** ronis has quit IRC14:28
*** Adriano_ has joined #openstack-cinder14:28
*** jungleboyj has joined #openstack-cinder14:30
*** bnemec has quit IRC14:30
*** thangp has joined #openstack-cinder14:30
*** abhiram_moturi has quit IRC14:31
*** abhiram_moturi has joined #openstack-cinder14:31
*** avishay has quit IRC14:31
*** annegentle has joined #openstack-cinder14:32
*** esker has joined #openstack-cinder14:33
*** mtanino has joined #openstack-cinder14:33
*** avishay has joined #openstack-cinder14:34
*** bnemec has joined #openstack-cinder14:35
*** dims has quit IRC14:35
*** daneyon_ has joined #openstack-cinder14:37
*** abhiram_moturi has quit IRC14:39
*** abhiram_moturi has joined #openstack-cinder14:40
*** daneyon has quit IRC14:40
*** sgotliv has quit IRC14:40
*** emagana has joined #openstack-cinder14:40
*** dulek has quit IRC14:41
*** eharney has quit IRC14:42
*** timcl1 has quit IRC14:42
*** jms has joined #openstack-cinder14:44
*** sgotliv has joined #openstack-cinder14:46
*** timcl has joined #openstack-cinder14:47
*** abhiram_moturi has quit IRC14:49
*** abhiram_moturi has joined #openstack-cinder14:49
*** hemnafk is now known as hemna14:50
*** merooney has quit IRC14:50
*** jungleboyj has quit IRC14:50
*** patrickeast has joined #openstack-cinder14:51
*** anshul has quit IRC14:51
*** jungleboyj has joined #openstack-cinder14:53
openstackgerritThang Pham proposed openstack/cinder: Complete switch to snapshot objects  https://review.openstack.org/16391014:53
*** dims_ has joined #openstack-cinder14:55
*** abhiram_moturi has quit IRC14:58
openstackgerritPatrick East proposed openstack/cinder-specs: Generic image volume cache functionality  https://review.openstack.org/18252014:58
*** rmesta has joined #openstack-cinder14:58
*** abhiram_moturi has joined #openstack-cinder14:58
*** abhiram_moturi has quit IRC15:00
*** kvidvans has joined #openstack-cinder15:01
*** abhiram_moturi has joined #openstack-cinder15:01
*** timcl1 has joined #openstack-cinder15:01
*** timcl has quit IRC15:03
*** tsekiyama has joined #openstack-cinder15:04
*** garthb has joined #openstack-cinder15:04
*** garthb__ has joined #openstack-cinder15:04
*** asselin_ has joined #openstack-cinder15:04
*** merooney has joined #openstack-cinder15:05
*** abhiram_moturi has quit IRC15:09
tsekiyamathingee: If you have a chance, could you approve https://blueprints.launchpad.net/cinder/+spec/hitachi-remove-resource-lock ?15:09
tsekiyamathingee: This is a hitachi driver update to remove unnecessary locks. The patch is proposed here: https://review.openstack.org/#/c/18500315:09
*** abhiram_moturi has joined #openstack-cinder15:09
*** bswartz has quit IRC15:10
*** tshefi has quit IRC15:11
*** anshul has joined #openstack-cinder15:12
*** bswartz has joined #openstack-cinder15:12
*** e0ne is now known as e0ne_15:13
*** tobe has joined #openstack-cinder15:15
*** tswanson_ has quit IRC15:19
*** Swanson has joined #openstack-cinder15:19
*** dulek_home has joined #openstack-cinder15:20
*** tobe has quit IRC15:20
*** merooney has quit IRC15:20
openstackgerritTina Tang proposed openstack/cinder: Create consistgroup from cgsnapshot support in VNX driver  https://review.openstack.org/16370615:21
SwansonAny cores available to look at this one?  https://review.openstack.org/#/c/177994/  I've one +2.  Just need another.  I changed my API object so I'd like this one in.15:21
*** e0ne_ is now known as e0ne15:22
*** abhiram_moturi has quit IRC15:26
*** abhiram_moturi has joined #openstack-cinder15:27
*** tsekiyama1 has joined #openstack-cinder15:27
*** tsekiyama has quit IRC15:28
jmsSo, mostly have the 2nd cinder-volume service on a second host staying "up". When trying to create a volume, it fails with "no host". The logs show that the cinder-scheduler on the 1st host (controller) is only looking at the storage defined in that node's cinder.conf, though the second node shows up in 'service-list' [so, only looking at controller@lvm]. Any pointers on how to get cinder to recognize the second host, and look for the backend that'15:30
*** setmason has joined #openstack-cinder15:31
dulek_homejms: Do you have synchronized times on both hosts?15:31
*** eharney has joined #openstack-cinder15:32
jmsdulek_home: Yes.15:32
*** leeantho has joined #openstack-cinder15:33
dulek_homejms: Can you post the logs on paste.openstack.org?15:33
*** BharatK has joined #openstack-cinder15:34
jmsHah it's still there... http://paste.openstack.org/show/238652/15:37
*** vokt has joined #openstack-cinder15:37
*** ronis has joined #openstack-cinder15:38
*** harlowja_at_home has joined #openstack-cinder15:40
*** merooney has joined #openstack-cinder15:40
*** tsekiyam has joined #openstack-cinder15:42
dulek_homejms: Maybe c-vol on the node has problems connecting to AMQP and hasn't sent capabilities?15:42
dulek_homejms: If it had the access to the DB then it would be able to bump up DB heartbeat and appear up in cinder service-list.15:43
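dulek_home's point can be illustrated with a minimal sketch (illustrative names and threshold, not cinder's actual code): the "up"/"down" state in `cinder service-list` is derived purely from the service's DB heartbeat, so a c-vol that can reach the DB but not AMQP still shows as up even though the scheduler never receives its capabilities.

```python
from datetime import timedelta

# Illustrative sketch, not cinder's actual code: "up" in
# `cinder service-list` only checks how recent the DB heartbeat is;
# capability reports travel separately, over AMQP, to the scheduler.
SERVICE_DOWN_TIME = timedelta(seconds=60)  # hypothetical threshold

def service_is_up(last_db_heartbeat, now):
    """A service counts as up if its DB heartbeat is recent enough."""
    return (now - last_db_heartbeat) <= SERVICE_DOWN_TIME
```

So a service can pass this check (up in service-list) while the scheduler, which never got its AMQP capability report, still answers "no valid host".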
dmzhowdy y'all, i thought i had cinder working but was just trying to attach a volume & got the following in the nova-api log:  ERROR nova.api.openstack [req-66ecda54-cddc-48f2-a58c-4ff49a7b056a None] Caught error: AmbiguousEndpoints, ; this started after I added my 2nd region cinder deployment; and yes i'm using the OS_REGION_NAME env variable & command line options on both the nova client and cinder client attach commands; i can create / delete disks just15:43
dmznot attach15:43
*** tsekiyama1 has left #openstack-cinder15:43
*** annegentle has quit IRC15:47
*** jwcroppe has quit IRC15:48
*** jwcroppe has joined #openstack-cinder15:48
jmsdulek_home: It does show up in: cinder service-list  -- output.15:51
*** lpetrut has quit IRC15:52
*** lpetrut has joined #openstack-cinder15:52
*** anshul has quit IRC15:53
*** lifeless_ is now known as lifeless15:53
*** barra204 has quit IRC15:54
jgriffithdo we have folks for a meeting today?15:54
jgriffiththingee: has asked that I stand in for him15:54
dulek_homejgriffith: Agenda is huge, so probably yes15:54
jgriffithdulek_home: :)15:54
jgriffithdulek_home: it keeps growing every time I open it15:55
dulek_homejms: Yeah, it would in case of having DB access but no AMQP access.15:55
*** dannywilson has joined #openstack-cinder15:56
DuncanTjms: Symptoms also match driver init failure... can you post up the cinder-volume log of the service that isn't getting used?15:57
*** jwcroppe has quit IRC15:59
jgriffithOk. let's go have a meeting..15:59
*** deepakcs has joined #openstack-cinder16:00
*** emagana has quit IRC16:00
*** julim has quit IRC16:00
*** agarciam has joined #openstack-cinder16:01
*** agarciam1 has quit IRC16:01
*** emagana has joined #openstack-cinder16:02
*** abhiram_moturi has quit IRC16:02
*** xyang has joined #openstack-cinder16:03
*** rajinir has joined #openstack-cinder16:03
*** julim has joined #openstack-cinder16:03
*** dustins_away is now known as dustins16:03
*** adurbin_ has joined #openstack-cinder16:04
*** merooney has quit IRC16:06
*** jistr has quit IRC16:07
jmsHrmm...think I found a pointer in /var/log/messages (Yes, another log file to remember to check!) ...16:09
jmsMay 27 11:04:26 smh0005 cinder-scheduler: AttributeError: 'RequestContext' object has no attribute 'timestamp'16:09
jmsMay 27 11:04:26 smh0005 cinder-scheduler: Logged from file dispatcher.py, line 15116:09
jmssmh0005 would be the host that has LVM and is being checked.16:09
jmsi.e. controller16:10
*** asselin__ has joined #openstack-cinder16:11
jmsDuncanT: For volume drive it's getting to:  2015-05-27 09:53:57.607 20835 INFO cinder.volume.manager [req-56fff6f6-f11f-4591-8fb7-414da435b604 - - - - -] Driver initialization completed successfully.16:11
jmsNeed more of volume.log from that node?16:11
*** asselin_ has quit IRC16:12
*** harlowja_at_home has quit IRC16:12
*** rushil_ has quit IRC16:14
*** nikeshm has joined #openstack-cinder16:16
*** _cjones_ has joined #openstack-cinder16:20
*** merooney has joined #openstack-cinder16:21
*** vilobhmm has joined #openstack-cinder16:22
*** lpetrut has quit IRC16:23
*** gardenshed has quit IRC16:25
*** crose has joined #openstack-cinder16:27
jmshttp://paste.openstack.org/show/240231/   <-- Any ideas on _which_ dispatcher.py is getting the error? I'm guessing oslo_messaging.rpc/oslo_messaging.notify ... And the way that's output, is it the amqp driver that's the _cause_ of the traceback, or is that driver output irrelevant to the traceback?16:28
*** pboldin has joined #openstack-cinder16:29
pboldinguys, please review https://review.openstack.org/18287116:29
*** ankit_ag has quit IRC16:29
*** cdelatte has joined #openstack-cinder16:29
*** julim has quit IRC16:31
e0nejms, pboldin: almost all are on irc meeting now16:31
openstackgerritEric Brown proposed openstack/cinder: VMware: insecure option should be exposed  https://review.openstack.org/17999116:32
*** dannywilson has quit IRC16:32
*** alecv has quit IRC16:33
*** pboldin has quit IRC16:34
*** daneyon_ has quit IRC16:38
*** Apoorva has joined #openstack-cinder16:40
*** rushil has joined #openstack-cinder16:40
*** Apoorva has quit IRC16:41
*** rhedlind has joined #openstack-cinder16:41
asselinnikeshm, out of curiosity, how did you solve your shade issue?16:41
*** merooney has quit IRC16:42
openstackgerritSean Chen proposed openstack/cinder: Tintri Cinder Volume driver  https://review.openstack.org/18514816:42
*** merooney has joined #openstack-cinder16:42
*** merooney has quit IRC16:43
*** julim has joined #openstack-cinder16:43
*** rushil has quit IRC16:44
*** rushil has joined #openstack-cinder16:45
nikeshmasselin: everything tried from scratch, but i think pip install shade==0.6.3 alone is enough16:45
asselin__ok thanks16:45
*** julim has quit IRC16:45
*** dulek has joined #openstack-cinder16:48
openstackgerritSean Chen proposed openstack/cinder: Tintri Cinder Volume driver  https://review.openstack.org/18514816:48
Swansonxyang: Thanks!16:48
*** annegentle has joined #openstack-cinder16:48
*** pboldin has joined #openstack-cinder16:48
*** lpetrut has joined #openstack-cinder16:50
*** Apoorva has joined #openstack-cinder16:50
nikeshmasselin: problem is i am not getting logs/devstacklog.txt.gz file in/opt/stack/logs of jenkins slave16:50
nikeshmso not able to know16:50
*** gardenshed has joined #openstack-cinder16:50
nikeshmwhy its failing16:51
asselin__nikeshm, it should be in /opt/stack/new/16:51
nikeshmyes16:51
nikeshmsorry same16:51
asselin__nikeshm, if you're using nodepool, do a nodepool hold <id> so it doesn't get deleted. then you can ssh into the vm and take a look16:51
*** avishay has quit IRC16:52
*** merooney has joined #openstack-cinder16:52
*** rhe00 has joined #openstack-cinder16:53
*** rhe00 has quit IRC16:53
*** annegentle has quit IRC16:53
*** emagana has quit IRC16:53
*** rhedlind has quit IRC16:54
*** gardenshed has quit IRC16:57
*** sgotliv has quit IRC16:59
*** agarciam has quit IRC16:59
openstackgerritSean Chen proposed openstack/cinder: Tintri Cinder Volume driver  https://review.openstack.org/18514816:59
*** sgotliv has joined #openstack-cinder16:59
*** rhedlind has joined #openstack-cinder17:00
e0nefyi, our upgrade scripts https://review.openstack.org/#/c/183814/ don't work well :(17:02
hemnaDuncanT, yah, I'm on the fence about it really.17:02
nikeshmasselin__ : http://paste.openstack.org/show/240316/   that file is no where in vm17:02
hemnamaking them work has been a PITA and unit testing them as well.17:02
*** deepakcs has quit IRC17:02
hemnabut it seems that oslo.db changes will break us anyway.17:03
nikeshmasselin__ : like you said I held the vm and ran the test again17:03
patrickeastjgriffith: got a min to talk about the image cache stuff? i want to make sure i understand your latest suggestions with the periodic update of glance images and stuff17:03
e0nehemna: agree17:03
*** hemna is now known as hemnafk17:03
hemnafk(beating)17:03
patrickeastjgriffith: i think i initially misunderstood (and probably still do)17:03
e0neDuncanT: live upgrades/rollback is only one case when downgrade could help17:04
jgriffithpatrickeast: yeah, gimmie just a minute to put some coffee on17:04
*** tobe has joined #openstack-cinder17:04
openstackgerritxing-yang proposed openstack/cinder: Not to deduct thin volume size in capacity filter  https://review.openstack.org/18576417:05
*** xyang has quit IRC17:05
*** crose has quit IRC17:06
*** jwcroppe has joined #openstack-cinder17:06
*** merooney has quit IRC17:07
*** merooney has joined #openstack-cinder17:07
SwansonDevstack being grumpy for anyone else?17:07
*** daneyon has joined #openstack-cinder17:08
*** daneyon_ has joined #openstack-cinder17:09
*** tobe has quit IRC17:09
asselin__Swanson, nikeshm is having issues, not sure if they're the same....17:10
asselin__^^17:10
dmzanyone using cinder in a multi-region configuratoin? I can't attach volumes since getting multi-region setup :( (it's the only thing not working in multi-region)17:10
*** annegentle has joined #openstack-cinder17:10
DuncanTe0ne: For existing migrations, I think you're right that a second, fixup migration is the only way forward... some people have already run the 'bad' migration so we end up in a place where the state of your db depends on when you upgraded, which is insane17:11
Swansonasselin__: Oodles of role add and endpoint add failures.  looks like a call changed...17:11
*** vilobhmm has quit IRC17:12
*** ociuhandu has joined #openstack-cinder17:12
*** annegentle has quit IRC17:13
*** daneyon has quit IRC17:13
DuncanTdmz: We use it multi-region, have been for a long time17:13
*** ebalduf has joined #openstack-cinder17:13
*** agarciam has joined #openstack-cinder17:13
*** vilobhmm has joined #openstack-cinder17:13
*** markus_z has quit IRC17:14
*** xyang has joined #openstack-cinder17:14
*** julim has joined #openstack-cinder17:14
*** akerr has quit IRC17:14
xyangwinston-d, patrickeast, jgriffith: https://review.openstack.org/#/c/185764/17:15
*** akerr has joined #openstack-cinder17:15
xyangfree virtual capacity is already used for thin provisioning, so no need to add that in the capacity filter17:15
*** vilobhmm has quit IRC17:16
xyangwinston-d, patrickeast, jgriffith: I just changed how the free physical capacity is used for thin17:16
*** e0ne is now known as e0ne_17:16
dulek_homeDuncanT: Do we want to divide the work for c-vol A/A at next week's meeting?17:16
jgriffithxyang: yes, perfect17:18
jgriffithxyang: that's what I was trying to get at17:18
xyangjgriffith: great, let me know your comments on the patch17:19
jgriffithpatrickeast: shout if/when you want17:19
dmzDuncanT, when I added my 2nd region I have no longer been able to mount volumes; were there any cinder-specific changes needed to make it / nova understand which specific region it's working in? I've set os-region-name on the command line arg & OS_REGION_NAME env variable with no success :(17:19
patrickeastjgriffith: i’ve got a few min now (meeting in 10)17:20
jgriffithpatrickeast: cool, let's get going here then :)17:20
*** e0ne_ is now known as e0ne17:20
patrickeastjgriffith: so if i understand correctly your suggestion is that we have this image cache hold onto the volumes (or snapshots?) of the public snapshots, and snapshots of volume-backed instances17:21
patrickeastjgriffith: kind of like another way we get stuff seeded in the cache17:21
patrickeastjgriffith: and we could have some periodic task to update stuff in the cache from glance17:21
jgriffithpatrickeast: yeah, so specifically, what I was envisioning was:17:21
*** krtaylor has quit IRC17:22
jgriffithpatrickeast: create-volume from image ---> creates a volume template using the "image-tenant"17:22
jgriffithimage-tenant creates volumes, then creates a "public snapshot"17:22
jgriffithall tenants can then access that public snapshot and do create-from snap using it17:22
jgriffithit's *different*17:23
jgriffithbut same end result17:23
patrickeastoh so for all of the volumes backing the cache we have a public snapshot available?17:23
jgriffithif people hate it, that's cool17:23
jgriffithpatrickeast: right17:23
patrickeastthat makes sense, gives people a short-cut to get fast creates if they want17:23
jgriffithpatrickeast: then there needs to be some new internals that "check" if a backends has a snap to create the bootable volume from17:24
jgriffithpatrickeast: right, so people can actively use the snapshot themeselves, or let the driver internals figure it out17:24
jgriffitheither way17:24
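The cache flow jgriffith walks through above (create-from-image seeds a template volume under an internal "image tenant" plus a public snapshot; later creates clone that snapshot) can be sketched roughly like this; all function and field names are hypothetical, not Cinder's API:

```python
def create_bootable_volume(backend, image_id, size_gb):
    """Hypothetical sketch of the public-snapshot image cache idea.

    First create from a given image builds a template volume owned by an
    internal 'image tenant' plus a public snapshot of it; subsequent
    creates for any tenant just clone that snapshot."""
    snap = backend["public_snaps"].get(image_id)
    if snap is None:
        # Cache miss: seed the cache under the image tenant.
        template = {"owner": "image-tenant", "image": image_id}
        snap = {"of": template, "public": True}
        backend["public_snaps"][image_id] = snap
        method = "download-from-glance"
    else:
        # Cache hit: any tenant can do create-from-snapshot on the public snap.
        method = "create-from-snapshot"
    return {"image": image_id, "size": size_gb, "via": method}

backend = {"public_snaps": {}}
first = create_bootable_volume(backend, "cirros", 10)
second = create_bootable_volume(backend, "cirros", 20)
print(first["via"], "->", second["via"])
```

The point of the design, as discussed, is that tenants can either use the public snapshot explicitly or let the backend internals take the shortcut for them.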
patrickeastgotcha17:24
jgriffithpatrickeast: give it some thought, could be I missed some details that make it undesireable17:24
*** akerr has quit IRC17:24
jgriffithbut for the most part it seemed like a useful addition to the workflow17:24
*** e0ne has quit IRC17:25
jgriffithand a bit more flexible17:25
patrickeasti like the idea, and do think it would be useful (i’ve been asked about public snaps by the sales guys before) but I'm kind of thinking maybe that's an iteration once we have the cache17:25
jgriffithit also gives admins some control of things17:25
jgriffithpatrickeast: yeah, so I initially didn't like the idea of public snapshots at all17:25
*** merooney has quit IRC17:25
jgriffithpatrickeast: but started asking around more, and talked with the folks doing the AWS compat stuff some more17:26
patrickeastits like a feature that knows about the cache, and sits on top of it, but doesn’t really change the proposed flow of images->volumes that are re-used as needed17:26
jgriffithpatrickeast: I'm warming up to the idea17:26
patrickeastyea i still don’t quite see the benefit, but there is enough demand17:26
jgriffithpatrickeast: exactly17:26
jgriffithpatrickeast: even better, it can change if you want, but doesn't have to17:26
patrickeastok cool, so i’ll update the spec to mention this as a future use-case to make sure we track it somewhere17:27
patrickeastand maybe file a bp and link them17:27
jgriffithsweet17:27
*** gardenshed has joined #openstack-cinder17:27
dmzDuncanT, any suggestions would be most welcome :) everything google tells me is that i need the region_name passed in but i am and horizon is also having problems so it must be a config option in my api somewhere :(17:27
openstackgerritDaniel Allegood proposed openstack/cinder: Updating cmd/manage.py get_arg_string() argument parser  https://review.openstack.org/18609417:27
winston-djgriffith, patrickeast: I haven't read the image-cache spec, so bear with me for my dumb question: is the image-cache supposed to be maintained by cinder and made available to all backends Cinder manages?17:28
patrickeasti wonder if this starts to bleed into the glance cinder backend… i heard it mentioned in the public snapshots session, but i don’t really know exactly what the scope is for it17:28
patrickeastwinston-d: yea the idea is you can enable it with a backend config option and anyone can use it17:28
patrickeastwinston-d: it only makes sense for some backends, so it would require some docs to suggest when it's best to enable17:28
winston-dpatrickeast: do you have the spec link handy?17:29
patrickeastwinston-d: https://review.openstack.org/#/c/182520/17:29
winston-dpatrickeast: perfect, thx17:29
*** gardenshed has quit IRC17:30
*** ebalduf has quit IRC17:30
*** emagana has joined #openstack-cinder17:31
jgriffithwinston-d: if you haven't seen it, it's based loosely off of this: https://goo.gl/HWtizZ17:32
*** sgotliv has quit IRC17:32
*** crose has joined #openstack-cinder17:33
*** ebalduf has joined #openstack-cinder17:33
winston-djgriffith: thx17:33
dulek_homejgriffith: Did you have time to take a look at the TaskFlow refactoring?17:33
*** crose has quit IRC17:34
dulek_homejgriffith: To me it seems that there are at least two separate issues - too much logging and the fact that the code is harder to read.17:35
jgriffithdulek_home: sadly no, I took the week-end off and did "not much at all"17:35
*** aix has quit IRC17:35
jgriffithdulek_home: the logging doesn't bother me as much as it does some folks17:35
dulek_homejgriffith: Similar stuff from my side, jetlag killed my productivity. ;)17:35
jgriffithdulek_home: I can use grep pretty easily :)17:35
*** julim has quit IRC17:35
*** vilobhmm has joined #openstack-cinder17:35
jgriffithdulek_home: the complexity and following the code is the thing for me17:35
jgriffithdulek_home: hehe.. yeah, where is home?17:36
dulek_homejgriffith: Poland, from UTC-7 to UTC+2. ;)17:37
*** crose has joined #openstack-cinder17:37
nikeshmasselin: got the issue,my vm ip is 10.0.1.x and FIXED_RANGE=${DEVSTACK_GATE_FIXED_RANGE:-10.1.0.0/20} in devstack-gate/devstack-vm-gate.sh17:37
nikeshmsorry my vm ip is 10.1.0.x17:37
asselin__nikeshm, ok cool17:37
dulek_homejgriffith: I can work with harlowja_ to get logging right and start for example with refactoring easier flows - like these in c-api and c-sch.17:37
harlowja_dulek_home cool, sputnix (another taskflow core) has some ideas (and wants something similar there to)17:38
winston-djgriffith, dulek_home: both log and 'hard to read' part bother me17:38
harlowja_make exceptions betteer (more looking like tracebacks)17:38
dulek_homejgriffith: It would be great if you could take a look on c-vol one. I haven't seen this code before TaskFlow got in, so you're way more familiar.17:38
*** vilobhmm has quit IRC17:38
harlowja_dulek_home https://review.openstack.org/#/c/184814/17:38
*** julim has joined #openstack-cinder17:39
harlowja_and for all it's worth, if the initial code in cinder can be made better (easier to read...), let's do it (it was after all made a year + ago)17:39
*** juzuluag has quit IRC17:39
*** hemnafk is now known as hemna17:39
dulek_homeharlowja_: What does sputnix actually want to change?17:40
dulek_homeharlowja_: You mean exceptions?17:40
harlowja_dulek_home soooo, when a failure occurs, add a way to make the failure string representation include various engine state information (what has executed, what the active results were...)17:40
harlowja_this makes it look more like a stack trace17:41
dulek_homeharlowja_: Ah, that's a cool thing. Now the exception is likely to be displayed twice.17:41
harlowja_possibly, not sure what cinder is doing with it :-P17:41
harlowja_dulek_home http://docs.openstack.org/developer/taskflow/types.html#module-taskflow.types.failure just needs to have a reference to the engine it came from (and its __str__ or other method needs to extract info from it)17:42
harlowja_probably http://docs.openstack.org/developer/taskflow/types.html#taskflow.types.failure.Failure.pformat (that method?) needs to do this17:42
harlowja_something along that line17:42
dulek_homeOkay17:43
nikeshmasselin: thanks, my provider has a flavor with 16gb ram and I'm giving the same in nodepool; the dashboard also shows that the nodepool vm has 16gb ram, but after logging in it shows only 6gb ram17:43
harlowja_dulek_home right now cinder has https://github.com/openstack/cinder/blob/master/cinder/flow_utils.py#L48 (which it could also do things with to make similar information)17:43
harlowja_that thing is what's attaching to the engine state transitions and logging failures and stuff17:44
dulek_homeharlowja_: Yeah, I'm aware of that stuff :)17:44
harlowja_*so cinder code could format the stuff as it wants already17:44
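What harlowja_ is describing (a listener hooked to engine state transitions that, on failure, already holds the whole execution history and can render it like a stack trace) can be caricatured without taskflow itself; the classes and names below are invented for illustration, not taskflow's API:

```python
class TinyEngine:
    """Toy stand-in for a taskflow engine: runs tasks in order and
    notifies a listener on every state transition."""

    def __init__(self, tasks, listener):
        self.tasks, self.listener = tasks, listener

    def run(self):
        for name, fn in self.tasks:
            self.listener(name, "RUNNING", None)
            try:
                result = fn()
            except Exception as exc:
                # On failure the listener has the whole history, so it can
                # render something that reads like a stack trace of the flow.
                self.listener(name, "FAILURE", exc)
                raise
            self.listener(name, "SUCCESS", result)

transitions = []
def listener(task, state, detail):
    transitions.append((task, state, detail))

def boom():
    raise RuntimeError("boom")

eng = TinyEngine([("extract", lambda: 42), ("create", boom)], listener)
try:
    eng.run()
except RuntimeError:
    pass

summary = " -> ".join(f"{t}:{s}" for t, s, _ in transitions)
print(summary)
```

Cinder's real hook point for this kind of formatting is the listener in cinder/flow_utils.py referenced above.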
*** dustins has quit IRC17:45
jmshttp://paste.openstack.org/show/240231/   <-- Any ideas on _which_ dispatcher.py is getting the error? I'm guessing oslo_messaging.rpc/oslo_messaging.notify ... And the way that's output, is it the amqp driver that's the _cause_ of the traceback, or is that driver output irrelevant to the traceback?17:45
asselin__nikeshm, not sure....17:45
asselin__nikeshm, signing off...will be back later17:46
*** jordanP has quit IRC17:46
winston-djms: before digging into that, how's your debugging so far?17:46
dulek_homejgriffith: So do you agree on the approach to the refactoring?17:47
*** earlephilhower has joined #openstack-cinder17:47
nikeshmasselin:ok17:47
jgriffithdulek_home: kinda17:47
dulek_homejgriffith: What's bugging you?17:48
jgriffithdulek_home: the thing is taskflow doesn't really do anything with c-vol17:48
jgriffithdulek_home: it replaced code in c-api and c-manager17:48
jgriffithdulek_home: so what you proposed means I don't have to do anything :)17:48
jgriffithdulek_home: not that there's anything wrong with that :)17:48
jmswinston-d: Still the same.... the controller just doesn't look for the backends on the second node. That trace is the only thing I'm seeing... not seeing any rabbit communication errors, no auth errors, etc... :/17:48
*** vilobhmm has joined #openstack-cinder17:48
*** vilobhmm has quit IRC17:48
*** merooney has joined #openstack-cinder17:49
*** vilobhmm has joined #openstack-cinder17:49
dulek_homejgriffith: https://github.com/openstack/cinder/blob/master/cinder/volume/flows/manager/create_volume.py17:49
dulek_homejgriffith: This is what I've meant.17:49
jgriffithdulek_home: so that's the only call we really use taskflow for currently17:49
jgriffithdulek_home: yeah, sure17:50
jgriffiththat sounds good17:50
jmswinston-d: Now I could be absolutly looking over some obvious thing ... :/17:50
*** emagana has quit IRC17:50
dulek_homejgriffith: API one also has ~850 lines of code.17:50
*** merooney has quit IRC17:51
*** asselin__ has quit IRC17:51
winston-djms: did you check c-sch log and find the zfs backend reporting stats?17:51
harlowja_dulek_home https://review.openstack.org/#/c/185116/ could also make things easier/more elegant(?) for cinder to17:52
harlowja_see example in there17:52
*** akerr has joined #openstack-cinder17:52
dulek_homejgriffith: Just to clarify for myself - what's the difference between c-manager and c-vol? There's c-vol in DevStack...17:52
*** ebalduf has quit IRC17:52
*** dannywilson has joined #openstack-cinder17:52
*** chlong has quit IRC17:53
*** vilobhmm has quit IRC17:53
jgriffithdulek_home: so they're kinda the same17:54
jgriffithdulek_home: if you look in the code though...17:54
dulek_homeharlowja_: I don't think that simple building of the flow can be made easier. ;)17:54
jgriffithdulek_home: manager.py actually coordinates calls to the volume driver17:54
jgriffithdulek_home: if you ask guitarzan it's pointless :)17:54
*** dannywil_ has joined #openstack-cinder17:55
harlowja_dulek_home hmmm, idk, maybe u are right :-P17:55
jgriffithharlowja_: dulek_home one thing I was looking at was separating things a bit17:56
*** dannywilson has quit IRC17:56
jgriffithharlowja_: dulek_home building the flow, vs running the flow17:56
jgriffithharlowja_: I think that might help?17:56
harlowja_isn't that already sorta split?17:56
jgriffithIDK17:56
jgriffithharlowja_: yes17:56
jgriffithharlowja_: you actually did a bunch of work on that for me once already :)17:56
jgriffithharlowja_: dulek_home I'll spend some time looking at it17:57
harlowja_possibly, my memory is getting bad, lol17:57
harlowja_getting old and all17:57
harlowja_lol17:57
jgriffithharlowja_: hehe17:57
jgriffithwelcome my friend!17:57
harlowja_:-P17:57
harlowja_oh hi, who are u17:57
harlowja_how did i end up here, lol17:57
dulek_home:D17:57
*** merooney has joined #openstack-cinder17:58
jmswinston-d: Not on the controller sched. When I had the sched running on the 2nd host I was getting both (in API log anyway), but not since I disabled it. On the 2nd host I was seeing both LVM and the zfs backend in sched when I had sched running on it.17:58
* dulek_home Still trying to get c-vol and c-manager things right.17:58
dulek_homejgriffith: So what you mean is c-vol is volume driver and c-manager is the actual service running?17:59
jgriffithdulek_home: rpc calls get routed to the manager which then routes to the actual driver17:59
jgriffithdulek_home: the combination of manager and driver makes up the c-vol service17:59
*** rmesta has quit IRC17:59
jgriffithdulek_home: the driver code is "only" interested in accessing the device18:00
*** rmesta has joined #openstack-cinder18:00
jgriffithdulek_home: the manager is a bridge between the device-driver and the rpc layer18:00
*** pradipta has quit IRC18:00
*** rushiagr_away is now known as rushiagr18:00
dulek_homejgriffith: Okay, got it. So actually the flow is running in the manager (which is part of c-vol).18:00
dulek_homejgriffith: I get the difference though.18:01
winston-djms: so basically nothing (lvm/zfs) on your scheduler log on 1st host?18:01
*** IanGovett has quit IRC18:01
jgriffithdulek_home: right18:02
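The layering jgriffith just explained (rpc layer -> manager -> device driver, with manager + driver together forming the c-vol service) boils down to something like this; the classes and method names are hypothetical stand-ins, not Cinder's actual classes:

```python
class FakeDriver:
    """Stands in for a backend driver: only interested in talking to the device."""

    def create_volume(self, name, size_gb):
        return {"name": name, "size": size_gb, "on": "fake-device"}

class FakeManager:
    """Stands in for manager.py: the bridge between the rpc layer and the
    device driver. It receives rpc calls and routes them to the configured
    driver; manager + driver together make up the c-vol service."""

    def __init__(self, driver):
        self.driver = driver

    def handle_rpc(self, method, **kwargs):
        # In real Cinder this arrives over AMQP; here we call it directly.
        return getattr(self.driver, method)(**kwargs)

cvol = FakeManager(FakeDriver())
vol = cvol.handle_rpc("create_volume", name="vol-1", size_gb=10)
print(vol["on"])
```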
*** dims_ has quit IRC18:02
*** dims_ has joined #openstack-cinder18:03
dulek_homejgriffith: Thanks for explaining. Okay, gotta go. Have a nice day everyone!18:03
*** merooney has quit IRC18:03
*** dulek_home has quit IRC18:03
*** manishg has joined #openstack-cinder18:05
*** e0ne has joined #openstack-cinder18:06
*** e0ne is now known as e0ne_18:06
*** e0ne_ is now known as e0ne18:07
jmswinston-d: lvm is. The one that's defined in that hosts cinder.conf18:10
*** merooney has joined #openstack-cinder18:10
*** setmason has quit IRC18:11
openstackgerritOpenStack Proposal Bot proposed openstack/cinder: Updated from global requirements  https://review.openstack.org/18611218:11
*** merooney has quit IRC18:12
*** setmason has joined #openstack-cinder18:13
winston-djms: is there a problem for your 2nd node connecting to your rabbit/mysql (which I suppose are on the 1st host)?18:15
jmswinston-d: Nope, connects fine. Haven't seen any sort of errors from rabbit, a failed mysql connection, etc...18:16
*** krtaylor has joined #openstack-cinder18:16
*** annashen has joined #openstack-cinder18:17
jmswinston-d: When I had the scheduler running on the zfs node, I was able to create a volume (cinder create), *if* I created it on that node... but going through the controller still failed.18:17
winston-djms: then we can't explain why no stats update can be seen on scheduler.18:17
guitarzanjgriffith: hah, hey, I use c-vol too :)18:17
winston-djms: can you post the output of 'keystone endpoint-list' from both 1st & 2nd node?18:18
winston-djms: when you create from the controller, what error did you see? vol stuck at 'creating' or 'error'?18:19
jmswinston-d: http://paste.openstack.org/show/240423/18:20
*** BharatK has quit IRC18:20
*** setmason has quit IRC18:21
*** kmartin has quit IRC18:22
jgriffithguitarzan: hehe.. but not because you want to :)18:23
*** vilobhmm has joined #openstack-cinder18:23
winston-djms: this is for 1st node?18:23
jmswinston-d: Error ... the error is: No weighted hosts found ... It's trying on the LVM storage, but fails because the volume_backend_name extra-spec doesn't match18:23
*** merooney has joined #openstack-cinder18:23
guitarzanjgriffith: hmm, I'm not sure if that's true or not... might be18:23
jgriffithguitarzan: LOL... I could be confused18:24
winston-djms: who is trying? the scheduler on first or 2nd node?18:24
jmswinston-d: That is ran from the 2nd node.18:24
guitarzanit does make some decisions that make our lives a little miserable18:24
guitarzanbut it doesn't typically do much harm :)18:24
jgriffithguitarzan: the conversation is coming back to me now :)18:24
jmswinston-d: The 1st node is trying. That's the only node that has a scheduler running (after I stopped it on the 2nd node, per my understanding from yesterday that there should be a single one).18:25
*** ankit_ag has joined #openstack-cinder18:25
*** merooney has quit IRC18:26
*** fthiagogv has joined #openstack-cinder18:26
*** vilobhmm1 has joined #openstack-cinder18:27
jmswinston-d: But even when I did have the sched running on the zfs host, it failed creation unless I ran the 'cinder create' on the zfs host. Would get the no host error if I ran it from the controller cli, or through horizon (I think that's the name of that web thing).18:27
winston-dxyang: could you explain why free capacity is only deducted when the backend doesn't support thin? this logic wasn't there and doesn't seem related to the bug.18:29
winston-dxyang: ok, i just noticed that you left comment on previous change. looking18:30
xyangwinston-d: hi, ok18:30
*** ankit_ag has quit IRC18:30
*** vilobhmm has quit IRC18:30
*** merooney has joined #openstack-cinder18:31
*** setmason has joined #openstack-cinder18:31
openstackgerritAlfredo Moralejo proposed openstack/cinder: rbd driver in cinder does not manage glance images multi-location  https://review.openstack.org/18261618:32
winston-dxyang: i don't agree with your argument that free capacity for thin being deducted (reserved) by the scheduler would cause any significant problem.18:32
*** merooney has quit IRC18:33
winston-dxyang: as you mentioned in your reply to wang hao's comment, scheduler stats are being updated every 60 secs.18:33
xyangwinston-d: since free_capacity is for physical capacity, why do we need to deduct the new volume size from it?  that's why I think they are related18:33
xyangwinston-d: I guess I don't understand why we deduct the volume size there18:33
*** BharatK has joined #openstack-cinder18:33
*** dulek has quit IRC18:33
*** dustins has joined #openstack-cinder18:33
xyangwinston-d: is the free_capacity we get in capacity_filter a result after the deduction?18:34
winston-dxyang: if you don't, basically you are treating the thin backend as infinite for 60 secs.18:34
*** setmason has quit IRC18:34
winston-dwhich is totally wrong18:34
*** kmartin has joined #openstack-cinder18:34
*** setmason has joined #openstack-cinder18:34
winston-dimagine that you have 200 vol creates coming in within 60 secs, before your backend is able to send updates to sch.18:35
xyangwinston-d: if a new thin volume is 10GB and not written yet, its physical capacity is 018:35
guitarzanwinston-d: thin is infinite! :)18:36
winston-dxyang: so you can create as many vols as you want, that's not right. if that's the case, why do we bother to do an oversubscription ratio at all?18:36
xyangwinston-d: I'd like patrickeast to take a look too. It seems that we should use the same standard in both places18:36
hemnawinston-d, doesn't every driver basically have that problem ?18:37
winston-dthese two are totally unrelated issue.18:37
*** rushil has quit IRC18:37
winston-dif you want to fix that, please create a separate bug18:37
hemnaas our driver doesn't keep track of # of volumes created between get_volume_stats() updates18:37
winston-dhemna: that's why i want scheduler to be cautious18:37
patrickeastwinston-d: xyang: you guys are talking about the changes in host_manager, right?18:37
xyangwinston-d: so during this 60 seconds, we assume the new thin volume space will be totally written18:37
hemnaand like xyang says, if each of those volumes created doesn't get data, then the reporting from the array will be the same as prior to their creation18:38
xyangwinston-d: then in capacity filter, we think a new thin volume will not be written.  that is where my confusion is coming from18:38
hemnawe might have allocated #'s though, not sure18:38
patrickeastso we either play it safe and assume they might be full, or assume they are empty and take up what amounts to 0 space18:38
patrickeasteither way its not super great18:38
winston-dxyang: if you want, you can do more complex math, i.e. deduct the size/oversub_ratio18:38
winston-dxyang: but no deduct at all will create problem.18:38
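Between stats updates, winston-d's suggestion amounts to something like the sketch below. The numbers and field names are illustrative only (this is not Cinder's actual host_manager code, though `max_over_subscription_ratio` mirrors the option being discussed):

```python
def consume_from_backend(state, size_gb):
    """Deduct a just-scheduled volume from cached backend stats so the
    scheduler stays cautious until the next 60s stats report.

    For thin provisioning, deducting the full size assumes the volume may
    be fully written; deducting size / oversubscription ratio is the more
    optimistic middle ground discussed above."""
    state["provisioned_gb"] += size_gb
    if state["thin_provisioning_support"]:
        # Optimistic: assume average fill matching the oversubscription ratio.
        state["free_gb"] -= size_gb / state["max_over_subscription_ratio"]
    else:
        # Thick: the space is gone the moment the volume exists.
        state["free_gb"] -= size_gb

backend = {"free_gb": 100.0, "provisioned_gb": 0.0,
           "thin_provisioning_support": True,
           "max_over_subscription_ratio": 20.0}
for _ in range(200):   # winston-d's case: 200 creates of 10GB before the next update
    consume_from_backend(backend, 10)
print(round(backend["free_gb"], 1), backend["provisioned_gb"])
```

With no deduction at all, all 200 creates would have sailed through against stale stats; with it, the cached free space is exhausted exactly when the oversubscription budget is.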
patrickeasti’m ok with deducting the full amount there18:39
patrickeastas long as we keep the other changes in capacity_filter.py18:39
patrickeastthey are two different issues in my mind18:39
xyangwinston-d: I guess what I don't understand is that, if we deduct in host_manager, why don't we consider the size in capacity_filter?18:39
winston-dxyang: you can have your QA team to design a case for this and see how your backend works.18:39
winston-dpatrickeast: yes, two different issues.18:40
xyangwinston-d: will this (free_capacity-volume_size) be used in capacity_filter?18:40
winston-dxyang: we do in the capacity filter, i don't understand your argument, TBH18:40
*** gardenshed has joined #openstack-cinder18:40
winston-dxyang: it will, for next create request18:40
jgriffithwinston-d: FWIW I think you're describing exactly what I was trying to propose18:41
asselinnikeshm, btw, your ram issue is not likely related to nodepool. Take a look at nova and see what's going on there18:41
*** akerr has quit IRC18:43
xyangwinston-d: are you okay with the changes in capacity_filter only?18:43
*** gardenshed has quit IRC18:44
*** akerr has joined #openstack-cinder18:45
jungleboyjjgriffith: Do you know if it is still possible to run into race conditions in c-api if running active active?18:45
jungleboyjhttps://etherpad.openstack.org/p/kilo-crossproject-ha-integration18:45
e0nejungleboyj: i've got race in c-api in Juno18:46
openstackgerritRushil Chugh proposed openstack/cinder: Avoid LUN ID collisions in NetApp iSCSI drivers  https://review.openstack.org/17923918:47
*** ebalduf has joined #openstack-cinder18:47
*** merooney has joined #openstack-cinder18:47
*** rhedlind has quit IRC18:47
jungleboyje0ne: Ok, and I don't think we did anything in Kilo to change that either.18:47
e0nejungleboyj: agree18:47
*** vilobhmm has joined #openstack-cinder18:47
*** vilobhmm1 has quit IRC18:48
*** rushil has joined #openstack-cinder18:48
jgriffithjungleboyj: well *somebody* added a comment that "this it valid issue in Juno" so apparently it's a problem18:48
jgriffithjungleboyj: note they said "this it" not me :)18:48
*** BharatK has quit IRC18:48
e0nejgriffith: maybe it was me18:49
jgriffithe0ne: darn you! :)18:49
jungleboyjjgriffith: So, you would agree that we still have issues there?18:50
e0nefixed to "this is"18:50
*** merooney has quit IRC18:50
jungleboyj:-)18:50
jgriffithjungleboyj: not saying that :)18:50
jgriffithjungleboyj: so that's an odd one to call out IMHO18:51
*** merooney has joined #openstack-cinder18:51
jgriffithjungleboyj: we have all sorts of issues in other places I think (races)18:51
jgriffithjungleboyj: but I think DuncanT has been the most involved/vocal on that topic18:51
e0nehttps://etherpad.openstack.org/p/cinder-active-active-vol-service-issues18:51
* jungleboyj missed that tag on the end of the line. Sorry about that.18:52
winston-dxyang: I've left new comment.18:52
*** tobe has joined #openstack-cinder18:53
xyangwinston-d: thanks18:54
*** leeantho has quit IRC18:55
xyangwinston: I am ok just to make changes in capacity_filter now.  then both patrickeast and I can do more testing to see if anything else needs to be changed18:55
patrickeastsounds good to me18:56
winston-dxyang, patrickeast: thx18:58
*** tobe has quit IRC18:58
*** nlevinki has joined #openstack-cinder18:58
*** e0ne has quit IRC18:59
winston-djms: so, i'm pretty sure there is something wrong with your setup.19:00
*** ik__ has joined #openstack-cinder19:00
ik__any idea why this would happen? http://paste.openstack.org/raw/240475/19:00
winston-djms: e.g. your c-vol service on 2nd isn't able to send updates to scheduler on 1st node.19:00
*** rushiagr is now known as rushiagr_away19:01
*** bandwidth has joined #openstack-cinder19:01
bandwidthI have a hard time configuring notification_topics for cinder (kilo)19:02
bandwidthit was working fine before Kilo, but now, the bindings aren't configured properly in rabbitmq19:02
bandwidthsomeone can help?19:02
*** merooney has quit IRC19:03
*** ik__ has left #openstack-cinder19:05
winston-dbandwidth: what exact problem are you seeing with notification_topics with kilo cinder?19:07
*** eharney has quit IRC19:08
vilobhmmscottda : ping19:08
scottda:Hi vilobhmm19:10
vilobhmmwhen you see that you can't detach volumes as they are stuck in "detaching" state how do you get around this problem….how do you make sure that the data on those volumes is accessible (as we can't attach the volume to any other instance)19:10
nikeshmasselin: tried to create a cirros vm with an 8 GB flavor, ssh'd to the cirros vm, ram is 8 GB, but the same thing tried with a nodepool vm it's 6 GB; you are correct, better to check nova19:10
vilobhmmsince the state of the volumes is detaching (and not available)19:10
scottdaYou have to see the state of the Volume(s) in Nova (is it attached to an instance), in Nova BlockDeviceMapping table (if BDM table says it's attached, you won't be able to attach it elsewhere) ....19:11
scottdaState in Cinder DB (needs to be 'available' to re-attach)..19:11
scottdaand state on storage backend (needs a Cinder terminate_connection or manual un-export of volume to compute host.19:12
scottdaIn other words, solving the problem is a big mess19:12
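scottda's layer-by-layer checklist above can be written down as a small helper. This is purely illustrative: the argument names, status strings, and messages are invented for this sketch and are not a cinder or nova API.

```python
def stuck_detach_todo(nova_attached, in_bdm_table, cinder_status, backend_exported):
    """List the cleanup steps still needed before a stuck volume can be
    re-attached elsewhere. Illustrative sketch only -- not a real API."""
    todo = []
    if nova_attached:
        todo.append("detach the volume from its instance in Nova")
    if in_bdm_table:
        todo.append("clear the stale BlockDeviceMapping entry in Nova")
    if cinder_status != "available":
        todo.append("reset the Cinder DB status to 'available'")
    if backend_exported:
        todo.append("terminate_connection / un-export on the storage backend")
    return todo

# a volume stuck in 'detaching' with a stale BDM row and a live export:
for step in stuck_detach_todo(False, True, "detaching", True):
    print(step)
```

The point of the sketch is that all four layers have to agree before a re-attach can succeed, which is why fixing one of them (e.g. only a cinder reset-state) is not enough.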
*** annashen has quit IRC19:14
*** eharney has joined #openstack-cinder19:15
xyangwinston-d: are you still there?19:16
scottdavilobhmm: Some notes and links here: https://etherpad.openstack.org/p/CinderReset-stateAndForceDelete19:17
*** merooney has joined #openstack-cinder19:17
winston-dxyang: yes19:17
xyangwinston-d: line 85 is not covered.19:18
xyangwinston-d: https://review.openstack.org/#/c/185764/3/cinder/scheduler/filters/capacity_filter.py19:18
vilobhmmscottda : I agree…as DuncanT, myself and all of us saw in the active-active volume services design summit talk…just wanted to know if there was a workaround rather than going the db route19:18
vilobhmmthanks for the link19:18
xyangwinston-d: the case for thin and over subscription >=1, do we want to check if free capacity is 019:19
nikeshmasselin : this i found in nova for nodepool vm http://paste.openstack.org/show/240527/19:20
*** kmartin has quit IRC19:20
winston-dxyang: i don't think we need an extra check, because it's already covered here: https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/capacity_filter.py#L11419:22
xyangwinston-d: that is using free_virtual_capacity. do we also want to check free_physical_capacity is 0?19:23
xyangwinston-d: if free physical capacity, do we still allow thin provisioning to happen19:23
xyangwinston-d: if free physical capacity is 019:24
winston-dxyang: nope, free_virtual = free * host_state.max_over_subscription_ratio19:24
winston-dxyang: free virtual will be 0 if free is 019:24
winston-dxyang: and free_virtual >= volume_size will be False19:24
xyangwinston-d: that is a good point19:25
winston-dmake sense?19:25
xyangwinston-d: yes19:25
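The check winston-d is pointing at can be sketched in a few lines of Python. This is a simplified model of the thin-provisioning branch of cinder's capacity filter using the formula quoted above (free_virtual = free * max_over_subscription_ratio); the function name and signature are illustrative, not the exact upstream code.

```python
def backend_passes(free_capacity, max_over_subscription_ratio, volume_size,
                   thin_provisioning_support=True):
    """Simplified model of the capacity check: virtual free space is
    physical free space scaled by the over-subscription ratio, so a
    backend with 0 GB physically free fails no matter the ratio."""
    if thin_provisioning_support and max_over_subscription_ratio >= 1:
        free_virtual = free_capacity * max_over_subscription_ratio
        return free_virtual >= volume_size
    # thick provisioning: compare against physical free space directly
    return free_capacity >= volume_size

# winston-d's point: no extra physical-space check is needed, because
# free == 0 already makes free_virtual == 0.
print(backend_passes(0, 20.0, 1))     # False
print(backend_passes(10, 20.0, 150))  # True: 200 GB of virtual free space
```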
scottdavilobhmm: I'm working on some POC code to fix, but it's a few days out.19:25
winston-dscottda: actually, if a volume is marked as 'available' in cinder but still in Nova BDM, no one will be able to stop a user from attaching the volume to other instances.19:26
vilobhmmlet me know if any help is needed19:26
vilobhmmwe are seeing this internally as well on a PoC and would like to help you so that it benefits everyone19:27
*** theanalyst has quit IRC19:27
scottdawinston-d: I think Nova will stop the user from attaching again, since it searches the BDM table and will fail if it finds an entry for the volume.19:29
*** theanalyst has joined #openstack-cinder19:30
vilobhmmwinston-d, scottda : I dont see status field in nova.bdm http://paste.openstack.org/show/240528/19:30
vilobhmmscottda : I agree what you said19:30
*** ebalduf has quit IRC19:32
*** madskier has joined #openstack-cinder19:32
*** vokt has quit IRC19:34
winston-dscottda, vilobhmm: can you point out where Nova does the check? I couldn't find it after going through the API extension, Compute Api and Compute Manager.19:35
openstackgerritxing-yang proposed openstack/cinder: Not to deduct thin volume size in capacity filter  https://review.openstack.org/18576419:35
*** xyang has quit IRC19:35
vilobhmmhttps://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L30919:36
*** kvidvans has quit IRC19:36
*** jgriffith has quit IRC19:36
vilobhmmwinston-d, scottda : ^^19:36
*** jgriffith has joined #openstack-cinder19:36
winston-dvilobhmm: what I said was: if a volume is marked as 'available' in cinder but still in Nova BDM, no one will be able to stop a user from attaching the volume to other instances.19:36
winston-dvilobhmm: checking volume's status doesn't gate the case I mentioned19:37
scottdaOff to a meeting just now, I'll look in the code for where Nova checks BDM mapping. I'm pretty sure it's there.19:37
winston-dscottda: ok.19:38
vilobhmmwinston-d : what you said makes sense…maybe I read between the lines..missed .."but still in Nova BDM"19:38
vilobhmmlet me point you to bdm code19:38
*** lpabon has joined #openstack-cinder19:41
*** nlevinki has quit IRC19:42
*** madskier has quit IRC19:47
*** ebalduf has joined #openstack-cinder19:48
*** ebalduf has quit IRC19:52
*** ebalduf has joined #openstack-cinder19:54
*** ociuhandu has quit IRC19:55
*** ganso_ has joined #openstack-cinder19:56
*** vokt has joined #openstack-cinder19:57
*** jungleboyj has quit IRC19:58
*** juzuluag has joined #openstack-cinder19:58
*** xyang has joined #openstack-cinder19:59
*** sgotliv has joined #openstack-cinder19:59
*** kmartin has joined #openstack-cinder20:00
*** leeantho has joined #openstack-cinder20:00
*** Lee1092 has quit IRC20:01
*** Rockyg has joined #openstack-cinder20:01
*** juzuluag has quit IRC20:03
*** simondodsley has quit IRC20:03
nikeshmasselin:i tried to manually create a vm with the nodepool image, but it's still taking 6 GB, and in nova i have around 90GB; is there any kernel limit on the nodepool image?20:03
asselinnikeshm, I don't know....20:04
nikeshmok20:05
nikeshmlet me ask in infra20:05
vilobhmmwinston-d : https://github.com/openstack/nova/blob/a8259c4d8dd475f993eef75b268e952a0bafc341/nova/api/openstack/compute/plugins/v3/block_device_mapping.py#L65;  https://github.com/openstack/nova/blob/master/nova/block_device.py#L175 ;20:05
openstackgerritVilobh Meshram proposed openstack/cinder: Nested Quota : Create allocated column in cinder.quotas  https://review.openstack.org/18570420:12
bandwidthwinston-d: normally, cinder should create bindings on the configured exchange to route the message based on the routing-key20:12
bandwidthwinston-d: i completely removed the exchange (cinder), restarted the service, it recreated the exchange with the bindings (cinder-volumes, cinder-backups, etc) but without the configured notification_topics provided (notification_topics=notifications,xyx...)20:13
*** annashen has joined #openstack-cinder20:15
*** IanGovett has joined #openstack-cinder20:16
*** timcl1 has quit IRC20:17
*** esker has quit IRC20:19
jmswinston-d: Yeah, I just don't know _what_ is being blocked... I've disabled the firewall for now, so it isn't the port being blocked. As I mentioned, I haven't seen any errors anywhere about failed connects, it's showing it's connecting to the amqp server, the database connection seems to be valid (and I can connect with mysql cli client from zfs host to db on controller), api-paste.ini sections appear fine. :/20:19
*** madskier has joined #openstack-cinder20:21
*** ebalduf has quit IRC20:21
jmswinston-d: c-volume.log has output of: "Notifying Schedulers of capabilities ... _publish_service_capabilities" ... So, I'm guessing it may be trying, or thinking about, updating. But I don't know of a way to increase verbosity to see what it's doing more than having debug/verbose true, or using pdb to try tracing cinder-volume :/20:28
scottdawinston-d: You are right. There is nothing in Nova that prevents attaching a volume to 2 instances at once, if you have used reset-state to make the volume 'available'.20:32
*** lpabon has quit IRC20:32
scottdaLooks like we didn't need all that complicated code for multi-attach :)20:32
*** julim has quit IRC20:34
*** barra204 has joined #openstack-cinder20:35
*** annegentle has joined #openstack-cinder20:37
*** jungleboyj has joined #openstack-cinder20:37
hemnalol20:39
*** rushil has quit IRC20:40
*** IanGovett has quit IRC20:41
*** tobe has joined #openstack-cinder20:42
*** rushil has joined #openstack-cinder20:44
*** esker has joined #openstack-cinder20:46
*** tobe has quit IRC20:46
*** annashen has quit IRC20:47
*** nkrinner has quit IRC20:48
*** ronis has quit IRC20:49
*** barra204 has quit IRC20:50
*** bswartz has quit IRC20:50
winston-dscottda: yeah, that's what we've observed in our production env. we do hit a lot of issues with attach/detach. it's really a PITA.20:52
winston-dinconsistency between Cinder and Nova; inconsistency between Nova Api cell and child cell.20:53
*** rushil has quit IRC20:53
winston-djms: cinder-volume publishes its stats via a fanout call, so basically it just sends the message and doesn't care who receives it.20:54
winston-djms: you may want to check your rabbit queues, see if the fanout queue is there. you can even dump messages from there.20:54
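The fanout behaviour winston-d describes, where a publish is copied to every bound queue and the sender never waits on a consumer, can be modelled in a few lines. This is a toy in-memory model for illustration only, not the oslo.messaging or kombu API.

```python
from collections import deque

class FanoutExchange:
    """Toy model of an AMQP fanout exchange: every bound queue receives
    a copy of each published message; the publisher gets no reply."""
    def __init__(self):
        self.queues = {}

    def bind(self, name):
        """Create and bind a named queue to this exchange."""
        self.queues[name] = deque()

    def publish(self, message):
        """Fire-and-forget: copy the message into every bound queue."""
        for q in self.queues.values():
            q.append(message)

exchange = FanoutExchange()
exchange.bind("cinder-scheduler:node1")
exchange.bind("cinder-scheduler:node2")
exchange.publish({"free_capacity_gb": 100})
# both scheduler queues now hold a copy of the capability update,
# whether or not a scheduler ever consumes it
```

This is why a missing or misbound fanout queue fails silently from cinder-volume's point of view: the publish still succeeds, so inspecting the rabbit queues directly is the right debugging step.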
*** rasoto_ has quit IRC20:55
*** rushil has joined #openstack-cinder21:00
*** daneyon_ has quit IRC21:00
*** ociuhandu has joined #openstack-cinder21:01
*** esker has quit IRC21:01
*** ociuhandu has quit IRC21:02
*** dustins has quit IRC21:02
*** rushil has quit IRC21:04
*** ociuhandu has joined #openstack-cinder21:04
*** rushil has joined #openstack-cinder21:04
*** akerr has quit IRC21:05
*** ebalduf has joined #openstack-cinder21:06
*** rushil has quit IRC21:06
*** madskier has quit IRC21:07
openstackgerritJon Bernard proposed openstack/os-brick: Add RBD connector  https://review.openstack.org/18617221:11
anteayaso I was just going through the meeting backscroll21:12
anteayaand on the top of db downgrades21:12
anteayaafter the ops tagging working group session I chatted with a few ops before lunch21:13
*** bkopilov_wfh has quit IRC21:13
anteayathey never use a db downgrade21:13
anteayasince they are never sure it will work21:13
anteayaJon Proulx was one of the people I was talking with21:14
anteayaif anyone wants to reach out to him and get his firsthand thoughts on the matter21:14
*** bswartz has joined #openstack-cinder21:18
*** ebalduf has quit IRC21:18
*** Yogi11 has quit IRC21:19
*** merooney has quit IRC21:19
openstackgerritJon Bernard proposed openstack/os-brick: Add RBD connector  https://review.openstack.org/18617221:21
*** bkopilov has joined #openstack-cinder21:23
*** dannywil_ has quit IRC21:27
*** dannywilson has joined #openstack-cinder21:27
*** akshai has quit IRC21:31
*** bkopilov has quit IRC21:31
*** merooney has joined #openstack-cinder21:32
openstackgerritTom Barron proposed openstack/cinder: Add standard QoS spec support to cDOT drivers  https://review.openstack.org/17935221:40
*** merooney has quit IRC21:40
jmswinston-d: Hrmm... Thanks for the rabbit pointer. Connecting with rabbitmqctl I get:21:41
jms  * TCP connection succeeded but Erlang distribution failed21:41
jms  * suggestion: hostname mismatch?21:41
jms  * suggestion: is the cookie set correctly?21:41
jmsThe erlang packages are the same between hosts.21:41
*** fthiagogv has quit IRC21:43
*** thangp has quit IRC21:43
*** annashen has joined #openstack-cinder21:48
*** bkopilov has joined #openstack-cinder21:49
*** mriedem has quit IRC21:51
*** daneyon has joined #openstack-cinder21:51
*** annashen has quit IRC21:53
*** dannywilson has quit IRC21:54
openstackgerritDaniel Allegood proposed openstack/cinder: Updating cmd/manage.py get_arg_string() argument parser  https://review.openstack.org/18609421:54
*** dannywilson has joined #openstack-cinder21:55
uberjaythingee: ping -- could I get some attention on our driver commit? I *think* everything is in order, just awaiting bp approval and core review. (https://review.openstack.org/#/c/178295/)21:58
hemnauberjay, is CI reporting ?21:58
*** jwcroppe has quit IRC21:58
hemnanew driver?21:59
uberjayyes21:59
uberjayyes to both21:59
*** jwcroppe has joined #openstack-cinder21:59
hemnahey look at that.  CI  :)21:59
hemnauberjay, nice21:59
uberjayCI seems to be working. I've investigated a handful of failures, and I'm pretty sure they're legit21:59
uberjayhemna, thanks :)22:00
hemnauberjay, ok I have a meeting starting in like 1 minute ago.  I'll see if I can look at it after.22:00
uberjaythe CI is stable for the most part. every once in a while (once per week, or so), something seems to get jammed up in zuul or zuul-merger which causes the gerrit changes to stop being processed.22:02
uberjayhemna: awesome, thanks! i really appreciate it :)22:02
*** akshai has joined #openstack-cinder22:03
*** mriedem has joined #openstack-cinder22:03
*** annashen has joined #openstack-cinder22:04
asselinuberjay, a few tips:22:04
asselin1. export GIT_BASE=https://review.openstack.org/p --> export GIT_BASE="https://git.openstack.org"22:05
asselinthat other link will provide better performance22:06
uberjayahh, ok. that may help; one of the failures looked to be a connectivity or performance issue while fetching a change22:07
asselinuberjay, well...just one. good job! :)22:08
uberjayhow are they related to each other? can just treat them as equivalent?22:08
uberjaywell, i suspect it'll get busier as June passes ;)22:09
asselinuberjay, yes, basically, the git.openstack.org link is a git farm that is load balanced22:09
uberjayah, got it22:09
asselinreview.openstack.org is gerrit which works, but not the best choice for CI22:09
*** Rockyg has quit IRC22:11
uberjayif you have any other tips, I'm all ears. :)22:11
asselinwhich repo are you using to setup your ci?22:12
uberjayit's all based on your os-ext-testing repo22:13
openstackgerritOpenStack Proposal Bot proposed openstack/cinder: Updated from global requirements  https://review.openstack.org/18611222:15
*** lpetrut has quit IRC22:19
*** annegentle has quit IRC22:20
asselinuberjay, ok cool...glad to know that's working22:21
*** bandwidth has quit IRC22:24
*** annegentle has joined #openstack-cinder22:24
*** crose has quit IRC22:28
anteayauberjay: thank you for starting out in the ci-sandbox repo22:28
*** ndipanov has quit IRC22:30
*** tobe has joined #openstack-cinder22:30
uberjayanteaya: no problem! considering the number of moving parts, it was clearly inappropriate to spam everyone with my setup process. ;)22:32
anteayauberjay: yay22:32
anteayasomeone with common sense22:32
*** dannywil_ has joined #openstack-cinder22:32
anteayawelcome!!22:32
uberjayhaha ;)22:33
uberjaythanks :)22:33
anteayathank you22:33
*** kmartin_ has joined #openstack-cinder22:34
*** kmartin has quit IRC22:34
*** patrickeast_ has joined #openstack-cinder22:34
uberjayasselin: yeah, it's working well. there were a couple moments when -infra diverged in incompatible ways while I was getting things going, but ... these things happen. I'm fairly certain without your help I'd still be scratching my head...22:34
*** tobe has quit IRC22:35
*** dannywilson has quit IRC22:35
*** patrickeast has quit IRC22:36
*** patrickeast_ is now known as patrickeast22:36
asselinuberjay, yeah...we're working on converging. if you'd like to help out, let me know!22:36
uberjayconverging is an awesome and worthwhile goal. i'll absolutely let you know if/when I have the time. unfortunately, I just can't right now.22:38
asselinuberjay, no problem at all.22:38
*** pboldin has quit IRC22:39
*** annashen has quit IRC22:43
*** annegentle has quit IRC22:46
*** jms has quit IRC22:46
*** annashen has joined #openstack-cinder22:50
*** krtaylor has quit IRC22:50
*** afazekas has quit IRC22:51
*** merooney has joined #openstack-cinder22:52
*** akshai has quit IRC22:52
nikeshmuberjay: hi, are you using trusty images in nodepool?22:55
openstackgerritMitsuhiro Tanino proposed openstack/cinder-specs: Efficient volume copy for volume migration  https://review.openstack.org/18620922:55
*** dannywil_ has quit IRC22:57
*** merooney has quit IRC22:59
openstackgerritMitsuhiro Tanino proposed openstack/cinder-specs: Efficient volume copy for volume migration  https://review.openstack.org/18620922:59
nikeshmuberjay: what flavor are you using for the nodepool vms?23:00
*** angela-s has quit IRC23:00
uberjaynikeshm: i am using trusty-based images (the cloud image)23:00
nikeshmuberjay: and flavor23:01
*** Rockyg has joined #openstack-cinder23:01
uberjayi created a new flavor between medium and large for the slave nodes... 6GB of ram & 2 VCPUs23:03
openstackgerritVincent Hou proposed openstack/cinder: Implement the update_migrated_volume for the drivers  https://review.openstack.org/18087323:03
*** mriedem has quit IRC23:04
uberjaynikeshm: i needed a bit more -- 4GB is enough for devstack alone, but i'm running a self-contained installation of our storage software in a container on the same node.23:04
*** manishg has left #openstack-cinder23:05
*** chlong has joined #openstack-cinder23:05
nikeshmuberjay: ok, actually i tried with 8 GB and 16 GB RAM flavors, but in the nodepool vm it's showing 6GB23:05
nikeshmuberjay: how much time is it taking to run tempest for you?23:06
uberjayyou mean... if you ssh to one of the nodes it appears to have less memory? (where does it appear to have 6GB?)23:07
nikeshmyes23:08
nikeshmfree -g23:08
nikeshmi tried after log in to nodepool node23:08
nikeshmits showing 6GB23:08
uberjaynikeshm: just the tempest portion of the test is "Ran: 281 tests in 711.0000 sec."23:08
nikeshmuberjay: cool!! what about devstack installation?23:10
uberjaythe rest of setup, etc, takes quite a bit of time though. in total, somewhere between 32-40 minutes for the full jenkins job23:10
*** annashen has quit IRC23:12
uberjaymy slave nodes have ~6000MB of RAM, which is what i expect23:12
nikeshmuberjay: great, for me devstack itself is taking 30 minutes23:13
*** annashen has joined #openstack-cinder23:13
nikeshmuberjay: how did you cross-check that your system is using kvm?23:13
uberjaynikeshm: huh... for me, the steps leading up to running devstack take ~4-5 minutes. running devstack, another 10 minutes or so, then tempest starts23:14
*** ganso_ has quit IRC23:14
*** annashen has quit IRC23:17
uberjaynikeshm: we do use kvm, do you think you might not be?23:21
*** annashen has joined #openstack-cinder23:21
uberjaythat could certainly account for some performance issues...23:21
nikeshmuberjay: i am also using kvm, but in the dashboard it's showing qemu23:22
nikeshmuberjay: in your openstack dashboard did you cross-check the hypervisor details?23:22
*** sgotliv has quit IRC23:23
uberjayah, yeah, it does show qemu in the hypervisor summary. qemu makes use of the kvm feature, though. you should be able to check directly on the host if kvm is being used. i forget if it's visible through horizon?23:24
nikeshmuberjay: how can we check on the host?23:26
*** annashen has quit IRC23:29
*** rmesta has quit IRC23:29
*** annashen has joined #openstack-cinder23:30
*** tobe has joined #openstack-cinder23:31
nikeshmuberjay: did you get any trouble with these tests? http://paste.openstack.org/show/240850/23:33
*** asselin_ has joined #openstack-cinder23:34
uberjayah, you should be able to see a bunch of qemu-kvm processes running (one per guest).23:34
nikeshmasselin : i'm able to run iscsi tests on my driver but some tests are giving trouble http://paste.openstack.org/show/240850/23:34
uberjayyou should see -machine accel=kvm, i guess you should ensure your cpu has virtualization support, but it would surprise me if it didn't!23:35
*** annashen has quit IRC23:35
nikeshmuberjay: qemu-system-x86_64 -enable-kvm -name instance-0000000d -S -machine pc-i440fx-trusty,accel=kvm23:36
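Alongside checking the qemu process for -enable-kvm / accel=kvm as above, the host CPU must expose the vmx (Intel VT-x) or svm (AMD-V) flag in /proc/cpuinfo. A small parser makes that check explicit; the sample input below is made up for illustration.

```python
def has_hw_virt(cpuinfo_text):
    """Return True if any 'flags' line in /proc/cpuinfo-style text
    contains the vmx (Intel VT-x) or svm (AMD-V) CPU flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

sample = "processor : 0\nflags : fpu vme de pse vmx ssse3\n"
print(has_hw_virt(sample))  # True
# on a real host: has_hw_virt(open('/proc/cpuinfo').read())
```

If the flag is missing (or virtualization is disabled in the BIOS), qemu silently falls back to pure emulation, which would account for the kind of slowdown discussed above.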
*** tobe has quit IRC23:36
asselin_nikeshm, I skip this one....don't know how to make it pass: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes23:36
asselin_nikeshm, this one I got working by using neutron networking instead of nova networking: tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario23:37
asselin_but it really depends on what the failure cause is...your mileage may vary23:37
asselin_this one....I dont remember any issues...tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern although it fails intermittently23:38
*** dims_ has quit IRC23:38
*** krtaylor has joined #openstack-cinder23:39
*** dims_ has joined #openstack-cinder23:40
openstackgerritDaniel Allegood proposed openstack/cinder: Updating cmd/manage.py get_arg_string() argument parser  https://review.openstack.org/18609423:40
openstackgerritDaniel Allegood proposed openstack/cinder: Unit test for cmd/manage.py get_arg_string function  https://review.openstack.org/18622723:40
uberjaynikeshm: i have to run -- good luck!23:41
openstackgerritDaniel Allegood proposed openstack/cinder: Unit test for cmd/manage.py get_arg_string function  https://review.openstack.org/18622723:43
openstackgerritDaniel Allegood proposed openstack/cinder: Updating cmd/manage.py get_arg_string() argument parser  https://review.openstack.org/18609423:43
*** vokt has quit IRC23:43
*** vokt has joined #openstack-cinder23:44
thingeeuberjay: read this https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver#Submitting_Driver_For_Review23:50
*** leeantho has quit IRC23:50
*** Yogi1 has joined #openstack-cinder23:58
*** Yogi11 has joined #openstack-cinder23:58
*** julim has joined #openstack-cinder23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!