Wednesday, 2016-04-13

*** sdake has joined #openstack-meeting-cp00:06
*** sdake_ has joined #openstack-meeting-cp00:22
*** sdake has quit IRC00:23
*** vilobhmm11 has quit IRC00:48
*** rpjr_ has quit IRC00:49
*** sdake_ has quit IRC00:58
*** rpjr_ has joined #openstack-meeting-cp01:12
*** _amrith_ is now known as amrith01:26
*** rpjr_ has quit IRC01:44
*** rpjr_ has joined #openstack-meeting-cp01:54
*** Rockyg has quit IRC01:56
*** askb has joined #openstack-meeting-cp02:03
*** vilobhmm11 has joined #openstack-meeting-cp02:26
*** sdake has joined #openstack-meeting-cp02:32
*** sdake has quit IRC02:32
*** sdake has joined #openstack-meeting-cp02:32
*** sdake_ has joined #openstack-meeting-cp02:42
*** sdake has quit IRC02:45
*** vilobhmm11 has quit IRC02:57
*** sdake_ is now known as sdkae03:05
*** vilobhmm11 has joined #openstack-meeting-cp03:39
*** nikhil_k has joined #openstack-meeting-cp03:56
*** nikhil has quit IRC04:00
*** beekhof has quit IRC04:30
*** beekhof has joined #openstack-meeting-cp05:11
*** ninag has joined #openstack-meeting-cp05:15
*** mestery has quit IRC05:17
*** ninag has quit IRC05:19
*** mestery has joined #openstack-meeting-cp05:23
*** amrith is now known as _amrith_06:02
*** _amrith_ is now known as amrith06:04
*** sdkae is now known as sdake06:12
*** vilobhmm11 has quit IRC07:29
*** vilobhmm11 has joined #openstack-meeting-cp07:32
*** sdake_ has joined #openstack-meeting-cp07:46
*** sdake has quit IRC07:49
*** sdake_ is now known as sdake07:50
*** rpjr_ has quit IRC08:10
*** persia has quit IRC09:04
*** persia has joined #openstack-meeting-cp09:06
*** mestery_ has joined #openstack-meeting-cp09:40
*** tristanC_ has joined #openstack-meeting-cp09:43
*** mestery has quit IRC09:47
*** tristanC has quit IRC09:47
*** jklare_ has quit IRC09:47
*** sballe_ has quit IRC09:47
*** fungi has quit IRC09:47
*** cFouts has quit IRC09:47
*** russellb has quit IRC09:47
*** mestery_ is now known as mestery09:47
*** gnarld_ has joined #openstack-meeting-cp09:48
*** jklare has joined #openstack-meeting-cp09:48
*** russellb has joined #openstack-meeting-cp09:50
*** sballe_ has joined #openstack-meeting-cp09:52
*** fungi has joined #openstack-meeting-cp09:53
*** sdague has joined #openstack-meeting-cp10:04
*** rpjr_ has joined #openstack-meeting-cp10:10
*** ninag has joined #openstack-meeting-cp10:15
*** rpjr_ has quit IRC10:15
*** ninag has quit IRC10:19
*** sdake_ has joined #openstack-meeting-cp10:20
*** vilobhmm11 has quit IRC10:21
*** sdake has quit IRC10:22
*** nikhil has joined #openstack-meeting-cp10:30
*** nikhil_k has quit IRC10:32
*** openstackstatus has quit IRC10:32
*** dims has quit IRC10:33
*** openstack has joined #openstack-meeting-cp11:15
*** ChanServ sets mode: +o openstack11:15
*** openstack has quit IRC11:16
*** openstack has joined #openstack-meeting-cp11:31
*** ChanServ sets mode: +o openstack11:31
*** tjcocozz has quit IRC11:35
*** EmilienM has quit IRC11:36
*** sdague has quit IRC11:37
*** askb has quit IRC11:37
*** bswartz has quit IRC11:37
*** tjcocozz has joined #openstack-meeting-cp11:42
*** tristanC_ is now known as tristanC11:42
*** Guest38251 has joined #openstack-meeting-cp11:46
*** askb has joined #openstack-meeting-cp11:50
*** sdague has joined #openstack-meeting-cp11:51
*** sdake is now known as sdake_busy_webin11:51
*** Guest38251 has quit IRC11:51
*** _amrith_ is now known as amrith11:56
*** sheel has joined #openstack-meeting-cp11:57
*** EmilienM has joined #openstack-meeting-cp12:10
*** EmilienM is now known as Guest5022012:11
*** Guest50220 has quit IRC12:11
*** Guest50220 has joined #openstack-meeting-cp12:11
*** Guest50220 is now known as EmilienM12:11
*** automagically has joined #openstack-meeting-cp12:16
*** automagically has quit IRC12:21
*** raildo-afk is now known as raildo12:23
*** xyang1 has joined #openstack-meeting-cp12:27
*** bswartz has joined #openstack-meeting-cp12:34
*** ninag has joined #openstack-meeting-cp12:35
*** amrith is now known as _amrith_12:42
*** rpjr_ has joined #openstack-meeting-cp13:08
*** sdake has joined #openstack-meeting-cp13:10
*** sdake_busy_webin has quit IRC13:12
*** sdake_ has joined #openstack-meeting-cp13:13
*** sdake has quit IRC13:15
*** sdake_ is now known as sdake13:18
*** sdake is now known as sdake_busy13:18
*** rpjr__ has joined #openstack-meeting-cp13:21
*** rpjr_ has quit IRC13:24
*** automagically has joined #openstack-meeting-cp13:28
*** askb_ has joined #openstack-meeting-cp13:30
*** automagically has left #openstack-meeting-cp13:30
*** vgridnev_ has joined #openstack-meeting-cp13:34
*** sheeprine_ has joined #openstack-meeting-cp13:35
*** ninag has quit IRC13:37
*** ninag has joined #openstack-meeting-cp13:38
*** vgridnev_ has quit IRC13:38
*** ninag_ has joined #openstack-meeting-cp13:39
*** ninag__ has joined #openstack-meeting-cp13:40
*** ninag has quit IRC13:42
*** jklare_ has joined #openstack-meeting-cp13:43
*** dims_ has joined #openstack-meeting-cp13:43
*** askb has quit IRC13:43
*** vgridnev has quit IRC13:43
*** dims has quit IRC13:43
*** jklare has quit IRC13:43
*** sheeprine has quit IRC13:43
*** ninag_ has quit IRC13:44
*** vgridnev has joined #openstack-meeting-cp13:44
*** ninag__ has quit IRC13:44
*** _amrith_ is now known as amrith13:44
*** sigmavirus24_awa is now known as sigmavirus2413:51
*** sheeprine_ is now known as sheeprine13:51
*** sheeprine has quit IRC13:51
*** sheeprine has joined #openstack-meeting-cp13:51
*** persia has quit IRC14:02
*** sheel has quit IRC14:17
*** rpjr__ has quit IRC14:37
*** rpjr_ has joined #openstack-meeting-cp14:38
*** dims_ has quit IRC14:38
*** dims has joined #openstack-meeting-cp14:41
*** sheel has joined #openstack-meeting-cp14:43
*** jungleboyj has joined #openstack-meeting-cp14:50
*** amrith is now known as _amrith_14:59
*** sdake_busy has quit IRC15:08
*** sdake has joined #openstack-meeting-cp15:17
*** nikhil has quit IRC15:18
*** nikhil has joined #openstack-meeting-cp15:28
*** nikhil has quit IRC15:38
*** nikhil has joined #openstack-meeting-cp15:38
*** openstack has joined #openstack-meeting-cp15:51
*** ChanServ sets mode: +o openstack15:51
*** sballe_ has joined #openstack-meeting-cp15:53
*** openstack has joined #openstack-meeting-cp15:54
*** ChanServ sets mode: +o openstack15:54
*** ninag has joined #openstack-meeting-cp15:54
*** ninag has quit IRC15:55
*** ninag has joined #openstack-meeting-cp15:55
*** Rocky_g has joined #openstack-meeting-cp15:57
*** Rockyg has quit IRC15:57
*** Rocky_g has quit IRC15:58
*** Rocky_g has joined #openstack-meeting-cp15:58
*** openstack has joined #openstack-meeting-cp16:08
*** ChanServ sets mode: +o openstack16:08
*** sdake_ has joined #openstack-meeting-cp16:09
*** sdake has quit IRC16:11
*** sdake has joined #openstack-meeting-cp16:15
*** sdake_ has quit IRC16:17
*** ninag has joined #openstack-meeting-cp16:17
*** ninag has quit IRC16:21
*** ninag has joined #openstack-meeting-cp16:29
*** _amrith_ is now known as amrith16:30
*** ninag has quit IRC16:34
*** ninag has joined #openstack-meeting-cp16:43
*** ninag has quit IRC16:45
*** thingee has quit IRC16:46
*** ninag has joined #openstack-meeting-cp16:46
*** ninag has quit IRC16:46
*** ninag has joined #openstack-meeting-cp16:47
*** sdake has quit IRC16:49
*** ninag has quit IRC16:50
*** ninag has joined #openstack-meeting-cp16:50
*** ninag has quit IRC17:01
*** ninag has joined #openstack-meeting-cp17:12
*** ninag has quit IRC17:13
*** ninag has joined #openstack-meeting-cp17:13
*** ninag has quit IRC17:24
*** ninag has joined #openstack-meeting-cp17:27
*** ninag_ has joined #openstack-meeting-cp17:28
*** Rockyg has joined #openstack-meeting-cp17:28
*** Rockyg has quit IRC17:30
*** ninag has quit IRC17:31
*** Rockyg has joined #openstack-meeting-cp17:31
*** Rocky_g has quit IRC17:31
*** ninag_ has quit IRC17:32
*** Rocky_g has joined #openstack-meeting-cp17:41
*** Rockyg has quit IRC17:45
*** sheel has quit IRC17:54
*** Rocky_g has quit IRC17:55
*** sdake has joined #openstack-meeting-cp17:55
*** Rockyg has joined #openstack-meeting-cp17:56
*** ildikov has quit IRC18:02
*** persia has joined #openstack-meeting-cp18:07
*** vilobhmm11 has joined #openstack-meeting-cp18:22
*** vilobhmm11 has quit IRC18:23
*** vilobhmm11 has joined #openstack-meeting-cp18:23
*** Rockyg has quit IRC18:26
*** Rockyg has joined #openstack-meeting-cp18:27
*** vilobhmm11 has quit IRC18:27
*** vilobhmm11 has joined #openstack-meeting-cp18:28
*** rpjr has joined #openstack-meeting-cp18:38
*** flaper87 has quit IRC18:38
*** gnarld_ is now known as nug18:39
*** rpjr_ has quit IRC18:39
*** nug is now known as Guest663318:39
*** amrith has quit IRC18:39
*** DuncanT has quit IRC18:41
*** ninag has joined #openstack-meeting-cp18:42
*** amrith has joined #openstack-meeting-cp18:43
*** Rockyg has quit IRC18:44
*** Rockyg has joined #openstack-meeting-cp18:44
*** Guest6633 is now known as cFouts18:45
*** ninag_ has joined #openstack-meeting-cp18:45
*** flaper87 has joined #openstack-meeting-cp18:45
*** vilobhmm11 has quit IRC18:45
*** ninag_ has quit IRC18:45
*** ninag_ has joined #openstack-meeting-cp18:46
*** ninag has quit IRC18:47
*** automagically has joined #openstack-meeting-cp18:48
*** vilobhmm11 has joined #openstack-meeting-cp18:48
*** ninag_ has quit IRC18:48
*** ninag has joined #openstack-meeting-cp18:48
*** sdake_ has joined #openstack-meeting-cp19:01
*** vilobhmm11 has quit IRC19:03
*** sdake has quit IRC19:03
*** vilobhmm11 has joined #openstack-meeting-cp19:04
*** DuncanT has joined #openstack-meeting-cp19:04
*** ildikov has joined #openstack-meeting-cp19:13
*** sdague_ is now known as sdague19:42
*** sdake_ has quit IRC19:48
*** beekhof has quit IRC19:48
*** markvoelker has quit IRC19:48
*** david-lyle has quit IRC19:48
*** redrobot has quit IRC19:48
*** homerp has quit IRC19:48
*** dansmith has quit IRC19:48
*** jokke_ has quit IRC19:48
*** dansmith has joined #openstack-meeting-cp19:53
*** sdake_ has joined #openstack-meeting-cp19:58
*** beekhof has joined #openstack-meeting-cp19:58
*** markvoelker has joined #openstack-meeting-cp19:58
*** david-lyle has joined #openstack-meeting-cp19:58
*** redrobot has joined #openstack-meeting-cp19:58
*** homerp has joined #openstack-meeting-cp19:58
*** jokke_ has joined #openstack-meeting-cp19:58
*** homerp_ has joined #openstack-meeting-cp20:02
*** sdake_ has quit IRC20:04
*** beekhof has quit IRC20:04
*** markvoelker has quit IRC20:04
*** david-lyle has quit IRC20:04
*** redrobot has quit IRC20:04
*** homerp has quit IRC20:04
*** jokke_ has quit IRC20:04
*** david-lyle has joined #openstack-meeting-cp20:05
*** amrith is now known as _amrith_20:09
*** sdake_ has joined #openstack-meeting-cp20:11
*** beekhof has joined #openstack-meeting-cp20:11
*** markvoelker has joined #openstack-meeting-cp20:11
*** redrobot has joined #openstack-meeting-cp20:11
*** jokke_ has joined #openstack-meeting-cp20:11
*** tyr_ has joined #openstack-meeting-cp20:12
*** sdake has joined #openstack-meeting-cp20:12
*** markvoelker_ has joined #openstack-meeting-cp20:14
*** beekhof has quit IRC20:15
*** markvoelker has quit IRC20:15
*** redrobot has quit IRC20:15
*** jokke_ has quit IRC20:15
*** sdake_ has quit IRC20:16
*** Rockyg has quit IRC20:17
*** jokke_ has joined #openstack-meeting-cp20:17
*** Rockyg has joined #openstack-meeting-cp20:17
*** automagically has quit IRC20:23
*** redrobot has joined #openstack-meeting-cp20:23
*** redrobot is now known as Guest7878620:24
*** ninag has quit IRC20:34
*** ninag has joined #openstack-meeting-cp20:35
*** ninag_ has joined #openstack-meeting-cp20:37
*** odyssey4me_ has joined #openstack-meeting-cp20:38
*** _amrith_ is now known as amrith20:38
*** ninag has quit IRC20:40
*** tonyb_ has joined #openstack-meeting-cp20:41
*** ninag_ has quit IRC20:41
*** rpjr has quit IRC20:42
*** markvoelker has joined #openstack-meeting-cp20:42
*** hemna_ has joined #openstack-meeting-cp20:42
*** Guest78786 is now known as redrobot20:43
*** lbragstad_ has joined #openstack-meeting-cp20:44
*** jokke_ has quit IRC20:47
*** markvoelker_ has quit IRC20:47
*** jungleboyj has quit IRC20:47
*** russellb has quit IRC20:47
*** harlowja has quit IRC20:47
*** odyssey4me has quit IRC20:47
*** lbragstad has quit IRC20:47
*** hemna has quit IRC20:47
*** tonyb has quit IRC20:47
*** russellb has joined #openstack-meeting-cp20:48
*** jokke_ has joined #openstack-meeting-cp20:49
*** rpjr has joined #openstack-meeting-cp20:53
*** mriedem has joined #openstack-meeting-cp20:54
*** andrearosa_web has joined #openstack-meeting-cp20:57
ildikov#startmeeting cinder-nova-api-changes21:00
openstackMeeting started Wed Apr 13 21:00:06 2016 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)"21:00
openstackThe meeting name has been set to 'cinder_nova_api_changes'21:00
mriedemo/21:00
ildikovscottda ildikov DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo21:00
smcginniso/21:00
scottdahi21:00
andrearosa_webhi21:00
ildikovhi21:00
*** ninag has joined #openstack-meeting-cp21:01
mriedem#link etherpad https://etherpad.openstack.org/p/cinder-nova-api-changes21:01
hemna_hey21:01
*** ninag has quit IRC21:01
scottdaif jgriffith does not show up, we shall pile a bunch of work on his plate.21:02
ildikovbeyond the topics on the agenda I wonder whether we want to have meeting next week as well21:02
*** tyr__ has joined #openstack-meeting-cp21:02
alaskio/21:02
hemna_fwiw, I finished the attach, detach and live migration sequence diagrams21:02
scottdahemna_: Those are great.21:02
hemna_I also exported them to draw.io editable 'xml' files21:02
ildikovin either case, maybe we should spend a few seconds on whether we need to prepare more things than what we already have21:02
ildikovhemna_: diagrams are awesome!21:03
hemna_so we can edit them as needed later.  my gliffy.com account expires in 6 days21:03
scottdahemna_: Did you see my question in Cinder? I'm wondering whether we need an update_connector API?21:03
hemna_yah I saw that21:03
hemna_I'm not sure honestly21:03
scottdahemna_: It looks to me like we create a new connector from new_host, and delete old one from old_host.21:03
hemna_the live migration process was always the need for it21:03
hemna_as the attachment entry is being moved from host-1 to host-221:04
hemna_and hence the connector and hostname would change21:04
mriedemi've got a high level question,21:04
scottdaSo it is the attachment entry only that needs an update? That makes sense...21:04
mriedemdo the connector issues need to be ironed out for the live migration stuff as a prerequisite for multiattach?21:04
hemna_scottda, yah I think so, and only for making sure the connector and host were updated for force detach.21:05
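A minimal standalone sketch of the attachment update being described here, where live migration moves an attachment entry from host-1 to host-2 by rewriting its host and connector; the field and function names are purely illustrative, not the real Cinder schema:

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class VolumeAttachment:
        # Hypothetical stand-in for a row in Cinder's volume_attachment table.
        id: str
        volume_id: str
        instance_uuid: str
        attached_host: str
        connector: Dict[str, str] = field(default_factory=dict)

    def update_attachment_for_migration(attachment, new_host, new_connector):
        # The "update" in question: same attachment entry, new host/connector.
        attachment.attached_host = new_host
        attachment.connector = new_connector
        return attachment

    att = VolumeAttachment("a-1", "vol-1", "inst-1", "host-1",
                           {"host": "host-1", "initiator": "iqn.host-1"})
    update_attachment_for_migration(att, "host-2",
                                    {"host": "host-2", "initiator": "iqn.host-2"})
    print(att.attached_host)  # host-2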
hemna_mriedem, I think they are all connected21:05
*** angdraug has joined #openstack-meeting-cp21:05
*** tyr_ has quit IRC21:05
*** amrith is now known as _amrith_21:05
hemna_we need to make sure we support getting the correct attachment in initialize_connection for multi-attach21:05
ildikovmriedem: the issue is when you get two attachments on the same host by migrating an instance21:05
ildikovmriedem: in other cases this data is not critical21:06
hemna_do we need a diagram for swap volume as well ? :(21:06
scottdahemna_: Well, while you are at it....21:06
hemna_*sigh*21:06
mriedemso is volume-backed live migration broken today?21:06
mriedemw/o multiattach21:06
ildikovscottda: good answer :)21:06
hemna_mriedem, for single attach, it's working afaik.21:06
mriedemok, it should be, we have a gate job for it21:07
mriedemwhich fails anywhere from 25-75% of the time21:07
ildikovnot nice, but I think it's working21:07
scottdamriedem: I couldn't find any bugs associated with live migration and single attach.21:07
mriedembut that's beside the point21:07
hemna_heh21:07
smcginnisYeah, other than maybe some specific backend errors, I haven't heard of live migration problems.21:07
smcginnisIIRC, our test team here has done it.21:07
hemna_I noticed a bunch of bugs related to encrypted volumes :(21:07
mriedemok, i was trying to see if there was some latent bug with volume-backed live migration which is why it kept coming up21:08
hemna_but that's another topic....21:08
mriedemso there are the 4 (really 3) alternatives in the etherpad,21:08
*** sdake has quit IRC21:08
mriedemand it sounds like that's really just 2 and 421:08
mriedemthey are similar,21:08
hemna_yah they are21:08
ildikovand the current winner is #221:09
mriedemand john would prefer 2 because he doesn't want to change reserve_volume to create the attachment in the cinder db21:09
mriedemis that right?21:09
ildikovif no objections from this group either21:09
hemna_the only thing I am unsure about is how initialize_connection finds the correct attachment21:09
hemna_for the multi-attach case21:09
scottdamriedem: Yes, that is why he likes #2 over #421:09
ildikovhemna_: in same host case or in every?21:10
mriedemso for initialize_connection today we just have the volume id and connector21:10
hemna_it looks like in his changes to initialize_connection he always calls db.volume_attach21:11
hemna_which creates a new volume_attachment DB table entry21:11
hemna_:(21:11
hemna_this is one of the reasons I liked os-reserve returning an attachment_id.21:11
mriedemand with option 4, reserve volume creates the attachment, and nova passes that attachment id to initialize_connection, right?21:11
hemna_then nova can call initialize_connection with the attachment_id21:11
hemna_and it's explicit21:11
mriedemand unreserve would delete the attachment?21:11
hemna_mriedem, yes21:12
mriedemin case attach fails21:12
mriedemwhat does initialize_connection need to do with the attachment? store the connector info in the vol attachment table?21:12
hemna_if initialize_connection took attachment_id, then it could also act as an 'update'21:12
hemna_mriedem, yes21:12
hemna_it should/needs to store the host and the connector21:13
jgriffithmriedem: well... initialize connection does a lot21:13
scottdahemna_: Yes, that's what i asked about the spec I posted for update_attachment..is it needed if we can use initialize_connection for that..21:13
mriedemhemna_: the host info is in the connector dict isn't it?21:13
hemna_mriedem, yah21:13
hemna_it is21:13
hemna_scottda, yes21:13
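A toy model of the option #4 flow just outlined, where os-reserve creates the attachment and returns its id, initialize_connection receives that id explicitly (and stores the connector, which carries the host), and unreserve deletes the attachment on failure; every name below is hypothetical, not the real Cinder API:

    import uuid

    class ToyCinder:
        def __init__(self):
            self.attachments = {}

        def reserve_volume(self, volume_id, instance_uuid):
            # Option #4: reserve creates the attachment and hands back its id.
            attachment_id = str(uuid.uuid4())
            self.attachments[attachment_id] = {"volume_id": volume_id,
                                               "instance_uuid": instance_uuid,
                                               "connector": None}
            return attachment_id

        def initialize_connection(self, volume_id, connector, attachment_id):
            # Explicit attachment_id: no guessing which attachment to update,
            # and a second call can act as an update of the same entry.
            self.attachments[attachment_id]["connector"] = connector
            return {"driver_volume_type": "iscsi", "data": {}}

        def unreserve_volume(self, volume_id, attachment_id):
            # Rollback if the attach fails.
            self.attachments.pop(attachment_id, None)

    cinder = ToyCinder()
    aid = cinder.reserve_volume("vol-1", "inst-1")
    try:
        cinder.initialize_connection("vol-1", {"host": "host-1"}, aid)
    except Exception:
        cinder.unreserve_volume("vol-1", aid)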
hemna_sonus, I think w/o multi-attach in the picture, the WIP will have issues w/ live migration21:14
mriedemso let's say we go with #2 and init_connection creates the attachment, what deletes it on failure?21:14
scottdahemna_: yes as in "I hear what you are saying" or yes as in "new api is needed even if initialize_.... can update"21:14
*** rpjr_ has joined #openstack-meeting-cp21:14
hemna_I hear what you are saying21:15
hemna_I'm not sure we need the update if we can get update to work inside of initialize_connection21:15
scottdahemna_: Cool. I like that.21:15
hemna_crap21:16
*** jungleboyj has joined #openstack-meeting-cp21:16
hemna_so...I left out one thing on the live migration diagram that I have to add in21:16
hemna_my bad21:16
hemna_post_live_migration calls initialize_connection, prior to calling terminate_connection.21:16
hemna_to make sure it has the correct connection_info of the volume on the source n-cpu host.21:17
hemna_because the bdm has the connection_info for the destination n-cpu host then.21:17
*** rpjr has quit IRC21:17
hemna_I'll add that in to the diagram.  sorry about missing that.21:17
hemna_it's the one time that nova calls initialize_connection with the intention of only fetching the up to date connection_info for an existing attach21:18
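A simplified outline of that ordering, assuming duck-typed cinder/virt helpers; this is a schematic of what hemna describes, not Nova's actual post_live_migration code:

    def post_live_migration(cinder, virt_driver, volume_id, source_connector):
        # The BDM now holds the *destination* host's connection_info, so call
        # initialize_connection once more, purely to fetch up-to-date
        # connection_info for the *source* host's existing attach...
        source_info = cinder.initialize_connection(volume_id, source_connector)
        # ...use it to cleanly disconnect the volume on the source host...
        virt_driver.disconnect_volume(source_info)
        # ...and only then terminate the source host's connection.
        cinder.terminate_connection(volume_id, source_connector)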
mriedemso what rolls back the attachment that's created in the case that the attach fails?21:18
mriedemfor options 2 and 4?21:18
hemna_https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L683121:18
hemna_there21:18
scottdamriedem: Today, there is no rollback21:19
hemna_correct21:19
hemna_there is no rollback21:19
mriedemis this the force detach issue?21:19
ildikovhemna_: which still looks weird, but at least we have a diagram to capture that now21:19
jgriffithmriedem: can you describe a failure case?21:19
scottdamriedem: Yes, this is21:19
jgriffithmriedem: *where/when* is the failure you're considering?21:19
mriedemjgriffith: os-attach fails (cinder is down), or nova times out waiting during boot from volume21:19
mriedemlet me find the compute manager code quick21:20
mriedemhttps://github.com/openstack/nova/blob/master/nova/compute/manager.py#L473521:20
hemna_heh if cinder is down, not much we can do21:20
mriedemyeah...21:20
mriedemso ^ is just making the volume available again21:20
mriedemi think21:20
hemna_yah I think that resets the state21:21
jgriffithmriedem: hemna_ yes it does21:21
*** lbragstad_ is now known as lbragstad21:21
jgriffithhemna_: mriedem except :)21:21
jgriffithyour volume is in an error state21:21
hemna_https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L581-L58821:21
scottdaand perhaps export to compute host still exists.21:22
mriedemi guess we call terminate_connection before that https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L28721:22
jgriffithI guess I'm unclear why any of the existing calls would go away?21:22
jgriffithmriedem: exactly21:22
jgriffithmriedem: add a call to terminate21:22
mriedemso if initialize_connection creates the attachment, terminate_connection could delete it21:22
mriedemwell,21:23
mriedemnvm that21:23
mriedemyou'd have to only delete the attachment on a failure...21:23
mriedemnot sure how to signal that21:23
hemna_terminate_connection should update that attachment and set the volume state to either 'in-use' or 'available'21:23
hemna_which I think it does21:23
scottdayeah, terminate_conn calls unreserve()21:24
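Schematically, the rollback path just traced might look like this (toy code; the GitHub links above show the real Nova flow):

    def attach_volume(cinder, virt_driver, volume_id, connector):
        connection_info = cinder.initialize_connection(volume_id, connector)
        try:
            virt_driver.attach_volume(connection_info)
        except Exception:
            # Roll back: terminate_connection, which on the Cinder side also
            # unreserves the volume so it isn't left stuck in 'attaching'.
            cinder.terminate_connection(volume_id, connector)
            raise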
mriedemok, i guess i was thinking we (nova) could get confused if we needed to check the list of attachments on a volume and there are > 1 but one is actually not attached21:24
jgriffithmriedem: what if you store the attachment ID though?21:25
jgriffithmriedem: ie, dump it in the bdm table21:25
mriedemthat was another question, i guess that's a new proposal21:25
jgriffithmriedem: it's actually the "nova" side of the wip I put up21:25
ildikovI wish I would not need to touch bdm21:25
hemna_jgriffith, +121:25
mriedemcouldn't the connection_info that's returned from initialize_connection have the attachment id in it?21:25
jgriffithmriedem: it does :)21:26
mriedemthe connection_info is already stored in the bdm21:26
jgriffithmriedem: that's my proposal :)21:26
ildikovmriedem: does Nova store connection_info as is?21:27
mriedemand the bdm should be 1:1 with the volume id and instance uuid, which is the same for the attachment right?21:27
mriedemildikov: yes21:28
mriedemildikov: https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L28921:28
hemna_mriedem, yes21:28
ildikovmriedem: cool, tnx21:28
*** hemna_ is now known as hemna21:29
scottdamriedem: How's that connection info stored? If we add attachment_id, does the DB table need to change?21:29
mriedemok, so what does nova do with the attachment id in the bdm.connection_info?21:29
mriedemscottda: nope, just a json blob21:29
scottdacool21:29
hemnamriedem, pass it with terminate_connection21:29
mriedemit's unversioned...which kind of sucks21:29
hemnamriedem, and hopefully with a call to initialize_connection if it has it.21:29
jgriffithmriedem: so that's the thing I would need to figure out next21:29
hemnare: live-migration21:30
mriedemok, so nova will need to have conditional logic for handling this if it's talking to a new enough cinder to provide the attachment id21:30
jgriffithmriedem: where/how that's stored and mapped to a volume21:30
mriedemand if it's in there, nova can pass it back21:30
mriedemhttps://github.com/openstack/nova/blob/master/nova/objects/block_device.py#L8521:30
scottdamriedem: Yes. Some plumbing needed, probably involving the microversion the cinder api supports.21:30
mriedemthat's the object field21:30
mriedemok, so what does the client side check for nova look like wrt nova and new enough cinder? does nova fail if it's asked to attach a multiattach volume but cinder isn't new enough to provide the attachment id from initialize_connection?21:31
mriedemfwiw, nova today can already get the attachment id given the volume id and instance uuid, i think that's what ildikov's code already does21:32
jgriffithmriedem: Yes, IMO21:32
scottdamriedem: Yes, I agree. We can just say multi-attach will only work with newton+ Cinder and newton+ Nova21:32
*** sigmavirus24 is now known as sigmavirus24_awa21:32
mriedembut w/o checking if cinder is new enough, i guess we don't know that the internal plumbing is new enough to handle the multi-detach on the same host?21:32
*** sigmavirus24_awa is now known as sigmavirus2421:32
ildikovmriedem: for detach Nova gets the attachment_id from the volume info from Cinder21:32
hemnaso the attachment_id has been around for a few releases now with Cinder21:33
mriedemildikov: yeah but it filters the attachment list from that volume based on the instance uuid21:33
mriedemright?21:33
hemnasince I put the multi-attach code in cinder, which was...Kilo ?21:33
smcginnishemna: Yep21:34
ildikovmriedem: yeap, that's correct21:34
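The lookup being referred to, sketched against the attachment list a Cinder volume record exposes (key names are illustrative):

    def get_attachment_id(volume, instance_uuid):
        # Filter the volume's attachments down to this instance's entry.
        for attachment in volume.get("attachments", []):
            if attachment.get("server_id") == instance_uuid:
                return attachment.get("attachment_id")
        return None

    volume = {"id": "vol-1",
              "attachments": [{"server_id": "inst-1", "attachment_id": "a-1"},
                              {"server_id": "inst-2", "attachment_id": "a-2"}]}
    print(get_attachment_id(volume, "inst-2"))  # a-2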
scottdaBut passing in both instance_uuid and host is not plumbed for Cinder, so cinder would need to be Newton+ for this to work anyway21:34
mriedemscottda: passing in to which cinder API?21:34
mriedemattach?21:34
smcginnisscottda: Well, did we ever revert that change. I think it might still.21:34
scottdainitialize_connection21:34
mriedemwe pass the connector which has the host...21:34
ildikovmriedem: yes, attach fails with having both in the request21:34
jgriffithscottda: we don't need it though21:34
jgriffithscottda: we already have it on initialize connection, that's been my point all along21:35
scottdajgriffith: We don't need it with your Alt#2, that is correct.21:35
jgriffithscottda: we have it, but we don't do anything with it21:35
mriedemjgriffith: b/c of the connector dict has the host right?21:35
jgriffithmriedem: yes, correct21:35
hemnathe connector has always had the host afaik21:35
scottdaI'm just saying that, no matter what, Cinder will have to be Newton+21:35
jgriffithmriedem: the whole thing is kinda silly... we actually have all of the info we just weren't storing it21:35
ildikovscottda: +121:35
jgriffithmriedem: then we got into passing it all again in another later call21:35
mriedemjgriffith: and that's why we need to check that cinder is newton+?21:35
scottdaThere is no controversy that Newton+ Nova with older Cinder won't have multi-attach. It simply will not work.21:35
hemnaI'm just not sure how we find the right attachment in initialize_connection21:35
jgriffithmriedem: yes, that would be the requirement21:36
mriedemjgriffith: ok, i'm finally starting to understand :)21:36
smcginnisscottda: I think it's completely fair to require N+21:36
hemnabecause we don't have the instance_uuid at initialize_connection time.21:36
ildikovscottda: attach would work, and detach too, if it's not the same host, same target case, but we need to block the happy scenarios too21:36
jgriffithhemna: well... we don't actually have multi-attach yet anyway, so I'd hack some code for that21:36
jgriffithhemna: but, that being said21:36
jgriffithhemna: if you already have a plan/solution I'm happy to turn it over to you21:37
hemnajgriffith, but if we had the attachment_id being passed in to initialize_connection, we would always be able to find the right attachment21:37
jgriffithhemna: ok21:37
jgriffithhemna: go for it21:37
ildikovare we going for alternative #4 now?21:37
mriedemhold up,21:37
jgriffithildikov: looks like it :)21:37
hemnathat would work for single attach and multi21:37
mriedemso one issue on the nova side with #4 is,21:37
jgriffithI'm just tired of beating my head against this wall for no reason :)21:38
mriedemnova would have to store the attachment_id in the bdm before we have the connection_info,21:38
mriedemthat means a schema change to the nova db21:38
jgriffithmriedem: yes21:38
jgriffithmriedem: and it completely deviates from the purpose of the reserve call21:38
hemnajgriffith, I don't think it's for no reason, we are trying to make sure any changes we make work for single attaches and multi-attach.  at least that's what I'm assuming.21:38
jgriffithhemna: I mean no reason from my personal side and usage of time21:38
jgriffithhemna: I believe I can address your concerns, but I don't care enough to continue arguing21:39
jgriffithhemna: if you have code and a proposal that works I say go for it21:39
smcginnisI think mriedem makes a good point.21:39
smcginnisIf we can avoid that, it would be good.21:39
mriedemok, so from a nova perspective, i'm not sure why we need the attachment id when we reserve the volume21:39
jgriffithmriedem: and my point last week was that's *impossible* to do21:39
ildikovsmcginnis: I hear ya21:39
smcginnismriedem: You're saying #2 would be better from the Nova perspective?21:39
hemnamriedem, could fake the connection_info at reserve time (/me shudders)21:39
mriedemsmcginnis: it seems cleaner, i'm just thinking it out21:40
smcginnisYeah, I think so too.21:40
mriedemi'm hearing that nova only needs the attachment id at initialize_connection time to pass it in to that API21:40
mriedembut we could avoid that with just initialize_connection creating the attachment and returning that id in connection_info21:40
mriedemwhich is then stored in the nova bdm21:40
hemnathe attachment_id at initialize_connection time is so cinder can find the right attachment to operate on21:40
mriedemand we can use to pass to cinder terminate_connection21:40
mriedemhemna: but if initialize_connection creates the attachment at that time, then it knows21:41
jgriffithmriedem: you are correct in your statements21:41
mriedembecause it just created it21:41
smcginnishemna: So the issue is that initialize_connection is called repeatedly?21:41
hemnamriedem, but there are calls to initialize_connection that are for existing attachments (re: live-migration)21:41
hemnasmcginnis, yes21:41
smcginnisSo we only return it if we know we created a new one?21:41
hemnalive migration calls initialize_connection again right before it calls terminate_connection21:41
smcginnismriedem: Or can nova pass it in on subsequent calls?21:42
hemnabecause the connection_info in the BDM is for the destination n-cpu host and is wrong.21:42
ildikovhemna: can we pass attachment_id to Cinder only in these cases?21:42
ildikovhemna: I mean Nova21:42
mriedemok, so creating the attachment at init_connection doesn't work for when we need to 'update' the original attachment for live migration21:42
hemnabut in this case21:42
jgriffithwe could fix the live migration code to behave differently once it has attachment ID's21:42
hemnawe don't want cinder to update the attachment21:42
jgriffithseems easier than building a house of cards and replumbing every API call21:43
hemnabecause it will be updating it to the source n-cpu host, which will shortly be detached.21:43
mriedemrather than update the old attachment from the source host, why not create a new attachment table entry for the new host, and then when we're done, delete the old attachment?21:43
mriedemso we don't update, we just create a 2nd and delete the 1st?21:43
jgriffithmriedem: +121:43
hemnamriedem, yah I think that's what I was going to suggest21:43
hemnais that we have 2 attachments21:43
jgriffithmriedem: which in fact is what we discussed at the mid-cycles21:43
hemnafor live migration21:43
smcginnismriedem: +121:44
mriedemok, so that seems better from a nova pov21:44
mriedemand is option #2 right?21:44
jgriffithmriedem: hemna my proposal was and is that we treat live-migration as a special case of multi-attach21:44
hemnajgriffith, +121:44
smcginnisBecause it is multiattached briefly.21:44
hemnayah we had chewed on that in the past21:44
hemnasmcginnis, yah it is21:44
ildikovyeah, I remember some stories as well21:44
hemnaI'll get the live migration diagram up to date21:45
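A schematic of the create-then-delete approach agreed here, treating live migration as a brief window of multi-attach (toy in-memory model, not the real Cinder code):

    import uuid

    def migrate_attachment(attachments, volume_id, instance_uuid,
                           source_attachment_id, dest_connector):
        # Create the destination host's attachment first...
        new_id = str(uuid.uuid4())
        attachments[new_id] = {"volume_id": volume_id,
                               "instance_uuid": instance_uuid,
                               "connector": dest_connector}
        # ...so the volume is briefly attached twice (once per host), then
        # drop the source host's attachment once the migration completes.
        attachments.pop(source_attachment_id, None)
        return new_id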
mriedemi mean, we could do really hacky stuff and pass the instance uuid in the connector dict to initialize_connection if we needed to...21:45
mriedembut i'd like to avoid that if possible21:45
hemnait'll show 2 calls to initialize_connection21:45
* jgriffith gathers everybody around the camp fire to tell a tale21:45
hemnaew yah21:45
jgriffithmriedem: please don't... we'll pay a price later21:45
mriedemjust saying :)21:45
smcginnisjgriffith: Got smores?21:45
jgriffithsmcginnis: of course!21:45
mriedemhonestly that doesn't sound different from init_connection adding the attachment id to the connection_info that's returned21:45
mriedembut ok21:45
jgriffithmriedem: wait....21:46
jgriffithmriedem: let me think about that for a second21:46
mriedemalaski: you might enjoy this part...21:46
mriedemalaski: this is why we have unversioned port details dicts with neutron probably, it opens us up to all sorts of fun hacks :)21:47
jgriffithmriedem: so, it wouldn't be that terrible as a kwarg or something.  I'm still not completely sure it's necessary though21:47
mriedemjgriffith: i think cinder would microversion the init_connection response to add the attachment id21:47
scottdamriedem: Yes21:47
mriedemsomething like that, but nova needs to check the microversion21:47
mriedemok21:47
mriedemthat makes it less of a hack because it signals that there is a new thing in the connection_info response21:48
hemnayah that sounds good21:48
mriedemnote that nova can still find the attachment id to pass to terminate connection if it's needed w/o it being stored in the bdm.connection_info21:49
jgriffithmriedem: so circling back to your other statement...21:49
jgriffithmriedem: easy fix, add the instance-id to the connector that we already get21:49
jgriffithmriedem: actually... I think that's already there in some cases21:49
mriedemit might be for vmware i think21:49
hemnaah21:50
mriedemyup https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/volumeops.py#L30221:50
hemnaif we had the instance_uuid in the connector, we could get the correct attachment inside of initialize_connection.21:50
mriedemconnector is just another unversioned dict that we pass around21:50
jgriffithmriedem: :)21:50
mriedemhemna: yup21:50
hemnaproblem solved.21:50
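The "problem solved" idea, sketched: carry the instance uuid inside the (unversioned) connector dict so initialize_connection can pick the right attachment; key names here are illustrative assumptions:

    def build_connector(host, initiator, instance_uuid=None):
        connector = {"host": host, "initiator": initiator}
        if instance_uuid:
            # The vmware path linked above reportedly does something similar.
            connector["instance"] = instance_uuid
        return connector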
scottdaSo what problems are left?21:51
mriedemsdague: we're not guaranteed to be able to tell what microversion the volume endpoint is at, right?21:51
mriedembecause some deployments disable that?21:51
alaskimriedem: oh boy. let's please not add unversioned things here21:52
scottdamriedem: you can hit http://<vol_endpoint>:8776/ and get it if cinder is mitaka+21:52
scottdaThat's a new Cinder API thing in mitaka21:52
mriedemscottda: i think version discovery can be disabled though21:52
mriedemvia policy21:52
mriedemwe found that out the hard way in adding microversion support to nova client21:53
mriedemb/c rackspace and others had it disabled21:53
ildikovalaski: not adding new things so far21:53
scottdatrue, I think you are right.21:53
alaskiildikov: okay. I'm trying to catch up here21:53
mriedemso...21:54
scottdaSo, if you cannot discover the Cinder version, you cannot use any new stuff. I don't think there's any way around that.21:54
ildikovalaski: we are playing with already existing ones in theory atm21:54
mriedemwe can avoid unversioned things if cinder adds a microversion to the terminate_connection api such that it takes a new parameter21:54
hemnascottda, but then you always assume the lowest API version no?21:54
scottdaOther than ugly try:21:54
hemnamriedem, terminate_connection needs new things?   I believe it already supports attachment_id21:55
scottdahemna: You just assume you are on /v2 (and /v3 at version 3.0 is the equivalent)21:55
jgriffithhemna: nah... it gets a connector, but not an AID21:55
mriedemhemna: it does? https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volumes.py#L7321:56
hemnahuh21:56
hemnayah you are right21:56
hemnanm21:56
hemnaignore me.21:56
mriedemso do we need initialize_connection to return the attachment id? i'm not sure that it does21:57
mriedemnova can figure that out by filtering the volume attachments list by instance uuid,21:57
hemnamriedem, I think jgriffith's proposal is that it will, inside the connection_info dict it's already returning.21:57
mriedemor just pass the instance uuid to terminate_connection with a microversion that takes instance uuid as a 3rd parameter21:57
mriedemhemna: but we don't need it, i don't think21:58
mriedemunless it's purely about signaling that we're talking to a version of cinder that has that behavior21:58
jgriffithmriedem: I think either is doable, I had planned to return it21:58
mriedemin which case, that works for me21:58
mriedemif it's not there and multiattach, nova fails21:58
*** andrearosa_web has quit IRC21:59
mriedemit would be nice if the nova api could detect this support earlier in the process though21:59
mriedemwhich i guess is the version discovery api21:59
jgriffithmriedem: the trick is, as you mention, the versioning to know if things are actually *in* the table or not.  If you solve that via micro-v then so be it21:59
jgriffithmriedem: I'll write  an API method "discover-api-version" for you21:59
mriedem#action mriedem to talk to sdague about version discovery21:59
mriedemwe've worked around some of this with neutron via their extension list22:00
scottdamriedem: Please keep me in the loop on that conversation22:00
jgriffithhonestly it would probably be useful to have said call anyway22:00
mriedemnova checks the neutron extension list to see if we can do a new thing22:00
scottdaSince my next cinderclient patch is automatic version discovery.22:00
scottdaYeah, but isn't that because Neutron doesn't have microversions?22:01
smcginnisGotta run, thanks everyone.22:01
mriedemi think so yeah22:01
scottdaI mean, this is the point of microversions.22:01
mriedemi have to head out soon too,22:01
scottdaI need to head as well.22:01
mriedemso are we in soft agreement on option 2?22:01
mriedeminit connection returns the attachment id in the connection_info22:01
mriedemnova passes the instance uuid or attachment id to terminate_connection22:01
scottdaildikov: I think we might as well keep open a meeting for next week. Just for discussing details and other items.22:02
mriedemand we have a todo to talk to sdague about version discovery22:02
mriedem^ sound good?22:02
*** sigmavirus24 is now known as sigmavirus24_awa22:02
ildikovscottda: sure, I'll check back on Monday and send out a reminder mail as well if we feel the need22:02
smcginnis+122:02
scottdamriedem: yes22:02
hemnamriedem, nova passes the instance_uuid to initialize_connection as well?22:02
ildikovmriedem: +122:02
*** sdake has joined #openstack-meeting-cp22:02
hemnaso cinder can find the right attachment22:02
hemna(for multi-attach)22:02
mriedemhemna: not sure that is needed22:02
hemnait is22:02
mriedemthen sure :)22:03
hemnaok22:03
hemna+A22:03
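Putting the soft agreement (option #2) together as a toy end-to-end flow: initialize_connection creates the attachment and returns its id inside connection_info, Nova stores that blob in the BDM as JSON, and hands the id back to terminate_connection. All names below are illustrative, and the real signaling would ride on a Cinder API microversion:

    import json
    import uuid

    class ToyCinder:
        def __init__(self):
            self.attachments = {}

        def initialize_connection(self, volume_id, connector):
            # Option #2: the attachment is created here, and its id comes
            # back inside connection_info (new in some microversion).
            attachment_id = str(uuid.uuid4())
            self.attachments[attachment_id] = {"volume_id": volume_id,
                                               "connector": connector}
            return {"driver_volume_type": "iscsi", "data": {},
                    "attachment_id": attachment_id}

        def terminate_connection(self, volume_id, connector, attachment_id=None):
            if attachment_id:  # explicit path once both sides are new enough
                self.attachments.pop(attachment_id, None)

    cinder = ToyCinder()
    info = cinder.initialize_connection("vol-1", {"host": "host-1",
                                                  "instance": "inst-1"})
    bdm_connection_info = json.dumps(info)   # stored as a JSON blob in the BDM
    aid = json.loads(bdm_connection_info).get("attachment_id")
    cinder.terminate_connection("vol-1", {"host": "host-1"}, attachment_id=aid)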
mriedemi'll assume you guys are updating the etherpad22:03
mriedemand gals22:03
scottda#action scottda update etherpad with this meeting's conclusions22:04
mriedemif we pass a new parameter to initialize_connection and it explodes, we'll know we're not talking to a new enough cinder22:04
mriedem^ in the case that we can't detect the version ahead of time22:04
hemna#action hemna updates the live-migration sequence diagram to include the initialize_connection call at post_live_migration() time.22:04
scottdamriedem: True enough. Or if we don't detect the version, just don't try it and assume old behaviour, which is no multi-attach, and don't pass it in.22:05
mriedemyeah, fast fail in the api22:05
scottdaOK, I need to run as well. Thanks everyone22:05
mriedemyup, thanks22:05
ildikovthanks guys22:05
*** mriedem has quit IRC22:06
ildikov#endmeeting22:06
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"22:06
openstackMeeting ended Wed Apr 13 22:06:38 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:06
openstackMinutes:        http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-04-13-21.00.html22:06
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-04-13-21.00.txt22:06
openstackLog:            http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-04-13-21.00.log.html22:06
*** harlowja has joined #openstack-meeting-cp22:07
*** mriedem1 has joined #openstack-meeting-cp22:12
*** vilobhmm11 has quit IRC22:14
*** vilobhmm11 has joined #openstack-meeting-cp22:14
*** mriedem1 has quit IRC22:18
*** raildo is now known as raildo-afk22:19
*** hemna is now known as hemnafk22:30
*** tonyb_ is now known as tonyb22:30
*** vilobhmm111 has joined #openstack-meeting-cp22:42
*** vilobhmm11 has quit IRC22:44
*** sdake has quit IRC22:48
*** jungleboyj has quit IRC22:52
*** sdake has joined #openstack-meeting-cp23:00
*** angdraug has quit IRC23:12
*** xyang1 has quit IRC23:13
*** angdraug has joined #openstack-meeting-cp23:25
*** sdague has quit IRC23:25
*** angdraug has quit IRC23:27
*** tyr__ has quit IRC23:55
*** rpjr has joined #openstack-meeting-cp23:55
*** rpjr_ has quit IRC23:57
