Wednesday, 2016-04-20

ildikov#startmeeting cinder-nova-api-changes21:00
openstackMeeting started Wed Apr 20 21:00:08 2016 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)"21:00
openstackThe meeting name has been set to 'cinder_nova_api_changes'21:00
smcginniso/21:00
ildikovscottda ildikov DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo21:00
*** mriedem has joined #openstack-meeting-cp21:00
mriedemo/21:00
* alaski has to run another meeting, sorry21:01
scottdahi21:01
ildikovalaski: no worries, it's still just the warm-up for next week21:01
DuncanTHi21:02
ildikovhi All21:03
cFoutshi21:03
*** hemna_ has joined #openstack-meeting-cp21:03
ildikovbasically the plan for today is to prep for next week regarding scenarios we need to discuss mostly in connection with multiattach21:04
mriedemi reviewed the update attachment api spec from scottda last night21:04
mriedemhaven't checked back on it today,21:04
mriedembut i'd like the nova portion of that fleshed out a bit before a new api is added to cinder21:04
scottdamriedem: Yes, I'm not ignoring  your review...21:05
hemna_mriedem, +121:05
ildikovI agree, there's also the patch that jgriffith is working on to have a PoC for alternative #2, so a few bits to put together21:05
scottdaI'm just trying to gather answers.21:05
ildikovmriedem: a quick question as you mentioned Nova21:07
*** mriedem has left #openstack-meeting-cp21:07
ildikovmriedem: is there a chance to have multiattach on the list of high priority items?21:07
*** mriedem has joined #openstack-meeting-cp21:07
mriedemoops, accidentally closed the tab21:07
ildikovmriedem: good timing, so question again21:08
ildikovmriedem: is there a chance to have multiattach on the list of high priority items?21:08
mriedemi won't commit to that right now, we discuss nova priorities in the last session on thursday of the design summit21:08
smcginnismriedem: At least that will be after the Nova-Cinder session.21:09
mriedemright21:09
ildikovI know, but I guess I need to start the campaign way before that session21:09
mriedemit's going to depend on how clear the plan is after the nova/cinder session i think21:09
mriedemif it's all up in the air still, it's going to be hard to say it's a priority21:09
mriedemand things like swap volume with multiattach volumes scare me21:09
mriedemall the weird edge cases with multiattach21:10
mriedemplus we don't test any of that today21:10
mriedemhonestly,21:10
ildikovyou mean swap volume, etc in general?21:10
mriedembefore multiattach is a priority, i think volume migration needs to be tested in the multinode job in the gate21:10
mriedemb/c that would also test swap_volume since it's part of the flow21:10
mriedemand then we could see if multiattach blows any of that up21:11
smcginnismriedem: +121:11
ildikovok, that sounds reasonable; it would be good to test it anyway21:11
mriedemi think i asked this earlier in the week, but does cinder allow migration of multiattach volumes?21:11
smcginnismriedem: I don't believe we block it.21:11
smcginnismriedem: But I also don't believe we test that at the moment.21:12
mriedemvolume migration? probably not21:12
mriedemi think the only thing that uses both nodes in the multinode job is nova's live migration21:12
smcginnishemna_, scottda: Are you aware of any testing added for that?21:12
hemna_wait21:12
scottdaNo, I'm not aware of any testing.21:12
hemna_do you mean nova live-migration ?21:12
hemna_or cinder migrate ?21:12
mriedemcinder migrate21:12
mriedemwhich triggers swap volume in nova21:12
mriedemneither are tested in the gate21:13
hemna_:(21:13
mriedemand those + multiattach scare me21:13
hemna_well that seems to be a problem in general then21:13
mriedemespecially with some of the issues i found with swap volume this week21:13
mriedemyeah21:13
ildikovhemna_: +121:13
mriedemso, there are some testing gaps to get us to multiattach21:13
smcginnisWe're kind of putting on the roof before the foundation is done, in some respects.21:13
mriedemi need to add that to the etherpad21:13
hemna_I'm not sure why that would prevent nova from setting multiattach to high priority21:13
smcginnisNot to be negative.21:14
ildikovlet's try to figure out dependencies as well21:14
ildikovwe might not want to put on the roof first21:14
scottdaIt keeps us dry while we're digging a deeper hole.21:15
smcginnis:)21:15
smcginnisWe certainly know how to dig holes.21:15
mriedemhemna_: multiattach is a whole new set of problems if it goes wrong, and i'd like to be tight on testing of existing functionality before introducing new complexity21:15
ildikovsmcginnis: lol :)21:15
mriedemespecially given how hard it's been to get a clear picture on the design21:16
hemna_I'm not saying it should be forced to land in nova21:16
hemna_just saying it should be a candidate for high priority21:16
mriedemi'm not knocking the team, or not trying to, it just worries me21:16
mriedemit's a candidate yes21:16
hemna_ok cool21:16
hemna_we've had a lot of people working on it for literally years now21:17
hemna_:(21:17
mriedemyes, but as far as i recall, the major nova effort from the nova team wasn't until mitaka21:17
mriedemanyway, it is what it is21:17
ildikovwe would need high prio to get review attention, I don't think we will flood Gerrit with too much new code by N-321:17
mriedemi'd like to see it happen, just thinking through gaps we have today before we add this layer in21:18
hemna_well I wouldn't agree with that, but whatevs.21:18
scottdaOne thing in the critical path is jgriffith code...21:18
hemna_it's always the same thing in Nova.21:18
hemna_lots of patches get in21:18
hemna_nobody reviews or tests21:18
hemna_deadlines come up and go21:18
hemna_and oops...you are too late.21:18
hemna_thank you next release.21:18
hemna_2+ years later.  here we are.21:18
hemna_but at this point, it's probably not a bad thing :)21:18
ildikovscottda: yeah, it's on my list to look at from Nova perspective as well21:19
mriedemyeah, i'm familiar with that problem21:19
hemna_so21:19
hemna_it seems a prerequisite is getting some gate tests in the mix21:19
ildikovscottda: maybe next Friday we could have a small code sprint on the contributors' meetup part as opposed to just talking :)21:19
scottdaildikov: We just merged microversion support in the cinderclient, so you (or someone) could start to POC using that...21:19
hemna_I thought we did have some nova live migration tests at some point ?21:20
ildikovscottda: yeap, exactly21:20
scottdaildikov: Sounds good.21:20
ildikovscottda: I will look at that patch, I'll have some time on the plane tomorrow to do so21:21
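
For reference, a minimal sketch of what such a PoC against the newly merged cinderclient microversion support could look like. The auth plumbing, endpoint values, and the '3.0' version string are illustrative assumptions, not details confirmed in the discussion:

```python
# Hypothetical PoC: talk to the Cinder v3 API with an explicit
# microversion via python-cinderclient (assumes the microversion
# support mentioned above; '3.0' is a placeholder base version).
from keystoneauth1 import loading, session
from cinderclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',      # assumed endpoint
    username='admin', password='secret',
    project_name='admin',
    user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)

# Request the base v3 microversion; a newer client could raise this
# to opt in to new behavior (e.g. attachment_id in responses).
cinder = client.Client('3.0', session=sess)
for vol in cinder.volumes.list():
    print(vol.id, vol.status)
```
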
scottdaAnyone know how difficult it will be to test cinder migration using current infra tools?21:22
mriedemhemna_: we do in the multinode job in the gate21:22
mriedemhemna_: it's non-voting b/c of some race bugs (we think due to old ass libvirt/qemu in those jobs)21:22
scottdaHave we thought about it?21:22
scottdai.e using multinode in the gate.21:22
mriedemscottda: i'd say look at the live migration tests in tempest21:22
mriedemit's not too complicated, there is logic for finding the two computes21:22
mriedemspawning the instance on one and then live migrating to the other21:22
mriedemi'd expect volume migration to be similar21:23
scottdamriedem: OK, Cinder has testing on the agenda for the Summit. We can look at this.21:23
mriedemhttps://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_live_migration.py21:23
ildikovscottda: cool, that sounds good21:23
scottdaI put it on Cinder agenda for testing session.21:24
ildikovscottda: I'll try to join to that slot21:24
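
A rough sketch only of the test shape mriedem suggests, modeled on the live migration test linked above. The two-host discovery helper and the migrate_volume call are assumptions about what would need adding, not existing tempest API:

```python
# Hypothetical tempest-style cinder migration test, shaped like
# tempest/api/compute/admin/test_live_migration.py.
from tempest.api.volume import base


class VolumeMigrationTest(base.BaseVolumeAdminTest):

    def test_volume_migration(self):
        # Like the live migration test finds two computes, find two
        # volume backends/hosts to migrate between.
        src_host, dst_host = self._get_two_volume_hosts()  # assumed helper
        volume = self.create_volume()
        # Admin action that triggers cinder migrate (and, for in-use
        # volumes, nova's swap_volume).
        self.admin_volume_client.migrate_volume(           # assumed method
            volume['id'], host=dst_host)
        # Then wait for migration_status == 'success' and assert the
        # volume ended up on dst_host while staying usable.
```
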
mriedemis jdg's option 2 fleshed out as far as the flow is concerned?21:25
hemna_so the scenario in question is cinder migrate21:25
mriedemlooks like some of the notes in the etherpad are stale21:25
scottdamriedem: I don't think things are fleshed out. It would be nice to get a flow diagram for Alt#2 similar to what hemna_ has done for the current flow.21:26
mriedemstale as in it seems like we came up with different solutions for some of those items last week21:26
mriedeme.g. nova not needing to store the attachment_id in the bdm.connection_info column21:26
hemna_anyone have the etherpad url handy ?21:27
mriedemhttps://etherpad.openstack.org/p/cinder-nova-api-changes21:27
mriedemi don't remember the details though, we have the meeting logs if we need to go back21:27
hemna_thanks21:27
mriedemhere are the 4/13 meeting logs http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-cp/%23openstack-meeting-cp.2016-04-13.log.html#t2016-04-13T21:00:0621:28
mriedemi feel like we came to some aha solution type things at the end which aren't fed back into the etherpad21:28
ildikovmriedem: we ended up in a discussion about unversioned things21:28
mriedemyeah, like os-initialize_connection passing the attachment_id, and nova passing the connector with the instance uuid21:29
ildikovI'm not sure whether it was the connection_info itself or another bunch of data21:29
smcginnisSorry, I have to drop off. Will catch up later.21:29
mriedemlater21:29
hemna_I think we were going to stuff the attachment_id into the connection_info21:29
hemna_for nova to save at detach time21:29
mriedem*os-initialize_connection putting the attachment_id in the connection_info dict response i mean21:29
hemna_and we were going to save the connector at initialize_connection time.21:29
mriedembut we were going to microversion that or not? i thought yes21:29
hemna_I was unsure about that21:30
mriedemme too :)21:30
mriedemtechnically i don't think we *need* to because nova can check if it's there and if not, determine cinder isn't new enough21:30
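
A minimal sketch of the feature-detection pattern mriedem describes here; the fallback helper name is an assumption:

```python
# Hypothetical nova-side detection: if Cinder is new enough,
# os-initialize_connection includes attachment_id in connection_info;
# if not, fall back (or fail fast in nova-api, per the discussion).
def init_connection_compat(volume_api, context, volume_id, connector,
                           instance_uuid):
    connection_info = volume_api.initialize_connection(
        context, volume_id, connector)
    attachment_id = connection_info.get('attachment_id')
    if attachment_id is None:
        # Cinder isn't new enough: use the Mitaka-style lookup by
        # (volume, instance) instead.
        attachment_id = lookup_attachment_id(   # assumed helper
            volume_api, context, volume_id, instance_uuid)
    return connection_info, attachment_id
```
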
hemna_the other part is the fact that initialize_connection is called at times to simply fetch the connection_info21:30
hemna_during live migration time21:30
scottdaWouldn't Nova want that microversioned? To get a different payload in the connection_info?21:30
hemna_and it can't create a new attachment entry then.21:30
mriedembut, that happens down in the compute - if nova is going to fail on that, it'd be nice to fail fast in nova-api based on a version21:30
mriedemscottda: generally yes we want api changes versioned21:31
scottdaWe really need a design spec for this....21:31
hemna_well21:31
hemna_we aren't changing the API here21:31
mriedembecause nova-api could see it's asked to attach a multiattach volume, check the cinder version to see if it can handle it, and if not fail with a 400 before casting to compute21:31
hemna_it's adding additional data to a dict in response21:31
hemna_there is no new param in the call21:31
hemna_or not a new response either.21:32
mriedemhemna_: well, that's still changing the api response21:32
hemna_so....21:32
mriedemi realize we've fudged around this forever in the past21:32
hemna_I dunno21:32
mriedemand that's how it works with neutron port details too21:32
hemna_cinder drivers put 'new' things into connection_info at times21:32
mriedemi know21:32
hemna_that qualifies as a 'change in response' ?21:32
mriedemwhich is why it's not like new badness21:32
hemna_I don't know what the right thing is here.21:32
mriedemit's just continued badness21:32
hemna_we don't version the connection_info as an object21:33
mriedemlong-term connection_info should be a versioned object in os-brick, imo21:33
hemna_every cinder driver has different things it puts in there.21:33
hemna_for later21:33
mriedemlike we're doing with vifs in os-vif21:33
mriedemyeah, anyway, this is kind of a minor detail, i think it's a matter of UX on the nova side21:33
mriedemwe fail in the api or we fail in the compute21:33
mriedemfailing fast in the api is better21:33
scottdaSo, it might not be necessary, but it might be useful (for fail-fast, at least) to microversion it. It's not a big deal to bump the version.21:34
mriedemscottda: right21:34
mriedemcinder api could strip the attachment_id key out of the connection_info in the API if it's not at that microversion21:34
hemna_the other part of this is that detach needs the attachment_id21:34
mriedemwell, if nova isn't requesting that version21:34
hemna_that could/should be in the same patch, which requires a microversion21:35
mriedemagree it should be the same microversion21:35
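
A sketch of the Cinder-API-side gating mriedem proposes two lines up; '3.X' is a placeholder and the version-check helper and internal call are schematic assumptions, not real cinder code:

```python
# Hypothetical microversion gate in the Cinder API controller: only
# include attachment_id when the client requested a new-enough version.
def initialize_connection(self, req, id, body):
    connection_info = self._initialize_connection(req, id, body)  # assumed internal
    # Schematic version check; exact helper/signature is an assumption.
    if not req.api_version_request.matches('3.X'):
        # Older clients keep the pre-multiattach payload shape.
        connection_info.pop('attachment_id', None)
    return {'connection_info': connection_info}
```
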
scottdaLook, I hate to bog down in process, and I don't want this to slip, but there are others who aren't present who'll need to know all these design decisions. We're probably better off writing a spec for all of this, I think.21:35
mriedemscottda: this might feed into a spec after the summit,21:35
mriedembut these are questions that feed into that21:35
scottdasure, probably not until after the summit21:36
hemna_we don't have much time between now and the summit anyway21:36
mriedemright21:36
hemna_unless someone magic wands the tests into the gate right away :)21:36
mriedemjungleboyj has minions, make them write those tests :)21:36
mriedemjust open a bug and mark it low-hanging-fruit, some intern will write them :)21:36
ildikovhemna_: detach is already using the attachment_id21:36
hemna_ildikov, wait, didn't we already look at that last week?21:37
* hemna_ is confused21:37
mriedemcinderclient(context).volumes.detach(volume_id, attachment_id)21:37
mriedemildikov: got that into nova in mitaka21:37
hemna_https://github.com/openstack/cinder/blob/master/cinder/api/contrib/volume_actions.py#L15121:38
mriedemin newton api nova passes the attachment id down to the compute, on a mitaka compute it looks up the attachment id21:38
hemna_maybe I'm thinking terminate_connection21:38
ildikovhemna_: we might have mentioned it, the point is that those patches made it into Mitaka that pass the attachment_id to Cinder at detach time21:38
hemna_yah I'm thinking terminate21:38
hemna_sorry21:38
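
Approximately the Mitaka-era lookup mriedem refers to: find this instance's attachment in the volume's attachment list and hand its id to detach. The dict keys follow the Cinder v2 volume representation; treat the helper as illustrative:

```python
# Hypothetical sketch of the attachment_id lookup a Mitaka compute
# does before calling cinderclient(...).volumes.detach().
def find_attachment_id(volume, instance_uuid):
    for attachment in volume.get('attachments', []):
        if attachment.get('server_id') == instance_uuid:
            return attachment.get('attachment_id')
    return None

# attachment_id = find_attachment_id(volume, instance.uuid)
# volume_api.detach(context, volume_id, attachment_id)
```
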
mriedemok, and that wasn't the thing that was reverted in cinder https://review.openstack.org/#/c/275316/21:39
jungleboyjmriedem: Since when have I had Minions?21:39
mriedemthat was passing host and instance uuid to attach21:39
mriedemjungleboyj: your glorious army of minions21:39
mriedemwill solve all of our problems21:39
ildikovmriedem: that was the host_name, which got reverted21:39
jungleboyjmriedem: They are my esteemed co-workers.  :-)21:39
mriedemhemna_: ildikov: scottda: last week we were talking about a cinder api that needed the attachment_id and i suggested that we could stash that in the connector dict passed to cinder, was that terminate_connection?21:40
ildikovjungleboyj: tell them they are cute and we like them :)21:40
mriedemit's icky, and unversioned21:40
hemna_mriedem, that could have been21:40
hemna_it's kinda icky to put that into the connector though.21:40
jungleboyjildikov: You can tell them in Austin.  They are all coming with me.  :-)21:41
hemna_the connector is supposed to describe the host initiator information for the entire system.21:41
mriedemyeah, but it's the same idea as putting the attachment_id in the connection_info that comes back from init_conn21:41
hemna_it seems like a hack to put that in there to me.21:41
mriedemit's a hack both ways :)21:41
ildikovjungleboyj: ok Gru, will do ;)21:41
hemna_yah, it's a total hack :(21:41
mriedemhonestly, the thing to probably do,21:41
mriedemis not put attachment_id in connection_info or instance uuid in connector,21:42
hemna_mriedem, +121:42
mriedembut microversion those APIs and add them to the response body21:42
jungleboyjildikov: *Laughing*  Hey now!21:42
mriedemand request body21:42
hemna_mriedem, +1  I like that way better.21:42
mriedemrespectively21:42
hemna_it's clean.21:42
mriedemscottda: catch all that? ^21:42
ildikovjungleboyj: lol :)21:42
scottdaI did, and it sounds good to me.21:42
mriedemi do wonder,21:43
mriedemif nova isn't requesting that microversion, will cinder still create the attachment entry in init-connection?21:43
hemna_mriedem, yes21:43
ildikovdo we want to store attachment_id in Nova or still not?21:43
mriedemwill it be able to clean up later?21:43
hemna_mriedem, according to jdg's POC patch21:43
mriedemin terminate_connection?21:43
hemna_terminate_connection shouldn't really touch the attachment entry21:44
hemna_only in detach()21:44
mriedemok, and we pass the attachment_id there21:44
hemna_yah21:44
mriedemwhich we lookup from the volume id and instance uuid in nova21:44
mriedemwhich makes me wonder why initialize_connection needs to return the attachment id for nova?21:45
mriedemi get so confused here21:45
hemna_it is confusing21:45
hemna_so21:45
hemna_the reason is21:45
mriedemnova could store it and then it wouldn't need to look it up i guess, but we already have that lookup code since mitaka21:45
hemna_for multi-attach21:45
hemna_it would get stored in the nova bdm21:45
hemna_and nova would use that value directly21:45
hemna_instead of looking it up again21:45
mriedemstill, that's just an optimization right?21:46
ildikovI like the Cinder is the ultimate source of truth thing better still21:46
hemna_I'm still confused as to how initialize_connection is going to know if it's a call for a new attach, or a fetch for connection_info21:46
scottdaI need to go to IRC-on-the-smart-phone mode for a few minutes...limited typing ability...21:46
ildikovhemna_: I think that's the key question here21:46
hemna_re: live migration.21:47
mriedemhemna_: which is why you wanted to create the attachment in reserve/21:47
mriedem?21:47
hemna_mriedem, yes21:47
ildikovhemna_: and it still looks weird to call initialize_connection only for fetching already existing connection info21:47
mriedemi guess jgriffith would have to answer that21:47
hemna_anyway, I'm not going to argue that anymore21:47
hemna_ildikov, it seems weird, but it's completely necessary21:48
mriedemwell, if nova is calling initialize_connection to get the connection_info, and that creates a new attachment in cinder, but it's actually the same volume/instance, won't that be bad?21:48
hemna_because the connection_info for volume X on host A will be different on Host B21:48
hemna_mriedem, yes21:49
ildikovhemna_: I understand the need for the info, I just wanted to highlight that with this naming structure and behaviour it's confusing21:49
hemna_that's the problem I've raised a few times with creating the attachment in initialize_connection21:49
hemna_but was told that it's not a problem.21:49
* hemna_ shrugs21:49
mriedemand why wasn't it a problem?21:49
mriedemis something cleaning that up later?21:49
hemna_magic21:49
mriedemref counting?21:49
hemna_the POC ignores the problem really.21:49
hemna_and ignores that we are trying to do this for multi-attach21:50
mriedemif necessary, initialize_connection could take a new parameter, attach:boolean21:50
mriedemas part of the microversion that returns the attachment id21:50
hemna_ildikov, yah initialize_connection the name was fine prior to live migration :)21:50
mriedemso nova passes attach=True on attach, cinder creates new attachment in the db,21:50
mriedemif just getting the conn info, nova passes attach=False and cinder doesn't create a new attachment21:50
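
Sketch of the attach flag mriedem floats here; discussion only, not merged behavior, and the kwarg is a proposal rather than an existing parameter:

```python
# Hypothetical: the same microversion that returns attachment_id adds
# an 'attach' flag so Cinder knows whether to create an attachment row.
def attach_connection_info(volume_api, context, volume_id, connector):
    # Real attach path: Cinder creates a new attachment entry.
    return volume_api.initialize_connection(
        context, volume_id, connector, attach=True)   # proposed kwarg

def refresh_connection_info(volume_api, context, volume_id, connector):
    # Refresh/live-migration path: just hand back connection_info.
    return volume_api.initialize_connection(
        context, volume_id, connector, attach=False)  # proposed kwarg
```
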
hemna_so21:51
ildikovhemna_: I kinda thought it's "legacy" :)21:51
mriedemor is this bleeding into update_attachment territory?21:51
hemna_the other way to do this, which requires even more nova changes is that live migration uses multi-attach21:51
hemna_for each attachment of the volume21:51
ildikovor maybe we reorganise things slightly and have a call that actually just asks for info21:51
hemna_that may open up other problems though.21:52
mriedemhemna_: not all volume drivers support multiattach right?21:52
hemna_mriedem, correct21:52
ildikovhemna_: you mean attaching the same volume to the same instance twice?21:52
hemna_lots of confusion and grey area here.21:52
hemna_ildikov, that too, but just on a different host.21:52
hemna_the unique key is really host + instance_uuid + volume_id21:53
mriedemthat would probably totally bork the constraints checking in nova for the bdm21:53
ildikovhemna_: yeap, exactly21:53
mriedemyeah21:53
mriedemnova is just checking volume id and instance uuid today21:53
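
To make the constraint point concrete, a hypothetical multiattach-aware duplicate check; the host attribute on the BDM is an assumption (today nova only checks volume id and instance uuid, as mriedem says):

```python
# Per hemna_, the real unique key for multiattach would be
# (host, instance_uuid, volume_id), not just (volume_id, instance_uuid).
def attachment_exists(bdms, volume_id, instance_uuid, host):
    return any(bdm['volume_id'] == volume_id and
               bdm['instance_uuid'] == instance_uuid and
               bdm.get('host') == host      # assumed field/lookup
               for bdm in bdms)
```
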
hemna_and for live migration, it simply calls initialize_connection on the destination host21:53
hemna_which forces cinder to export a volume to a new host21:54
hemna_and then it removes the source attachment21:54
hemna_it kinda backdoors the 'attach' workflow on the destination compute host21:54
hemna_by bypassing reserve21:54
mriedemb/c it's already in-use21:54
hemna_yup21:55
hemna_and multi-attach isn't baked in21:55
hemna_etc.21:55
hemna_w/ multi-attach you can attach in-use volumes21:55
mriedemso, can we sort out the ref counting issue with creating a new attachment on each call to initialize_connection before the summit?21:55
hemna_and use the whole workflow21:55
mriedemif detach cleans those up, maybe that takes care of it, i'm not sure21:55
mriedemdetach = delete all volume attachments for this volume + instance21:56
mriedemidk21:56
mriedemdetach takes the attachment id though...21:56
mriedemso it's going to just delete that one21:56
ildikovit's supposed to delete just that one21:56
ildikovby ref counting issue you mean the same host issue?21:57
hemna_so I was just looking at the POC21:57
hemna_and it creates a brand new attachment on every initialize_connection call21:57
hemna_we'll have borked stuffs in the volume_attachment table during live-migration time.21:57
mriedemit's not just live migration,21:57
mriedemthere is code in nova compute manager that calls initialize_connection to refresh the connection info, let me find that quick21:58
hemna_mriedem, yah I think that's called during live migration21:58
hemna_driver_block_device.refresh_conn_infos() in virt/block_device.py21:59
hemna_that guy21:59
hemna_https://drive.google.com/a/hemna.com/file/d/0B1Kp6K43HLHydU1wWFVIN29tc2s/view21:59
hemna_:)21:59
mriedemat least live migration and resize21:59
hemna_resize too ?21:59
mriedemi'm talking about https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L183321:59
mriedemand swap_volume https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L488622:00
hemna_yah that calls refresh_conn_infos if true22:00
mriedemand swap volume calls it outright for the new volume22:00
hemna_I 'think' that's ok though22:01
scottdaso I don't see where we need a new update attachment API...22:01
hemna_because cinder calls nova right?22:01
mriedemhemna_: for swap volume yeah, but you can also do swap volume w/o cinder22:01
hemna_and I think swap volume already marks the attachment as detached?22:01
hemna_I dunno22:01
hemna_I'd have to follow that mess of code as well22:01
hemna_mriedem, oh ok22:01
hemna_w/o cinder.22:01
mriedemthis is why i was asking for gate testing on cinder migration :)22:01
hemna_damn22:02
hemna_ok, I am even more scared now.22:02
mriedemnow you know why i'm scared22:02
hemna_this is all feeling more and more like a giant hack22:02
hemna_I dunno, I kinda think reserve should return attachment_id22:02
hemna_and nova keeps it22:02
hemna_and uses it for calls to cinder22:02
hemna_does that solve anything22:03
mriedemwell, it'd be a single ref count at least22:03
hemna_if nova calls initialize_connection(....., attachment_id)22:03
hemna_Cinder wouldn't create a new attachment if it gets it passed in.22:03
hemna_it simply looks it up22:03
hemna_and updates the connector if needed and saves it into the attachment table22:04
mriedemalong those lines,22:04
mriedemif attach volume fails in nova, it calls: self.volume_api.unreserve_volume(context, bdm.volume_id)22:04
mriedemwhich could pass the attachment_id to delete it22:04
hemna_yah22:04
mriedemas would detach22:04
mriedemi mean, nova calls detach with the attachment id already so it would delete it then22:05
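
Sketch of the reserve-owns-the-attachment flow hemna_ and mriedem converge on here; proposal only (today reserve_volume returns nothing, and the attachment_id kwargs are hypothetical):

```python
# Hypothetical flow: reserve creates the attachment row, returns its
# id, and nova threads that id through every later Cinder call.
def attach_flow(volume_api, context, volume_id, connector):
    attachment_id = volume_api.reserve_volume(      # proposed: returns id
        context, volume_id)
    try:
        # Proposed kwarg: look up the existing attachment instead of
        # creating a new one on every initialize_connection call.
        connection_info = volume_api.initialize_connection(
            context, volume_id, connector, attachment_id=attachment_id)
    except Exception:
        # Attach failed: unreserve can delete exactly this attachment.
        volume_api.unreserve_volume(context, volume_id,
                                    attachment_id=attachment_id)
        raise
    return connection_info, attachment_id

# Later, detach deletes just that one attachment:
# volume_api.detach(context, volume_id, attachment_id)
```
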
hemna_in the case of nova doing swap volume, it should be a new attachment really22:05
hemna_because it's a different volume22:05
mriedemyes22:05
hemna_the old attachment should be marked as detached, and the new volume attachment gets its new attachment table entry and attachment_id.22:05
hemna_but, nova doesn't call reserve currently22:05
hemna_for swap22:05
mriedemok, so this is making more sense to me than option #2 now, but would need to know why creating a new attachment entry on every call to initialize_connection in option 2 wasn't a problem22:05
hemna_it was a problem22:06
hemna_it was overlooked IMHO22:06
hemna_I tried raising it several times.22:06
mriedemhemna_: nova reserves the new volume in the api for swap22:06
mriedemself.volume_api.reserve_volume(context, new_volume['id'])22:06
hemna_ah ok that's good22:06
mriedemand if swap fails22:06
mriedemself.volume_api.roll_detaching(context, old_volume['id'])22:06
mriedem                self.volume_api.unreserve_volume(context, new_volume['id'])22:06
hemna_and then it gets into the https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L489822:06
mriedemso we should be covered there22:06
mriedemyeah, nova-api does the reserve call to cinder, then casts to nova compute to do the actual detach/attach of the old/new volumes22:07
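
Pulling the quoted pieces together, a simplified sketch of the swap_volume error handling as it sits in nova-api; this mirrors the calls pasted above but is not a verbatim excerpt:

```python
# nova-api reserves the new volume, casts to the compute, and rolls
# both volumes back in Cinder if the swap fails.
from oslo_utils import excutils

def swap_volume(self, context, instance, old_volume, new_volume):
    self.volume_api.begin_detaching(context, old_volume['id'])
    self.volume_api.reserve_volume(context, new_volume['id'])
    try:
        self.compute_rpcapi.swap_volume(
            context, instance=instance,
            old_volume_id=old_volume['id'],
            new_volume_id=new_volume['id'])
    except Exception:
        with excutils.save_and_reraise_exception():
            # Undo the state changes before the compute touched anything.
            self.volume_api.roll_detaching(context, old_volume['id'])
            self.volume_api.unreserve_volume(context, new_volume['id'])
```
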
hemna_I see the finally clause there22:07
hemna_it nukes the old attachment it looks like22:08
hemna_I presume that migrate_volume_completion takes care of the correct state on the volume in that case.22:08
hemna_gawd this is confusing22:08
hemna_ildikov, I think we burned through our time here?22:10
mriedem:) now you want gate tests too?22:10
hemna_mriedem, yes22:10
mriedemmuwahahaha22:10
scottdaYeah, I gotta go myself.22:10
mriedemok, me too22:10
hemna_that panic feeling sets in....22:10
scottdaMaybe more talk in IRC, else see you all in Austing22:10
ildikovhemna_: I still have more than 5 hours till the taxi picks me up, so it's fine :)22:10
scottdadamn! I keep adding a 'g' to Austin22:10
hemna_ildikov, :)22:11
scottdaok, bye everyone22:11
mriedemlater22:11
ildikovC U in Austin22:11
hemna_ok I have to run to a meeting...22:11
*** mriedem has left #openstack-meeting-cp22:12
ildikovthanks guys, let's sort these things out next week22:12
ildikovsafe travels22:12
ildikov#endmeeting22:12
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"22:12
openstackMeeting ended Wed Apr 20 22:12:21 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:12
openstackMinutes:        http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-04-20-21.00.html22:12
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-04-20-21.00.txt22:12
openstackLog:            http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-04-20-21.00.log.html22:12