Thursday, 2017-03-02

17:00 <ildikov> #startmeeting cinder-nova-api-changes
17:00 <openstack> Meeting started Thu Mar  2 17:00:30 2017 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00 *** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)"
17:00 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
17:00 <jungleboyj> o/
17:00 <lyarwood> o/
17:00 <ildikov> DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood
17:00 <breitz> o/
17:00 <mriedem> lyarwood:
17:01 <mriedem> oh he's here
17:01 <mriedem> o/
17:01 <ildikov> :)
17:01 <ildikov> hi all :)
17:01 <lyarwood> mriedem: I can leave if you want to say something ;)
17:01 <jungleboyj> ildikov: You survived MWC?
17:01 <mriedem> lyarwood: no i want you to be very involved
17:02 <mriedem> ildikov: i'm out in about a half hour
17:02 <mriedem> fyi
17:02 <ildikov> jungleboyj: I have very small brain activity right now, but it's enough to keep me alive and breathing :)
17:02 <jungleboyj> ildikov: :-)
17:02 <ildikov> mriedem: thanks for the note, then let's start with the activities in Nova
17:03 <ildikov> the remove check_attach patch got merged, thanks to everyone who helped me out!
17:03 <ildikov> one small thing is out of the way
17:03 <ildikov> we have a bunch of things up for review under the Nova bp: https://review.openstack.org/#/q/topic:bp/cinder-new-attach-apis
17:04 <ildikov> lyarwood is working on some refactoring of the BDM and detach code
17:04 * johnthetubaguy wanders into the room a touch late
17:04 <lyarwood> yeah, just trying to get things cleaned up before we introduce v3 code
17:05 <lyarwood> the bdm UUID stuff isn't directly related btw, we've been trying to land it for a few cycles
17:05 <lyarwood> but I can drop it if it isn't going to be used directly in the end here
17:05 <ildikov> johnthetubaguy: mriedem: is there anything in that refactor code that needs to be discussed? Or does the overall direction look good?
17:06 <johnthetubaguy> moving detach into the BDM makes sense to me
17:06 <johnthetubaguy> I would love to see the "if use_new_api" branches limited to a single module, ideally
17:06 <mriedem> move detach into the bdm?
17:06 <mriedem> i haven't seen that
17:06 <lyarwood> into the driver bdm
17:06 <mriedem> oh
17:06 <lyarwood> https://review.openstack.org/#/c/439520/
17:06 <johnthetubaguy> yeah, sorry, that
17:06 <mriedem> i saw the change to call detach before destroying the bdm, and left a comment
17:07 <mriedem> makes sense, but i feel there are hidden side effects
17:07 <lyarwood> there's still a load of volume_api code in the compute api that I haven't looked at yet
17:07 <mriedem> because that seems *too* obvious
17:07 <johnthetubaguy> it needs splitting into two patches for sure
17:07 <mriedem> on the whole i do agree that having the attach code in the driver bdm and the detach code in the compute manager separately has always been confusing
17:07 <lyarwood> johnthetubaguy: https://review.openstack.org/#/c/440693/1
17:07 <mriedem> yes https://review.openstack.org/#/c/440693/ scares me
17:08 <mriedem> not for specific reasons
17:08 <mriedem> just voodoo
17:08 <lyarwood> bdm voodoo
17:08 <johnthetubaguy> heh, yeah
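
[Editor's sketch: the refactor linked above (439520) moves volume detach out of the compute manager and onto the driver block device mapping class, next to the existing attach(). The outline below only illustrates that shape -- the class and collaborator names follow nova's nova/virt/block_device.py, but the method body is hypothetical, not the actual patch.]

    # Illustrative sketch only, not the real change.
    class DriverVolumeBlockDevice(DriverBlockDevice):

        def detach(self, context, instance, volume_api, virt_driver):
            """Tear down the host connection and detach the volume,
            mirroring how attach() already lives on this class."""
            volume_id = self['volume_id']
            connector = virt_driver.get_volume_connector(instance)
            # End the host-side connection first...
            volume_api.terminate_connection(context, volume_id, connector)
            # ...then mark the volume detached on the cinder side.
            volume_api.detach(context, volume_id, instance.uuid,
                              attachment_id=self['attachment_id'])
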
17:09 <mriedem> the cinder v3 patch from scottda seems to have stalled a bit
17:09 <mriedem> i think that's an easy add which everyone agrees on,
17:09 <mriedem> and that should also make cinder v3 the default config
17:09 <johnthetubaguy> yeah, +1 keeping that one moving
17:09 <johnthetubaguy> thats the one with the context change, etc
17:09 <johnthetubaguy> well, on top of that
17:10 <mriedem> yeah the context change was merged in ocata i think
17:10 <mriedem> it's already merged anyway
17:10 <johnthetubaguy> yeah, that should keep moving
17:10 <lyarwood> should we track the v3 by default patches against the bp?
17:10 <ildikov> mriedem: I can pick the v3 switch up
17:10 <johnthetubaguy> lyarwood: probably
17:11 <ildikov> lyarwood: we have them on this etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
17:11 <lyarwood> ildikov: thanks
17:11 <ildikov> I wouldn't track it with the BP as my impression was that we want that in general
17:11 <mriedem> lyarwood: we want the cinder v3 thing regardless
17:11 <mriedem> ildikov: right +1
17:12 <lyarwood> understood
17:12 <mriedem> so to keep things moving at a minimum this week,
17:12 <mriedem> i think we do the bdm uuid and attachment_id changes
17:12 <ildikov> microversions were/are the question
17:12 <mriedem> and then also cinder v3
17:12 <mriedem> those are pretty easy changes to get done this week
17:12 <mriedem> as nothing is using them yet
17:12 <johnthetubaguy> we just request 3.0 to start with I assume?
17:12 <ildikov> I wonder whether we can switch to v3 and then add the negotiation for the mv
17:13 <ildikov> mriedem: +1
17:13 *** stvnoyes has joined #openstack-meeting-cp
17:13 <mriedem> mv?
17:14 <ildikov> microversion :)
17:14 <johnthetubaguy> yeah, I thought we said just use the base version for now, and get that passing in the gates right?
17:14 <mriedem> johnthetubaguy: sure yeah
17:14 * ildikov is too lazy to type the whole word all the time... :)
17:14 <mriedem> let's not overcomplicate the base change
17:14 <ildikov> my thinking as well
17:14 <johnthetubaguy> mriedem: +1
17:14 <mriedem> default to cinder v3, make sure it's there
17:14 <mriedem> release note
17:14 <mriedem> etc etc
17:14 <johnthetubaguy> sounds like we have lyarwood's BDM schema changes that can keep going also?
17:15 <mriedem> yes
17:15 <ildikov> and the earlier we switch the more testing we get, even if 3.0 should be the same as v2
17:15 <mriedem> those patches are what we should get done this week
17:15 <lyarwood> I can get that done
17:15 <johnthetubaguy> yeah, sounds good
17:15 <mriedem> bdm schema changes (attachment_id and uuid) and cinder v3
17:15 <johnthetubaguy> yup, yup
17:15 <mriedem> i think we want to deprecate nova using cinder v2 also, but that can be later and separate
17:16 <johnthetubaguy> so, next week (still ignoring the spec), whats the next bit?
17:16 <mriedem> but start the timer in pike
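
[Editor's sketch: "default to cinder v3" means flipping the cinder section of nova.conf so the volume endpoint is looked up as a v3 service. Assuming the catalog_info option and its <service_type>:<service_name>:<endpoint_type> format from nova's configuration of this era, the switch would look roughly like this; the exact default strings are assumed.]

    # nova.conf -- illustrative only
    [cinder]
    # Old lookup, volume v2 API:
    #   catalog_info = volumev2:cinderv2:publicURL
    # After the switch, volume v3 API:
    catalog_info = volumev3:cinderv3:publicURL
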
17:16 <johnthetubaguy> I guess thats lyarwood's refactor stuff?
17:16 <mriedem> probably, and version negotiation
17:16 <mriedem> to make sure 3.27 is available
17:16 <johnthetubaguy> deciding yes or no to the BDM voodoo worries
17:16 <johnthetubaguy> true, we can put the ground work in for doing detach
17:18 <johnthetubaguy> so if we are good with that, I have spec open questions to run through?
17:18 <ildikov> we can move forward with the attach/detach changes
17:18 <johnthetubaguy> no... only detach
17:18 <johnthetubaguy> leaving attach to the very, very end I think
17:18 <ildikov> and have the microversion negotiation as a separate small change that needs to make it before anything else anyhow
17:19 <ildikov> johnthetubaguy: I meant the code to see how it works and not to merge everything right now
17:19 <ildikov> johnthetubaguy: but agree on finalizing detach first
17:19 <johnthetubaguy> yeah, could do attach WIP on the end, so we can test detach for real
17:19 <johnthetubaguy> we probably should actually
17:20 <ildikov> my point was mainly that we should not stop coding detach just because the mv negotiation is not fully baked yet
17:20 <johnthetubaguy> mv negotiation I thought was easy though
17:20 <ildikov> johnthetubaguy: jgriffith has a WIP up that does attach
17:20 <johnthetubaguy> either 3.27 is available, or its not right?
17:20 <jungleboyj> Right.
17:21 <jungleboyj> And the agreement was we would fall back to base V3 if 3.27 isn't available.
17:21 <ildikov> johnthetubaguy: I need to check whether the cinderclient changes are merged to get the highest supported version
17:21 <johnthetubaguy> is_new_attach_flow_available = True or False
17:21 <ildikov> something like that, that's easy
17:21 <johnthetubaguy> ildikov: we don't want the highest supported version though, we want 3.27, or do you mean we need cinderclient to support 3.27?
17:21 <ildikov> just currently Nova tells Cinder what version it wants
17:22 <johnthetubaguy> right, we currently always want the base version
17:22 <jungleboyj> Right, need cinderclient to say what it can support.  I am not sure if that code is in yet.
17:22 <ildikov> jungleboyj: me neither
17:22 <johnthetubaguy> we can get the list of versions and see if 3.27 is available
17:22 <johnthetubaguy> ah, right, we need that I guess
17:23 <johnthetubaguy> I mean its a simple REST call, so we could hold that patch if it stops us moving forward I guess
17:23 <johnthetubaguy> anyways, that seems OK
17:23 <johnthetubaguy> thats next weeks thing to chase
17:23 <ildikov> johnthetubaguy: it's just a matter of agreeing how we get what version is supported by Cinder and then act accordingly in Nova
17:24 <johnthetubaguy> mriedem: can you remember what we do in novaclient for that discovery bit?
17:24 <mriedem> https://review.openstack.org/#/c/425785/
17:24 <ildikov> johnthetubaguy: so for now just go with base v3 and see what made it into Cinder and add the rest as soon as we can
17:24 <mriedem> the cinder client version discovery is merged, but not released
17:24 <mriedem> smcginnis: need ^ released
17:24 <johnthetubaguy> cool, thats simple then
17:25 <ildikov> ok, I remember now, it needed a few rounds of rechecks to make it in
17:25 <jungleboyj> Argh, mriedem beat me to it.
17:25 <mriedem> ildikov: so add that to the list of things to do - cinderclient release
17:25 <jungleboyj> mriedem: +2
17:25 <jungleboyj> mriedem: You want me to take that over to Cinder?
17:25 <johnthetubaguy> #action need new cinder client release so we have access to get_highest_client_server_version https://review.openstack.org/#/c/425785/
17:26 <mriedem> jungleboyj: sure, anyone can propose the release
17:26 <mriedem> smcginnis has to sign off
17:26 <jungleboyj> I'll take that.
17:26 <mriedem> make sure you get the semver correct
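
[Editor's sketch: the negotiation agreed on above -- ask cinder for the highest microversion both client and server support (via the get_highest_client_server_version helper named in the #action), take the new attachment flow only when 3.27 is available, and otherwise fall back to base v3. The helper's exact signature and return type are assumed from the linked review, so treat this as pseudocode.]

    # Pseudocode; cinderclient helper signature assumed, not verified.
    from cinderclient import client as cinder_client

    NEW_FLOW_MICROVERSION = 3.27

    def is_new_attach_flow_available(cinder_endpoint_url):
        """True if both cinderclient and the cinder server support the
        3.27 attachment APIs; otherwise nova stays on base v3."""
        highest = cinder_client.get_highest_client_server_version(
            cinder_endpoint_url)
        return highest >= NEW_FLOW_MICROVERSION
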
17:27 <ildikov> mriedem: added a short list to the etherpad too with the small items in the queue we agreed on
17:27 <johnthetubaguy> #link https://etherpad.openstack.org/p/cinder-nova-api-changes
17:29 <johnthetubaguy> I would love to go over the TODOs in the spec
17:30 <johnthetubaguy> does that make sense to do now?
17:30 <ildikov> johnthetubaguy: I have to admit I couldn't get to the very latest version of the spec yet
17:30 <johnthetubaguy> thats fine, this is basically loose ends that came up post PTG
17:30 <johnthetubaguy> #link https://review.openstack.org/#/c/373203/17/specs/pike/approved/cinder-new-attach-apis.rst
17:31 <ildikov> johnthetubaguy: but we can start to go over the TODOs
17:31 <johnthetubaguy> the first one is the shared storage connection stuff
17:31 <johnthetubaguy> I am tempted to say we delay that until after we get all the other things sorted
17:31 <johnthetubaguy> lyarwood: do you think thats possible? ^
17:31 <lyarwood> +1
17:31 <lyarwood> yeah
17:32 <johnthetubaguy> there just seem to be complications we should look at separately there
17:32 <jungleboyj> That makes sense.  We need everything else stabilized first.
17:32 <johnthetubaguy> cool, we can do that then, split that out
17:32 <ildikov> johnthetubaguy: that will only be problematic with multi-attach, right?
17:32 <johnthetubaguy> ildikov: I believe so, I think its new attachments, shared connections, multi-attach
17:33 <johnthetubaguy> #action split out shared host connections from use new attachment API spec
17:33 <johnthetubaguy> I think I am happy the correct things are possible
17:33 <johnthetubaguy> (which is probably a bad sign, but whatever)
17:33 <johnthetubaguy> so the next TODO
17:33 <johnthetubaguy> evacuate
17:33 <johnthetubaguy> how do you solve a problem like... evacuate
17:34 * jungleboyj runs for the fire exit
17:34 <johnthetubaguy> heh
17:34 <johnthetubaguy> so mdbooth added a great comment
17:34 <johnthetubaguy> if you evacuate an instance, thats cool
17:34 <johnthetubaguy> now detach a few volumes
17:34 <johnthetubaguy> then a bit later delete the instance
17:35 <johnthetubaguy> some time later the old host comes back from the dead and needs to get cleaned up
17:35 <johnthetubaguy> we kinda have a few options:
17:35 <johnthetubaguy> (1) leave attachments around so we can try to detect them (although finding them could be quite hard from just a migration object, when the instance has been purged from the DB)
17:36 <johnthetubaguy> (2) delete attachments when we have success on the evacuate on the new destination host, and leave the hypervisor driver to find the unexpected VMs (if possible) and do the right thing with the backend connections where it can
17:37 <johnthetubaguy> (3) just don't allow an evacuated host to restart until the admin has manually tidied things up (aka re-imaged the whole box)
17:37 <lyarwood> there's another option
17:37 <lyarwood> update the attachment with connector=None
17:37 <jungleboyj> johnthetubaguy: I think number two makes the most sense to me.  If the instance has successfully evacuated to the new host those connections aren't needed on the original.  Right?
17:37 <lyarwood> that's the same as terminate-connection right?
17:38 <johnthetubaguy> lyarwood: but why not just delete it?
17:38 <jungleboyj> johnthetubaguy: ++
17:38 <lyarwood> johnthetubaguy: so the source host is aware that it has to clean up
17:38 <lyarwood> if it ever comes back
17:38 <johnthetubaguy> lyarwood: but the source host is only stored in the connector I believe?
17:38 <ildikov> I think attachment-delete will clean up the whole thing on the Cinder side
17:39 <ildikov> so no more terminate-connection
17:39 <johnthetubaguy> so the problem, not sure if I stated it, is finding the volumes
17:39 <johnthetubaguy> if the volume is detached...
17:39 <johnthetubaguy> after evacuate
17:40 <ildikov> is the problem here that when the host comes up then it tries to re-initiate the connection, etc?
17:40 <johnthetubaguy> we get the instance, we know it was evacuated, but we only check half of the volumes to clean up
17:40 <johnthetubaguy> its trying to kill the connection
17:40 <johnthetubaguy> the vm has moved elsewhere, so to stop massive data corruption, we have to kill the connection
17:41 *** antwash has left #openstack-meeting-cp
17:41 <johnthetubaguy> so I got distracted there
17:42 <johnthetubaguy> yeah, we have the migration object and instances to tell us the evacuate happened
17:42 <ildikov> attachment-delete should take care of the target, the question is what happens when the host comes up again, I think that was raised last week, but I will read the comment properly now in the spec :)
17:42 <johnthetubaguy> but its hard to always get the full list of volume attachments we had pre-evacuate at that point
17:43 <lyarwood> johnthetubaguy: that's why I suggested only updating the attachment
17:43 <ildikov> on the other hand I guess if the original host is not really down, nor fenced properly, then we just look in the other direction like nothing happened and it's surely not our fault :)
17:43 <lyarwood> johnthetubaguy: each attachment is unique to a host and volume right?
17:43 <johnthetubaguy> lyarwood: you mean reuse the same attachment on the new host?
17:44 * johnthetubaguy is failing to visualise the change
17:44 <lyarwood> johnthetubaguy: no, just keep the old ones with a blank connector that I assumed would kill any export / connection
17:44 <johnthetubaguy> so I don't think that fixes the problem we are facing
17:44 <ildikov> lyarwood: currently you cannot update without a connector
17:44 <johnthetubaguy> if the user does detach, we can't find that volume on the evacuated host
17:44 <ildikov> lyarwood: and attachment-delete will kill the export/connection
17:45 *** ducttape_ has quit IRC
17:45 <lyarwood> johnthetubaguy: but that would be a detach of the volume on the new host using a new attachment
17:46 <johnthetubaguy> lyarwood: thats fine, the problem is we still need to tidy up on the old host, but we didn't know we had to
17:46 <lyarwood> johnthetubaguy: right so there's additional lookup logic needed there
17:46 <lyarwood> johnthetubaguy: that we don't do today
17:46 <johnthetubaguy> I guess we might need an API to list all valid attachments in Cinder, then get all attachments on the host from os.brick, and delete all the ones that shouldn't be there
17:46 <johnthetubaguy> lyarwood: the problem is we don't have that data right now
17:46 <johnthetubaguy> or there is no way to get that data, I mean
17:46 <johnthetubaguy> because we deleted the records we wanted
17:47 <johnthetubaguy> I'm thinking about this function: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L624
17:47 <jungleboyj> johnthetubaguy: Wasn't that proposed at the PTG?
17:47 <johnthetubaguy> jungleboyj: we thought we could get the data at the PTG, but I don't think we can
17:47 <johnthetubaguy> if you delete the instance, or detach volumes, we lose the information we were going to use to look up what needs cleaning up
17:47 <johnthetubaguy> (I think...)
17:48 <ildikov> johnthetubaguy: how does it work today?
17:48 <lyarwood> johnthetubaguy: the data within nova yes, but we can reach out to cinder and ask for any additional attachments not associated with a bdm (from being detached etc)
17:49 <johnthetubaguy> lyarwood: I don't think cinder has those APIs today
17:49 <johnthetubaguy> lyarwood: if we delete the data in cinder it doesn't have it either
17:49 <johnthetubaguy> so... lets back up to what ildikov said, and I forgot
17:49 <johnthetubaguy> step 1: be as rubbish as today
17:49 <johnthetubaguy> step 2: be less rubbish
17:50 <johnthetubaguy> I was jumping to step 2 again, my bad
17:50 <johnthetubaguy> so lets leave step 2 for another spec
17:50 <ildikov> johnthetubaguy: I'm not saying not to be less rubbish
17:50 <johnthetubaguy> yep, yep
17:50 <ildikov> I just hoped we're not that rubbish today and might see something in the old flow we're missing in the new...
17:51 <johnthetubaguy> yeah, me too, thats my new proposal
17:51 <johnthetubaguy> I forgot, we have the instances on the machine
17:51 <jungleboyj> :-)  One step at a time.
17:51 <johnthetubaguy> we destroy those
17:51 <johnthetubaguy> so, worst case we just have dangling connections on the host
17:52 <johnthetubaguy> like stuff we can't call os.brick for
17:52 <johnthetubaguy> but I think we can always get a fresh connector, which is probably good enough
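
[Editor's sketch: the "fresh connector" is the dict of host-side connection properties that os.brick can regenerate from the local host at any time, which is what makes some cleanup possible even after the stored records are gone. This assumes os-brick's get_connector_properties helper; the argument values are placeholders, not nova's exact call.]

    # Illustrative only.
    from os_brick.initiator import connector

    # Rebuild the host's connector properties (initiator name, IP, etc.)
    # from scratch -- no stored BDM or attachment record required.
    fresh_connector = connector.get_connector_properties(
        root_helper='sudo',   # privilege escalation command (assumed)
        my_ip='192.0.2.10',   # this host's block storage IP (placeholder)
        multipath=False,
        enforce_multipath=False)
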
17:52 <breitz> it seems dangerous to rely on any post-evacuate cleanup (for things outside of nova - i.e. a cinder attachment) on the original source host.  it seems like those things need to be cleaned up as part of the evacuate itself.  but perhaps i'm not understanding this correctly.
17:53 <johnthetubaguy> breitz: evacuate is when the source host is dead, turned off, and possibly in a skip
17:53 <breitz> right
17:53 <johnthetubaguy> but sometimes, the host is brought back from the dead
17:53 <breitz> but the world moves on - so when that host comes back - we can't rely on it.
17:53 <johnthetubaguy> if we just kill all the instances, we avoid massive data loss
17:54 <breitz> sure - can't allow the instances to come back
17:54 <johnthetubaguy> we keep migration records in the DB, so we can tell what has been evacuated, even if the instances are destroyed
17:54 <johnthetubaguy> breitz: totally, we do that today
17:54 <breitz> right
17:54 <johnthetubaguy> my worry is we don't have enough data about the instance to clean up the volumes using os.brick disconnect
17:54 <johnthetubaguy> but if we don't have that today, then whatever I guess
17:55 <johnthetubaguy> so this goes back to
17:55 <breitz> that I get - that cleanup is what I'm saying needs to be done when moving to the new dest.
17:55 <ildikov> johnthetubaguy: if we remove the target on the Cinder side, that should destroy the connection, or does it not?
17:55 <johnthetubaguy> breitz: but it can't be done, the host is dead? I am just worrying about the clean up on that host
17:55 <breitz> and somehow that info needs to be presented.  not wait until the orig source comes back up to do it.
17:56 <johnthetubaguy> I think we just delete the attachments in cinder right away, to do the cinder tidy up
17:56 <lyarwood> ildikov: we terminate the original host's connections in cinder today
17:56 <johnthetubaguy> lyarwood: I see what you are saying now, we should keep doing that
17:56 <johnthetubaguy> which means delete the attachments now, I think
17:56 <breitz> yes - do the delete attachments right away.
17:56 <ildikov> johnthetubaguy: lyarwood: yep, that's what I said too
17:56 <lyarwood> johnthetubaguy: right, update would allow some cleanup but delete would be in line with what we do today
17:56 <johnthetubaguy> lyarwood: the bit I was missing is we do terminate today
17:57 <lyarwood> johnthetubaguy: yeah we do
17:57 <lyarwood> johnthetubaguy: via detach in rebuild_instance
17:57 <ildikov> johnthetubaguy: delete is supposed to do that for you in the new API
17:57 *** mriedem has quit IRC
17:57 <johnthetubaguy> it makes sense, I just forgot that
17:57 <johnthetubaguy> yeah, so right now that means delete attachment
17:57 <johnthetubaguy> which isn't ideal, but doesn't make things any worse
17:57 <johnthetubaguy> lets do that
17:58 <ildikov> lyarwood: update at the moment is more about finalizing attachment_create
17:58 <lyarwood> understood
17:58 <ildikov> lyarwood: you cannot update without a connector as that would also mean you're putting the volume back to 'reserved' state and you don't want to do that here
17:58 <lyarwood> ah
17:59 <johnthetubaguy> ildikov: we kinda do that by creating a second attachment instead
17:59 <ildikov> lyarwood: and most probably not in general either
17:59 <ildikov> johnthetubaguy: yes, but that reserves the volume for the new host at least
17:59 <johnthetubaguy> so during the evacuate we don't want someone else "stealing" the volume
17:59 <johnthetubaguy> the new attachment does that fine
18:00 <johnthetubaguy> we just need to create the new one (for the new host) before we delete the old one
18:00 <johnthetubaguy> lyarwood: is that the order today?
18:00 <ildikov> that will work
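
[Editor's sketch: the ordering just agreed for evacuate under the new attachment API -- create the destination host's attachment first so the volume stays reserved, then delete the source host's attachment, which in the new API also tears down the old export/connection. attachment_create and attachment_delete are the cinder APIs discussed above; the surrounding function and signatures are hypothetical.]

    # Illustrative pseudocode for the rebuild/evacuate path.
    def switch_attachment_for_evacuate(volume_api, ctxt, bdm, instance):
        # 1. Reserve the volume for the destination host first, so
        #    nobody else can "steal" it mid-evacuate.
        new_attachment = volume_api.attachment_create(
            ctxt, bdm.volume_id, instance.uuid)
        # 2. Delete the old attachment; this also cleans up the source
        #    host's export/connection on the cinder side.
        volume_api.attachment_delete(ctxt, bdm.attachment_id)
        # 3. Point the BDM at the new attachment.
        bdm.attachment_id = new_attachment['id']
        bdm.save()
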
18:00 <lyarwood> johnthetubaguy: hmmm I think we terminate first
18:02 <lyarwood> johnthetubaguy: yeah in _rebuild_default_impl we detach, which in turn terminates the connections first before spawn, then initializes the connection on the new host
18:02 <johnthetubaguy> lyarwood: I wonder about creating the attachment in the API, and adding it into the migration object?
18:02 <johnthetubaguy> ah, right, there it is: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2651
18:02 <lyarwood> johnthetubaguy: yeah, we could update after that
18:03 <johnthetubaguy> lyarwood: thats probably simpler
18:04 <johnthetubaguy> lyarwood: I was actually wondering, in the live-migrate case we have two sets of attachment_ids; when do we update the BDM? I guess on success, so maybe keep the pending ones in the migration object?
18:04 <lyarwood> johnthetubaguy: yup, we almost did the same with connection_info in the last cycle
18:05 <johnthetubaguy> lyarwood: yeah, thats why I was thinking in evacuate we could copy that
18:05 <johnthetubaguy> but I am probably overthinking that one
18:05 <johnthetubaguy> your idea sounds simpler
18:05 <johnthetubaguy> works on both code paths too
18:05 <ildikov> +1 on similar solutions
18:06 <lyarwood> cool, I need to drop now, I'll follow up with the BDM uuid and attachment_id patches in the morning \o_
18:07 <ildikov> we're out of time for today
18:07 <johnthetubaguy> yeah, I need to run too
18:07 <johnthetubaguy> thanks all
18:07 <ildikov> johnthetubaguy: can we consider evacuate good for now?
18:07 <ildikov> johnthetubaguy: or will it need more chats at the next meeting?
18:07 <johnthetubaguy> yep, thats both of my TODOs covered
18:08 <ildikov> johnthetubaguy: great, thanks for confirming
18:08 <ildikov> ok let's focus on the smaller items to merge this week and until the next meeting
18:08 <ildikov> thank you all!
18:08 <johnthetubaguy> would love +1s or -1s on the spec too :)
18:09 <ildikov> #action everyone to review the Nova spec!
18:09 <jungleboyj> ++
18:09 <ildikov> johnthetubaguy: ack :)
18:09 <ildikov> #endmeeting
18:09 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"
18:09 <openstack> Meeting ended Thu Mar  2 18:09:28 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
18:09 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-02-17.00.html
18:09 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-02-17.00.txt
18:09 <openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-02-17.00.log.html
18:10 <jungleboyj> Thanks all!
