Thursday, 2017-03-23

17:00 <ildikov> #startmeeting cinder-nova-api-changes
17:00 <openstack> Meeting started Thu Mar 23 17:00:14 2017 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00 *** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)"
17:00 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
17:00 <ildikov> DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood breitz
17:00 <jungleboyj> o/
17:00 <smcginnis> o/
17:00 <jungleboyj> smcginnis: Jinx
17:00 <mriedem> o/
17:01 <jungleboyj> ildikov: Can you add me to your ping list please?
17:01 <ildikov> jungleboyj: sure
17:01 <johnthetubaguy> o/
17:01 <jungleboyj> Thank you.
17:01 <ildikov> jungleboyj: done
17:01 <ildikov> hi all
17:01 <ildikov> let's start :)
17:02 <ildikov> so we have lyarwood's patches out
17:02 <ildikov> I saw debates on get_attachment_id
17:03 <ildikov> is there a chance to get to a conclusion on what to do with that?
17:03 <ildikov> johnthetubaguy: did you have a chance to catch mdbooth?
17:03 <mdbooth> He did
17:04 <johnthetubaguy> yeah, I think we are on the same page
17:04 <johnthetubaguy> it shouldn't block anything
17:04 <mriedem> i'm +2 on the bdm.attachment_id changes
17:04 <mriedem> or was
17:04 <johnthetubaguy> yeah, I am basically +2 as well, but mdbooth suggested waiting to see the first patch that reads the attachment_id, which I am fine with
17:04 <ildikov> mdbooth: johnthetubaguy: what's the conclusion?
17:05 <johnthetubaguy> basically waiting to get the patch on top of that which uses the attachment id
17:05 <johnthetubaguy> which is the detach volume patch, I think
17:05 * mdbooth thought they looked fine, but saw no advantage merging them until they're used. Merging early risks a second db migration if we didn't get it right first time.
17:06 * mdbooth is uneasy about not having an index on attachment_id, but can't be specific about that without seeing a user.
17:06 <ildikov> is it a get_by_attachment_id that the detach patch uses? I think it's not
17:07 <ildikov> it seems like we're getting ourselves into a catch-22 with two cores on the +2 side
17:07 <mriedem> we need jgriffith's detach patch restored and rebased
17:07 <jgriffith> sorry... late
17:07 <mriedem> https://review.openstack.org/#/c/438750/
17:08 <mriedem> https://review.openstack.org/#/c/438750/2/nova/compute/manager.py
17:08 <mriedem> that checks if the bdm.attachment_id is set and makes the detach call decision based on that
17:08 <mriedem> we list the BDMs by instance, not attachment id
17:08 <mriedem> but we check the attachment_id to see if we detach with the new api or old api
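
A minimal sketch of the detach decision mriedem describes above. The helper and parameter names (volume_api, attachment_delete, etc.) are assumptions for illustration, not the exact Nova code:

    # Per-BDM branch between the legacy detach calls and the new
    # Cinder attachments-API path, keyed off bdm.attachment_id.
    def detach_volume(volume_api, context, instance, bdm, connector):
        if bdm.attachment_id is None:
            # Attached before the new attachments API existed: legacy calls.
            volume_api.terminate_connection(context, bdm.volume_id, connector)
            volume_api.detach(context, bdm.volume_id, instance.uuid)
        else:
            # An attachment record exists, so the new API handles teardown.
            volume_api.attachment_delete(context, bdm.attachment_id)
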
17:08 <johnthetubaguy> mriedem: +1
17:09 <jgriffith> mriedem that's wrong :)
17:09 <ildikov> mriedem: but that will not get the get_by_attachment_id usage for us
17:09 <mriedem> ildikov: we don't need get_by_attachment_id
17:09 <mriedem> jgriffith: why is that wrong?
17:10 <jgriffith> mriedem never mind
17:10 <ildikov> mriedem: I thought that's why we hold the attachment_id patches
17:10 <jgriffith> mriedem I don't think I'm following your statement... going through it again
17:10 <mriedem> jgriffith: so are you planning to restore and rebase that on top of https://review.openstack.org/#/c/437665/
17:10 <mriedem> ildikov: that's why mdbooth wanted to hold them,
17:10 <jgriffith> mriedem when it looks like it's going to land, yes
17:11 <mriedem> which i don't agree with
17:11 <jgriffith> mriedem wait... don't agree with what?
17:11 <mriedem> jgriffith: i'm ready to go with those bdm.attachment_id changes
17:11 <mriedem> sorry,
17:11 <mriedem> i don't agree that we need to hold the bdm.attachment_id changes
17:11 <ildikov> mriedem: I'm on that page too
17:11 <mriedem> they are ready to ship
17:11 <mriedem> https://review.openstack.org/#/c/437665/ and below
17:11 <ildikov> johnthetubaguy: ^^?
17:11 <jgriffith> mriedem then yes, I'll do that today, I would LOVE to do it after it merges if possible :)
17:11 <mdbooth> mriedem: I don't necessarily disagree, btw. They're just not being used yet, so I thought we might as well merge them later.
17:11 <jgriffith> so in an hour or so :)
17:12 <mriedem> mdbooth: they aren't being used b/c jgriffith is waiting for us to merge them
17:12 <mriedem> hence the catch-22
17:12 <mriedem> so yeah we agree, let's move forward
17:12 <mdbooth> mriedem: Does jgriffith have a nova patch which uses them?
17:12 <mdbooth> These aren't exposed externally yet.
17:12 <jgriffith> mdbooth I did, yes
17:12 <ildikov> mriedem: thanks for confirming :)
17:12 <mriedem> mdbooth: https://review.openstack.org/#/c/438750/ will, yes
17:12 <johnthetubaguy> I could go either way, merge now, or rebase the old patch on top. Doing neither I can't subscribe to.
17:13 <ildikov> johnthetubaguy: mriedem: let's ship it
17:13 <ildikov> johnthetubaguy: mriedem: one item done, move to the next one
17:13 * mdbooth is confused. That patch can't use attachment_id because it passed tests.
17:13 <mdbooth> And it doesn't have attachment_id
17:14 <mriedem> mdbooth: jgriffith and lyarwood originally wrote duplicate changes
17:14 <mriedem> at the PTG
17:14 <mriedem> or shortly after
17:14 <mriedem> jgriffith's patch ^ relied on his version of bdm.attachment_uuid
17:14 <ildikov> mdbooth: to be honest the detach flow is currently built on the assumption that we don't have attachment_id set and therefore it uses the old detach flow
17:14 <mriedem> we dropped those and went with lyarwood's
17:14 <mdbooth> Ok :)
17:14 <mriedem> so jgriffith will have to rebase and change attachment_uuid to attachment_id in his change
17:15 <mriedem> what's next?
17:15 <ildikov> mriedem: so do we ship or hold now?
17:16 <mriedem> we merge lyarwood's bdm.attachment_id changes
17:16 <ildikov> mriedem: awesome, thanks
17:16 <hemna> sweet
17:16 <mriedem> i plan on going over johnthetubaguy's spec again this afternoon but honestly,
17:16 <mriedem> at this point, barring any calamity in there,
17:16 <mriedem> i'm going to just approve it
17:16 <jungleboyj> mriedem: +2
17:16 <ildikov> mriedem: I would just ship that too
17:17 <ildikov> mriedem: looks sane to me and we cannot figure everything out on paper anyhow
17:17 <johnthetubaguy> we all agree on the general direction now, so yeah, let's steam ahead and deal with the icebergs as they come
17:17 <ildikov> johnthetubaguy: +1
17:18 <ildikov> we need to bring the cinderclient version discovery patch back to life
17:18 * ildikov looks at jgriffith :)
17:18 <jgriffith> ildikov johnthetubaguy so my understanding is you guys wanted that baked into the V3 detach patch
17:19 <jgriffith> so I plan on that being part of the refactor against lyarwood's changes
17:19 <mriedem> i'm not sure we need that,
17:19 <mriedem> if the bdm has attachment_id set,
17:19 <mriedem> it means we attached with a new enough cinder
17:19 <mriedem> so on detach, if we have bdm.attachment_id, i think we assume cinder v3 is there and just call the delete_attachment method or whatever
17:19 <johnthetubaguy> so I am almost +2 on that refactor patch, FWIW, there is just a tiny nit left
17:20 <ildikov> mriedem: we need that some time before implementing attach
17:20 <mriedem> the version checking happens in the attach method
17:20 <mriedem> s/method/patch/
17:20 <jgriffith> mriedem that works
17:20 <mriedem> ildikov:
17:20 <mriedem> yes it comes before the attach patch
17:20 <mriedem> which is the end
17:20 <ildikov> mriedem: ok, then we're kind of on the same page I think
17:20 <johnthetubaguy> yeah, attach is version negotiation time, everyone else just checks attachment_id to see if they have to use the new version or not
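
A hedged sketch of the split johnthetubaguy summarises above: only attach negotiates the Cinder version, and later operations just look at whether bdm.attachment_id was populated. The microversion value and helper names here are assumptions for illustration:

    NEW_ATTACH_MICROVERSION = '3.27'  # assumed microversion for the attachments API

    def attach_volume(volume_api, context, instance, bdm):
        if volume_api.supports_microversion(context, NEW_ATTACH_MICROVERSION):
            # New flow: create an attachment record and remember its id on the BDM.
            attachment = volume_api.attachment_create(
                context, bdm.volume_id, instance.uuid)
            bdm.attachment_id = attachment['id']
            bdm.save()
        else:
            # Legacy flow: reserve/initialize_connection/attach as before;
            # attachment_id stays None, so detach later picks the old path.
            volume_api.reserve_volume(context, bdm.volume_id)
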
17:20 <jgriffith> and just hope something with the stupid migrate or HA stuff didn't happen :)
17:21 <mriedem> well, i think we'd get a 400 back if the microversion isn't accepted
17:21 <mriedem> and we fail the detach
17:21 <jgriffith> mriedem correct
17:21 <mriedem> we can make the detach code smarter wrt version handling later if needed
17:21 <johnthetubaguy> that's all legit, seems fine
17:21 <jgriffith> mriedem and I can revert to the old call in the cinder method itself
17:21 <mriedem> i just wouldn't worry about that right now
17:21 <johnthetubaguy> no revert, just fail hard
17:21 <mriedem> right, hard fail
17:21 <jgriffith> ummm... ok, you's the bosses
17:22 <mriedem> we want to eventually drop the old calls,
17:22 <johnthetubaguy> so we only get in that mess if you downgraded Cinder after creating the attachment
17:22 <mriedem> so once we attach via the new path, we detach via the new path
17:22 <mriedem> or fail
17:22 <mriedem> yeah if you downgrade cinder, well,
17:22 <mriedem> that's on you
17:22 <johnthetubaguy> right, dragons
17:22 <ildikov> johnthetubaguy: if someone does that then I would guess they might have other issues too anyhow...
17:23 <johnthetubaguy> ildikov: right, it's just not... allowed
17:23 <jungleboyj> johnthetubaguy: I think that is a very unlikely edge case.
17:23 <jungleboyj> Something has really gone wrong in that case.
17:23 <mriedem> ok what's next?
17:23 <ildikov> certainly not something we would want to support or encourage people to do
17:23 <johnthetubaguy> ... so we fail hard in detach if attachment_id is present but the microversion request goes bad, simples
17:23 <mriedem> i think the detach patch is the goal for next week, yes?
17:24 <ildikov> I think we need to get detach working
17:24 <mriedem> is there anything else?
17:24 <johnthetubaguy> the refactor is the question
17:24 <mriedem> this? https://review.openstack.org/#/c/439520/
17:24 <johnthetubaguy> we could try to merge lyarwood's refactor first, I think it's crazy close to being ready after mdbooth's hard work reviewing it
17:25 <johnthetubaguy> that's the one
17:25 <ildikov> johnthetubaguy: ah, right, the detach refactor
17:25 <johnthetubaguy> I think it will make the detach patch much simpler if we get the refactor in first
17:25 <johnthetubaguy> mdbooth: what's your take on that ^
17:25 <mriedem> ok i haven't looked at the refactor at all yet
17:26 * mdbooth reads
17:26 <johnthetubaguy> to be clear, it's just the above patch, and not the follow-on patch
17:26 <mriedem> jgriffith's detach patch is pretty simple, it's just a conditional on the bdm.attachment_id being set or not
17:26 <ildikov> johnthetubaguy: I think lyarwood is out of office this week, so if that's the only thing missing we can add that and get this one done too
17:27 <johnthetubaguy> mriedem: hmm, I should double check why I thought it would be a problem
17:27 <mdbooth> I haven't had a chance to look at lyarwood's updated refactor in detail yet (been buried downstream for a few days), but when I glanced at it it looked like a very simple code motion which brings the code in line with attach.
17:28 <mdbooth> It looked low risk and obvious (barring bugs). Don't think I've seen the other patches.
17:28 <johnthetubaguy> so, I remember now...
17:28 <johnthetubaguy> https://review.openstack.org/#/c/438750/2/nova/compute/manager.py
17:28 <johnthetubaguy> we have many if statements
17:29 <johnthetubaguy> I think after lyarwood's patch we have far fewer, or at least they are in a better place
17:29 <johnthetubaguy> if that's not the case, then I think the order doesn't matter
17:30 <mdbooth> johnthetubaguy: Right. All the mess is in one place.
17:30 <mdbooth> Centralised mess > distributed mess
17:30 <johnthetubaguy> yeah, mess outside the compute manager
17:30 <mriedem> johnthetubaguy: yeah that's true
17:30 <mriedem> ok so i'm cool with that
17:30 <mriedem> if you guys think the refactor is close
17:31 <ildikov> +1
17:32 <johnthetubaguy> so in summary, I guess that means jgriffith to rebase on top of https://review.openstack.org/#/c/439520
17:32 <johnthetubaguy> ?
17:32 <mriedem> johnthetubaguy: mdbooth: is one of you going to address the remaining nit?
17:33 <mriedem> it's just defining the method in the base class, right?
17:33 <mriedem> we should be using ABCs there... but that's another story
17:33 <mdbooth> mriedem: It's open in my browser. I'll probably look in the morning.
17:33 <mriedem> mdbooth: ok
17:33 <mriedem> otherwise i'll address johnthetubaguy's comment
17:33 <mriedem> and he can +2 if he's happy
17:33 <johnthetubaguy> I was going to do that now
17:33 <mriedem> ok
17:34 <mriedem> yes so let's rebase https://review.openstack.org/#/c/438750/ on top of https://review.openstack.org/#/c/439520
17:34 <mriedem> and the goal for next week is detach being done
17:34 <johnthetubaguy> #info let's rebase https://review.openstack.org/#/c/438750/ on top of https://review.openstack.org/#/c/439520 and the goal for next week is detach being done
17:34 <jgriffith> mriedem johnthetubaguy thanks, that's the one I pointed out this morning when I tried to hijack your other meeting :)
17:35 <ildikov> mriedem: johnthetubaguy: +1, thanks
17:36 <ildikov> ok, it seems we're done for today
17:36 <ildikov> anything else from anyone?
17:36 <mriedem> ok, so let's be done with multiattach talk for this meeting.
17:36 <mriedem> yes
17:36 <mriedem> at the PTG, we talked about a cinder imagebackend in nova
17:36 <mriedem> mdbooth is interested in that as well,
17:36 <mriedem> but we need a spec,
17:36 <mriedem> so i'm wondering if mdbooth or jgriffith would be writing a spec
17:37 <jgriffith> mriedem so I have some grievances with that approach :)
17:37 <smcginnis> Time for the airing of grievances.
17:37 <mriedem> air them out
17:37 <jgriffith> bring out the aluminum pole
17:37 <mriedem> i'll get the festivus pole
17:37 <smcginnis> ;)
17:38 <jgriffith> Here's the thing... I know you all hate flavor extra-specs, but lord it's simple
17:38 <mdbooth> jgriffith: I missed the PTG unfortunately. Do you want to have a detailed chat tomorrow?
17:38 <jgriffith> more importantly, there's a side effect of the cinder image-backend that I didn't think of
17:38 <jgriffith> it's all or nothing
17:38 <jgriffith> so you do that and you lose the ability to do ephemerals if you wanted to
17:38 <mriedem> i thought your whole use case was "i have cinder for storage, i want to use it for everything"
17:38 <mriedem> "i don't have any local storage on my computes"
17:39 <jgriffith> which meh... ok, I could work around that
17:39 <jgriffith> That's one use case, yes
17:39 <johnthetubaguy> is this per compute node? I think that's OK?
17:39 <mriedem> yes it's per compute node
17:39 <mriedem> so i think that's fine
17:39 <jgriffith> so that's my question :)
17:39 <jgriffith> that addresses my customers that have ceph...
17:39 <mriedem> if you really want ephemeral, then you have hosts with ephemeral
17:39 <johnthetubaguy> that's totally fine, right: some flavors go to a subset of the boxes
17:39 <mriedem> and put them in aggregates
17:39 <mriedem> right
17:40 <jgriffith> BUT, how do I schedule... because it seems you don't like scheduler hints either ;)
17:40 <mriedem> you have your cheap flavors go to the ephemeral aggregates
17:40 <mriedem> and your "enterprise" flavors go to the cinder-backed hosts
17:40 <jgriffith> mriedem so we use flavors?
17:40 <mriedem> jgriffith: the flavors tie to the aggregates
17:40 <mriedem> yes
17:40 <jgriffith> isn't that how I started this?
17:40 <mriedem> no
17:40 <johnthetubaguy> jgriffith: the approach we are talking about uses the normal APIs, no hints required
17:40 <mriedem> you started by baking the bdm into extra specs, which we're not doing
17:40 <mriedem> right, this is all normal stuff
17:41 <jgriffith> mriedem flavor extra-specs TBC
17:41 <mdbooth> Incidentally, I've been trying to get the virtuozzo folks to implement this for their stuff: I'd like imagebackend to be stored in metadata attached to the bdm
17:41 <mdbooth> To me, this is the most sane way for this stuff to co-exist
17:41 <mriedem> jgriffith: but standard extra specs that are tied to aggregate metadata
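
An illustrative sketch of the flavor/aggregate pairing mriedem and johnthetubaguy describe: flavor extra specs are matched against aggregate metadata so "enterprise" flavors land on Cinder-backed hosts and cheap flavors stay on ephemeral hosts. The keys and values are made up for the example; this is not the real scheduler filter:

    # Toy version of AggregateInstanceExtraSpecsFilter-style matching.
    def host_matches_flavor(aggregate_metadata, flavor_extra_specs):
        return all(aggregate_metadata.get(key) == value
                   for key, value in flavor_extra_specs.items())

    cinder_backed_aggregate = {'storage': 'cinder'}  # assumed aggregate metadata
    enterprise_flavor_specs = {'storage': 'cinder'}  # assumed flavor extra spec
    cheap_flavor_specs = {'storage': 'local'}

    print(host_matches_flavor(cinder_backed_aggregate, enterprise_flavor_specs))  # True
    print(host_matches_flavor(cinder_backed_aggregate, cheap_flavor_specs))       # False
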
17:41 <mdbooth> Having it hard-coded in nova.conf is extremely limiting
17:41 <mriedem> we also want the cinder imagebackend for other storage drivers, like scaleio
17:42 <mriedem> and i'm sure veritas at some point
17:42 <johnthetubaguy> mdbooth: so there is a thing where we should support multiple ephemeral backends on a single box at some point, and we totally need that, I just don't think that's this (I could be totally wrong)
17:43 <mdbooth> I believe there are going to be mixed use cases. For example, HPC where you want some of your storage to be local and really fast, even at the cost of persistence.
17:43 <mriedem> we talked about multiple ephemeral backends at the PTG, didn't we?
17:43 <mriedem> the vz guys were talking about something like that, but i might be misremembering the use case
17:43 <mdbooth> Well the cinder imagebackend would be all or nothing as it stands
17:44 <mdbooth> Because it's always all or nothing right now
17:44 <johnthetubaguy> so the HPC folks care about resource usage being optimal, so they would segregate the workloads that want SSD, and just use volumes for the extra storage, I think...
17:44 <mriedem> right,
17:44 <johnthetubaguy> but that's probably a rat hole we might want to dig out of
17:44 <mdbooth> mriedem: Yes, the vz guys want to add a new hard-coded knob to allow multiple backends
17:44 <mriedem> i think if you have different classes of storage needs you have aggregates and flavors
17:44 <mdbooth> I think that's just adding mess to mess
17:44 <johnthetubaguy> jgriffith: do you still support local volumes?
17:44 <mriedem> johnthetubaguy: they don't
17:44 <mriedem> that came up again in the ML recently
17:44 <johnthetubaguy> ah, that got taken out
17:45 <mriedem> i can dig up the link
17:45 <jgriffith> johnthetubaguy I don't know what you mean
17:45 <mriedem> LocalBlockDevice
17:45 <mriedem> or whatever cinder called it
17:45 <johnthetubaguy> yeah, like a qcow2 on the local filesystem
17:45 <jgriffith> Oh!
17:45 <jgriffith> No, we removed that
17:45 <jgriffith> I'm happy to entertain a better way to do it, but not for Pike
17:45 <smcginnis> Technically, it's still there and deprecated and will be removed in Q.
17:45 <johnthetubaguy> so... I kinda see a world where all storage is in Cinder only, once we add that back in (properly)
17:45 <jgriffith> when I say entertain, I mean implement
17:45 <jgriffith> I already know how to do it
17:46 <johnthetubaguy> but the cinder ephemeral backend seems like a good starting place
17:46 <jgriffith> johnthetubaguy I agree with that
17:46 <smcginnis> johnthetubaguy: +1
17:46 <mdbooth> +1
17:46 <jgriffith> I'm just concerned WRT whether we'll get there or not
17:46 <johnthetubaguy> so if we don't write the spec, we're certain not to get there
17:46 <mriedem> http://lists.openstack.org/pipermail/openstack-dev/2016-September/104479.html is the thread
17:46 <jgriffith> Looking at the *tricks* some backends have used to leverage Nova instances is disturbing
17:47 <jgriffith> johnthetubaguy touché
17:47 <johnthetubaguy> jgriffith: I had to say that, the urge was too strong :)
17:47 <jgriffith> I was hoping to make one last-ditch effort at a plea for flavor extra-specs... but I see that ship has certainly sailed and I fell off the pier
17:47 <johnthetubaguy> mriedem: thanks, that's the one
17:47 <mriedem> http://lists.openstack.org/pipermail/openstack-dev/2017-February/112847.html was picking it back up during the PTG
17:47 <mriedem> i've also asked jaypipes to chime in on that thread,
17:47 <jgriffith> johnthetubaguy I would've thought less of you if you hadn't said it ;)
17:48 <mriedem> because it came up in the nova channel again yesterday
17:48 <mriedem> and the answer was traits and placement,
17:48 <mriedem> but i'm not sure how that works yet in that scenario
17:49 <mriedem> i need the answer to this http://lists.openstack.org/pipermail/openstack-dev/2017-February/112903.html
17:49 <johnthetubaguy> mriedem: yeah, we get into the ensemble scheduler that was proposed at one point, but let's not go there; worth finding the notes on that though
17:50 <mriedem> i think the answer during the PTG was,
17:50 <mriedem> you have a resource provider aggregate,
17:50 <mriedem> and the compute and storage pool are in the aggregate,
17:50 <johnthetubaguy> mriedem: I think we need the aggregate to have both resource providers in it
17:50 <johnthetubaguy> and a resource type that matches the volume provider
17:50 <mriedem> and there is a 'distance' attribute on the aggregate denoting the distance between providers,
17:50 <mriedem> and if you want compute and storage close, then you request the server with distance=0
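
Purely illustrative data for the idea being floated above; nothing like this exists in placement yet, and the field names are invented for the example:

    # A resource-provider aggregate joining a compute node and a storage pool,
    # with a hypothetical 'distance' attribute a boot request could filter on
    # (distance=0 meaning compute and storage are co-located).
    aggregate = {
        'uuid': 'agg-00000000-0000-0000-0000-000000000000',
        'resource_providers': ['compute-node-rp', 'cinder-pool-rp'],
        'distance': 0,
    }
    server_request = {'flavor': 'enterprise', 'require': {'distance': 0}}
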
17:50 <johnthetubaguy> so there is a spec about mapping out the DC "nearness" in aggregates
17:51 <jgriffith> good lord
17:51 <johnthetubaguy> we can't escape that one forever
17:51 <jgriffith> Look... there's another idea you might want to consider
17:51 <mriedem> johnthetubaguy: i think that's essentially what jay told me this would be
17:51 <jgriffith> We land the cinder image-backend
17:51 <johnthetubaguy> mriedem: yeah, sounds like the same plan
17:51 <jgriffith> Have a cinder-agent or full cinder-volume service on each compute node...
17:52 <jgriffith> If you want something fast, place it on said node with cinder capacity, skip the iSCSI target and use it
17:52 <jgriffith> but the fact is, all this talk about "iscsi isn't fast enough" has proven false in most cases
17:52 <johnthetubaguy> we are really just talking about how that gets modelled in the scheduler, we are kinda 60% there right now I think
17:52 <jgriffith> johnthetubaguy I know, but I'm suggesting before doing the other 40% we consider an alternative
17:53 <mriedem> should we suggest this as a forum topic?
17:53 <johnthetubaguy> jgriffith: depends on how much you spend on networking vs the speed you want, agreed it's possible right now
17:53 <mriedem> the "i want to use cinder for everything" thing has come up a few different ways
17:53 <johnthetubaguy> mriedem: we might want to draw this up a bit more crisply first, but yeah, I think we should
17:53 <jgriffith> mriedem I'm willing to try and discuss it
17:53 <jgriffith> mriedem but honestly if it's like other topics that are already decided, I don't want to waste anyone's time, including my own
17:53 * johnthetubaguy sits on his hands so he doesn't say he will try to write this up
17:54 <mriedem> well the point of the forum session is to get ops and users in the room too
17:54 <jungleboyj> mriedem: ++
17:54 <mriedem> anyway, i can take a todo to try and flesh out something for that
17:54 <johnthetubaguy> mriedem: I am happy to review that
17:54 <johnthetubaguy> so the baby step towards that is the Cinder backend ephemeral driver
17:55 <mriedem> yeah
17:55 <johnthetubaguy> which helps the "diskless" folks
17:55 <jgriffith> johnthetubaguy the good thing is that certainly does open up many more long-term possibilities
17:55 <johnthetubaguy> it seems aligned with our strategic direction
17:55 <jungleboyj> There has been enough demand around the idea that we should keep trying to move something forward.
17:55 * johnthetubaguy puts down management books
17:56 <jgriffith> jungleboyj which request?
17:56 <jgriffith> or... I mean "which idea"
17:56 <mriedem> using cinder for everything
17:56 <jgriffith> local disk? Ephemeral Cinder? Cinder image-backend
17:56 <mriedem> always boot from volume
17:56 <jgriffith> mriedem thank you
17:57 <mriedem> 3 minutes left
17:57 <jgriffith> I get distracted easily
17:57 <jungleboyj> Ephemeral Cinder
17:57 <johnthetubaguy> root_disk = -1, etc
17:57 <johnthetubaguy> so next steps...
17:57 <johnthetubaguy> forum session, and nova spec on cinder ephemeral backend?
17:57 <johnthetubaguy> mriedem takes the first one
17:57 <jungleboyj> So, the plan is that the question of being able to co-locate volumes and instances will be handled by the placement service?
17:57 <johnthetubaguy> the second one?
17:57 <mriedem> yup
17:58 <mriedem> jungleboyj: that will be part of the forum discussion i think
17:58 <jungleboyj> mriedem: Great
17:58 <mriedem> who's going to start the nova spec for the cinder image backend?
17:58 <jgriffith> mriedem I'll do it
17:58 <jgriffith> I promise
17:58 <mriedem> ok, work with mdbooth
17:58 <mriedem> he's our imagebackend expert
17:58 <jgriffith> mdbooth Lucky you :)
17:58 <johnthetubaguy> I want to review both of those things when they exist
17:59 <jgriffith> mdbooth I meant that you get to work with me, not an enviable task
17:59 <mdbooth> jgriffith: Hehe
18:00 <johnthetubaguy> #action mriedem to raise forum session about cinder managing all storage, scheduling, etc
18:00 <johnthetubaguy> #action jgriffith to work with mdbooth on getting a spec together for the cinder image backend
18:00 <ildikov> and a timeout warning
18:00 <johnthetubaguy> something like that?
18:00 <mriedem> yes
18:00 <mriedem> works for me
18:00 <johnthetubaguy> sweet
18:00 <mriedem> time's up
18:00 <johnthetubaguy> I guess we are done done
18:00 <jungleboyj> :-)
18:01 <ildikov> awesome, thank you all!
18:01 <smcginnis> thanks
18:01 <ildikov> great meeting today :)
18:01 <ildikov> see you next week :)
18:01 <ildikov> #endmeeting
18:01 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"
18:01 <openstack> Meeting ended Thu Mar 23 18:01:26 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
18:01 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-23-17.00.html
18:01 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-23-17.00.txt
18:01 <openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-23-17.00.log.html
18:01 <jungleboyj> Thanks ildikov!
