Monday, 2016-11-21

*** jamespage has quit IRC00:12
*** jamespage has joined #openstack-meeting-cp00:22
*** jamespage has joined #openstack-meeting-cp00:22
*** greghaynes has quit IRC00:33
*** martinzajac has joined #openstack-meeting-cp00:40
*** greghaynes has joined #openstack-meeting-cp00:42
*** martinzajac has quit IRC00:45
*** sheeprine has quit IRC00:53
*** sheeprine has joined #openstack-meeting-cp00:53
*** tovin07 has joined #openstack-meeting-cp01:40
*** tovin07_ has joined #openstack-meeting-cp01:49
*** amrith has quit IRC02:00
*** amrith has joined #openstack-meeting-cp02:02
*** amrith has quit IRC02:02
*** amrith has joined #openstack-meeting-cp02:04
*** amrith has quit IRC02:04
*** amrith has joined #openstack-meeting-cp02:06
*** amrith has quit IRC02:07
*** amrith has joined #openstack-meeting-cp02:07
*** amrith has quit IRC02:10
*** sheeprine has quit IRC02:11
*** amrith has joined #openstack-meeting-cp02:12
*** sheeprine has joined #openstack-meeting-cp02:12
*** rderose has quit IRC02:56
*** rderose has joined #openstack-meeting-cp02:57
*** harlowja has quit IRC02:57
*** SergeyLukjanov has quit IRC03:12
*** SergeyLukjanov has joined #openstack-meeting-cp03:17
*** zhurong has joined #openstack-meeting-cp03:27
*** tqtran has joined #openstack-meeting-cp03:35
*** tqtran has quit IRC03:39
*** zhurong has quit IRC03:58
*** zhurong has joined #openstack-meeting-cp03:59
*** zhurong has quit IRC04:26
*** mae has joined #openstack-meeting-cp04:38
*** mae has quit IRC04:44
*** prateek has joined #openstack-meeting-cp05:03
*** zhurong has joined #openstack-meeting-cp05:11
*** zhurong has quit IRC05:31
*** zhurong has joined #openstack-meeting-cp05:35
*** alij has joined #openstack-meeting-cp06:50
*** alij has quit IRC07:03
*** zhurong has quit IRC07:16
*** alij has joined #openstack-meeting-cp07:22
*** tqtran has joined #openstack-meeting-cp07:35
*** tqtran has quit IRC07:35
*** rarcea has joined #openstack-meeting-cp07:36
*** mars has joined #openstack-meeting-cp07:44
*** alij has quit IRC08:01
*** alij has joined #openstack-meeting-cp08:15
*** alij has quit IRC08:20
*** alij has joined #openstack-meeting-cp08:35
*** mars has quit IRC08:43
*** olaph has quit IRC08:50
*** olaph has joined #openstack-meeting-cp08:50
*** alij has quit IRC08:58
*** alij has joined #openstack-meeting-cp09:05
*** gouthamr has joined #openstack-meeting-cp09:24
*** tovin07_ has quit IRC09:33
*** alij has quit IRC09:40
*** alij has joined #openstack-meeting-cp10:04
*** tovin07_ has joined #openstack-meeting-cp10:04
*** tovin07_ has quit IRC10:08
*** alij has quit IRC11:25
*** alij has joined #openstack-meeting-cp11:27
*** alij has quit IRC12:00
*** alij has joined #openstack-meeting-cp12:01
*** alij has quit IRC12:43
*** prateek has quit IRC13:14
*** alij has joined #openstack-meeting-cp13:20
*** lamt has joined #openstack-meeting-cp13:24
*** alij has quit IRC14:02
*** alij has joined #openstack-meeting-cp14:19
*** alij has quit IRC14:19
*** kbyrne has quit IRC14:27
*** kbyrne has joined #openstack-meeting-cp14:28
*** superdan is now known as dansmith14:43
*** prateek has joined #openstack-meeting-cp15:00
*** diablo_rojo_phon has joined #openstack-meeting-cp15:30
*** prateek has quit IRC15:31
*** piet_ has joined #openstack-meeting-cp15:40
*** bknudson has joined #openstack-meeting-cp15:59
*** chrisplo has quit IRC16:24
*** jgriffith has quit IRC16:28
*** rarcea has quit IRC16:52
*** stvnoyes has joined #openstack-meeting-cp16:53
*** mriedem has joined #openstack-meeting-cp16:58
<mriedem> ildikov: are we doing a hangout today?  16:58
<ildikov> mriedem: I don't think we planned, but can do  16:59
<ildikov> mriedem: I can create a link quickly if we have enough people around  16:59
<mriedem> we can see who's around  17:00
<mriedem> i've made 0 progress in this area since last week  17:00
<mriedem> so don't have much to add  17:00
<ildikov> #startmeeting cinder-nova-api-changes  17:01
<openstack> Meeting started Mon Nov 21 17:01:01 2016 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.  17:01
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  17:01
*** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)"  17:01
<openstack> The meeting name has been set to 'cinder_nova_api_changes'  17:01
<ildikov> scottda ildikov DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood  17:01
*** xyang1 has joined #openstack-meeting-cp  17:01
<mriedem> o/  17:01
<scottda> hi  17:01
<ildikov> mriedem: I would offer to ping you to do homework this week, but Thanksgiving kills my joy of annoying people...  17:02
<johnthetubaguy> o/  17:02
<ildikov> I know that smcginnis and patrickeast are traveling this week  17:02
<mriedem> my todo this week is to review johnthetubaguy's updates to https://review.openstack.org/#/c/373203/ which i didn't get to last week because of the nova spec freeze  17:03
<mriedem> but i'm also straddling some internal things i need to work on, and it's a short week  17:03
<ildikov> mriedem: the Cinder side is updated too  17:03
<ildikov> mriedem: code #link https://review.openstack.org/#/c/387712/  17:04
<ildikov> mriedem: spec #link https://review.openstack.org/#/c/361512/  17:04
<hemna> yough  17:04
<ildikov> there is a comment from jaypipes regarding the reserve part, which creates an empty attachment and also changes the volume state to 'reserved'  17:05
<johnthetubaguy> I am curious if they are both pointing at the same thing  17:05
<ildikov> johnthetubaguy: that's just an illusion :)  17:05
<mriedem> i'd kind of hoped we wouldn't have reserve, and we'd just have nova-api call create_attachment, which would create the empty attachment and set the volume status to 'attaching'  17:05
<hemna> I think the reserve should return the attachment uuid  17:06
<mriedem> then update_attachment in nova-compute when we have the connector details from os-brick  17:06
<hemna> that's what I asked for a while back  17:06
<hemna> I think it just makes the entire flow more explicit and sane  17:06
<mriedem> either way i think we agree to create the attachment from nova-api  17:06
<hemna> mriedem, the problem with that is that create_attachment has to call the cinder backend  17:06
<hemna> which is slow  17:06
<johnthetubaguy> so my spec calls reserve "create attachment"  17:07
<ildikov> mriedem: the current reserve does this except for the name of the volume state  17:07
<mriedem> hemna: even if nova doesn't provide a connector?  17:07
<johnthetubaguy> right, connector is later  17:07
<johnthetubaguy> (optionally, later)  17:07
<hemna> why would it call it w/o a connector?  17:07
<mriedem> i'm cool with leaving a reserve call in  17:07
<ildikov> johnthetubaguy: the Cinder API proposal is not completely finalized yet, so if the functionality is the same I think we can wait with updating the name in your spec  17:07
<johnthetubaguy> it would do that to reserve the volume from other people  17:07
<hemna> just to reserve?  17:07
<mriedem> it's just a bit odd that reserve will create an attachment too  17:07
<mriedem> and we'll also have a POST attachment  17:08
<ildikov> johnthetubaguy: but basically the answer is yes :)  17:08
<hemna> well it's 'odd' either way  17:08
<hemna> calling create_attachment, which doesn't create an attachment but is used to reserve  17:08
<mriedem> yes it's odd either way i agree  17:08
<hemna> kinda....hidden API  17:08
<mriedem> it would create an attachment record in the db  17:08
<ildikov> mriedem: I know, it's attachment-reserve atm though  17:08
<hemna> I prefer having a named API for the capability  17:08
<hemna> that returns the attachment uuid  17:08
<mriedem> sure that's fine  17:09
<mriedem> os-reserve creates an empty attachment and returns the uuid  17:09
<mriedem> nova stores the attachment uuid in the bdm  17:09
<ildikov> mriedem: reserve only asks for a volume_id and an instance_id IIRC  17:09
<hemna> yuh  17:09
<johnthetubaguy> if you look at build instance, and how retry works, it might change your mind about reserve being separate from the attachment uuid, at least that's what pushed me the way I suggested in the current Nova spec  17:09
<mriedem> then nova-compute uses the bdm.attachment_id to update the attachment later with the connector  17:09
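To make the flow sketched in the last few lines concrete: nova-api would ask Cinder for an attachment with no connector (the "reserve" step), store the returned attachment UUID in the BDM, and nova-compute would later update that attachment with the connector it gets from os-brick. The following is a minimal, self-contained sketch of that sequence; the FakeCinder class and its attachment_create/attachment_update names are illustrative stand-ins for the API still being debated here, not the real Cinder client.

    import uuid

    class FakeCinder(object):
        """Stand-in for the proposed Cinder attachment API (names assumed)."""

        def __init__(self):
            self.attachments = {}    # attachment_id -> record
            self.volume_status = {}  # volume_id -> status

        def attachment_create(self, volume_id, instance_uuid, connector=None):
            # "Reserve": create an (empty) attachment and mark the volume
            # reserved; hand back the attachment UUID for Nova to keep.
            attachment_id = str(uuid.uuid4())
            self.attachments[attachment_id] = {'volume_id': volume_id,
                                               'instance_uuid': instance_uuid,
                                               'connector': connector}
            self.volume_status[volume_id] = 'reserved'
            return attachment_id

        def attachment_update(self, attachment_id, connector):
            # Called from nova-compute once os-brick knows the host connector;
            # roughly the old initialize_connection step. The final volume
            # status name was still being debated in this meeting.
            self.attachments[attachment_id]['connector'] = connector
            volume_id = self.attachments[attachment_id]['volume_id']
            self.volume_status[volume_id] = 'in-use'

    # nova-api: reserve the volume and remember the attachment UUID in the BDM.
    cinder = FakeCinder()
    bdm = {'attachment_id': cinder.attachment_create('vol-1', 'instance-1')}

    # nova-compute: fill in the connector once os-brick provides it.
    cinder.attachment_update(bdm['attachment_id'], {'host': 'compute-1'})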
<mriedem> johnthetubaguy: with bfv wouldn't we still reserve the volume in the api?  17:10
<mriedem> in the case that a volume_id is provided for server create i mean  17:10
<mriedem> i'm trying to think of it like with ports  17:10
<ildikov> I would guess that reserving the volume or moving it to 'attaching' state or smth is needed in the API anyhow  17:11
<mriedem> we don't do that today for bfv i don't think, but i think that's wrong  17:11
<johnthetubaguy> mriedem: so attaching a volume during boot, I like to think about that separately from bfv, yes you reserve in the API, and then the compute  17:11
<ildikov> mriedem: no it's not there today in BFV  17:12
<johnthetubaguy> mriedem: yeah, it's broken today, but that's what we should do  17:12
<ildikov> mriedem: I have a patch up for review for that though  17:12
<ildikov> mriedem: I have a few questions on the review to ensure we're on the same page, so if you can take a look that would be pretty great too :)  17:12
<ildikov> remove check_attach patch #link https://review.openstack.org/#/c/335358/  17:13
<johnthetubaguy> created a new section in here: https://etherpad.openstack.org/p/ocata-nova-priorities-tracking  17:15
<ildikov> johnthetubaguy: coolio, thanks!  17:16
<ildikov> so I guess we agree that having smth to call from the API that locks the volume and returns an attachment_id is what we would like to have  17:17
<ildikov> is my understanding correct?  17:17
<mriedem> i think so  17:18
<scottda> That's my understanding  17:18
<johnthetubaguy> the cinder spec seems to include reserve already  17:18
<ildikov> johnthetubaguy: it does, but the functionality of that attachment-reserve is what I typed in above  17:19
<ildikov> so it seems that we kind of have the functionality, but don't like the naming  17:19
* ildikov maybe oversimplifies the situation  17:19
*** chrisplo has joined #openstack-meeting-cp  17:21
<johnthetubaguy> actually, I am not sure if reserve works for retrying the build of an instance on a different host  17:21
<johnthetubaguy> we want to keep the volume attached to the instance, but we need to do a detach on the old host, so we end up with two attachments  17:22
<johnthetubaguy> I am probably not sharing enough context there  17:22
<johnthetubaguy> so the instance is building on host 1  17:22
<johnthetubaguy> volume 1 is attached to host 1  17:22
<johnthetubaguy> it fails, boo  17:22
<ildikov> does retry build equal evacuate?  17:22
<johnthetubaguy> close, but not quite  17:23
<johnthetubaguy> we can actually clean up host 1, and host 1 is still alive  17:23
<johnthetubaguy> but we do need to keep it reserved during the process  17:23
<ildikov> johnthetubaguy: isn't it like live migrate?  17:24
<johnthetubaguy> yeah, basically the same  17:24
<ildikov> where we have two attachments for the same instance?  17:24
<johnthetubaguy> yeah  17:24
<mriedem> johnthetubaguy: we don't reserve from the compute  17:24
<mriedem> just like with ports, we bind on each host  17:25
<ildikov> mriedem: +1  17:25
<mriedem> and clean up any garbage on retry  17:25
<johnthetubaguy> ah, so maybe build is different  17:25
<johnthetubaguy> ah, oops, it is  17:25
<mriedem> note that today we don't retry on BFV failures with attachment errors  17:25
<johnthetubaguy> we disconnect host 1  17:25
<johnthetubaguy> then go to host 2  17:25
<johnthetubaguy> so we are not at host 2 at the time we want to clean up host 1  17:25
<mriedem> https://review.openstack.org/#/c/246505/ attempts to retry when bfv fails attachment on host 1  17:26
<mriedem> the author has basically abandoned ^  17:26
<johnthetubaguy> I really mean when there is a failure for some other reason, we need to sort out the volume attachments  17:26
<mriedem> i guess it depends on what we need to clean up  17:27
<mriedem> i think of it like,  17:27
<mriedem> today we update the port's host binding details,  17:27
<mriedem> on failure we don't clean that up  17:27
<ildikov> mriedem: I think the idea was to handle what we can with a new attachment  17:27
<mriedem> we probably should...but we don't today,  17:27
<ildikov> mriedem: and move old ones to error state if needed maybe  17:27
<mriedem> with volumes i think of that like we update the vol attachment with the connector from that host  17:27
<mriedem> ildikov: having 3 vol attachments in error state sucks  17:28
<mriedem> who cleans those up? what does it mean for the volume?  17:28
<ildikov> I'm fine with just deleting the attachments  17:28
<mriedem> well,  17:28
<mriedem> i think we have a single vol attachment per vol and instance  17:28
<mriedem> and the host changes  17:28
<mriedem> per the connector  17:28
<ildikov> I think there is at least one part of johnthetubaguy's patch where we want to move it to error state and clean up later I guess  17:28
<mriedem> that's what we have with ports  17:28
<johnthetubaguy> ildikov: that's evacuate I think  17:29
<mriedem> i could see that for evacuate  17:29
<ildikov> johnthetubaguy: yeap, I think so too  17:29
<mriedem> clean up the error attachments when the compute comes back  17:29
<mriedem> evacuate is a special sort of terrible  17:29
<johnthetubaguy> mriedem: I forgot, we are missing reserve on ports today, technically  17:29
<mriedem> johnthetubaguy: yeah i know  17:29
<ildikov> mriedem: we certainly have an agreement there  17:29
<johnthetubaguy> mriedem: they do have a plan to fix that already  17:29
<mriedem> johnthetubaguy: but we don't reserve for BFV today either, so it's similar  17:29
<johnthetubaguy> yeah, it's similar  17:30
<ildikov> I hope we fix that soon, I mean the BFV reserve case  17:30
<ildikov> it would be great to clean up check_attach before introducing a new Cinder flow IMHO  17:31
<johnthetubaguy> so I think this is how I described the retry build in the spec: https://review.openstack.org/#/c/373203/13/specs/ocata/approved/cinder-new-attach-apis.rst@237  17:32
<johnthetubaguy> it's basically the same thing we are describing  17:32
<johnthetubaguy> I just wanted to avoid a window where the volume is not reserved  17:32
<johnthetubaguy> oops, wrong line  17:32
<johnthetubaguy> https://review.openstack.org/#/c/373203/13/specs/ocata/approved/cinder-new-attach-apis.rst@273  17:32
<ildikov> johnthetubaguy: is this a case when we only reserved the volume and nothing else happened, for whatever reason, before the retry?  17:35
<johnthetubaguy> I was meaning when the volume is fully attached and working on host 1, but we still need to retry the build and move to host 2  17:35
<ildikov> johnthetubaguy: in any case the attachment-create call is now practically create_and_or_update  17:35
<ildikov> johnthetubaguy: so if you call it from the compute after having a connector, it reserves the volume and does everything else as well  17:36
<johnthetubaguy> I don't know what the new host will be, when host 1 is being disconnected, I think...  17:36
<johnthetubaguy> so I am calling reserve on an already attached volume, basically  17:37
<johnthetubaguy> but with a matching instance_uuid  17:37
<johnthetubaguy> to make sure it stays reserved when host 1 is disconnected  17:37
<johnthetubaguy> I am just wondering if that will work with the current definition of reserve  17:37
<ildikov> I need to double check, it might  17:38
<ildikov> so basically we need to call something that ensures the volume is locked, when we know a retry is needed  17:39
<johnthetubaguy> I think, yes  17:39
<johnthetubaguy> right now it looks like we just skip any cleanup, but I could be wrong  17:40
<ildikov> as we need the scheduler to figure out the host first so we can have the connector, but that takes some time and we don't want to leave the volume available for that window  17:40
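A sketch of the retry ordering being worked out here, under the same assumptions as the FakeCinder stub earlier (attachment_delete is an additional assumed method): by creating the attachment for the retried build before deleting the old one, the volume never becomes available in the window while the scheduler is still picking the new host.

    def reschedule_volume(cinder, bdm, volume_id, instance_uuid):
        # Reserve a second attachment for the retry *before* touching the old
        # one, so the volume never drops out of its reserved/attached state.
        new_attachment_id = cinder.attachment_create(volume_id, instance_uuid)

        # Clean up host 1: the os-brick disconnect would happen here, under
        # the host-local lock, before the old attachment record is deleted.
        cinder.attachment_delete(bdm['attachment_id'])

        # The connector for the new attachment is only known once the
        # scheduler has picked host 2; nova-compute updates it there, as in
        # the earlier sketch.
        bdm['attachment_id'] = new_attachment_id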
<mriedem> left a comment in the review in that section  17:40
<mriedem> since i haven't read the full spec since it was last updated i might be missing context,  17:41
<johnthetubaguy> mriedem: yeah, I was just digging, I couldn't see any cleanup on reschedule either  17:41
<mriedem> but really wanted, at a high level, that the volume is reserved (locked) in n-api  17:41
<mriedem> and the computes update the connector per compute, per retry  17:41
<mriedem> johnthetubaguy: there isn't cleanup b/c we don't retry for bfv failures  17:41
<mriedem> the only thing we clean up is calling os-detach and os-terminate_connection  17:42
<johnthetubaguy> so... it's the update-the-connector bit  17:42
<johnthetubaguy> we can't really do that, I thought, as that would break all our locking around shared connections  17:43
<johnthetubaguy> ildikov: did we get anywhere on deciding if they actually exist?  17:43
<ildikov> so with live migrate we create a new attachment, with evacuate we also do that and mark the old one as in error state, and for the rest we would like to just update the existing attachment?  17:43
<ildikov> johnthetubaguy: they exist, as I saw from the discussions on the Cinder channel last week after the meeting  17:44
<johnthetubaguy> OK  17:44
<johnthetubaguy> I would prefer to always create a new attachment, and delete the old one, just so the detach logic is always the same  17:44
<johnthetubaguy> but maybe I am overthinking it  17:44
<ildikov> johnthetubaguy: I think there are still differences in how those connections are handled, but we don't have patrickeast around this time to ask, as he had one of the examples  17:45
<mriedem> johnthetubaguy: so then nova-api and nova-compute are both creating attachments?  17:45
<mriedem> i don't really like that  17:45
<ildikov> johnthetubaguy: not having exceptions is what I favor too  17:45
<johnthetubaguy> oh, true, they are in this model  17:45
<johnthetubaguy> but that would be with the service token + user token, once we get that fix in  17:46
<mriedem> not sure how that matters  17:46
<mriedem> it actually makes it more complicated  17:46
<mriedem> b/c then the user listing attachments on a volume doesn't see the ones that nova created  17:46
<mriedem> and can't clean them up  17:46
<johnthetubaguy> it's just an RBAC thing, not an owner thing, but that's a separate conversation I guess  17:47
<johnthetubaguy> I don't mean RBAC, I really mean expiry in this context  17:47
<johnthetubaguy> so neutron port bindings have some stuff that will need a service token (setting the host), but let's back out of that  17:48
<johnthetubaguy> I get it's simpler to just update the connector  17:48
<johnthetubaguy> rather than create empty, delete old one, etc  17:48
<johnthetubaguy> I just like disconnect always deleting the attachment  17:48
<johnthetubaguy> that seemed simpler  17:48
<johnthetubaguy> I guess they are logically the same for the cases that matter  17:49
<johnthetubaguy> ah, no...  17:49
<mriedem> disconnect is cleanup on the host via os-brick though right?  17:49
<johnthetubaguy> shared volume, you need to delete (or update) the attachment with the lock held  17:49
<johnthetubaguy> shared host connection I mean  17:50
<ildikov> mriedem: I believe yes, at least that's my understanding  17:50
<johnthetubaguy> the lock being local to host 1  17:50
<mriedem> it's fine to clean up from a failed connection with the lock held,  17:50
<mriedem> i don't see how that messes with the cardinality of how many attachments we have  17:50
<johnthetubaguy> I am talking about a VM that fails after the volume is attached, and has to retry the build on a new host  17:51
<johnthetubaguy> although you said that's not a thing, I am sure I saw that happen... but maybe that's some crazy local patch we had  17:51
<mriedem> sure, before we reschedule why can't we disconnect (os-brick) from the host and remove the connector from the attachment?  17:51
*** markvoelker_ has joined #openstack-meeting-cp  17:52
<johnthetubaguy> remove connector works, rather than delete, but then it is a special case in the volume_api  17:52
<ildikov> can we update as opposed to remove?  17:52
<mriedem> this is what we clean up on a failed attach today https://github.com/openstack/nova/blob/a58c7e5173103755f5a60f0a3ecede77e662ada3/nova/virt/block_device.py#L299  17:52
*** markvoelker has quit IRC  17:53
<mriedem> and for vol attach we unreserve in the compute manager here https://github.com/openstack/nova/blob/a58c7e5173103755f5a60f0a3ecede77e662ada3/nova/compute/manager.py#L4731  17:53
<mriedem> b/c we reserved in the api  17:53
<mriedem> (for attach vol, not BFV)  17:54
<mriedem> wherever we end up, i just don't like n-api creating an attachment, n-cpu updating it, and maybe deleting it and creating a new attachment on reschedule,  17:54
<johnthetubaguy> in my spec, unreserve doesn't exist, that just happens when you delete an attachment  17:54
<mriedem> i'd rather see n-api reserve the volume (no attachment created) and n-cpu handle CRUD ops on attachments,  17:54
<mriedem> or n-api creates the attachment, n-cpu updates it, that's it  17:55
<mriedem> yeah maybe n-cpu deletes the attachment at the end if things fail  17:55
<mriedem> like unreserve  17:55
*** markvoelker has joined #openstack-meeting-cp  17:55
<ildikov> except live migrate I guess, where we need the two attachments?  17:55
<mriedem> conductor would create the 2 attachments for live migration i'd think  17:56
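Pulling together the preference mriedem just outlined (one reading of the discussion, not a settled or implemented design), the division of labour would look roughly like this:

    # Who touches the attachment in the model described above; this is an
    # interpretation of the conversation, not an agreed split.
    ATTACHMENT_RESPONSIBILITIES = {
        'nova-api':       ['create the attachment with no connector (reserve)'],
        'nova-compute':   ['update the attachment with the os-brick connector',
                           'delete the attachment if the attach fails (unreserve)'],
        'nova-conductor': ['create the second attachment for live migration'],
    }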
<mriedem> anyway, i'm sure it's in john's spec and i just need to sit down with it for a while  17:56
<johnthetubaguy> so I guess I don't understand why we would want reserve/unreserve as separate calls, I much prefer them just being an attachment that's in a specific state. there must be something I am missing here  17:56
*** markvoelker_ has quit IRC  17:57
<mriedem> johnthetubaguy: i agree with you there  17:57
<mriedem> but it does bake extra logic into the cinder api,  17:57
<mriedem> so i'm guessing that's why cinder people are opposed to that  17:57
<mriedem> and by cinder people i mean hemna  17:57
<mriedem> johnthetubaguy: just to confirm, create_attachment w/o a host/connector == reserve for you right?  17:58
<johnthetubaguy> mriedem: yeah  17:58
<ildikov> in the plans we don't have unreserve  17:58
<mriedem> and update_attachment + connector == attached  17:58
<mriedem> i.e. initialize_connection  17:58
<johnthetubaguy> yeah  17:58
<mriedem> ok we're on the same page then  17:58
<mriedem> ildikov: unreserve == delete attachment i think  17:59
<ildikov> mriedem: yeap, I think delete kind of contains that too in this sense  17:59
<johnthetubaguy> mriedem: yeah, in my head delete = unreserve + terminate connection (on the cinder side, if required)  17:59
<ildikov> mriedem: as the current reserve creates the empty attachment it kind of covers what you described above  18:00
<mriedem> i hope the spec has a mapping of old to new world things :)  18:00
<mriedem> so we can talk the same language for people not involved in this for the last year  18:00
<johnthetubaguy> my one tries to do that  18:00
<mriedem> we're out of time  18:00
<johnthetubaguy> https://review.openstack.org/#/c/373203/13/specs/ocata/approved/cinder-new-attach-apis.rst@86  18:00
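For readers who were not following the specs, the old-to-new mapping the participants converge on above reads roughly as follows (as understood from this conversation; johnthetubaguy's spec linked above is the authoritative version):

    old flow (today)                        proposed attachment API
    --------------------------------------  ----------------------------------------
    os-reserve                              create an attachment with no connector
    os-initialize_connection (+ connector)  update the attachment with the connector
    os-terminate_connection + os-detach,
      and unreserve                         delete the attachment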
<ildikov> mriedem: we might want to think about what the volume state is called there though...  18:00
<mriedem> johnthetubaguy: tops  18:01
<mriedem> is that the correct british thing to say?  18:01
<johnthetubaguy> :)  18:01
<ildikov> LOL :)  18:01
<johnthetubaguy> we say lots of random rubbish to mean the same thing  18:02
<ildikov> johnthetubaguy: mriedem: should we move the discussion about the edge cases to the Nova spec?  18:02
<mriedem> that's english in general i believe  18:02
<mriedem> ildikov: yes  18:02
<johnthetubaguy> yeah, I am good with that  18:02
<ildikov> johnthetubaguy: mriedem: and we can argue about the Cinder API on the Cinder spec  18:02
<johnthetubaguy> that sounds logical, I need to look at that update  18:03
<ildikov> mriedem: if you can get to the remove check_attach patch before Thanksgiving that would be great, as I can update the code while you're enjoying turkey :)  18:04
<ildikov> johnthetubaguy: cool, I'm trying my best to keep it in sync with the code and discussions  18:05
<johnthetubaguy> ildikov: awesome, thank you!  18:06
<ildikov> johnthetubaguy: we would like to get the Cinder part merged as soon as possible, so if you can raise any concerns on the spec or the API part of the code that would be great  18:06
<johnthetubaguy> honestly, it's hard to agree to the API before understanding if it covers all the cases we need  18:06
<johnthetubaguy> having said that, it feels like we are really quite close now  18:06
<ildikov> yeap, and if we can keep things simple like mriedem described, creating and updating the attachment where needed, then we should be good to go  18:07
<ildikov> like having the basic API calls on the Cinder side and playing Lego with them in Nova as we need sounds like the way to go I think  18:08
<mriedem> as long as the cinder api is close and uses microversions we can tweak it over time too  18:08
<johnthetubaguy> being able to reset the host connection is probably the simplification bit that's useful  18:08
<johnthetubaguy> mriedem: good point  18:08
<ildikov> mriedem: it does, scottda is our person on the microversions part when we get there  18:09
<mriedem> anyway, time's up, i've got to move on here, i'll try to review all things under the sun and not sleep the rest of this week  18:09
<ildikov> mriedem: thanks in advance  18:09
<ildikov> mriedem: have a nice Thanksgiving!  18:09
<mriedem> thanks  18:10
<ildikov> all right, see you all next week!  18:11
<ildikov> and on the reviews in the meantime :)  18:11
<ildikov> thanks and have a nice rest of the day!  18:11
<ildikov> #endmeeting  18:11
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"  18:11
<openstack> Meeting ended Mon Nov 21 18:11:52 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  18:11
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-11-21-17.01.html  18:11
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-11-21-17.01.txt  18:11
<openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-11-21-17.01.log.html  18:11
*** harlowja has joined #openstack-meeting-cp18:14
*** jgriffith has joined #openstack-meeting-cp18:16
*** jgriffith has quit IRC18:16
*** tqtran has joined #openstack-meeting-cp18:23
*** gouthamr has quit IRC19:07
*** stream10 has joined #openstack-meeting-cp19:13
*** diablo_rojo_phon has quit IRC19:38
*** piet_ has quit IRC20:09
*** brault has quit IRC20:25
*** stream10 has quit IRC21:36
*** lamt has quit IRC23:08
*** mriedem has quit IRC23:12
*** lamt has joined #openstack-meeting-cp23:13
*** xyang1 has quit IRC23:37
*** lamt has quit IRC23:40
*** tqtran has quit IRC23:42
*** tqtran has joined #openstack-meeting-cp23:42
*** openstack has joined #openstack-meeting-cp23:44
*** ChanServ sets mode: +o openstack23:44
