Thursday, 2017-04-20

00:22 *** brault has quit IRC
00:28 *** brault has joined #openstack-meeting-cp
00:31 *** knangia has quit IRC
01:12 *** topol has joined #openstack-meeting-cp
01:26 *** stvnoyes has quit IRC
01:26 *** stvnoyes has joined #openstack-meeting-cp
01:35 *** david-lyle has quit IRC
01:46 *** stvnoyes has quit IRC
01:47 *** stvnoyes has joined #openstack-meeting-cp
01:50 *** david-lyle has joined #openstack-meeting-cp
02:33 *** topol has quit IRC
02:45 *** gouthamr has joined #openstack-meeting-cp
03:05 *** topol has joined #openstack-meeting-cp
03:10 *** topol has quit IRC
03:14 *** felipemonteiro_ has joined #openstack-meeting-cp
03:18 *** felipemonteiro__ has quit IRC
04:01 *** Rockyg has quit IRC
05:19 *** lamt has joined #openstack-meeting-cp
05:24 *** MarkBaker has quit IRC
05:36 *** MarkBaker has joined #openstack-meeting-cp
05:46 *** MarkBaker has quit IRC
05:49 *** MarkBaker has joined #openstack-meeting-cp
05:54 *** gouthamr has quit IRC
06:14 *** lamt has quit IRC
06:22 *** lamt has joined #openstack-meeting-cp
06:23 *** david-lyle has quit IRC
06:27 *** lamt has quit IRC
06:37 *** david-lyle has joined #openstack-meeting-cp
06:46 *** topol has joined #openstack-meeting-cp
06:50 *** topol has quit IRC
07:02 *** david-lyle has quit IRC
07:03 *** david-lyle has joined #openstack-meeting-cp
07:38 *** knangia has joined #openstack-meeting-cp
08:47 *** david-lyle has quit IRC
08:59 *** cartik has joined #openstack-meeting-cp
09:07 *** cartik has quit IRC
09:39 *** cartik has joined #openstack-meeting-cp
09:48 *** MarkBaker has quit IRC
10:01 *** MarkBaker has joined #openstack-meeting-cp
10:05 *** MarkBaker has quit IRC
10:08 *** topol has joined #openstack-meeting-cp
10:13 *** topol has quit IRC
10:18 *** MarkBaker has joined #openstack-meeting-cp
10:21 *** sdague has joined #openstack-meeting-cp
11:06 *** markvoelker has quit IRC
11:06 *** markvoelker has joined #openstack-meeting-cp
11:11 *** markvoelker has quit IRC
11:16 *** topol has joined #openstack-meeting-cp
12:00 *** benj_ has quit IRC
12:01 *** cartik has quit IRC
12:32 *** benj_ has joined #openstack-meeting-cp
12:39 *** markvoelker has joined #openstack-meeting-cp
13:03 *** lamt has joined #openstack-meeting-cp
13:23 *** lamt has quit IRC
13:25 *** hypothermic_cat is now known as ildikov
13:42 *** cartik has joined #openstack-meeting-cp
13:46 *** david-lyle has joined #openstack-meeting-cp
13:48 *** cartik has quit IRC
13:55 *** lamt has joined #openstack-meeting-cp
14:04 *** topol has quit IRC
14:09 *** lamt has quit IRC
14:14 *** david-lyle has quit IRC
14:19 *** gouthamr has joined #openstack-meeting-cp
14:26 *** jaugustine has joined #openstack-meeting-cp
14:46 *** topol has joined #openstack-meeting-cp
14:49 *** lamt has joined #openstack-meeting-cp
14:55 *** jaugustine has quit IRC
15:03 *** lamt has quit IRC
15:13 *** david-lyle has joined #openstack-meeting-cp
15:24 *** lamt has joined #openstack-meeting-cp
15:35 *** lamt has quit IRC
15:43 *** lamt has joined #openstack-meeting-cp
15:57 *** mriedem has joined #openstack-meeting-cp
15:59 *** mgiles has joined #openstack-meeting-cp
15:59 *** david-lyle has quit IRC
16:00 <ildikov> #startmeeting cinder-nova-api-changes
16:00 <openstack> Meeting started Thu Apr 20 16:00:05 2017 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00 <mriedem> o/
16:00 *** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)"
16:00 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
16:00 <mriedem> o/ o/ o/
16:00 <johnthetubaguy> o/
16:00 <ildikov> DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood breitz jungleboyj
16:00 <lyarwood> o/
16:00 <hemna> \o
16:00 <ildikov> mriedem: hi :)
16:00 <smcginnis> ./
16:01 <patrickeast> o/
16:01 <jungleboyj> o/
16:01 <ildikov> I think we can start :)
16:02 <stvnoyes> o/
16:02 <ildikov> so ongoing things are updating the detach flow in Nova
16:02 <ildikov> and removing check_detach at this point
16:03 <mriedem> https://review.openstack.org/#/c/438750/ has a +2
16:03 <lyarwood> a quick nit on that, do we also need to delete the attachment_id from the bdm when we are removing the attachment?
16:03 <ildikov> mriedem: awesome, thank you :)
16:04 <mriedem> oh hmm
16:04 <ildikov> lyarwood: actually we should :S
16:04 <johnthetubaguy> does the BDM get deleted at that point?
16:04 <johnthetubaguy> I guess not always
16:05 <lyarwood> yeah sorry, trying and failing to think of a use case where it would remain, ignore me, I'll follow up in the review if I find one
16:05 <mriedem> well,
16:05 <mriedem> it's a good question,
16:05 <johnthetubaguy> terminate connection ones
16:06 <mriedem> but things that are keying off attachment_id would see it's not set and then do any old path stuff
16:06 <johnthetubaguy> but wait, no attachment delete there
16:06 <johnthetubaguy> mriedem: good point
16:06 <johnthetubaguy> although after you detach a bdm, not sure what you would do with it, unless it's already attached somewhere else or about to be attached somewhere else
16:06 <lyarwood> but that's only on the detach flow right? attach isn't going to use attachment_id, it's just always going to use the new flow.
16:07 <johnthetubaguy> lyarwood: I think that might be the key bit
16:07 <mriedem> detach_volume in the compute manager deletes the bdm
16:07 <mriedem> after the detach
16:07 <ildikov> lyarwood: attach will set it in the BDM
16:07 <mriedem> unless it's doing a rebuild
16:07 *** david-lyle has joined #openstack-meeting-cp
16:07 <ildikov> lyarwood: attach will use microversions to decide what flow to use
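
A minimal sketch of the microversion gate described here, assuming an injected version-discovery helper: if the Cinder API supports microversion 3.27 the attach path uses the new attachment API and records attachment_id on the BDM, otherwise it falls back to the legacy reserve/initialize_connection path. The names (get_cinder_max_microversion, cinder.attachment_create, etc.) are illustrative placeholders, not the actual Nova code.

    NEW_FLOW_MICROVERSION = (3, 27)

    def _parse_version(version_str):
        """Turn a microversion string like '3.27' into a comparable tuple."""
        major, minor = version_str.split('.')
        return int(major), int(minor)

    def attach_volume(bdm, volume_id, connector, cinder, get_cinder_max_microversion):
        """Pick the attach flow based on the supported Cinder microversion."""
        if _parse_version(get_cinder_max_microversion()) >= NEW_FLOW_MICROVERSION:
            # New flow: create a Cinder attachment and remember its id.
            attachment = cinder.attachment_create(volume_id, connector)
            bdm.attachment_id = attachment['id']
            bdm.connection_info = attachment['connection_info']
        else:
            # Old flow: reserve the volume, then ask for connection_info;
            # no attachment_id is ever set on the BDM.
            cinder.reserve_volume(volume_id)
            bdm.connection_info = cinder.initialize_connection(volume_id, connector)
        bdm.save()
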
16:08 <mriedem> hmm, this is going to be weird with rebuild/evacuate
16:08 <mdbooth> o/
16:08 <mriedem> i believe rebuild models things with a single bdm
16:08 <johnthetubaguy> well, on the new host it's an attach flow
16:09 <johnthetubaguy> ... I wonder if we really detach the attachment, rather than the BDM, oh my, my head is mashed :(
16:10 <ildikov> mriedem: you mean updating or not updating the attachment_id?
16:11 <ildikov> mriedem: johnthetubaguy: do you have kind of a state machine for the BDM?
16:11 <ildikov> just to try to figure out what happens with it in each use case
16:12 <mriedem> hells no
16:12 <johnthetubaguy> I think we wrote it out in the spec
16:12 <johnthetubaguy> I wouldn't want to go down that path here
16:12 <mriedem> johnthetubaguy: so rebuild is either going to delete the old bdm and create a new bdm, or update the single bdm with the new attachment_id
16:12 <mriedem> i think
16:12 <johnthetubaguy> yeah, I think it does the update
16:12 <johnthetubaguy> you know, connection_info
16:12 <mriedem> because it's the same volume
16:12 <johnthetubaguy> we have to store that
16:12 <mriedem> same volume, different attachment
16:13 <johnthetubaguy> so we don't lose it
16:13 <mriedem> yeah so i'm still +2
16:13 <johnthetubaguy> I think attachment_id will have to be done in the same way
16:13 <johnthetubaguy> yeah, I think me too
16:13 <johnthetubaguy> either way, I think it's fine with what uses that code right now
16:13 <mriedem> yeah that too,
16:13 <mriedem> we can flush out other demons later
16:13 <johnthetubaguy> +1
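
A sketch of the rebuild handling just agreed on: keep the single BDM and refresh its attachment_id the same way connection_info is refreshed, since it is the same volume with a different attachment. The names here are illustrative, assuming a client object exposing the new Cinder attachment calls; this is not the actual Nova rebuild code.

    def refresh_bdm_attachment(bdm, connector, cinder):
        """Rebuild case: same volume, different attachment, one BDM."""
        old_attachment_id = bdm.attachment_id

        # Create the replacement attachment first so the volume never
        # becomes 'available' in between.
        new_attachment = cinder.attachment_create(bdm.volume_id, connector)

        # Store the new id (and connection_info) on the single BDM, like
        # connection_info is handled today, so we don't lose it.
        bdm.attachment_id = new_attachment['id']
        bdm.connection_info = new_attachment['connection_info']
        bdm.save()

        # Only then drop the attachment that belonged to the old build.
        if old_attachment_id:
            cinder.attachment_delete(old_attachment_id)
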
16:14 <johnthetubaguy> it's a problem for live-migrate
16:14 <johnthetubaguy> etc
16:14 <ildikov> yeap, I'm trying to organize the chain of patches right now
16:14 <ildikov> I had to accept I cannot really avoid doing that... :/
16:14 <mriedem> yeah i was trying to figure out what to review next
16:14 <mriedem> since it's not a series that's hard
16:15 <mriedem> i guess https://review.openstack.org/#/c/456896/ is next?
16:15 <mriedem> since some things build on top of that
16:16 <johnthetubaguy> hmm... is there not a way to do an explicit flow
16:16 <johnthetubaguy> like API operation X
16:16 <mriedem> i think that's what ildikov tried to do
16:16 <mriedem> the common things are at the bottom,
16:16 <ildikov> mriedem: that can be
16:16 <mriedem> but then each api is handled separately
16:16 <mriedem> not in a chain
16:16 <johnthetubaguy> ah, my bad, I see
16:17 <mriedem> i just don't personally want to handle live migration right now...
16:17 <johnthetubaguy> yeah
16:17 <ildikov> mriedem: I tried to re-use terminate_connection, re your comment on trying not to do too much copy-paste there
16:17 <johnthetubaguy> maybe migrate would be easier, or rebuild?
16:17 <johnthetubaguy> honestly, I would add a new method, and consolidate if it makes sense
16:18 <johnthetubaguy> to stop accidentally using the new stuff, I was thinking
16:18 <ildikov> mriedem: but it's not crystal clear yet whether that was a good idea or not
16:18 <ildikov> mriedem: I meant _terminate_volume_connections, to be clear
16:18 <mriedem> ildikov: in https://review.openstack.org/#/c/456896/3/nova/compute/manager.py ?
16:18 <mriedem> ok i see
16:19 <johnthetubaguy> you know what, we need create before delete
16:19 <ildikov> mriedem: yes
16:20 <ildikov> mriedem: attachment_create is there by mistake in that patch
16:20 <ildikov> johnthetubaguy: accidentally use new stuff?
16:21 <mriedem> ildikov: i agree with lyarwood in that change, comments inline
16:21 <mriedem> i don't see anything calling _terminate_volume_connections that already has a connector in scope
16:21 <mriedem> unless those come later in the series
16:21 <johnthetubaguy> ildikov: I will describe more in the patch
16:21 <ildikov> johnthetubaguy: ok
16:21 <johnthetubaguy> ah, so I think I mean what mriedem just said
16:22 <johnthetubaguy> oops, maybe not
16:22 <mriedem> i guess swap_volume is the one that already has the connector
16:22 <mriedem> so https://review.openstack.org/#/c/456896/3/nova/compute/manager.py is prematurely optimizing
16:23 <ildikov> mriedem: we have it here for instance: https://review.openstack.org/#/c/456988/1/nova/compute/manager.py
16:24 <johnthetubaguy> doesn't attachment_create need the connector here?
16:24 <johnthetubaguy> my bad, it's the wrong one
16:24 <mriedem> ildikov: yeah we don't need that yet in https://review.openstack.org/#/c/456896/3/nova/compute/manager.py
16:24 <mriedem> comments inline
16:24 <mriedem> anyway, we could review by committee later :)
16:24 <mriedem> should we move on?
16:25 <ildikov> johnthetubaguy: I thought it just reserves at this point, but we might actually want to connect as well
16:25 <johnthetubaguy> I don't think you do the connect in there right now
16:25 <mriedem> creating an attachment in the _terminate_volume_connections method is wrong
16:25 <johnthetubaguy> I think that's correct, because we are on the wrong host to get the destination connector
16:25 <johnthetubaguy> mriedem: I think it's only needed for the old API case
16:25 <ildikov> I only split the code into multiple patches and that was it
16:26 <mriedem> so whatever patches do that, like swap_volume, will need something different
16:26 <ildikov> my bad, but I didn't have more bandwidth for it this week :(
16:26 <johnthetubaguy> mriedem: ++
16:26 <ildikov> mriedem: when you create an attachment without a connector, that only keeps the volume reserved
16:27 <johnthetubaguy> mriedem: I think that's what I was trying to say earlier with my "create a new method for this", it feels like we have many uses, not all need to do the attachment create
16:27 <ildikov> mriedem: we can do that later, but then the volume will be out there available
16:27 <johnthetubaguy> oh, it's worse than that isn't it....
16:27 <johnthetubaguy> the attachment delete is needed when we both detach and terminate connection, but when we just terminate connection, we probably want to create then delete
16:28 <ildikov> I added a keep_reserved boolean to that method later on, and when that's True that's when I wanted to do the attachment_create to reserve
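
For reference, a sketch of the keep_reserved idea described above, written with the create-before-delete ordering johnthetubaguy asks for: when the volume must stay unavailable (swap/migrate), a fresh connector-less attachment is created before the old one is deleted, since an attachment without a connector only reserves the volume. The signature is hypothetical, not the patch's final shape.

    def _terminate_volume_connections(bdm, cinder, keep_reserved=False):
        """Tear down the attachment on this BDM, optionally keeping a reserve.

        keep_reserved=True creates a connector-less attachment first so the
        volume never flips back to 'available' (hypothetical sketch).
        """
        new_attachment_id = None
        if keep_reserved:
            # Create before delete: a connector-less attachment just reserves.
            new_attachment_id = cinder.attachment_create(
                bdm.volume_id, connector=None)['id']

        cinder.attachment_delete(bdm.attachment_id)

        # Record the reserving attachment, or clear the now-stale id.
        bdm.attachment_id = new_attachment_id
        bdm.save()
        return new_attachment_id
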
16:28 <johnthetubaguy> I am not sure I understood what I just said there
16:28 <johnthetubaguy> honestly, I would do swap with all brand new code, then we look for shared code later?
16:28 <ildikov> the first patch there shouldn't have attachment_create regardless
16:28 <johnthetubaguy> I know that's going to mean cut and paste in the meantime
16:29 <smcginnis> johnthetubaguy: Seems a reasonable approach though.
16:29 <johnthetubaguy> mriedem: do you think that would work ^
16:29 <smcginnis> As long as the clean up work gets identified and done.
16:29 <johnthetubaguy> I just think we are going to confuse ourselves with premature optimizations here
16:29 <ildikov> johnthetubaguy: works for me, I only tried this per request from mriedem, I can switch back to new code
16:29 <johnthetubaguy> it's a good idea to try, but it's bending my mind reviewing this stuff
16:30 <johnthetubaguy> agreed we need no cut and paste in the end, like smcginnis said
16:30 <smcginnis> Let's get it working first, then get it cleaned up.
16:30 <johnthetubaguy> so... I am thinking about the nova-network and neutron integration
16:30 <mriedem> i think re-using code is fine, if it's done properly,
16:30 <jungleboyj> smcginnis: +2
16:30 <ildikov> fair enough, I will reorganize this when I do that chaining up
16:30 <mriedem> there are just simple mistakes in these patches
16:30 <mriedem> that once removed are not a problem
16:31 <johnthetubaguy> mriedem: maybe
16:31 <mriedem> i've commented in two of them
16:31 <lyarwood> ildikov: is bandwidth going to be an issue over the next week? Happy to take on swap_volume for example if it is.
16:31 <mriedem> so what's the issue with the keeping-the-volume-reserved thing?
16:31 <mriedem> that's for swap volume?
16:32 <mriedem> because as it stands, the code is deleting the attachment and then creating a new attachment w/o a connector to fake out that the volume is still reserved
16:32 <mriedem> that's lossy
16:32 <mriedem> we have to have a better way to make that atomic
16:32 <mriedem> can't we update the attachment to drop the connector?
16:32 <johnthetubaguy> you don't need the connector
16:32 <ildikov> lyarwood: let's sync up after the meeting, as if I put all these into one chain then it's tough to share the work on it
16:32 <johnthetubaguy> oh, sorry, that's not your question
16:33 <johnthetubaguy> mriedem: the API we have is create a new one, then delete it, to make it atomic
16:33 <lyarwood> mriedem: I suggested that during the spec review but I don't think the api can do that at the moment
16:33 <ildikov> mriedem: the Cinder code does not support that at the moment
16:33 <johnthetubaguy> ildikov: does it support the reverse ordering?
16:33 <johnthetubaguy> create then delete
16:33 <ildikov> mriedem: and when I last talked to jgriffith we kind of agreed it would be better not to mess up update with that case
16:33 <mriedem> ok, create new attachment (w/o connector), delete old attachment (that had connector), then update new attachment with new connector?
16:34 <johnthetubaguy> mriedem: yeah, that's what I thought the plan was
16:34 <mriedem> it'd be nice if attachment update could just be atomic
16:34 <johnthetubaguy> mriedem: agreed it's messy
16:34 <mriedem> since the client has to jump through hoops now
16:34 <ildikov> johnthetubaguy: we might be able to do that, I don't think anything prevents us from reversing that order
16:34 <mriedem> and it means the client has to know the internal state machine of cinder for how attachments work
16:34 <johnthetubaguy> so live-migrate has to do that flow anyway, as both are connected for some time
16:34 <johnthetubaguy> ildikov: we need the reverse order for live-migrate, so I hope it works
16:35 <mriedem> yeah, create new attachment on destination host, once live migration is done, delete old attachment from source host
16:35 <mriedem> we do the same with ports
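
The ordering mriedem spells out above, condensed into a sketch; whether Cinder actually accepts this create-then-delete order is exactly the open question in the discussion, so this only illustrates the intended sequence, with placeholder names.

    def swap_attachment(volume_id, old_attachment_id, new_connector, cinder):
        # 1. Create the new attachment without a connector; the volume stays
        #    reserved even while the old attachment is being removed.
        new_attachment = cinder.attachment_create(volume_id, connector=None)

        # 2. Delete the old attachment (the one holding the old connector).
        cinder.attachment_delete(old_attachment_id)

        # 3. Update the new attachment with the new connector to obtain
        #    connection_info for the destination host.
        return cinder.attachment_update(new_attachment['id'], new_connector)
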
16:36 <lyarwood> johnthetubaguy: iirc you called that flow out in the spec so it is allowed
16:36 <ildikov> johnthetubaguy: I think it should, I don't recall anything that would fail with that case right now
16:37 <johnthetubaguy> lyarwood: yeah, I remember jgriffith gave me a funny look about that, so I am curious if it made it into the code
16:37 <johnthetubaguy> mriedem: FWIW, it's causing big problems with ports
16:38 <ildikov> johnthetubaguy: what kind of problems?
16:38 <johnthetubaguy> so it's probably a total distraction here, it's specific to how neutron does a partial attach
16:38 <johnthetubaguy> and live-migrate, naturally
16:38 <ildikov> I just asked because if we have anything similar to that here then we could try to do it better for volumes
16:39 <mriedem> johnthetubaguy: are you referring to all of the stuff we talked about with neutron in BCN that we never implemented?
16:39 <johnthetubaguy> mriedem: yeah
16:39 <johnthetubaguy> mriedem: they wouldn't implement it, because we wouldn't approve our side of the work, or something like that
16:39 <johnthetubaguy> and it wasn't high priority
16:39 <mriedem> oh mututally assured destruction, i see
16:39 <johnthetubaguy> but anyways, that's a total rat hole
16:39 <mriedem> *mutually
16:39 * johnthetubaguy nods
16:40 <mriedem> well it was ocata,
16:40 <mriedem> we had 3 people and 3 months to do anything
16:40 <mriedem> so...
16:40 <mriedem> anyway,
16:40 <mriedem> comments are in the patches
16:40 <mriedem> what else?
16:41 <johnthetubaguy> seems like we have the stuff for next week
16:41 <johnthetubaguy> who was working on those? I forget, lyarwood or ildikov or both?
16:41 <ildikov> we could decide what to do with removing check_detach: https://review.openstack.org/#/c/446671/
16:41 <ildikov> johnthetubaguy: me for sure, I will sync up with lyarwood and stvnoyes to see how we can share work items
16:42 <johnthetubaguy> sweet
16:42 <johnthetubaguy> #action ildikov will sync up with lyarwood and stvnoyes to see how we can share work items
16:42 <johnthetubaguy> check_detach
16:42 <ildikov> johnthetubaguy: and I will hunt down jgriffith as we're in the same town today, and figure out what we touched on today to see what to do with these edge cases
16:42 <johnthetubaguy> so I totally misread the code and got all worried about breaking the API
16:43 <johnthetubaguy> it seems the extra checks are very much "belt and braces" stuff, by which I mean overkill
16:43 <johnthetubaguy> ildikov: is that right, it's just redundant checks we are removing here really
16:44 <ildikov> johnthetubaguy: check_attach and check_detach are checks that have remained from the ancient times when Cinder became a separate entity, in my understanding and from what I saw in the code so far
16:45 <ildikov> johnthetubaguy: and I personally believe in cleaning up and deleting code from time to time, and this seems like a good time for it
16:45 <ildikov> johnthetubaguy: and finally, in my view Nova should not track the volume state in these cases as it's really Cinder's responsibility
16:45 <mriedem> check_attach had implications for api behavior,
16:45 <mriedem> so it required service version checks and the like
16:45 <hemna> ildikov, +1
16:45 <mriedem> i also love deleting code
16:45 <ildikov> mriedem: that was because of a missed reserve_volume call
16:46 <mriedem> but i don't assume every service is always upgraded to the latest level at the same time
16:46 <johnthetubaguy> ildikov: I support all your aims, I am just trying to work out if the change is safe
16:46 <ildikov> so I think we made the Nova side code better overall with that :)
16:46 <mriedem> having said all that, i haven't gone through the check_detach patch, it might be less messy
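
To make the "redundant checks" concrete: check_detach is essentially Nova re-validating volume state that Cinder validates anyway when begin_detaching (or, in the new world, attachment_delete) is called. The paraphrase below is a simplified rendering of that kind of check, not the literal Nova code.

    from nova import exception

    def check_detach(volume, instance):
        # Nova second-guessing Cinder: the volume status lives in Cinder,
        # and Cinder already rejects a detach in the wrong state.
        if volume['status'] == 'available':
            raise exception.InvalidVolume(
                reason='Volume must be attached in order to detach.')
        attached_ids = [a['server_id'] for a in volume.get('attachments', [])]
        if instance.uuid not in attached_ids:
            raise exception.VolumeUnattached(volume_id=volume['id'])
    # The removal patch deletes checks like this and lets Cinder return
    # the authoritative error instead.
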
16:46 <johnthetubaguy> I mean if it were me, merged code deletes would deliver you a cookie
16:46 <johnthetubaguy> mriedem: I think it is
16:46 <mriedem> i didn't get a cookie for removing os-pci
16:47 <mriedem> anyway... moving on?
16:47 <ildikov> mriedem: we can sort this out in Boston :)
16:47 <johnthetubaguy> so... I have a quick question
16:47 <ildikov> mriedem: plz check the patch, it's pretty small, should be a piece of cake
16:47 <johnthetubaguy> begin_detaching...
16:47 <ildikov> mriedem: sorry, cookie :)
16:47 <johnthetubaguy> what does that get replaced by in the new world?
16:48 <johnthetubaguy> i.e. once we use the new attachment API
16:48 <ildikov> johnthetubaguy: you or someone from Cinder should remind me what that does exactly before I try to answer that question
16:48 <johnthetubaguy> it's the API that does the state check, to see if we can detach
16:48 <johnthetubaguy> cinder API
16:49 <johnthetubaguy> I don't think it exists in the new API, so Nova would have to manually check the state
16:49 <johnthetubaguy> I think it does the attached -> detaching state change on the volume in the old world
16:50 <ildikov> johnthetubaguy: we shouldn't solve anything on the Nova side that belongs to Cinder
16:50 <johnthetubaguy> I agree, but the API doesn't let us do that
16:50 *** stvnoyes has quit IRC
16:50 <ildikov> we can also re-evaluate the states that we're maintaining in Cinder, whether all of them are needed, then we can check what's missing and fix it
16:51 <johnthetubaguy> so... actually, about that patch we just approved
16:51 <ildikov> I mean fix where the fix should happen
16:51 <johnthetubaguy> I guess we actually still call begin_detaching, which I don't think we are allowed to?
16:51 *** stvnoyes has joined #openstack-meeting-cp
16:51 <johnthetubaguy> so maybe begin_detaching is in the new API too, I wasn't ever 100% clear on that
16:52 <mriedem> begin_detaching changes the volume status from in-use to detaching,
16:52 <johnthetubaguy> right
16:52 <johnthetubaguy> sorry
16:52 <ildikov> I don't think it is
16:52 <mriedem> if we fail to detach, we call roll_detaching to put it back into the in-use state
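
The old-flow bracket mriedem describes, as a sketch: begin_detaching moves the volume from in-use to detaching up front, and roll_detaching undoes that if the actual detach fails. volume_api and compute_rpc are stand-ins for the real Nova plumbing, so treat this as an illustration of the pattern only.

    def detach_volume_old_flow(context, instance, volume_id, volume_api, compute_rpc):
        # in-use -> detaching; fails if the volume is not in-use (e.g. a
        # second caller is already detaching it).
        volume_api.begin_detaching(context, volume_id)
        try:
            # Hand off to the compute host to do the actual detach.
            compute_rpc.detach_volume(context, instance, volume_id)
        except Exception:
            # detaching -> in-use, so the volume isn't stuck in 'detaching'.
            volume_api.roll_detaching(context, volume_id)
            raise
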
16:53 <johnthetubaguy> maybe that's why we dropped it in the new API, my memory is sketchy on that now
16:53 <mriedem> if begin_detaching isn't available in the new api, shouldn't it fail if you request it at microversion >= 2.37?
16:53 <mriedem> *3.27
16:53 <mriedem> or is it just a noop?
16:53 <ildikov> mriedem: I meant that the new flow doesn't use it, the code is still available
16:53 <mriedem> i guess this is homework for cinder
16:54 <johnthetubaguy> I think so
16:54 <johnthetubaguy> so to be clear, I think this is the new flow
16:54 <johnthetubaguy> check BDM state, continue (maybe check cinder is in sync -> attachment still exists)
16:54 <johnthetubaguy> give an early error to users if it was a bad API call
16:54 <johnthetubaguy> etc
16:54 <johnthetubaguy> then we do stuff
16:54 <johnthetubaguy> then delete attachment
16:55 <mriedem> why does begin_detaching even exist?
16:55 <johnthetubaguy> so it's like one cinder API call only in the new flow, I think
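
johnthetubaguy's proposed new flow in sketch form: Nova validates its own BDM record early, does the local teardown, and then makes the single Cinder call, attachment_delete. Helper names (cinder, driver) are illustrative placeholders.

    from nova import exception

    def detach_volume_new_flow(context, instance, bdm, cinder, driver):
        # Check BDM state first and give the user an early error for a bad
        # API call (optionally confirming with Cinder that the attachment
        # still exists).
        if not bdm.attachment_id:
            raise exception.InvalidVolume(
                reason='Volume is not attached via the new attachment API.')

        # "Then we do stuff": local driver/guest detach work happens here.
        driver.detach_volume(context, instance, bdm)

        # The single Cinder API call in the new flow.
        cinder.attachment_delete(bdm.attachment_id)
        bdm.destroy()
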
16:55 <smcginnis> I thought there was a reason it was no longer needed, but we can verify that.
16:55 <mriedem> who cares if the volume is being detached?
16:55 <johnthetubaguy> so the code we are deleting seems to be needed in the new flow
16:55 <mriedem> it's either in-use or it's not
16:55 <johnthetubaguy> mriedem: correct
16:55 <smcginnis> mriedem: +1
16:55 <mriedem> begin_detaching was probably some internal nova-volume thing, externalized in the cinder api
16:56 <johnthetubaguy> it's just so you get a detaching... state in the API
16:56 <johnthetubaguy> actually, that's a good point
16:56 <mriedem> same with putting the volume in reserved state, but that's a bit trickier because we need to reserve the volume in nova-api and then put it in-use in the compute
16:56 <johnthetubaguy> the new flow changes the API state changes
16:56 <hemna> well it was also used to try and prevent race conditions
16:56 <johnthetubaguy> reserve just == "in use" I think
16:56 <mriedem> hemna: which one?
16:56 <ildikov> mriedem: if we ever get to multi-attach I guess we might not want to attach and detach at the same time?
16:56 <johnthetubaguy> two people calling detach... damn it
16:56 <hemna> multiple detaches happening at the same time
16:57 <hemna> re: volume state mgmt
16:57 <johnthetubaguy> grr, we missed that one
16:57 <johnthetubaguy> right now it will fail on the compute due to the locking going on there
16:57 <mriedem> so request 1 puts the volume into detaching state, request 2 fails because the volume is already detaching
16:57 <johnthetubaguy> which is a poor API experience
16:57 <hemna> yah
16:57 <mriedem> and we don't have this with the new flows?
16:58 <hemna> if it's already detaching, then isn't the 2nd api request a success?
16:58 <mriedem> i.e. we can't use begin_detaching with >= 3.27?
16:58 <hemna> it's detaching....
16:58 <hemna> except multi-attach
16:58 <mriedem> we have 2 minutes left
16:58 <ildikov> mriedem: we can and I think we still do actually
16:58 <mriedem> but this is why we can't just rip code out willy nilly
16:58 <mriedem> i said it, willy nilly
16:58 <ildikov> lol :)
16:59 <mriedem> anyway we're out of time
16:59 <ildikov> mriedem: it's check_detach I would love to rip out
16:59 <ildikov> we can continue on the Cinder channel and in code reviews
16:59 *** hemna is now known as willynilly
16:59 * willynilly wants to remove check_detach
16:59 * johnthetubaguy needs to go cook his dinner
16:59 <ildikov> or the Nova channel, whichever we want
16:59 *** willynilly is now known as hemna
17:00 <johnthetubaguy> heh
17:00 <ildikov> thanks everyone for today
17:00 <ildikov> #endmeeting
17:00 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"
17:00 <openstack> Meeting ended Thu Apr 20 17:00:29 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
17:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-04-20-16.00.html
17:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-04-20-16.00.txt
17:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-04-20-16.00.log.html
17:00 *** mriedem has left #openstack-meeting-cp
17:00 <jungleboyj> Thanks!
17:02 *** mgiles has left #openstack-meeting-cp
17:14 *** rakhmerov has quit IRC
17:14 *** IgorYozhikov has quit IRC
17:17 *** david-lyle_ has joined #openstack-meeting-cp
17:17 *** david-lyle has quit IRC
17:19 *** rakhmerov has joined #openstack-meeting-cp
17:19 *** IgorYozhikov has joined #openstack-meeting-cp
17:23 *** david-lyle_ has quit IRC
17:32 *** jaugustine has joined #openstack-meeting-cp
17:49 *** lamt has quit IRC
17:50 *** jaugustine has quit IRC
18:03 *** jaugustine has joined #openstack-meeting-cp
18:24 *** stvnoyes has quit IRC
18:25 *** stvnoyes has joined #openstack-meeting-cp
18:27 *** ying_zuo has joined #openstack-meeting-cp
18:31 *** lamt has joined #openstack-meeting-cp
19:33 *** david-lyle has joined #openstack-meeting-cp
19:53 *** qwebirc97267 has joined #openstack-meeting-cp
20:02 <qwebirc97267> JOIN
20:05 *** qwebirc97267 has quit IRC
20:14 *** MarkBaker has quit IRC
20:27 *** MarkBaker has joined #openstack-meeting-cp
20:29 *** ying_zuo has left #openstack-meeting-cp
20:57 *** david-lyle has quit IRC
21:23 *** sdague has quit IRC
21:26 *** MarkBaker has quit IRC
21:30 *** MarkBaker has joined #openstack-meeting-cp
21:57 *** gouthamr has quit IRC
22:20 *** topol has quit IRC
22:42 *** jaugustine has quit IRC
22:55 *** MarkBaker has quit IRC
23:06 *** sdague has joined #openstack-meeting-cp
23:33 *** gouthamr has joined #openstack-meeting-cp
23:39 *** markvoelker has quit IRC
23:40 *** topol has joined #openstack-meeting-cp
23:41 *** lamt has quit IRC
23:55 *** topol has quit IRC
23:55 *** topol has joined #openstack-meeting-cp
