Monday, 2016-11-14

01:11 *** tovin07_ has joined #openstack-meeting-cp
01:19 *** markvoelker has joined #openstack-meeting-cp
01:23 *** markvoelker has quit IRC
01:30 *** zhurong has joined #openstack-meeting-cp
02:43 *** dims has quit IRC
03:20 *** markvoelker has joined #openstack-meeting-cp
03:24 *** markvoelker has quit IRC
04:37 *** zhurong has quit IRC
05:21 *** markvoelker has joined #openstack-meeting-cp
05:23 *** coolsvap has joined #openstack-meeting-cp
05:25 *** markvoelker has quit IRC
05:31 *** prateek has joined #openstack-meeting-cp
05:33 *** zhurong has joined #openstack-meeting-cp
06:38 *** cartik has joined #openstack-meeting-cp
08:04 *** skazi has quit IRC
08:08 *** rarcea has joined #openstack-meeting-cp
08:50 *** cartik has quit IRC
09:03 *** cartik has joined #openstack-meeting-cp
09:34 *** zhurong has quit IRC
09:48 *** gouthamr has joined #openstack-meeting-cp
09:58 *** gouthamr has quit IRC
10:03 *** tovin07_ has quit IRC
10:18 *** gouthamr has joined #openstack-meeting-cp
10:32 *** gouthamr has quit IRC
10:49 *** gouthamr has joined #openstack-meeting-cp
11:35 *** zhurong has joined #openstack-meeting-cp
11:42 *** zhurong has quit IRC
12:16 *** zhurong has joined #openstack-meeting-cp
12:19 *** zhurong has quit IRC
12:19 *** gouthamr has quit IRC
12:22 *** zhurong has joined #openstack-meeting-cp
12:24 *** cartik has quit IRC
13:10 *** scottda has joined #openstack-meeting-cp
13:10 *** gouthamr has joined #openstack-meeting-cp
13:21 *** lamt has joined #openstack-meeting-cp
13:22 *** lamt has quit IRC
13:23 *** lamt has joined #openstack-meeting-cp
13:29 *** tongli has joined #openstack-meeting-cp
13:30 *** gouthamr has quit IRC
13:35 *** dims has joined #openstack-meeting-cp
13:46 *** markvoelker has joined #openstack-meeting-cp
13:55 *** gouthamr has joined #openstack-meeting-cp
13:57 *** gouthamr has quit IRC
14:06 *** gouthamr has joined #openstack-meeting-cp
14:06 *** gouthamr has quit IRC
14:06 *** xyang1 has joined #openstack-meeting-cp
14:07 *** gouthamr has joined #openstack-meeting-cp
14:08 *** zhurong has quit IRC
15:14 *** edtubill has joined #openstack-meeting-cp
15:17 *** coolsvap has quit IRC
15:23 *** prateek has quit IRC
16:12 *** edtubill has quit IRC
16:13 *** edtubill has joined #openstack-meeting-cp
16:21 *** alij has joined #openstack-meeting-cp
16:33 *** rarcea has quit IRC
16:51 *** alij has quit IRC
16:55 *** alij has joined #openstack-meeting-cp
16:55 *** gouthamr has quit IRC
16:56 *** gouthamr has joined #openstack-meeting-cp
16:57 *** stvnoyes has joined #openstack-meeting-cp
17:00 *** alij has quit IRC
17:00 <ildikov> #startmeeting cinder-nova-api-changes
17:00 <openstack> Meeting started Mon Nov 14 17:00:26 2016 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00 *** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)"
17:00 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
17:00 <ildikov> scottda ildikov DuncanT ameade cFouts jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood
17:01 <jgriffith> o/
17:01 <ildikov> jgriffith: hi :)
17:02 <scottda> Hi
17:02 <jgriffith> howdy
17:02 *** gouthamr has quit IRC
17:02 <ildikov> Nova guys are busy today, so I agreed with them to continue the discussion on johnthetubaguy's spec: https://review.openstack.org/#/c/373203/
17:03 <smcginnis> o/
17:03 <scottda> I'm in transit ATM, in case I don't respond promptly
17:03 <ildikov> I thought we might still have a chat to go over the roadmap of the Cinder API patch/activity: https://review.openstack.org/#/c/387712/
17:04 <ildikov> scottda: copy that
17:04 <smcginnis> jgriffith: Need to go through your latest updates.
17:04 <smcginnis> jgriffith: Do we have a feel yet for whether v1 or v2 is in favor?
17:04 <jgriffith> smcginnis: v3 :)
17:05 <smcginnis> lol
17:05 *** gouthamr has joined #openstack-meeting-cp
17:05 <jgriffith> smcginnis: I *think* the v2 updates are the direction that most closely satisfies the Nova spec
17:05 <ildikov> smcginnis: we're getting there to have create/delete calls
17:05 <jgriffith> smcginnis: so it basically has 1 extra call for Nova if they want it, otherwise it's pretty similar to the v1 approach
17:05 <ildikov> smcginnis: additionally we still have reserve in the patch, I guess that's for things like shelve
17:06 <smcginnis> OK, cool. My preference was v1, but like I think I mentioned in BCN, if we can't get Nova to agree with the change then it doesn't do much good.
17:06 <smcginnis> ildikov: Shelve looked like the tricky one.
17:06 <ildikov> smcginnis: and the latest chat on the review is whether we should call 'update' 'finalize' instead
17:06 * ildikov likes nit-picking about names :)
17:06 <jgriffith> ildikov: smcginnis yeah; so the biggest thing is that "attachment_create" does a reserve on the volume (status = 'reserved') and creates an empty attachment
17:06 <jgriffith> ildikov: :)
17:06 <smcginnis> ildikov: Saw that. I agree, it's minor, but naming is important long term.
17:07 <smcginnis> finalize_reservation_or_do_update_to_existing()
17:07 <smcginnis> Rolls off the tongue.
17:07 <jgriffith> from my side it would be awesome to see if the general APIs and workflow are the right direction for folks or not
17:07 <ildikov> jgriffith: is that 'reserve' something that we expose on the API?
17:07 <jgriffith> I'm hesitant to write much more code if this isn't what people want to see
17:08 <jgriffith> ildikov: nope; it's the "attachment_create" call
17:08 <ildikov> jgriffith: ok
17:08 <jgriffith> ildikov: the requirement for Nova was the whole empty attachment thing; and they wanted to get rid of reserve (which makes sense to me), so that's what that call does now
17:08 <ildikov> jgriffith: would something like that be needed for shelve - as it kinda reserves the volume - or are we confident that having 'attachment_create' is enough?
17:08 <jgriffith> it just sets the status to reserved and returns the ID of an empty attachment object
17:09 <jgriffith> ildikov: yeah, for things like creating an attachment for a shelved instance and for boot-from-volume, that's the sort of thing you'd need this for
17:10 <jgriffith> ildikov: shelving in and of itself isn't problematic, attaching to a shelved instance is the trick
17:10 <jgriffith> ildikov: even if it is sort of a strange use case
17:10 <ildikov> jgriffith: yep, right, a bit of a nightmare-ish thing...
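
To make the flow above concrete, here is a minimal, standalone sketch of the attachment_create semantics jgriffith describes: the volume status moves to 'reserved' and an empty attachment record is returned, to be filled in later. The dict-based data model and the function signature are illustrative assumptions, not Cinder's actual code.

```python
import uuid

def attachment_create(volume, instance_uuid, connector=None):
    """Reserve a volume and return a new (possibly empty) attachment.

    Replaces the old standalone 'reserve' call: the volume moves to
    'reserved' and an attachment record is created up front. If no
    connector is supplied yet (shelved instance, boot-from-volume),
    the attachment stays empty until a later update/finalize call
    fills in the connection details.
    """
    if volume['status'] != 'available':
        raise ValueError('volume %s is not available' % volume['id'])

    volume['status'] = 'reserved'
    attachment = {
        'id': str(uuid.uuid4()),
        'volume_id': volume['id'],
        'instance_uuid': instance_uuid,
        'attach_status': 'reserved',
        'connection_info': None,  # stays empty until the connector is known
    }
    if connector is not None:
        # Caller already knows the host details: no empty phase needed.
        attachment['connection_info'] = {'connector': connector}
    volume.setdefault('attachments', []).append(attachment)
    return attachment

# vol = {'id': 'vol-1', 'status': 'available'}
# att = attachment_create(vol, 'uuid-of-instance')  # connection_info is None
```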
17:11 <ildikov> smcginnis: that's a bit of a long name :)
17:11 <smcginnis> ildikov: ;)
17:12 <ildikov> smcginnis: but at least it contains everything, so I'll stop complaining
17:12 <smcginnis> Hah!
17:12 * ildikov uses Mondays to complain, sorry about that... :)
17:13 <ildikov> jgriffith: is your v2 patch, the one containing the v3-like changes, up to date regarding the things we talked about above?
17:15 <smcginnis> jgriffith: I'll try to review this afternoon.
17:15 <smcginnis> Sorry, gotta run for a lunch appointment.
17:15 <ildikov> smcginnis: do you think it would be worth having 2 minutes about this on the Cinder meeting this week to get more eyes on this from the team?
17:15 <smcginnis> ildikov: Yeah, certainly wouldn't hurt.
17:15 <smcginnis> ildikov: If I don't add it to the agenda, feel free to remind me.
17:16 <ildikov> smcginnis: ok cool, will do that, tnx!
17:16 <ildikov> smcginnis: we need more reviews on this as it's a new API, which is hard to change later if people realize they don't like it
17:17 <jgriffith> ildikov: s/"hard to change"/"impossible to change"/
17:18 <ildikov> jgriffith: I try not to be this dramatic at the beginning of the week, but you certainly have a point
17:19 <ildikov> anything else for today?
17:19 <ildikov> #action smcginnis to add the API proposal topic to this week's Cinder meeting
17:20 <ildikov> jgriffith: I've just seen your reply on the review
17:21 <jgriffith> ildikov: yeah, hopefully those responses *help*
17:21 <ildikov> jgriffith: do you want to chat about the naming here, or should we just leave it on the review?
17:21 <jgriffith> ildikov: I'm certainly fine with changing the naming around
17:21 <jgriffith> ildikov: totally up to folks here
17:21 <jgriffith> ildikov: I'm certainly not going to draw a line in the sand over naming
17:21 <jgriffith> ildikov: but be careful, or else the API will end up with a dessert-related name :)
17:22 <ildikov> jgriffith: as for me it's really only about the name, so I don't want to change the planned functionality of the API
17:23 <ildikov> jgriffith: personally I wouldn't mind, I have a good relationship with desserts :)
17:23 <jgriffith> haha
17:23 <xyang1> jgriffith: hi, question for you about the comments here: https://review.openstack.org/#/c/387712/5/cinder/volume/manager.py@4391
17:23 <jgriffith> xyang1: yeah?
17:23 <ildikov> jgriffith: let's keep the discussion on the review then and give people the chance to come up with something sane
17:24 <xyang1> jgriffith: do you mean the use case described by Jay is not supported yet?
17:24 <jgriffith> ildikov: works for me; it's easy to change to "update" if that's where we end up
17:24 <jgriffith> xyang1: the use case described by jay.xu appears to be multi-attach
17:25 <jgriffith> xyang1: so correct, it's not implemented here *yet*, but the design allows it
17:25 <ildikov> jgriffith: +1
17:25 <xyang1> jgriffith: sorry, I am lost
17:25 <jgriffith> xyang1: ok... how can I help *find* you :)
17:25 *** alij has joined #openstack-meeting-cp
17:25 <xyang1> :)
17:26 <xyang1> jgriffith: volA attached to VM1 and VM2 on the same host
17:26 <jgriffith> xyang1: so that's multi-attach
17:27 <jgriffith> xyang1: and we will have the ability, via the attachments table, to detect that and notify Nova of it
17:27 <jgriffith> xyang1: that's actually the *easy* case
17:27 <jgriffith> xyang1: the case commented on in the review is different
17:27 <jgriffith> xyang1: Nova has expressed a case where VolA and VolB, attached to InstanceX and InstanceY, share a single connection to HostZ
17:28 <jgriffith> xyang1: I don't know anything about which backends work like this or how they work :(
17:28 <jgriffith> xyang1: but regardless, that would be pretty driver-specific, so I figured it has to live in the backend/driver
17:29 <jgriffith> xyang1: when we do multi-attach, VolA attached to InstanceX and InstanceY on HostZ... that's easier to address
17:29 <xyang1> jgriffith: so our backend will create a different host LUN number for every volume, so in this case that may be 2 connections instead of 1
17:29 <jgriffith> xyang1: Cinder will have enough information to understand that
17:29 <jgriffith> xyang1: right, ALL backends that I'm aware of will have a different connection (LUN ID)
17:30 *** alij has quit IRC
17:30 <jgriffith> xyang1: but this is something that the Nova side has asked for specifically... it may be an NFS or RBD thing
17:30 <jgriffith> xyang1: I fully admit I'm not very familiar with those backends
17:30 <jgriffith> and how they work
17:30 <xyang1> jgriffith: so it seems we are confused about the shared connection definition
17:30 <jgriffith> xyang1: I'm not confused :)
17:31 <jgriffith> xyang1: I understand it perfectly :)
17:31 <xyang1> I mean Jay and I :)
17:31 <jgriffith> xyang1: I know... I'm just giving you grief :)
17:32 <xyang1> If I want to test this patch, which client patch is the right one?
17:32 <jgriffith> xyang1: none of them currently, I'm afraid
17:32 <xyang1> ok
17:32 <jgriffith> xyang1: so if I can get reviews on what's up there and the API, I'll move forward
17:33 * johnthetubaguy sneaks in the back
17:33 <jgriffith> xyang1: but I spent a lot of time on the last couple of iterations and never got anywhere; so I'm admittedly being kinda lazy
17:33 <jgriffith> xyang1: not dumping a bunch of code until we're at least somewhat decided on this direction
17:33 <ildikov> jgriffith: I can look into the Nova patch in parallel when we get to this stage
17:33 <jgriffith> johnthetubaguy: is that you back there sneaking in late?
17:34 <jgriffith> johnthetubaguy: I'll remind you that this class begins promptly at 17:00 :)
17:34 <ildikov> johnthetubaguy: we're talking about shared connections, so if you have anything to add, this is the time :)
17:34 * jgriffith remembers those instructors back in school
17:34 <johnthetubaguy> heh
17:34 <ildikov> jgriffith: LOL :)
17:35 <johnthetubaguy> shared connections suck, but it seems they exist
17:35 <ildikov> jgriffith: you can help me remind kids to do their homework too, as you know all the key expressions ;)
17:35 <jgriffith> haha, sadly I do
17:36 <johnthetubaguy> the nova spec goes into quite a lot of detail on how shared connections would work with the new proposed flow
17:36 <johnthetubaguy> with a few diagrams
17:36 <jgriffith> johnthetubaguy: yeah, they're annoying, but as long as they're a thing we'll have to deal with them
17:36 <johnthetubaguy> jgriffith: yup
17:36 <ildikov> johnthetubaguy: BTW, do we have anything on the gate to test this use case?
17:38 <johnthetubaguy> ildikov: not on our side, AFAIK
17:38 <jgriffith> johnthetubaguy: if we can just figure out the "who" on those, I can track down ways to get it tested
17:39 <johnthetubaguy> yeah, unsure if the os-brick folks have more details on the who
17:39 <jgriffith> johnthetubaguy: I thought Ceph was one of those cases (but perhaps I'm wrong), and NFS types
17:39 <johnthetubaguy> I am assuming it's some Fibre Channel thingy
17:39 <jgriffith> johnthetubaguy: I pinged hemna on that the other day and got nothing
17:39 <jgriffith> johnthetubaguy: even FC uses LUNs for each volume
17:39 <xyang1> jgriffith: +1
17:40 <johnthetubaguy> who knows when to drop the FC connection, or is that not really an option?
17:40 <xyang1> jgriffith: our FC backend creates a host LUN ID for every volume
17:41 <jgriffith> johnthetubaguy: doesn't really work that way; and if it does, via zone-manager, you drop it when you no longer have any devices; BUT the way that usually works is FC is *always* "connected"
17:41 <jgriffith> johnthetubaguy: you just may not have any devices mapped/connected
17:41 <jgriffith> johnthetubaguy: FC is just SCSI
17:42 <johnthetubaguy> fair enough
17:42 <jgriffith> johnthetubaguy: SCSI over FC... thus sometimes confusing
17:42 <johnthetubaguy> we should find out who does the shared thing
17:42 *** alij has joined #openstack-meeting-cp
17:42 <jgriffith> johnthetubaguy: agreed... maybe the dev ML?
17:42 *** gouthamr has quit IRC
17:43 <johnthetubaguy> I guess
17:43 <johnthetubaguy> can I leave that with you Cinder folks?
17:43 <jgriffith> johnthetubaguy: ildikov I can post a question up there if folks think it's helpful?
17:43 <jgriffith> johnthetubaguy: yep
17:43 <johnthetubaguy> thanks, that would be good
17:43 <jgriffith> johnthetubaguy: we can/should figure that out
17:43 <johnthetubaguy> +1
17:43 <ildikov> jgriffith: that would be great, thanks!
17:43 <jgriffith> johnthetubaguy: ildikov who knows, maybe we get lucky and this goes away :)
17:43 * ildikov started to wonder whether shared connections are just an urban legend...
17:44 <jgriffith> although it's trivial to solve
17:44 <johnthetubaguy> ildikov: was thinking the same
17:44 <johnthetubaguy> so the latest version just drops a bit of locking logic, so that's not so bad
17:44 <jgriffith> There are no Urban Legends in OpenStack!!!!
17:44 <jgriffith> :)
17:44 <ildikov> LOL :)
17:44 <johnthetubaguy> countryside legends?
17:44 <jgriffith> Ok, I'll try and track down any cases where a connection is shared
17:45 <ildikov> it's good though if both of you feel it's an easy one to solve; I hope we will only have this kind, if any
17:45 <jgriffith> ildikov: doesn't hurt to keep it in the specs/patches as is currently
17:46 <ildikov> jgriffith: I didn't mean to remove it
17:46 <jgriffith> and best of all, it requires the drivers that care about it to deal with it
17:46 <jgriffith> ildikov: no, I know what you meant; I'm just saying that it's not intrusive to leave the solution we have in place to deal with it
17:46 <johnthetubaguy> do we want to talk live-migrate? I think that's what we were doing at the end of the last hangout
17:46 <ildikov> jgriffith: I was just surprised that after talking about it so much, no one here knows a single example :)
17:46 <jgriffith> anyway
17:46 *** alij has quit IRC
17:47 <jgriffith> johnthetubaguy: sure
17:48 <jgriffith> johnthetubaguy: my thought there was we just create a new attachment and connect like we do currently. We just provide an option to indicate that it's part of a migration, so even if we're not supporting multi-attach we allow this to be called
17:48 <ildikov> jgriffith: I think we're on the same page :)
17:48 <johnthetubaguy> I tried to cover that all here:
17:48 <johnthetubaguy> http://docs-draft.openstack.org/03/373203/12/check/gate-nova-specs-docs-ubuntu-xenial/6e33afc//doc/build/html/specs/ocata/approved/cinder-new-attach-apis.html#live-migrate
17:48 <jgriffith> johnthetubaguy: so Nova would still create a new attachment, it would just have two attachments for the same volume for a brief period of time
17:48 <johnthetubaguy> yep, that's what I would like
17:49 <jgriffith> johnthetubaguy: your wish is my command
17:49 <jgriffith> :)
17:49 <johnthetubaguy> for me, it's clear it's related to a migration, because both attachments have the same instance_uuid
17:49 <jgriffith> ahh... yeah, doc works
17:49 <jgriffith> johnthetubaguy: ahh, good point
17:49 <jgriffith> johnthetubaguy: in that case I wouldn't even need to clutter the API with anything
17:49 <ildikov> johnthetubaguy: we might think about how to make it clear for everyone
17:49 <jgriffith> johnthetubaguy: that's even better
17:50 <ildikov> johnthetubaguy: it became clear to me too when you reminded me, but for anyone who doesn't follow this activity it will still be tricky
17:50 <jgriffith> ildikov: johnthetubaguy I'll exploit the instance UUID to detect it and add a field to the attachment record to indicate it's being migrated
17:50 <johnthetubaguy> multi_attach=True means allowing multiple server_uuids
17:50 <jgriffith> ildikov: johnthetubaguy or actually just set the attach-status to "migrating"
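
A minimal sketch of the live-migration flow agreed above: a second attachment for the same volume and the same instance_uuid is created for the destination host, the duplicate instance_uuid (or a "migrating" attach-status) marks the migration, and the source attachment is dropped once the migration completes. All names and the dict data model are illustrative assumptions, not the real Cinder or Nova code.

```python
import uuid

def begin_live_migration(volume, instance_uuid, dest_host):
    """Create the destination attachment alongside the source one."""
    attachment = {
        'id': str(uuid.uuid4()),
        'volume_id': volume['id'],
        'instance_uuid': instance_uuid,  # same server as the source side
        'host': dest_host,
        'attach_status': 'migrating',
    }
    volume['attachments'].append(attachment)
    return attachment

def is_migrating(volume, instance_uuid):
    """Two attachments with one instance_uuid implies a live migration."""
    return len([a for a in volume['attachments']
                if a['instance_uuid'] == instance_uuid]) > 1

def finish_live_migration(volume, source_attachment_id):
    """Once the server runs on the destination, drop the source record."""
    volume['attachments'] = [a for a in volume['attachments']
                             if a['id'] != source_attachment_id]
```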
17:51 <johnthetubaguy> honestly, this is a bit of our shared model we need people to understand, else we will hit nasty bugs
17:51 <jgriffith> johnthetubaguy: to clarify, server_uuids == instance_uuids?
17:51 <johnthetubaguy> yeah
17:51 <johnthetubaguy> Nova API says "server", not instance
17:51 <ildikov> jgriffith: johnthetubaguy: what if for some reason the second attach request happens by mistake?
17:51 <jgriffith> johnthetubaguy: right, just wanted to clarify
17:51 <johnthetubaguy> a "server" is on a "host", which is sadly a little unclear
17:51 <ildikov> johnthetubaguy: right, I will need a longer transition time for that in my head, my bad, sorry :S
17:52 <jgriffith> johnthetubaguy: should I change the attach APIs, s/instance_xxx/server_xxx/g?
17:52 <johnthetubaguy> ildikov: we know which host it was, and that will probably need manual cleanup
17:52 <jgriffith> ildikov: second attach by accident?
17:52 <jgriffith> johnthetubaguy: +1
17:52 <jgriffith> or just "don't do that" :)
17:53 <ildikov> jgriffith: I personally didn't want to :)
17:53 <johnthetubaguy> jgriffith: it probably wants to be Nova-agnostic I guess, so maybe we need something that is easier to get
17:53 <johnthetubaguy> the evacuate case makes things interesting, FWIW
17:53 <johnthetubaguy> but that's the next discussion
17:53 <jgriffith> johnthetubaguy: well, I'm already faking it anyway... if it's Nova calling I use instance_uuid, if not I use the host
17:54 <ildikov> johnthetubaguy: will this renaming ever be cleaned up in the Nova code?
17:54 <johnthetubaguy> ildikov: seems unlikely, way too hard
17:54 *** njohnston has left #openstack-meeting-cp
17:54 <ildikov> johnthetubaguy: do you mean the questions about who puts the attachment into 'error' state, etc.?
17:54 <johnthetubaguy> yeah
17:54 <ildikov> johnthetubaguy: that's what I thought, just wanted confirmation from an insider
17:54 <johnthetubaguy> so, say os-brick disconnect fails
17:55 <johnthetubaguy> I think putting the attachment into the error state is useful
17:56 <ildikov> is that about evacuate, or os-brick disconnect in general?
17:56 <johnthetubaguy> in general at first
17:57 <johnthetubaguy> I think I talk about it here: http://docs-draft.openstack.org/03/373203/12/check/gate-nova-specs-docs-ubuntu-xenial/6e33afc//doc/build/html/specs/ocata/approved/cinder-new-attach-apis.html#spawn-and-delete
17:57 <johnthetubaguy> basically, the volume is detached from the VM, but not disconnected from the host
17:57 <johnthetubaguy> so put the attachment into an ERROR state, so we know what's happening
17:58 <johnthetubaguy> that would allow someone else to attach to that volume, as it's not attached to the first VM any more, I guess
17:59 <ildikov> I guess except if the VM is on the same host as the previous one?
17:59 <johnthetubaguy> well, that would probably fail the connect
17:59 <ildikov> although I'm not an expert on what that means in practice if disconnect fails
17:59 <johnthetubaguy> which is where we just delete the attachment
18:00 <johnthetubaguy> me neither, honestly
18:00 <johnthetubaguy> it's just cases that have been brought up
18:00 <johnthetubaguy> I have attempted to go through them.
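
A standalone sketch of the failure case walked through above: if the host-side disconnect fails after the volume is detached from the VM, the attachment is marked 'error' rather than deleted, so the stuck host connection stays visible to operators. The disconnect_volume callable stands in for the real os-brick call and, like the dict data model, is an assumption for illustration.

```python
import logging

LOG = logging.getLogger(__name__)

def detach_and_disconnect(attachment, disconnect_volume):
    """Detach from the VM, then try to clean up the host connection."""
    try:
        disconnect_volume(attachment['connection_info'])
    except Exception:
        # The volume is off the VM but still wired up to the host; keep
        # the record around in ERROR so the leftover connection can be
        # found and cleaned up manually later.
        LOG.exception('disconnect failed for attachment %s',
                      attachment['id'])
        attachment['attach_status'] = 'error'
        return attachment
    # Clean disconnect: the attachment can simply be deleted.
    attachment['attach_status'] = 'deleted'
    return attachment
```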
18:01 <ildikov> I think we need some os-brick insiders to go through these cases in detail
18:02 <ildikov> jgriffith: will it be Cinder in the future that sets the state of the attachment to 'error'?
18:03 <jgriffith> ildikov: the concept I thought we discussed with Matt was to allow them to just delete/remove the attachment then
18:04 <jgriffith> ildikov: wait... for the disconnect case?
18:04 <ildikov> jgriffith: for example for that
18:05 <jgriffith> ildikov: I guess we could add the ability to put the attachment into an error state, but I'm unclear what that buys us
18:05 <jgriffith> ildikov: error, removed... what would the difference be?
18:06 <ildikov> jgriffith: I guess the question here is how cleanup will happen in certain cases, what info is needed for that, etc.
18:06 <jgriffith> ildikov: in either case we would need to dissociate the volume from that attachment record anyway
18:06 <jgriffith> ildikov: that's kind of a Brick problem, isn't it?
18:06 <ildikov> jgriffith: if os-brick is enough to figure that out, then removing the attachment sounds fine to me
18:07 <jgriffith> ildikov: I mean from Cinder's perspective, disassociate the volume from the connection so it's not tied up any longer
18:07 <ildikov> jgriffith: my thought was just that we'd kind of be removing the "evidence" that something went wrong
18:07 <jgriffith> ildikov: you'll still have logs though
18:07 <ildikov> jgriffith: which might be bad
18:08 <jgriffith> and honestly this goes down to the philosophy of a hyper-HA device vs a cloud, IMO
18:08 <jgriffith> I guess I'd have to think/talk through this case more
18:08 <ildikov> fair enough
18:09 <jgriffith> maybe it's more important/critical than I'm thinking
18:09 <ildikov> I think we would need someone who knows os-brick better to see how failover works
18:09 <jgriffith> but I don't see it as a really big deal or terrible thing. We currently deal with it a little in Brick and a little in Cinder
18:11 <ildikov> I guess we can start with how it's handled today and see where that leads us
18:11 <jgriffith> ildikov: seems reasonable
18:11 <ildikov> if there's any chance not to overcomplicate things, I'm all for that
18:12 <ildikov> we're over time for today, but we can bring this up next week and see what others think
18:12 <ildikov> jgriffith: johnthetubaguy: does that sound ok? ^^
18:13 <jgriffith> ildikov: sounds good to me
18:13 <ildikov> coolio
18:14 <ildikov> then I'll let you go for today :)
18:14 <ildikov> have a nice day/evening!
18:15 <ildikov> talk to you next week at the latest!
18:15 <ildikov> #endmeeting
18:15 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"
18:15 <openstack> Meeting ended Mon Nov 14 18:15:51 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
18:15 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-11-14-17.00.html
18:15 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-11-14-17.00.txt
18:15 <openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-11-14-17.00.log.html
18:24 *** cartik has joined #openstack-meeting-cp
18:25 <johnthetubaguy> sorry had to leave for a meeting
18:32 *** alij has joined #openstack-meeting-cp
18:37 *** alij has quit IRC
19:56 *** cartik has quit IRC
20:07 *** cartik has joined #openstack-meeting-cp
20:29 *** cartik has quit IRC
20:33 *** brault has quit IRC
20:40 *** tongli has quit IRC
21:04 *** Rockyg has joined #openstack-meeting-cp
21:13 *** alij has joined #openstack-meeting-cp
21:14 *** uxdanielle has joined #openstack-meeting-cp
21:17 *** alij has quit IRC
21:28 *** uxdanielle has quit IRC
21:49 *** uxdanielle has joined #openstack-meeting-cp
22:40 *** edtubill has quit IRC
22:51 *** uxdanielle has quit IRC
22:56 *** edtubill has joined #openstack-meeting-cp
23:04 *** xyang1 has quit IRC
23:05 *** lamt has quit IRC
23:11 *** edtubill has quit IRC
