Wednesday, 2014-10-08

*** dboik has joined #openstack-lbaas00:01
*** dboik has quit IRC00:05
*** VijayB_ has quit IRC01:09
*** TrevorV_ has joined #openstack-lbaas01:19
*** jorgem has joined #openstack-lbaas01:39
*** jorgem has quit IRC01:53
*** fnaval has quit IRC01:58
*** woodster_ has quit IRC02:21
*** ajmiller_ has quit IRC02:52
*** fnaval has joined #openstack-lbaas03:00
*** fnaval has quit IRC03:07
*** IZebra has joined #openstack-lbaas03:19
*** TrevorV_ has quit IRC03:49
openstackgerritBrandon Logan proposed a change to stackforge/octavia: Implementing simple operator API  https://review.openstack.org/12123303:51
*** ptoohill-oo has quit IRC03:53
*** ptoohill-oo has joined #openstack-lbaas03:59
*** sbfox has joined #openstack-lbaas05:08
*** HenryG_ has joined #openstack-lbaas05:29
*** HenryG has quit IRC05:30
*** ksamoray has joined #openstack-lbaas05:30
openstackgerritStephen Balukoff proposed a change to stackforge/octavia: Specification of reference haproxy amphora API  https://review.openstack.org/12680105:44
*** rm_you| has joined #openstack-lbaas05:56
*** ksamoray has quit IRC05:58
*** rm_you|wtf has quit IRC06:00
*** ksamoray has joined #openstack-lbaas06:02
*** ksamoray has quit IRC06:12
*** sbfox has quit IRC06:13
*** sbfox has joined #openstack-lbaas06:14
*** ksamoray has joined #openstack-lbaas06:20
*** sbfox has quit IRC07:14
*** jschwarz has joined #openstack-lbaas07:30
*** Krast has joined #openstack-lbaas07:38
*** Krast has quit IRC08:52
*** Krast has joined #openstack-lbaas09:01
*** Krast has quit IRC09:04
*** amotoki has joined #openstack-lbaas09:05
*** Krast has joined #openstack-lbaas09:05
*** jschwarz has quit IRC10:17
*** Krast has quit IRC10:22
*** ksamoray has quit IRC10:31
*** jschwarz has joined #openstack-lbaas10:39
IZebraHi all10:50
*** IZebra has quit IRC10:51
*** jroyall has joined #openstack-lbaas10:57
*** openstack has joined #openstack-lbaas14:13
sballesbalukoff: blogan I created this diagram to show the LBaaS V2 flow. It is far from complete but I wanted to understand where we still had gaps in our blueprints. https://region-a.geo-1.objects.hpcloudsvc.com/v1/52059731204378/LBaaS/LBv2_workflow.GIF14:16
sballeLast I looked it looked like the Amphora agent is still missing from the blueprint list: https://blueprints.launchpad.net/octavia. I am happy to add it. Let me know14:19
ptoohillMornin'14:23
sballeHey14:23
*** ksamoray has joined #openstack-lbaas15:08
*** ksamoray has quit IRC15:13
*** sbfox has joined #openstack-lbaas15:31
*** ksamoray has joined #openstack-lbaas15:34
rm_workooo, fancy diagram15:51
rm_workAmphora API / Amphora Agent -- which is actually *in* the amphora? both?15:53
*** ksamoray has quit IRC15:57
sballerm_work: thanks :)15:58
sballeyes, that's what I was assuming. I think that sbalukoff is now calling the Amphora API the appliance API15:59
sballeOnce I know I have it right I'll try to redo it with some boundaries so we can show what's in the controller and what is in Amphora, etc.15:59
rm_workyeah15:59
rm_workjust not clear on why the two are separate things, if they're both internal to the amphora16:00
rm_workseems like just a single layer to me16:00
sballethey could be the same. I just saw a REST interface on one side and an agent to forward and process requests.16:01
rm_workunless it's just a logical separation somehow? I'd assume the code would be a single entity16:01
rm_workhmm16:01
rm_workat least the way i'm imagining it implemented, the API code that's listening would be a simple (probably Pecan to match our frontend) app that listens for configs and puts them in place when it gets one16:03
rm_workI guess there's a heartbeat on that side too?16:03
rm_workis that where the agent comes in?16:04
rm_workI guess I'd assume it would go "Amphora API -> ha-proxy", and "Amphora Agent -> Operator API" or something16:05
rm_workessentially "Amphora API" is an outside-in service (listening only) and "Amphora Agent" is an inside-out service (sending data only)16:06
rm_workbut I need to re-read the 1.0/2.0 spec today16:06
rm_workotherwise it looks exactly like my current understanding16:07
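
A minimal sketch of the outside-in listener rm_work describes above, assuming Pecan as he suggests; the controller class, route, and file path here are illustrative, not from any Octavia spec:

    import pecan
    from pecan import expose, rest

    class ConfigController(rest.RestController):
        """Hypothetical amphora-side endpoint that accepts a pushed config."""

        @expose(content_type='text/plain')
        def put(self):
            # Assume the request body is a complete haproxy config
            # pushed down by the controller.
            new_config = pecan.request.body
            with open('/etc/haproxy/haproxy.cfg', 'wb') as f:
                f.write(new_config)
            # A real agent would validate the config and reload haproxy here.
            return 'OK'
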
rm_workblogan: did you look at that diagram yet?16:07
*** fnaval has joined #openstack-lbaas16:07
*** markmcclain has quit IRC16:09
*** amotoki has quit IRC16:09
*** german_ has joined #openstack-lbaas16:13
sballesorry just had a repairman come and look at my oven.16:14
sballeYes, we need something to do heartbeats, wake-up of the controller, etc...16:16
sballeI just wanted to do a diagram so we can talk about the gaps and maybe the pieces we do not understand well yet16:16
*** woodster_ has joined #openstack-lbaas16:16
sballeWe could maybe talk about it at the Octavia meeting today16:17
rm_workyeah, possibly it is a problem with my understanding and not the diagram :P16:29
rm_workwe'll discuss16:29
*** sbalukoff has quit IRC16:32
*** amotoki has joined #openstack-lbaas16:36
TrevorVdougwig you around?16:51
dougwigyes16:52
dougwigTrevorV: yes, what's up?16:53
TrevorVI had to switch to a different computer at work, and I'm getting a tox error including "dot"... sphinx.errors.SphinxWarning: WARNING: dot command 'dot' cannot be run (needed for graphviz output), check the graphviz_dot setting16:56
TrevorVIs it a requirements failure or something here?16:56
TrevorVI remember having this issue before and couldn't remember how to fix it, thought I'd ask you16:56
TrevorVdougwig I think rm_work just helped me out16:58
rm_workyeah just needed to install graphviz16:58
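
For reference, the error TrevorV pasted means Sphinx cannot find the graphviz "dot" binary; installing the graphviz system package puts it on the PATH, or the graphviz_dot setting the warning mentions can point at the binary explicitly in the docs conf.py (the path below is just an example):

    # docs conf.py -- only needed if 'dot' is not on the PATH after
    # installing the graphviz package; the path shown is an example.
    extensions = ['sphinx.ext.graphviz']
    graphviz_dot = '/usr/bin/dot'
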
openstackgerritTrevor Vardeman proposed a change to stackforge/octavia: Implementing simple operator API  https://review.openstack.org/12123317:01
*** xgerman has joined #openstack-lbaas17:09
*** amotoki has quit IRC17:11
*** german_ has quit IRC17:11
*** HenryG is now known as HenryG_afk17:20
*** sbfox has quit IRC17:26
*** markmcclain has joined #openstack-lbaas17:30
*** sbalukoff has joined #openstack-lbaas17:31
*** VijayB has joined #openstack-lbaas17:31
sbalukoffHi folks!17:33
*** VijayB has quit IRC17:33
sbalukoffsballe: I'll check that out in a minute.17:33
*** VijayB_ has joined #openstack-lbaas17:33
sbalukoffDoes anyone have anything they'd like added to the agenda today? (I've only got a couple small things, so it might be a short meeting, eh.)17:34
dougwigi think brandon and ptoohill wanted to talk about floating ups.17:35
dougwigips17:35
sbalukoffOk, sounds good.17:35
sballesbalukoff: I would like to talk about the diagram and make sure we agree on where things aren't 100% thought out and see if we have gaps: https://region-a.geo-1.objects.hpcloudsvc.com/v1/52059731204378/LBaaS/LBv2_workflow.GIF17:37
sbalukoffsballe: Sounds good.17:37
sballeI'll rework it based on our comments but just so we know where we are at17:38
sbalukoffCool. I've added that to the agenda.17:38
sballethanks BTW short meetings are good :)17:39
sbalukoffI totally agree there!17:41
sbalukoffAlso, yes, I consider the haproxy amphora API and agent to essentially be the same thing (or rather probably the same couple of people will be working on them)17:43
*** HenryG_afk is now known as HenryG17:44
sbalukoffThe set of scripts that run on the amphora will probably be more than a single agent (what with having to emit regular stats updates and whatnot), but the part that powers the API that lives on the amphora is definitely a part of that.17:44
sbalukoffrm_work and sballe: If we want to indicate that they're separate entities on the amphora, I have no problem with that, though I imagine they'll probably be sharing some code.17:46
sballeWe agree.17:46
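
Purely as illustration of the "emit regular stats updates" piece sbalukoff mentions, a sketch of what an inside-out sender on the amphora might look like; the transport, endpoint, and payload are assumptions, not anything from the spec:

    import json
    import time

    import requests

    def emit_stats_forever(controller_url, amphora_id, interval=10):
        """Hypothetical stats loop; a real agent would read haproxy's stats socket."""
        while True:
            stats = {'amphora_id': amphora_id, 'haproxy_up': True}
            try:
                requests.post(controller_url + '/stats',
                              data=json.dumps(stats),
                              headers={'Content-Type': 'application/json'},
                              timeout=5)
            except requests.RequestException:
                pass  # controller unreachable; try again on the next tick
            time.sleep(interval)
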
blogansballe: I've added a layer between the operator API and controller, I'm calling it a handler right now18:01
bloganthe 0.5 handler will just send requests directly to the controller, a new handler will be created to send requests to a queue18:02
bloganfor 1.018:02
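
A sketch of the pluggable handler layer blogan is describing: one interface the API talks to, with the two swappable implementations he names (direct for 0.5, queue for 1.0). All names here are hypothetical:

    import abc

    class BaseHandler(object):
        """Interface the API layer calls, regardless of transport."""
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def create_load_balancer(self, lb):
            pass

    class DirectHandler(BaseHandler):
        """0.5-style: hand the request straight to the controller."""
        def __init__(self, controller):
            self.controller = controller

        def create_load_balancer(self, lb):
            self.controller.create_load_balancer(lb)

    class QueueHandler(BaseHandler):
        """1.0-style: drop the request onto a queue via an RPC client."""
        def __init__(self, rpc_client):
            self.client = rpc_client

        def create_load_balancer(self, lb):
            self.client.cast({}, 'create_load_balancer', lb=lb)

Swapping transports is then a configuration change rather than an edit to the API resource code, which is the separation blogan argues for below.
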
blogansballe: also the operator API will access the DB as well18:03
blogansballe: I haven't thought about it much so I could be wrong, but will the controller amphora driver have db access? or will the controller do that?18:06
rm_workdougwig: can you mark https://review.openstack.org/#/c/123492/ as WIP again?18:15
rm_workhey dougwig, for usage on TLSInfo18:18
rm_workdougwig: I'm tempted to move it from being a big collection of static methods, to being an actual instantiated object18:18
rm_workso the usage would be less like A and more like B in the thing i'll link in a sec18:18
rm_workerr also, considering moving from CONTAINER_ID to CONTAINER_REF and parsing the barbican endpoint out of that <_<18:21
*** sbfox has joined #openstack-lbaas18:25
*** sbfox has quit IRC18:26
*** mlavalle has joined #openstack-lbaas18:27
rm_workdougwig / sbalukoff / blogan / ptoohill: http://pastebin.com/kkAXxZar18:28
rm_workxgerman: ^^18:28
rm_workwhich do you guys prefer?18:28
rm_workerr18:29
rm_worktypo18:29
xgermanI like objects18:29
dougwigmuch prefer B.18:29
rm_workfixed: http://pastebin.com/4TWEpUXY18:29
rm_workanywho yeah18:29
rm_workk18:29
rm_workI do too18:29
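
The pastes above have since expired, so this is only a hypothetical reconstruction of the two shapes being voted on (all names illustrative). A is a bag of static methods; B carries state in an instance:

    class CertManager(object):
        """Illustrative stand-in for the TLS helper in the expired paste."""

        def __init__(self, project_id):
            self.project_id = project_id
            self.session = None  # e.g. a cached keystone auth

        # A-style: a static method, no shared state between calls
        @staticmethod
        def get_cert_a(container_id, project_id):
            return 'pem-data-for-%s-%s' % (container_id, project_id)

        # B-style: an instance method that can reuse self.session
        def get_cert(self, container_id):
            return 'pem-data-for-%s-%s' % (container_id, self.project_id)

    # A: every call stands alone
    cert = CertManager.get_cert_a('container-uuid', 'tenant-1')

    # B: state (the keystone auth) can be shared across calls
    manager = CertManager('tenant-1')
    cert = manager.get_cert('cert-container-uuid')
    key = manager.get_cert('key-container-uuid')
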
rm_workalso, container_id vs. container_ref18:30
rm_worktheoretically supporting container_ref is more flexible18:30
rm_workbecause it would support multiple barbican repos18:30
rm_workand be less config on our side18:30
bloganshould line 21 be cert_info instead of tls_info?18:30
rm_workas long as the barbican repo supported the identity federation correctly18:30
rm_workblogan: yes, see my second paste :P18:30
rm_workthat was the typo18:30
rm_workalright I am going to do that18:32
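
A sketch of why container_ref would mean less config on the Octavia side: the full Barbican URL carries the endpoint with it, so the ref can be split back apart instead of the endpoint being configured per deployment (hypothetical helper, Python 2 to match the era):

    from urlparse import urlparse  # urllib.parse on Python 3

    def split_container_ref(container_ref):
        # e.g. https://barbican.example.com:9311/v1/containers/<uuid>
        parsed = urlparse(container_ref)
        endpoint = '%s://%s' % (parsed.scheme, parsed.netloc)
        container_id = parsed.path.rstrip('/').rsplit('/', 1)[-1]
        return endpoint, container_id
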
rm_workor as blogan points out, if we wanted something more like exhibit A, we could remove the class from the equation altogether and they'd be module methods18:33
rm_worksince that's essentially already what they are18:33
ptoohillYea, B is my preference18:33
rm_workbut using the class would allow multiple subsequent "get()" calls to share a keystone auth18:33
rm_workif it's all for the same tenant18:33
ptoohillhmmm18:33
rm_workyeah, that's B18:34
*** jorgem has joined #openstack-lbaas18:34
rm_work*unless* we assume that the Trust is specific to a resource and not a user18:34
rm_workwhich COULD be the case in the far future18:34
rm_workthen it can't share the keystone auth either way18:34
rm_workin that case it's really more like this anyway:18:37
rm_workhttp://pastebin.com/zz5cAHzr18:37
rm_workbecause trust_id is per-container, and consumer_url could feasibly be different if the object is cached/re-used?18:37
rm_workdunno if that's something we'd expect to happen ever tho18:38
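
Again hypothetical, since that paste is gone too: if the trust is scoped to the container rather than the user, the trust_id has to ride along on every call, so no keystone auth can be cached on the instance either way:

    class CertManager(object):
        """Variant of the earlier illustrative stand-in."""

        def __init__(self, consumer_url):
            # consumer_url could differ if the object is cached/re-used
            self.consumer_url = consumer_url

        def get_cert(self, container_ref, trust_id):
            # trust_id is per-container, so nothing auth-related is shared
            return 'pem-data-for-%s' % container_ref

    manager = CertManager('http://octavia.example.com/consumer')
    cert = manager.get_cert('https://barbican.example.com/v1/containers/uuid',
                            trust_id='trust-for-that-container')
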
xgermannow you lost me18:38
rm_worklol18:38
rm_worksometimes I ramble18:38
rm_workI operate very well with a sort of "stream of consciousness" dialogue going on :)18:38
*** sbfox has joined #openstack-lbaas18:46
TrevorVsbalukoff you runnin the meeting this afternoon again?18:48
dougwigrm_work: i think everyone said B, and at this point you are only arguing with yourself.  :)19:00
sbalukoffTrevorV: Yep.19:01
TrevorVsweet sbalukoff, I'll be online for it :)19:03
rm_workwell, I'm debating specifics19:06
sbalukoffrm_work: I'm also in favor of option B. :)19:07
rm_workyeah I am too19:08
rm_workjust deciding exactly how to accomplish B19:08
sbalukoffYour most recent paste seems like a good way to do it, IMO.19:09
sbalukoffSince trust_id is per container.19:09
sbalukoffI wouldn't expect the consumer_url to change, but since it's a per-container attribute... again your most recent paste seems like a good way to go.19:10
*** VijayB_ has quit IRC19:26
*** mwang2 has joined #openstack-lbaas19:57
*** jwarendt has joined #openstack-lbaas20:03
*** jwarendt has quit IRC20:03
*** dlundquist has joined #openstack-lbaas20:10
*** dlundquist has left #openstack-lbaas20:13
*** xgerman_ has joined #openstack-lbaas20:17
*** VijayB has joined #openstack-lbaas20:22
*** vivek-ebay has joined #openstack-lbaas20:23
*** sbfox has quit IRC20:26
roharasbalukoff: i'm travelling this week and might not be at the meeting today20:34
sbalukoffrohara: The meeting is underway right now. :)20:35
sbalukoffrohara: But that's OK, in any case.20:35
*** sbfox has joined #openstack-lbaas20:37
roharasbalukoff: gah my timezone is off20:37
sbalukoffNo worries. :)20:40
bloganhello20:59
*** xgerman has quit IRC20:59
*** xgerman_ is now known as xgerman20:59
bloganso handler (name can be changed, and probably should be) is just an easy way to swap out methods to get requests to the controller21:00
sbalukoffblogan: That sounds like a good idea to me.21:00
xgermanWe can always use a queue; zeromq now and something better later?21:00
sbalukoffThis should lead to more loosely coupled code, which is what we're going to need when we go to multiple controllers anyway, right?21:00
johnsom_I am thinking zeromq as a start21:01
blogansbalukoff: yes indeed21:01
blogan0.5 was supposed to just be straight to the controller21:01
xgermanwell, zeromq is straight21:01
sbalukoffYeah, but also with the intent to eventually move to v1.0 and v2.021:01
sballeif we do use a queue we should use oslo.messaging since that will allow us to swap out the queue based on the OS. For example, Ubuntu mostly uses RabbitMQ; I am told Fedora uses qpid21:02
sbalukoffBetter question: Is anyone objecting to this?21:02
bloganwell then a zeromq handler can be created21:02
TrevorVI have one problem with its existence: I feel it's unnecessary. Why wouldn't the API just send the request to the queue? blogan posed the "what if the operator doesn't want to use a queue? then toggling versions of the API is difficult"21:02
johnsom_Yeah, later versions should use a real queue.  Oslo messaging or something...21:02
TrevorVI was just wondering what the community thought about it all, hence the question21:02
xgermanwe love queues!21:03
bloganyeah definitely, and the handler will make it much easier to swap out the real queue with say a zeromq solution21:03
xgermanI am questioning the need for a handler21:03
TrevorVxgerman same21:03
sballeme too21:03
xgermanoslo messaging is the abstraction layer we aim for :-)21:03
sbalukoffblogan: What would the handler get us that a queuing system won't?21:03
johnsom_xgerman +121:04
sballexgerman: +121:04
bloganwell the main thing is we don't have to go changing code in the resource layer of the API, we'll just swap out one module for another21:04
sbalukoffHow hard to you see it being, to have to change that code?21:04
bloganand i know some people at rackspace have wanted to be able to have a full test deployment without queues21:05
bloganso this would just give the ability to quickly swap in and out instead of changing the code itself21:05
sbalukoffWell, eventually queues are going to be part of the equation, I think, right? We can't really do this without them in v1.0.21:05
sballesbalukoff: +121:06
xgerman+121:06
TrevorVsbalukoff I don't think we're talking about removing queue's in all of our ideal situations21:06
TrevorVIts the necessity of the Handler that we're talking about21:06
johnsom_sbalukoff: +121:06
TrevorVShould the API communicate with the queue, or a handler21:06
sbalukoffTrevorV: Right.21:06
xgermanwell, is oslo too hard to work with?21:06
xgermanI rather have queues sooner than later :-)21:07
sballeI agree we should go API --> oslo.messaging --> queue21:07
sbalukoffMy point is that wanting to test without using a queue in v1.0 and beyond probably doesn't make a lot of sense, since any production deployment will definitely use queues.21:07
sbalukoffI guess...21:07
bloganvery well, i was just thinking a more abstracted approach might be better so that we don't have to change the code much21:08
TrevorVsbalukoff it depends on what you're testing, right?21:08
sbalukoffWell, removing the queue for testing purposes does eliminate that as a potential source of failure if you're not actually interested in testing the queue...21:08
TrevorVI'm not fully against the handler, since, as blogan points out, the abstraction could be nice depending on the situation.  I was looking for a reason to keep it that way or to go straight from API21:08
blogani just didn't want all the queue setup code in the API layer, it would be in this other layer, just seemed cleaner21:08
sbalukoffblogan: How strongly do you feel that way?21:09
TrevorVsbalukoff strong enough that he beat me up when I opposed :(21:09
TrevorVnah I'm kidding21:10
sbalukoffTrevorV: Haha!21:10
bloganstronger than most, but i won't rage21:10
* TrevorV rubs his bruises21:10
roharai feel like the Tacker (ServiceVM) project tackled a similar problem21:10
bloganrohara could you expound on this21:10
sbalukoffblogan: If you think the code will be significantly cleaner (and therefore easier to test, debug and maintain), that is potentially a good reason to have a handler.21:10
sbalukoffI really don't have a strong opinion either way on this-- I'm mostly still trying to understand all the pros and cons of either approach.21:11
roharablogan: I am trying to remember the details. I admit I have not followed Tacker as closely as I had wanted21:11
johnsom_blogan: In a deployment, where would this handler live?21:12
bloganwell are there any really strong objections?21:12
TrevorVblogan mostly from me it seems ha ha21:12
TrevorVWhich I'm okay with it for its modularity21:12
sballeblogan: I am not sure what it gives us.21:12
bloganjohnsom_: it's just a code piece, not another component; the API code would pass it to the handler, then the handler would do whatever it needs to do to get the data to the controller21:13
johnsom_Ok21:13
TrevorVsballe simplification of API layer, modular location for queue interfacing code21:13
sballeand I am worried that we won't see a lot of the issues we will be running into when we switch to a queue21:13
xgermanoslo does remote RPC, so basically your handler would be call('x') on the server21:13
blogansballe: a cleaner abstraction layer, separation of code, easier to test21:13
TrevorVsballe this isn't a queue replacement, it's just a layer between the API and the queue21:14
roharablogan: i am thinking of the rpc proxy blueprint21:14
roharahttps://blueprints.launchpad.net/oslo.messaging/+spec/message-proxy-server21:14
sbalukoffsballe: Yep, we're still talking about having queues in v1.0 at least (and possibly v0.5?)21:14
roharaprobably not a good comparison21:14
sballeso it would be API --> handler --> oslo.messaging --> queue21:14
blogansballe: yes or API --> handler --> controller if one wanted21:15
johnsom_sbalukoff: Frankly I would like to see the oslo messaging in 0.5.21:15
bloganor API --> handler --> oslo.messagingv2 --> queue21:15
sbalukoffsballe: The handler is just a portion of the code. It would effectively be part of the API stuff, but keeps queue communications cleanly outside of other API concerns.21:15
TrevorVsballe in the case that one doesn't want to use a queue (for any given reason... ex: faster testing)21:15
sbalukoffUnless I'm misunderstanding something entirely...21:16
xgermanhttps://wiki.openstack.org/wiki/Oslo/Messaging#Client_Side_API21:16
sbalukoff(Which is totally possible. I'm somewhat of an idiot, after all.)21:16
blogansbalukoff: that is correct21:17
xgermana call to the queue is a 3 line affair21:17
xgermanso in my opinion that doesn't need a driver but...21:17
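
Roughly what xgerman's "3 line affair" looks like with 2014-era oslo.messaging; the topic name and payload are illustrative:

    from oslo.config import cfg
    from oslo import messaging  # namespace-package import of the era

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='octavia_controller')
    client = messaging.RPCClient(transport, target)
    # Fire-and-forget cast to the controller side.
    client.cast({}, 'create_load_balancer', lb={'name': 'lb1'})
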
* TrevorV kicks xgerman under the table for saying 'driver' in this case21:17
sbalukoffxgerman: I don't think this is a "driver" per se. But you're right that just using Oslo here might be the abstraction that satisfies blogan's concern?21:18
sbalukoffblogan?21:18
sballesbalukoff: +121:18
bloganwell it doesn't provide the abstraction if I don't want a queue at all21:18
rm_workyeah I wouldn't bother with a handler layer21:19
bloganbut if we're changing the scope of 0.5 to now have a queue, then this would have never come up21:19
TrevorVblogan that's fair21:19
rm_workoslo.queue(thing) if queue else do(thing)21:19
sbalukoffblogan: True... are you thinking something that is just a really light-weight wrapper around Oslo calls then? Something that can short-circuit and avoid the queue if the queue isn't desired for certain testing scenarios?21:19
sbalukoffAgain, we're not going to be able to avoid having a queue in v1.0 and beyond.21:19
sballesbalukoff: +121:20
sbalukoffSo the idea of testing without a queue really starts to be less meaningful with v1.0 and beyond.21:20
bloganoh I bet we could, but I wouldn't go for it21:20
rm_workoslo.queue(thing) if queue else do(thing)21:20
rm_workoslo.queue(thing) if queue else do(thing)21:20
rm_work^^21:20
bloganyes we saw it21:20
rm_workwell, you didn't seem to be appropriately in awe of the simplicity21:20
TrevorVShould we call a vote?  For/Against?21:21
bloganno, I think I'm the only one21:21
sbalukoffblogan: It is true that introducing a queue in v0.5 would be a significant change of scope.21:21
rm_worknote that this is also what Barbican does right now21:21
rm_workoooo votes!21:21
bloganbut we need to update 0.5 to now have a queue21:21
rm_workalso it would make less temp-work21:21
xgermanI am ok with that21:21
roharai'm willing to vote on if we vote21:21
xgermanhaving a queue in 0.521:21
TrevorVWait, so did the discussion change to "we want a queue in 0.5" now?21:22
johnsom_Agreed, it is a change in scope, but I think it is the right plan.  In my controller thinking I have had a queue penciled in...21:22
TrevorVWe apparently agree on "no handler"21:22
sballeCan we still get an Octavia+Neutron LBaaS 0.5 in kilo if we add a queue?21:22
sballesorry, Octavia 0.5 + Neutron LBaaS V221:23
sbalukoffHmmmm....21:23
dougwigforgive the stupid question, but if we're using an IPC queue, what is the point of fronting it with HTTP?  we can just have the lbaas driver do a queue insertion.21:24
rm_workI think so? it's not a HUGE change21:24
rm_workand does prevent some future rewrites and some possible complexity from dealing with two ways of doing things21:24
rm_worksballe: ^^21:24
xgerman+121:24
sbalukoffdougwig: We'll still need an Operator API.21:24
blogandougwig: i believe that was the original plan, but since we weren't having a queue we needed a way to actually send creates to the controller21:24
xgerman+121:24
sbalukoffAnd Neutron LBaaS v2 does not handle operator API stuff.21:24
xgermanyep21:25
bloganthis was a discussion on the operator-api spec21:25
dougwigaren't there about a metric bajillion web apps out there that have solved this problem without needing rabbit?  seems like we should just do http OR a queue.  both some weird mix of both.21:25
sballesbalukoff: I agree. I just put the Operator API under Octavia.21:25
dougwig /both some/not some/21:25
bloganoh oh, one more reason I wanted the handler, which was a cherry on top, is because it would essentially become the same thing as the neutron lbaas driver layer if the octavia API ever became the openstack lbaas21:25
bloganbut i don't know if that is that important21:25
sbalukoffsballe: +121:25
sbalukoffblogan: Ok, again, I'm not hearing any strong revulsion to the idea of a light-weight handler which (in v1.0) will wrap queue commands.21:27
TrevorVsbalukoff I'm thinking there needs to be an executive decision here then, yay or nay?21:28
rm_worksbalukoff: I'm also not hearing any strong opinions FOR it21:28
sbalukoffSo... I'm not in principle against the idea, and I'm definitely for code which is more modular, and simpler to test, debug, and maintain.21:28
rm_work(other than blogan)21:28
TrevorVsbalukoff said "yes handler" you read it here folks, we doin a handler21:29
sbalukoffOk, then the executive decision is this: blogan wants to write it, there are some good reasons to have it, and no strong reasons not to have it... so, go for it!21:29
bloganhonestly the main reason was to support both no queue and queue21:29
sballeI guess we can always take it out later if it doesn't make sense to have it.21:30
sbalukoffsballe: If it's lightweight, that shouldn't be hard.21:30
TrevorVAlright, so we're decided to keep it as it is described.21:30
TrevorVSweet, discussion moot :P21:31
sbalukoffTrevorV: I don't think this was a worthless discussion.21:31
sballeI am still thinking about the no-queue and queue options and I am not totally convinced that that is a good reason, at least not for HP. But if RAX has that need then we should make sure they get what they need too21:31
* TrevorV was joking and sbalukoff didn't know.....21:31
sbalukoffTrevorV: I'm immune to sarcasm.21:31
xgermansballe +121:32
TrevorVAlright, I'm off, talk to you guys in the morning for neutron lbaas (potentially :D  )21:32
sbalukoffxgerman and sballe: Yeah, I agree-- I don't see testing without a queue being all that relevant for us either-- but blogan did seem to think this would make the code cleaner and simpler to maintain, test, and troubleshoot. And I'm definitely in favor of that.21:34
sbalukoffI'm inclined to believe he's right in that prediction.21:35
rm_workugh, I will type at you in the morning, but I give no guarantees about whether I'll really *be there*21:36
rm_workno clue how sbalukoff manages, in PDT21:36
sbalukoffrm_work: I have a confession to make: I actually fell asleep again 15 minutes into last week's meeting. XD21:37
sbalukoffI'm just glad I wasn't stuck with my face on the x key or something.21:37
rm_workheh21:37
rm_workI ... have also fallen asleep in the middle of one of those meetings before T_T21:37
sbalukoffIt's about to get even suckier as the daylight savings time shift happens...21:40
sbalukoffThe 6:00-7:00am hour is the Evil hour: If you're awake then you've either stayed up too late, or gotten up too early.21:40
sballeWow 6am :-(21:41
sbalukoff(Keep in mind that my usual bed time is about 3:00am... I'm very much a night owl.)21:42
rm_worksbalukoff: same21:56
rm_work6-7am is ungodly21:57
rm_worki am really tempted to write a bot that says my status update when you do like... #rm_update21:57
sballerm_work: You are Central Timezone so it is not that bad ;)21:57
rm_worksballe: still pretty bad T_T21:57
sballe8am?21:57
rm_workespecially when jorgem makes me be *at work* for the meeting21:57
sbalukoffHeh!21:58
* rm_work glares menacingly at jorgem 21:58
* sballe lol21:58
*** dboik has quit IRC22:00
*** mwang2 has quit IRC22:05
*** vivek-ebay has quit IRC22:08
*** jorgem has quit IRC22:40
*** vivek-ebay has joined #openstack-lbaas22:59
*** ptoohill-oo has joined #openstack-lbaas23:11
*** mlavalle has quit IRC23:18
*** ptoohill-oo has quit IRC23:28
*** ptoohill-oo has joined #openstack-lbaas23:30
