Wednesday, 2014-10-01

00:28 *** marcoemorais has quit IRC
01:07 *** EricGonczer_ has joined #openstack-containers
01:10 *** EricGonczer_ has quit IRC
01:13 *** unicell has joined #openstack-containers
01:14 *** jeckersb is now known as jeckersb_gone
01:33 *** EricGonczer_ has joined #openstack-containers
01:35 *** PaulCzar has quit IRC
01:42 *** EricGonczer_ has quit IRC
01:48 *** kebray has joined #openstack-containers
02:39 *** bitblt has joined #openstack-containers
02:39 *** bitblt has quit IRC
02:39 *** bitblt has joined #openstack-containers
02:39 *** bitblt has left #openstack-containers
02:42 *** harlowja is now known as harlowja_away
02:48 *** kebray has quit IRC
03:05 *** EricGonczer_ has joined #openstack-containers
03:11 *** EricGonczer_ has quit IRC
03:45 *** nshaikh has joined #openstack-containers
04:00 *** unicell has quit IRC
04:13 *** nshaikh has quit IRC
04:38 *** unicell has joined #openstack-containers
05:01 *** unicell has quit IRC
05:14 *** marcoemorais has joined #openstack-containers
05:15 *** marcoemorais1 has joined #openstack-containers
05:19 *** marcoemorais has quit IRC
05:23 *** adrian_otto has joined #openstack-containers
05:32 *** nshaikh has joined #openstack-containers
06:18 *** diga has joined #openstack-containers
06:19 <diga> Hi
06:31 *** zul has quit IRC
06:35 *** diga has quit IRC
06:35 *** diga has joined #openstack-containers
06:41 *** diga has quit IRC
06:42 *** diga has joined #openstack-containers
06:47 *** diga1 has joined #openstack-containers
06:49 *** diga has quit IRC
06:52 *** diga1 has quit IRC
06:52 *** zul has joined #openstack-containers
07:01 *** marcoemorais1 has quit IRC
07:18 *** diga1 has joined #openstack-containers
07:25 *** diga1 has quit IRC
07:42 *** diga1 has joined #openstack-containers
07:47 *** diga1 has quit IRC
07:48 *** coolsvap|afk is now known as coolsvap
07:57 *** stannie has joined #openstack-containers
08:15 *** julienvey has joined #openstack-containers
08:16 *** diga1 has joined #openstack-containers
08:24 *** diga1 has quit IRC
08:24 *** coolsvap is now known as coolsvap|afk
08:50 *** diga1 has joined #openstack-containers
08:54 *** diga1 has quit IRC
09:00 *** unicell has joined #openstack-containers
09:18 *** diga1 has joined #openstack-containers
09:27 *** julienve_ has joined #openstack-containers
09:28 *** diga1 has quit IRC
09:30 *** julienvey has quit IRC
09:49 *** diga1 has joined #openstack-containers
09:52 *** diga1 has quit IRC
09:52 *** diga1 has joined #openstack-containers
10:02 *** diga1 has quit IRC
10:38 *** unicell has quit IRC
10:46 *** unicell has joined #openstack-containers
11:08 *** EricGonczer_ has joined #openstack-containers
11:23 *** EricGonczer_ has quit IRC
11:34 *** unicell has quit IRC
11:34 <nshaikh> hey diga
11:34 *** unicell has joined #openstack-containers
11:37 *** nshaikh has quit IRC
12:08 *** EricGonczer_ has joined #openstack-containers
12:11 *** EricGonczer_ has quit IRC
12:11 *** nshaikh has joined #openstack-containers
12:13 *** nimissa1 has left #openstack-containers
12:17 *** adrian_otto has quit IRC
12:29 *** jeckersb_gone is now known as jeckersb
12:36 *** julim has joined #openstack-containers
12:51 *** nshaikh has quit IRC
12:53 *** nshaikh has joined #openstack-containers
12:55 *** jeckersb is now known as jeckersb_gone
12:56 *** jeckersb_gone is now known as jeckersb
12:56 *** thomasem has joined #openstack-containers
13:17 *** nshaikh has quit IRC
13:38 *** julienve_ has quit IRC
13:39 *** julienvey has joined #openstack-containers
14:28 *** adrian_otto has joined #openstack-containers
15:03 *** adrian_otto has quit IRC
15:35 *** n0ano has joined #openstack-containers
15:42 *** mikedillion has joined #openstack-containers
16:00 *** adrian_otto has joined #openstack-containers
16:01 *** bauzas has joined #openstack-containers
16:01 <adrian_otto> hi everyone
16:01 <bauzas> adrian_otto: hi
16:01 <adrian_otto> bauzas is joining us to follow up on our initial Scheduling/Gantt talk from the Magnum perspective
16:01 <n0ano> o/
16:01 <adrian_otto> I am going to start a logged meeting
16:01 <adrian_otto> #startmeeting containers
16:01 <openstack> Meeting started Wed Oct  1 16:01:58 2014 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02 <openstack> The meeting name has been set to 'containers'
16:02 <adrian_otto> #topic Gantt and Magnum - Scheduling for Containers in OpenStack
16:02 <adrian_otto> welcome bauzas
16:02 <bauzas> adrian_otto: thanks
16:02 <bauzas> n0ano just joined us too, he's also working on Gantt
16:03 <adrian_otto> thanks for agreeing to meet with us. The purpose of today's discussion is to add some detail about how Magnum will approach the subject of scheduling
16:03 <bauzas> for sure, I would love to know your requirements
16:03 <adrian_otto> resource scheduling is something that Nova already does, and it has a modular interface for adapting how it's done.
16:03 <bauzas> right
16:03 <adrian_otto> we understand that you plan to pull out the scheduling piece as a standalone project named Gantt
16:04 <bauzas> right too
16:04 <adrian_otto> once that happens, other projects could tap into that capability, which is what we would like to do
16:04 <bauzas> that's the idea
16:04 <adrian_otto> also, last time we met, we covered our initial intent for Magnum
16:04 <n0ano> +1
16:04 <bauzas> because we're convinced that scheduling needs to be holistic
16:04 <adrian_otto> to support two placement strategies for containers within Nova instances (through the help of an in-guest agent)
16:05 <adrian_otto> 1) To place a container on a specific instance id
16:05 <bauzas> right, could you please just restate here for n0ano very briefly?
16:05 <adrian_otto> 2) To repeatedly fill an instance with containers until no more will fit, and then we will create another instance, fill that, etc. We have been referring to this mode as "simple sequential fill".
16:06 <bauzas> right
16:06 <adrian_otto> as containers are removed and Nova instances are left completely vacant, the instances may be automatically deleted by Magnum
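
A minimal sketch of the "simple sequential fill" strategy described above, including the automatic deletion of vacant instances. All names here are hypothetical, and this is not Magnum's actual implementation (which did not exist yet at this point):

    class Instance:
        """A Nova instance used as a container host (illustrative model)."""
        def __init__(self, capacity):
            self.capacity = capacity          # how many containers fit
            self.containers = []

    class SimpleSequentialFill:
        """Fill each instance until full, then create another one."""
        def __init__(self, create_instance, delete_instance):
            # create_instance/delete_instance stand in for Nova API calls
            self.instances = []
            self._create = create_instance
            self._delete = delete_instance

        def place(self, container):
            for instance in self.instances:
                if len(instance.containers) < instance.capacity:
                    instance.containers.append(container)
                    return instance
            instance = self._create()         # no room anywhere: boot a new one
            self.instances.append(instance)
            instance.containers.append(container)
            return instance

        def remove(self, container):
            for instance in self.instances:
                if container in instance.containers:
                    instance.containers.remove(container)
                    if not instance.containers:   # left vacant: delete it
                        self.instances.remove(instance)
                        self._delete(instance)
                    return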
16:06 <adrian_otto> so clearly those two modes are a first step, and we can get there independently from Gantt, giving us some runway in terms of time
16:06 <adrian_otto> we would like to be able to do more sophisticated placement in the future
16:07 <adrian_otto> example use cases are instance affinity (think host affinity)
16:07 <bauzas> that makes sense
16:07 <adrian_otto> and instance anti-affinity
16:07 <bauzas> right
16:07 <adrian_otto> we may also want other concepts such as zone filters (to represent network performance boundaries, etc.)
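
In Nova's filter scheduler, concepts like these are expressed as host filters; an equivalent over Nova instances might look like the sketch below. The host_passes shape mirrors Nova's BaseHostFilter contract, but the classes and fields are purely illustrative assumptions:

    class InstanceAntiAffinityFilter:
        """Pass only candidates hosting no container from the same group."""
        def host_passes(self, candidate, request):
            group = set(request.get('anti_affinity_group', []))
            hosted = {c.id for c in candidate.containers}
            return not (group & hosted)

    class InstanceAffinityFilter:
        """Pass only candidates already hosting a container from the group."""
        def host_passes(self, candidate, request):
            group = set(request.get('affinity_group', []))
            hosted = {c.id for c in candidate.containers}
            return bool(group & hosted)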
16:07 <n0ano> I think `everybody` wants to do more sophisticated scheduling, hence the desire to split out gantt to be able to satisfy more projects than just nova
16:07 <adrian_otto> indeed!
16:07 <adrian_otto> so that's the starting point.
16:08 <n0ano> violent agreement so far
16:08 <adrian_otto> we hope to leverage as much of what's already there as possible, and have a sensible way of extending it
16:08 <bauzas> n0ano: here there is no need for a cross-project scheduler, just for something that Nova doesn't provide yet
16:09 <adrian_otto> yes, we don't care if the scheduler is provided by nova or some other service
16:09 <n0ano> which should make the containers usage a closer fit, fewer changes needed
16:09 <adrian_otto> we are after the functionality
16:09 <bauzas> here, the idea is that you have N sets as capacity, and you want to spawn Y children based on a best-effort occupancy of X
16:09 <adrian_otto> so a great outcome of today's chat will be a shared vision of where we are headed, and a rough tactical outline for how to begin advancing toward that vision together
16:10 <bauzas> X can be hosts/instances/routers/cinder-volumes and Y can be instances/containers/networks/volumes
16:10 <adrian_otto> yes, that's right
16:10 <bauzas> the placement logic and the efficiency of that logic still has to be the same
16:10 <adrian_otto> agreed.
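
The generic shape bauzas is describing (the same placement logic whether the candidates are hosts, instances, routers, or volumes) could be sketched as a filter-then-weigh pipeline. Everything here is illustrative, since Gantt's interfaces were not settled at this point:

    def schedule(candidates, filters, weighers, request):
        """Generic filter-then-weigh placement, independent of candidate type."""
        survivors = [c for c in candidates
                     if all(f(c, request) for f in filters)]
        if not survivors:
            raise RuntimeError('no valid candidate for request')
        # the candidate with the highest combined weight wins
        return max(survivors, key=lambda c: sum(w(c, request) for w in weighers))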
16:11 <n0ano> I think there's general consensus on the desired outcome, we've been bogged down by the mechanics of getting there so far
16:11 <bauzas> n0ano: right, and that's why we're getting down to using objects
16:11 *** thomasem has quit IRC
16:11 <adrian_otto> I do have a specific question about fault recovery, and whether we conceptualize that as scheduling or orchestration
16:11 <bauzas> adrian_otto: the current nova scheduler model accepts non-validated and untyped dictionaries of values
16:12 <adrian_otto> say I have provisioned resource Y on instance X
16:12 <bauzas> adrian_otto: sure, go ahead
16:12 <adrian_otto> and Y dies.
16:12 <bauzas> yep
16:12 <adrian_otto> I need a way to have that restarted.
16:12 <adrian_otto> does Nova already have a concept of this?
16:12 <bauzas> that's not automatic
16:12 <n0ano> adrian_otto, I've always interpreted that as a heat issue, not a nova scheduling issue
16:12 <bauzas> but you can evacuate
16:13 <bauzas> n0ano: strong +1
16:13 <adrian_otto> ok, so the user would issue a delete command in the API and then another POST to create a new one?
16:13 <bauzas> IIRC, there is a heat effort about that
16:13 <bauzas> adrian_otto: nope
16:13 <bauzas> I heard about Heat 'convergence'
16:13 <n0ano> nova (at least currently) is kind of fire and forget, once you've launched the instance nova scheduling is done (this is an area that might need to change in future)
16:13 <adrian_otto> yes, I'm pretty sure this is within the scope of Heat, but I wanted your perspective on that
16:13 <bauzas> still need to figure out what exactly that is
16:14 <bauzas> adrian_otto: Gantt will be scheduling things and providing SLAs
16:14 <bauzas> adrian_otto: if something goes weird, Gantt has to be updated in order to make good decisions
16:14 <bauzas> adrian_otto: by design, Gantt can be racy
16:15 <n0ano> I think fault recovery is a heat issue, gantt will have to be involved but the core logic comes from heat
16:15 <bauzas> adrian_otto: i.e. if the updates are stale, Gantt will make wrong decisions
16:15 <adrian_otto> well, you do need to know the current utilization against capacity, so if a sub-resource is deleted, the scheduler may need to learn about that. Is this assumption correct?
16:15 <bauzas> adrian_otto: right, Gantt will have a statistics API
16:15 <adrian_otto> or will it poll each instance at decision time?
16:15 <adrian_otto> ok, I see
16:15 <bauzas> adrian_otto: so projects using it will update its views
16:15 <bauzas> adrian_otto: nope, no polling from Gantt
16:16 <n0ano> adrian_otto, right, one of the changes we're actively doing is to put all resource tracking inside the scheduler
16:16 <bauzas> adrian_otto: projects provide a view to Gantt; it's each project's responsibility to keep it consistent
16:16 <adrian_otto> so there will need to be some way to inform the stats through that statistics API
16:16 <bauzas> adrian_otto: exactly
16:16 <adrian_otto> we will expect Heat to do that upon triggering a resource restart (heal) event
16:16 <bauzas> n0ano: to be precise, that will be the claiming process
16:16 <adrian_otto> got it, ok
16:16 <n0ano> what bauzas said
16:17 <adrian_otto> excellent.
16:17 <bauzas> n0ano: for making sure that concurrent schedulers can do optimistic scheduling
16:17 <bauzas> without a lock mechanism
16:17 <bauzas> but that's internal to Gantt
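
Optimistic, lock-free claiming of the kind bauzas mentions is commonly built on compare-and-swap against a generation counter. A sketch under that assumption; Gantt's actual claiming design was still under discussion at this point:

    class HostState:
        """Scheduler-side view of one candidate's remaining capacity."""
        def __init__(self, free_slots):
            self.free_slots = free_slots
            self.generation = 0            # bumped on every successful claim

    def try_claim(state, amount):
        """Attempt an optimistic claim; the caller retries on failure."""
        seen = state.generation
        if state.free_slots < amount:
            return False
        # In a real system the check-and-update below would be one atomic
        # operation (e.g. UPDATE ... WHERE generation = :seen), so a
        # concurrent claimer that got there first makes this claim fail.
        if state.generation != seen:
            return False                   # lost the race; retry elsewhere
        state.free_slots -= amount
        state.generation += 1
        return True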
16:17 *** zul has quit IRC
16:17 *** harlowja_away has quit IRC
16:18 <adrian_otto> ok
16:18 <bauzas> projects will have to put stats to Gantt using an API endpoint, and will consume decisions from another API endpoint
16:18 <adrian_otto> ok, understood.
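
From a consuming project's point of view, the two-endpoint model bauzas outlines might look like the following. The URLs and payloads are entirely hypothetical; Gantt never published such an API:

    import requests

    GANTT = 'http://gantt.example.com/v1'    # hypothetical endpoint

    def push_stats(resource_id, capacity, used):
        # projects put stats to Gantt through one endpoint ...
        requests.put('%s/stats/%s' % (GANTT, resource_id),
                     json={'capacity': capacity, 'used': used})

    def request_placement(request_spec):
        # ... and consume placement decisions from another
        resp = requests.post('%s/placements' % GANTT, json=request_spec)
        return resp.json()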
16:18 <bauzas> so that requires a formal datatype for updating the stats
16:18 <adrian_otto> this will be some form of a feed interface, like an RPC queue?
16:18 <bauzas> and that is what we're currently working on
16:19 *** zul has joined #openstack-containers
16:19 <n0ano> note that, currently, all stats are kind of nova instance related; for cross-project (cinder, neutron) usage we'll have to expand those stats
16:19 <bauzas> adrian_otto: the internals are not yet agreed
16:19 <adrian_otto> kk
16:19 <bauzas> adrian_otto: I was thinking of something like a WSME datatype
16:19 <bauzas> adrian_otto: or a Nova object currently
16:20 <bauzas> adrian_otto: my personal opinion is that Nova objects are good candidates for becoming WSME datatypes
16:20 <adrian_otto> :-)
16:20 <bauzas> adrian_otto: but that's my personal opinion :)
16:20 <bauzas> so the API will have precise types
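
WSME (Web Services Made Easy) lets a REST API declare typed payloads as Python classes. One way a typed stats update could look; the fields are entirely hypothetical, since no such Gantt datatype was ever defined:

    from wsme import types as wtypes

    class ResourceStats(wtypes.Base):
        """Hypothetical typed payload for a scheduler stats update."""
        resource_id = wtypes.text
        resource_type = wtypes.text   # e.g. 'compute_node' or 'instance'
        capacity = int
        used = int

WSME validates incoming requests against these declared attribute types, which is exactly the validation bauzas notes the Nova scheduler lacked.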
16:20 <adrian_otto> ok, so I think I distracted you from your description of your current focus of work
16:20 <bauzas> adrian_otto: not exactly
16:21 <bauzas> adrian_otto: I just wanted to emphasize that since the current Nova scheduler does not validate what we send to it, we need to do that work
16:21 <n0ano> focus = clean up claims, split out into gantt
16:21 <adrian_otto> ok
16:22 <bauzas> so the basic plan for Kilo is what we call "fix the tech debt", i.e. provide clean interfaces for updating stats and consuming requests
16:22 <bauzas> and eventually move the claiming process to Gantt for making concurrent non-locking decisions
16:22 <bauzas> but that's not impacting your work
16:22 <n0ano> there is so much that people want to do that is predicated on splitting out gantt that I `really` want to focus on that
16:23 *** thomasem has joined #openstack-containers
16:24 <adrian_otto> our favorite question from product managers is "when will XYZ be ready". We have been pretty clear about the sequencing of new features. What we could tighten up is an expectation of when we might clear various milestones. What do you think the best way is to address these questions?
16:24 <bauzas> adrian_otto: what we can promise is to deliver a Nova library by Kilo
16:25 <bauzas> adrian_otto: i.e. something detached from the Nova repo, residing in its own library but still pointing to the same DB
16:25 <adrian_otto> what is a reasonable expectation for what features will be present in that library? Is that feature parity with what is in nova today, with the tech debt subtracted out?
16:25 <n0ano> adrian_otto, when you get a good answer to that let me know, I get the same questions internally and `when it's done` is never sufficient
16:25 <adrian_otto> n0ano: hear, hear!
16:25 <bauzas> adrian_otto: feature parity is a must-have
16:25 <bauzas> adrian_otto: no split without feature parity
16:26 <adrian_otto> ok, that makes sense
16:26 <n0ano> bauzas, +1
16:26 <bauzas> adrian_otto: so, based on our progress, that requires providing some way to update the scheduler with stats for more than just "compute node" resources
16:26 <adrian_otto> maybe if we thought about it like "in what release of OpenStack could Magnum and Gantt be offering affinity and anti-affinity scheduling capability"?
16:26 <bauzas> and the target is for Kilo
16:26 <adrian_otto> that might be a more concrete question that we could chew on together
16:27 <bauzas> adrian_otto: ask Nova cores what priority they will put on the Scheduler effort
16:27 <adrian_otto> !! :-)
16:27 <openstack> adrian_otto: Error: "!" is not a valid command.
16:27 <bauzas> adrian_otto: raise the criticality if you want
16:27 <adrian_otto> ok, so assuming this is in a new repo, it could have its own review queue
16:27 <bauzas> adrian_otto: tbh, we're not suffering from a lack of contributor bandwidth
16:28 <adrian_otto> and a focused reviewer team
16:28 *** marcoemorais has joined #openstack-containers
16:28 <n0ano> adrian_otto, +1
16:28 <adrian_otto> anyone who is a true stakeholder could opt into that
16:28 <bauzas> adrian_otto: right, but before doing that, we need to get the necessary bits merged for fixing the tech debt and providing stats with other Nova objects
16:28 <adrian_otto> ok, that's sensible too
16:28 <bauzas> and here comes the velocity problem
16:29 <bauzas> as we need feature parity *before* the split, we need to carry our changes through the Nova workflow
16:29 <bauzas> adrian_otto: hope you understand what I mean
16:29 <adrian_otto> I do.
16:29 <adrian_otto> I'm thinking on it
16:30 <bauzas> adrian_otto: in particular wrt the Nova runways/slots proposal
16:30 <bauzas> adrian_otto: that's envisaged for kilo-3, not before, but still
16:30 <n0ano> bauzas, I'm not `too` concerned about the runways proposal, I hope we'll be in before we have to worry about that (I don't like it but that's a different issue)
16:30 *** thomasem has quit IRC
16:31 <bauzas> n0ano: think about it, and how we can merge changes before kilo-3
16:31 <bauzas> n0ano: and you'll see that isolate-scheduler-db will probably be handled by kilo-3
16:31 <bauzas> and winter is coming (no jokes)
16:32 <n0ano> yeah, but we get more accomplished in winter (ignoring the holidays)
16:32 <adrian_otto> the challenge here is that we can't just produce more Nova core reviewers by mustering them up.
16:32 <adrian_otto> that is a scarce resource that grows organically, and at a slow rate.
16:33 <n0ano> adrian_otto, nope, but this is a generic problem that everyone is facing, we just have to deal with it
16:33 <adrian_otto> so, as you mentioned, we would need to cause the existing reviewers to perceive the Gantt-related work as a priority.
16:33 <bauzas> adrian_otto: that's the thing
16:34 *** EricGonczer_ has joined #openstack-containers
16:34 <bauzas> in particular, Design Summits are a good opportunity for raising concerns about priorities
16:34 <adrian_otto> it probably only takes 3 reviewers with a commitment of 5 hours a week or less to succeed at this, right?
16:34 <bauzas> even less
16:34 <adrian_otto> probably half that time
16:35 <bauzas> patches are quite small, except a big one I just proposed
16:35 <adrian_otto> let's just call it 2 hours a week for the sake of discussion
16:35 <n0ano> we can get reviewers; it's 2 core reviewers that is the problem.
16:35 <adrian_otto> that seems to be something that we could influence using a 1:1 campaign
16:35 <bauzas> n0ano: +1000
16:36 <bauzas> adrian_otto: as usual, pings and f2f during the Summit
16:36 <adrian_otto> and I have a feeling that not just any cores will do; it's a subset of the cores who can rationalize and criticize contributions to this part of the system architecture.
16:37 <bauzas> adrian_otto: tbh, we already have support from a set of them
16:37 <adrian_otto> so what we need is an earmarked time commitment from them
16:37 <n0ano> the reality is we've thrashed the issues enough that I think we know what needs to be done, now it's just a matter of doing it and getting it reviewed
16:37 <bauzas> adrian_otto: we just need to emphasize the priority
16:37 <n0ano> bauzas, +1
16:38 <adrian_otto> and if we approached each of them with a "join me for two 1-hour meetings each week to review patches for this subject matter" ...
16:38 <adrian_otto> that's a commitment that's likely to succeed, and may be just enough lubrication to help you get that work through
16:38 <adrian_otto> or even three 30-minute meetings, something along those lines
16:39 <adrian_otto> ideally you'd have them review together interactively.
16:39 <adrian_otto> so they can debate disputes on the spot to keep cycle times between patch revisions shorter.
16:39 <n0ano> that'd be nice; whether we can get that kind of commitment is the issue
16:40 <adrian_otto> how would you feel about working in this way for a limited time, until a particular milestone is reached (Gantt in its own code base with its own reviewer team)?
16:40 <adrian_otto> so the committer, and ideally three reviewers, show up like that on a regular schedule as a tiger team
16:41 <n0ano> I'll try anything, if you think that can help I'd do it
16:41 <adrian_otto> and make revisions on the fly to the extent possible
16:41 <bauzas> adrian_otto: well, that's quite close to what Nova calls a 'slot'
16:42 <adrian_otto> ok, I'd like to help pitch that approach, or any variation on this theme that you think might resonate with the stakeholders and get the throughput up (velocity++).
16:42 <bauzas> adrian_otto: I mean, asking specific people to do specific reviews at a specific time is FWIW what we call a runway or a slot
16:42 <adrian_otto> tell me more about 'slot' please.
16:42 <adrian_otto> ok, mikal proposed that about a month back
16:42 <bauzas> adrian_otto: the proposal is about having a certain number of blueprints covered at the same time
16:42 <adrian_otto> has that approach resumed, and is it effective?
16:42 <bauzas> the current threshold is set to 10
16:43 <bauzas> adrian_otto: not yet, planned for k3
16:43 <adrian_otto> oh, so like an "on approach" strategy
16:43 <n0ano> adrian_otto, only a proposal, hasn't been done yet
16:43 <adrian_otto> where you cherry-pick feature topics that must land by a deadline?
16:43 <n0ano> basically, it's a way to prioritize important BPs
16:43 <adrian_otto> I see
16:43 <bauzas> adrian_otto: that's just giving reviews attention until it gets merged
16:43 <bauzas> s/reviews/reviewers
16:44 <n0ano> or, another way of saying it, the current review system doesn't work :-)
16:44 <adrian_otto> I refer to this as the "bird dog" approach
16:44 <bauzas> adrian_otto: if a blueprint gets stalled in the middle of a vote, it loses its slot
16:44 <adrian_otto> you have a team of reviewers bird-dog a given patch topic
16:45 <adrian_otto> ok, that's good food for thought. Please let me know how I can help with this.
16:45 <adrian_otto> I'll plan to help with the persuasive pitch delivery.
16:45 <bauzas> adrian_otto: well, the best way is to emphasize your needs during the Summit
16:46 <n0ano> adrian_otto, tnx, much appreciated
16:46 <adrian_otto> I'll be there. :-)
16:46 <bauzas> we have some proposals for talks at the Summit
16:46 <adrian_otto> the Design Summit schedule is still unreleased, correct?
16:47 <n0ano> a couple of proposed launchpads so far
16:47 <n0ano> https://etherpad.openstack.org/p/kilo-nova-summit-topics
16:47 <n0ano> https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
16:48 <bauzas> n0ano: s/launchpad/etherpad
16:48 <n0ano> at least I got the pad part right :-)
16:48 <adrian_otto> :-)
16:49 <adrian_otto> ok, I will review those and make remarks on them
16:49 *** thomasem has joined #openstack-containers
16:49 <adrian_otto> thanks for your time today bauzas and n0ano.
16:49 <n0ano> NP
16:49 <bauzas> np
16:50 <adrian_otto> anything more before we wrap up?
16:50 <bauzas> good to chat with you
16:50 <bauzas> nope
16:50 <bauzas> we got your requirements
16:50 <n0ano> looking forward to Paris
16:50 <adrian_otto> ok, cool. I'll see you in Paris!
16:50 <adrian_otto> #endmeeting
16:50 <openstack> Meeting ended Wed Oct  1 16:50:38 2014 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
16:50 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-10-01-16.01.html
16:50 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-10-01-16.01.txt
16:50 <openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-10-01-16.01.log.html
16:51 <bauzas> AIUI, that's basically the same filters/logic but applied to distinct components
16:55 <bauzas> adrian_otto: still there?
16:57 *** thomasem_ has joined #openstack-containers
16:57 *** thomasem has quit IRC
16:58 *** thomasem_ has quit IRC
17:00 <adrian_otto> yes, but I am dialing into a conference call atm
17:02 *** thomasem has joined #openstack-containers
17:15 *** harlowja has joined #openstack-containers
17:47 *** diga has joined #openstack-containers
17:47 <diga> Hi
18:00 *** diga has quit IRC
18:11 *** EricGonczer_ has quit IRC
18:11 *** EricGonczer_ has joined #openstack-containers
19:01 *** unicell has quit IRC
19:02 *** diga has joined #openstack-containers
19:05 *** julienvey has quit IRC
19:28 *** diga has quit IRC
19:37 *** diga has joined #openstack-containers
19:38 *** julienvey has joined #openstack-containers
19:49 *** julienvey has quit IRC
19:53 *** thomasem has quit IRC
19:55 *** thomasem has joined #openstack-containers
19:59 *** thomasem has quit IRC
20:03 *** diga has quit IRC
20:28 *** EricGonczer_ has quit IRC
20:28 *** EricGonczer_ has joined #openstack-containers
20:32 *** EricGonczer_ has quit IRC
20:33 *** EricGonczer_ has joined #openstack-containers
20:42 *** adrian_otto has quit IRC
20:47 *** PaulCzar has joined #openstack-containers
20:51 *** mikedillion has quit IRC
20:54 *** n0ano has left #openstack-containers
21:00 *** adrian_otto has joined #openstack-containers
21:02 *** marcoemorais has quit IRC
21:02 *** marcoemorais has joined #openstack-containers
21:03 *** marcoemorais has quit IRC
21:05 *** marcoemorais has joined #openstack-containers
21:05 *** marcoemorais has quit IRC
21:05 *** marcoemorais has joined #openstack-containers
21:06 *** marcoemorais has quit IRC
21:06 *** marcoemorais has joined #openstack-containers
21:25 *** diga has joined #openstack-containers
21:30 *** diga has quit IRC
21:32 *** jeckersb is now known as jeckersb_gone
21:36 *** stannie has quit IRC
21:39 *** EricGonczer_ has quit IRC
22:05 *** marcoemorais has quit IRC
22:06 *** marcoemorais has joined #openstack-containers
22:06 *** marcoemorais has quit IRC
22:07 *** marcoemorais has joined #openstack-containers
22:07 *** marcoemorais has quit IRC
22:07 *** marcoemorais has joined #openstack-containers
22:10 *** mikedillion has joined #openstack-containers
22:13 *** marcoemorais has quit IRC
22:13 *** marcoemorais has joined #openstack-containers
22:20 *** marcoemorais has quit IRC
22:31 *** marcoemorais has joined #openstack-containers
22:33 *** marcoemorais has quit IRC
23:09 *** mikedillion has quit IRC
23:55 *** mikedillion has joined #openstack-containers
