Tuesday, 2014-08-26

00:16 *** HenryG_ is now known as HenryG
00:28 *** adrian_otto has quit IRC
01:25 *** unicell has quit IRC
01:40 *** marcoemorais has quit IRC
01:52 *** unicell has joined #openstack-containers
02:56 *** unicell has quit IRC
03:35 *** unicell has joined #openstack-containers
03:47 *** harlowja is now known as harlowja_away
03:48 *** harlowja_away is now known as harlowja
03:53 *** marcoemorais has joined #openstack-containers
03:54 *** marcoemorais has quit IRC
04:47 *** adrian_otto has joined #openstack-containers
05:45 *** adrian_otto has quit IRC
05:46 *** adrian_otto has joined #openstack-containers
05:46 *** adrian_otto has quit IRC
06:31 *** harlowja is now known as harlowja_away
07:48 *** stannie1 has joined #openstack-containers
09:00 *** stannie has quit IRC
09:06 *** julienvey has joined #openstack-containers
12:54 *** PaulCzar has joined #openstack-containers
12:57 *** unicell has quit IRC
13:06 *** jeckersb_gone is now known as jeckersb
13:34 *** PaulCzar has quit IRC
13:35 *** PaulCzar has joined #openstack-containers
13:35 *** PaulCzar has quit IRC
13:38 *** thomasem has joined #openstack-containers
14:17 *** adrian_otto has joined #openstack-containers
14:44 *** EricGonczer_ has joined #openstack-containers
14:54 *** adrian_otto has quit IRC
15:04 *** apmelton1 has quit IRC
15:06 *** apmelton has joined #openstack-containers
15:15 *** unicell has joined #openstack-containers
15:18 *** EricGonczer_ has quit IRC
15:21 *** diga has joined #openstack-containers
15:31 *** EricGonczer_ has joined #openstack-containers
15:36 *** EricGonczer_ has quit IRC
15:37 *** diga has quit IRC
15:37 *** EricGonczer_ has joined #openstack-containers
15:50 *** adrian_otto has joined #openstack-containers
15:56 *** julienvey has quit IRC
16:01 *** diga_ has joined #openstack-containers
16:01 *** stannie1 is now known as stannie
16:02 <diga_> Hi
16:02 <diga_> do we have a meeting today?
16:03 <diga_> yes, into meeting
16:14 *** EricGonczer_ has quit IRC
16:18 *** marcoemorais has joined #openstack-containers
17:15 *** harlowja_away is now known as harlowja
17:16 *** marcoemorais has quit IRC
17:17 *** marcoemorais has joined #openstack-containers
17:17 *** marcoemorais has quit IRC
17:18 *** marcoemorais has joined #openstack-containers
17:26 *** marcoemorais has quit IRC
17:29 *** marcoemorais has joined #openstack-containers
17:49 *** EricGonczer_ has joined #openstack-containers
17:52 *** diga_ has quit IRC
17:59 *** thomasem has quit IRC
18:02 *** thomasem has joined #openstack-containers
18:04 *** EricGonczer_ has quit IRC
18:14 *** marcoemorais has quit IRC
18:14 *** marcoemorais has joined #openstack-containers
18:15 *** marcoemorais has quit IRC
18:15 *** marcoemorais has joined #openstack-containers
18:19 *** EricGonczer_ has joined #openstack-containers
18:21 *** marcoemorais has quit IRC
18:21 *** marcoemorais has joined #openstack-containers
18:27 *** thomasem has quit IRC
18:27 *** EricGonczer_ has quit IRC
18:28 *** marcoemorais has quit IRC
18:28 *** marcoemorais has joined #openstack-containers
18:53 *** thomasem has joined #openstack-containers
19:03 *** thomasem has quit IRC
19:06 *** harlowja has quit IRC
19:08 *** harlowja has joined #openstack-containers
19:16 *** thomasem has joined #openstack-containers
19:19 *** thomasem_ has joined #openstack-containers
19:20 *** thomasem has quit IRC
19:21 *** EricGonczer_ has joined #openstack-containers
19:54 *** thomasem_ has quit IRC
19:59 *** marcoemorais has quit IRC
19:59 *** marcoemorais has joined #openstack-containers
20:00 *** marcoemorais has quit IRC
20:01 *** marcoemorais has joined #openstack-containers
20:03 *** marcoemorais has quit IRC
20:04 *** marcoemorais has joined #openstack-containers
20:16 *** EricGonczer_ has quit IRC
20:19 *** EricGonczer_ has joined #openstack-containers
20:24 *** EricGonczer_ has quit IRC
21:02 *** marcoemorais has quit IRC
21:03 *** marcoemorais has joined #openstack-containers
21:04 *** marcoemorais has quit IRC
21:04 *** marcoemorais has joined #openstack-containers
21:15 *** thomasem has joined #openstack-containers
21:29 *** apmelton has quit IRC
21:30 *** apmelton has joined #openstack-containers
21:43 *** harlowja is now known as harlowja_away
21:54 *** stannie has quit IRC
21:55 <adrian_otto> Our team meeting will begin in 5 minutes in #openstack-meeting-alt
22:00 *** harlowja_away is now known as harlowja
22:00 *** apmelton has quit IRC
22:01 *** apmelton has joined #openstack-containers
22:25 *** chuck_ has joined #openstack-containers
22:25 *** slagle_ has joined #openstack-containers
22:35 *** unicell1 has joined #openstack-containers
22:36 *** marcoemorais has quit IRC
22:36 *** adrian_otto1 has joined #openstack-containers
22:36 *** marcoemorais has joined #openstack-containers
22:37 *** s1rp_ has joined #openstack-containers
22:37 *** zul has quit IRC
22:37 *** slagle has quit IRC
22:39 *** adrian_otto has quit IRC
22:43 *** slagle_ has quit IRC
22:43 *** unicell has quit IRC
22:43 *** s1rp has quit IRC
22:43 *** adrian_otto1 is now known as adrian_otto
22:46 *** HenryG has quit IRC
22:59 *** jogo has joined #openstack-containers
23:00 *** mtesauro has joined #openstack-containers
23:00 <adrian_otto> hi jogo
23:02 <apmelton> so jogo, right now, if someone wanted to run openstack + kubernetes, they could
23:02 <apmelton> the only thing to solve there is the deployment situation
23:02 <jogo> apmelton: go on
23:03 <apmelton> and maybe heat recipes (don't think that's the right term) so each tenant can spin their own kubernetes instance up
23:03 <jogo> apmelton: I would think an image would suffice as well
23:03 <adrian_otto> I need a heat template to produce a kubernetes install for myself, and configure that, and maintain it.
23:03 <adrian_otto> it could use a special image
23:03 <adrian_otto> plus a cloud init script or whatever
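(A minimal sketch of the per-tenant Heat approach described above, assuming python-heatclient as it existed around this time; the endpoint, token, image, key, and install commands are illustrative placeholders, not an actual project template:)

    # Sketch: one Heat stack per tenant, each running its own Kubernetes host.
    # Assumes python-heatclient; all names and values below are placeholders.
    from heatclient.client import Client

    TEMPLATE = """
    heat_template_version: 2013-05-23
    description: Single-tenant Kubernetes host
    resources:
      kube_host:
        type: OS::Nova::Server
        properties:
          image: fedora-kubernetes    # the "special image" with k8s pre-staged
          flavor: m1.medium
          key_name: my_key
          user_data: |                # cloud-init finishes the configuration
            #!/bin/bash
            systemctl start etcd kube-apiserver kube-scheduler
    """

    heat = Client('1', endpoint='http://heat.example.com:8004/v1/TENANT_ID',
                  token='AUTH_TOKEN')
    heat.stacks.create(stack_name='kubernetes-tenant-a', template=TEMPLATE)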
23:03 <apmelton> the entire point of our project is to offer a containers service that integrates with openstack on as many levels as possible
23:04 <jogo> apmelton: right
23:04 <jogo> so what does that mean
23:04 <apmelton> not only for a good end user experience, but also a good deployer experience
23:04 <adrian_otto> apmelton, maybe I can articulate the use case for solum
23:04 <jogo> why can't there be a good experience for things that aren't officially openstack?
23:04 <adrian_otto> jogo: say I want to offer a hosted CI/CD service
23:05 <adrian_otto> and in order to deploy my applications using containers I need to use kubernetes or a steroid-enhanced coreos/fleet setup in the middle
23:05 <jogo> or mesos
23:06 <jogo> or ?
23:06 <adrian_otto> for every CI customer, I need an instance of kubernetes.
23:06 <adrian_otto> or mesos, or some other single-tenant system
23:06 <adrian_otto> and I need to keep track of all those, and upgrade them, and deal with them
23:06 <adrian_otto> our ops guys *hate* the idea of doing that as much as they would hate managing 2000 instances of Jenkins
23:07 <adrian_otto> which is why they want this solution to begin with
23:07 <adrian_otto> OpenStack has the concept of multi-tenancy at every level
23:07 <adrian_otto> and none of the container management systems do
23:08 <adrian_otto> because "it would be stupid to run two containers on the same host with workloads from different customers in them"
23:08 <adrian_otto> but that reasoning is myopic
23:08 <adrian_otto> it does not consider that you are using bare metal or VM instances in combination with your container solution
23:09 <adrian_otto> and that you may never attempt to place two containers on the same host
23:10 <adrian_otto> what I will absolutely commit to is avoiding NIH at all costs
23:10 <jogo> so this container service would support multi-tenancy on a single compute instance?
23:10 <apmelton> what is NIH?
23:10 <adrian_otto> NIH = Not Invented Here disease, rebuilding everything from scratch
23:10 <apmelton> ah
23:10 <jogo> apmelton: http://en.wikipedia.org/wiki/Not_invented_here
23:10 <adrian_otto> unfortunately the OpenStack community has a good dose of the disease already
23:11 <thomasem> lol, very true
23:11 <adrian_otto> and that's hard to counter
23:11 <jogo> adrian_otto: that is for sure
23:11 <adrian_otto> but I will commit to resisting that at every turn
23:11 <adrian_otto> in Solum, I have even done that to my own detriment
23:11 <thomasem> jogo: It would support multi-tenancy by nature of using OpenStack for the logical grouping of customers' containers.
23:11 <jogo> adrian_otto: so I am glad we agree about the NIH part :)
23:12 <jogo> thomasem: so separate compute resources per user
23:12 <thomasem> yeah
23:12 <jogo> where a compute resource is something that nova produces
23:12 <thomasem> yeah
23:14 <apmelton> jogo, you mentioned before still paying for an instance even if you aren't using all of it (not paying per container)
23:14 <apmelton> think of instances as a reservation of space to grow and spawn containers near each other
23:15 <thomasem> In my mind, by using Nova we won't have to worry as much about the scheduling; we can just have a thin API that affords customers the advanced features of containers without having to stand up full control planes or double up on scheduling, while keeping the seamlessness of multi-tenant compute resources so everything stays separated and billable.
23:15 <adrian_otto> thomasem: +1
23:15 <thomasem> If we went with an existing solution, we'd introduce a bunch of waste.
23:16 <thomasem> either on our side or our customers'
23:16 <thomasem> neither of which is desirable.
23:16 <mtesauro> thomasem +1
23:16 <adrian_otto> thomasem: +1
23:16 <thomasem> Especially when there's so much overlap between Nova and other existing solutions already, save for multi-tenancy.
23:16 <jogo> so if the cloud operator is managing the container service, I agree running 1 per user is bad
23:17 <thomasem> They would be in the case of providing PaaS-like solutions.
23:17 <apmelton> jogo, well, bad for the user, it'd bring more money in for the operator :P
23:17 <jogo> apmelton: hehe
23:17 <thomasem> Lol
23:17 <thomasem> that's very true
23:17 <adrian_otto> apmelton: aggregate income is lower that way. volume economics apply.
23:18 <jogo> so as a consumer of a public cloud
23:18 <thomasem> The problem, though, is we could prevent pain on both ends.
23:18 <adrian_otto> and profit is lower too because oversubscription rates are not as dense
23:18 <thomasem> And I'd personally rather have less overhead so I can sell more resources to other customers.
23:18 <jogo> when would I use this service over running my own 'container service' (mesos, coreOS, docker, etc) on top of a cloud
23:18 <adrian_otto> thomasem: exactly
23:19 <adrian_otto> jogo, I have this idea for a new mobile app with this innovative backend, let me whip it up!
23:19 <adrian_otto> I will just provision it on my Rackspace cloud account
23:19 <adrian_otto> oh, that's going to cost me how much?
23:19 <adrian_otto> oh, damn.
23:19 <adrian_otto> nevermind.
23:20 <adrian_otto> I want hosting an application to cost about as much as making a phone call
23:20 <adrian_otto> it should be dirt cheap to run processes.
23:21 <adrian_otto> particularly if they are stateless things that can go to sleep
23:22 <jogo> adrian_otto: but if all I have to do is spin up a single compute instance and then start playing, that's easy
23:22 <adrian_otto> every single openstack operator worldwide should have an answer for this scenario.
23:22 <jogo> how is that different billing-wise than using the proposed container service
23:22 <thomasem> Think of the undercloud implications
23:22 <adrian_otto> it allows operators to bill in new ways
23:22 <adrian_otto> like per second instead of per hour
23:22 <jogo> adrian_otto: why can't that be done now?
23:23 <adrian_otto> because starting a VM is expensive
23:23 <adrian_otto> it's not a subsecond operation
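(Back-of-the-envelope arithmetic on the per-second vs. per-hour billing point above; the hourly rate is invented:)

    # Per-hour vs. per-second billing for a 30-second stateless worker.
    # The $0.06/hour rate is an invented example figure.
    RATE_PER_HOUR = 0.06
    job_seconds = 30

    billed_hourly = RATE_PER_HOUR * 1                        # whole hour: $0.06
    billed_per_second = RATE_PER_HOUR / 3600 * job_seconds   # $0.0005

    print(billed_hourly / billed_per_second)                 # 120x difference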
23:23 <apmelton> jogo, if you want to spin up a 128M container, and your service provider provides a 128M instance from nova, there will be no extra space
23:23 <jogo> adrian_otto: sure that is a bit of a pathological case
23:24 <adrian_otto> jogo, let's make a bet and regroup in 5 years!
23:24 <thomasem> lol
23:24 <jogo> well pathological cases do happen
23:25 <thomasem> Alrighty, I've got to run. Have a great one. I'll catch y'all later!
23:25 <adrian_otto> I don't want to be perceived as a chicken little, but consider this: if OpenStack totally misses the boat for containers, it may be completely eclipsed. All of us who have made huge investments here will be kicking ourselves. I'm offering us insurance for that contingency.
23:26 <jogo> adrian_otto: you have a similar pathological case with the container service. if I have 129M of containers I need double the servers and one is mostly empty
23:26 <jogo> adrian_otto: yeah I agree with that statement
23:26 <jogo> adrian_otto: I would like to see OpenStack have a strong container story
23:26 <adrian_otto> jogo: not if I am running a private cloud with the nova virt type of libvirt/LXC
23:26 *** thomasem has quit IRC
23:27 <jogo> but that doesn't mean it needs to be the container service you are proposing
23:27 <jogo> there are many ways to solve it
23:27 *** thomasem has joined #openstack-containers
23:27 <adrian_otto> so far this is the only plan that has any sort of consensus at all
23:27 <apmelton> jogo you have that 129M issue in both cases
23:27 <jogo> this is part of my bigger issue with: OpenStack does a bad job of supporting its surrounding ecosystem
23:27 <adrian_otto> is there an alternative that will get better support? if so, we should consider it
23:28 <jogo> it's all about whether it can use the trademark or not
23:28 <apmelton> the size of the instance spun up to run a container will always be, at a minimum, the smallest flavor that can support it
23:28 <apmelton> no matter what service is handling it
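(apmelton's flavor-fit point as a sketch; the helper function and the flavor list are hypothetical:)

    # A container lands on the smallest flavor that can hold it, whichever
    # service does the scheduling. Flavor sizes here are illustrative.
    FLAVORS_MB = [128, 256, 512, 1024]

    def smallest_flavor(container_mb):
        """Return the smallest flavor (MB) that fits the container, or None."""
        return next((f for f in sorted(FLAVORS_MB) if f >= container_mb), None)

    assert smallest_flavor(128) == 128   # perfect fit: no extra space
    assert smallest_flavor(129) == 256   # jogo's 129M case: 127 MB sits idle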
23:28 <jogo> adrian_otto: make it easier for users to spin up single-tenant container services
23:28 <jogo> give them more flexibility
23:29 <jogo> don't try to become the de facto standard via trademark
23:29 <adrian_otto> jogo, we have heat templates.
23:29 <adrian_otto> done.
23:29 <jogo> do it through technical prowess
23:29 <jogo> adrian_otto: so make it super easy, document it, test it. solve the auto-scaling of compute resources case
23:29 <jogo> etc
23:30 <adrian_otto> jogo, we did that with docker and a libswarm backend.
23:30 <adrian_otto> but that's not going to solve containers for all openstack operators.
23:30 <jogo> adrian_otto: why not?
23:31 <jogo> why can't the compute program say: the default way to do this is to use docker + libswarm
23:31 <jogo> and add that into devstack etc
23:31 <jogo> document it in official docs
23:31 <jogo> whatever
23:31 <adrian_otto> because the truth is that most openstack cloud operators are still trying to find out what containers even are, let alone how to wire them up to their clouds as managed services
23:31 <jogo> adrian_otto: so taking a step back here
23:31 <apmelton> jogo, because docker + libswarm is very limited in features
23:31 *** thomasem has quit IRC
23:32 <jogo> OpenStack has taken the big tent approach for a while
23:32 <apmelton> you don't get intelligent scheduling
23:32 <jogo> but IMHO it's not really working
23:32 <jogo> apmelton: mesos and its frameworks can solve that
23:32 <jogo> anyway, as we grow 'OpenStack' into a big tent
23:32 <jogo> it dilutes the brand if some of the things in the brand fail (think ceilometer 6 months ago)
23:33 *** mtesauro has quit IRC
23:33 <jogo> given our current track record I don't think the big tent approach is right for us
23:33 <jogo> big tent crushes the ecosystem
23:33 <adrian_otto> I don't see this as a big tent issue
23:34 <adrian_otto> I see this as a way to give cloud operators a working containers solution, and points to integrate innovations of their own
23:34 <jogo> I think the way to grow 'OpenStack' is to have a solid set of foundational services and grow the 3rd-party ecosystem around it
23:34 <adrian_otto> it's more like a pluggable middleware than a vertically integrated stack
23:34 <jogo> in https://dague.net/2014/08/26/openstack-as-layers/
23:35 <jogo> IMHO we should drop layer 4
23:35 <adrian_otto> you don't see a containers API as a foundational substrate?
23:35 <jogo> and promote an ecosystem around it
23:35 <jogo> adrian_otto: well maybe it can be seen that way
23:35 <jogo> but there are lots of tools that do something similar
23:35 <adrian_otto> just like trove is
23:35 <jogo> and we have tons of chances to get it wrong
23:35 <jogo> adrian_otto: IMHO trove is not foundational
23:35 <jogo> err layer 1
23:35 <jogo> to use the terms sean is using
23:36 <adrian_otto> I see the containers service at layer 2.
23:36 <jogo> adrian_otto: it consumes layer 1 things and builds upon them
23:36 <jogo> like the rest of layer 4
23:37 <jogo> I would love to see OpenStack promote container adoption and usage
23:37 <jogo> I just don't think the best way to do that is with our own special OpenStack container service
23:37 <jogo> I am afraid by coupling it with OpenStack we would hold it back
23:37 <jogo> we have a lot of procedure and overhead due to the integration of all the things and our stability requirements.
23:40 <jogo> adrian_otto: does that make sense?
23:40 <jogo> adrian_otto: it very well may not
23:40 <adrian_otto> I understand your position clearly, but I disagree with it.
23:40 <jogo> it's almost the end of the day
23:41 <jogo> adrian_otto: so I think I understand your view, but can you reiterate just to make sure I am clear
23:41 <adrian_otto> I see the power of the OpenStack community as derived from the ability to assemble, conduct open development, and build community around it.
23:42 <adrian_otto> building an ecosystem is something that we have not *really* committed to yet as a community.
23:42 <jogo> adrian_otto: and why can't that community be larger than the things under the 'OpenStack' umbrella
23:42 <adrian_otto> we have committed to working on the projects inside the ecosystem.
23:42 <adrian_otto> I think it should be.
23:42 <adrian_otto> but there is a practical reality as well.
23:43 <adrian_otto> Most OpenStack sponsors are reluctant to work on projects that are not in the OpenStack fold
23:43 <jogo> adrian_otto: agreed we have not tried to grow the ecosystem to date, but I think that is the way forward
23:43 <adrian_otto> I'm not talking about the diamond-level guys like us
23:43 <adrian_otto> I'm talking about the gold-level guys
23:43 <jogo> adrian_otto: yeah, so that part scares me. if it isn't openstack, don't work on it
23:44 <adrian_otto> that's a fundamental truth today.
23:44 <adrian_otto> as much as we both hate it
23:44 <adrian_otto> and I'm trying to act on ideas now, in the today we have.
23:44 <jogo> adrian_otto: another idea, one that I have yet to fully formulate, is: have a more separated workflow for layer 4 services
23:44 <adrian_otto> I'll help you get to the better tomorrow too, but I think that needs to be more planned and with more buy-in than we have now.
23:45 <adrian_otto> jogo: that's definitely a good idea
23:45 <jogo> where we can bless multiple projects to do the same thing etc
23:45 <adrian_otto> one example I can point to of a successful ecosystem that has a higher degree of federation
23:45 <jogo> so it's the same as the ecosystem, but it can use the OpenStack trademark, because somehow that is what we are all hung up on
23:45 <adrian_otto> is the Apache Foundation
23:46 <jogo> yeah, they have a very different model
23:46 <adrian_otto> basically projects run without cross-project integration as a goal
23:46 <jogo> right
23:46 <jogo> so maybe we need an Apache Foundation-style wing
23:46 <adrian_otto> and in that sort of a world, your tent construct works great
23:47 <adrian_otto> so I think we do both want the same thing for the long term of OpenStack
23:48 <adrian_otto> but the method of getting there from here is not clear
23:48 <adrian_otto> not yet, not to me.
23:48 <adrian_otto> there is a fundamental difference between Apache and OpenStack
23:49 <adrian_otto> we have an ecosystem here of API-enabled services that are growing more and more interdependent
23:49 <jogo> adrian_otto: so yes, we agree on that
23:49 <jogo> adrian_otto: I don't know how to get there either
23:49 <jogo> adrian_otto: but I think it's one of the biggest issues we as a community are facing today
23:50 <adrian_otto> Apache has a bunch of disparate applications that are only really unified by themes.
23:50 <jogo> adrian_otto: they have a unified theme?
23:50 <jogo> adrian_otto: I didn't even know that
23:50 <adrian_otto> well, some might argue they did at one point
23:50 <jogo> adrian_otto: I thought they were unified by some infra and legal and that is it
23:50 <adrian_otto> that has sprawled
23:50 <adrian_otto> jogo: LOL, fair enough
23:50 <jogo> adrian_otto: but having unified legal and infra is huge
23:51 <adrian_otto> is this a discussion that we are having at the OpenStack BoD level?
23:51 <jogo> and something we should foster for ecosystem things if they don't already have a home
23:51 <jogo> adrian_otto: at some point, maybe
23:51 <adrian_otto> because this really strikes me as business that needs to be dealt with between the BoD and the TC
23:52 <jogo> adrian_otto: I don't disagree per se
23:52 <adrian_otto> our jobs are to make things work for OpenStack users within the frameworks we have
23:52 <jogo> adrian_otto: but it would be good to make the BoD aware of the brewing storm
23:52 <jogo> at the very least
23:52 <jogo> so if I may try to summarize what we agree on:
23:52 <apmelton> I think that hits at what I want, adrian_otto
23:53 <apmelton> I want the containers service to fit into the OpenStack framework
23:53 <apmelton> tightly fit
23:53 <apmelton> for instance, we have tools built around openstack's styles of notifications, I want this service to work with that
23:54 <apmelton> I think there is value in having a tightly coupled service
23:54 <adrian_otto> jogo: we should think on that further to find the most productive way to address that set of concerns
23:54 <jogo> currently OpenStack only knows how to grow a big tent. Sometimes we pick the winner in a race before there actually is one, and we get it wrong. This hurts the brand and hurts development. The big tent model breaks down when there is an existing solution that lives somewhere else
23:54 <adrian_otto> I'm worried about causing knee-jerk reactions if there is a more refined and smooth way to iterate toward what we want
23:55 <adrian_otto> apmelton: +1
23:55 <jogo> with our current model of the integrated gate and tightly coupled development work, this model isn't scalable much further
23:56 <jogo> To move to a big ecosystem model there are two use cases to resolve
23:56 <adrian_otto> jogo, the bottleneck is the infra team, correct? Or are there other concerns too?
23:56 <jogo> * supporting existing services, so we don't have to reinvent something to bless it
23:57 <jogo> * fostering the built-for-OpenStack ecosystem: open it up to a unified development process, legal, and trademark (?)
23:57 <jogo> adrian_otto: no, not really
23:57 <jogo> adrian_otto: I commented on that in one of the ML threads
23:58 <jogo> adrian_otto: infra is part of the limit, sure. (We can use more rax test VMs)
23:58 <apmelton> I'd think the bottleneck is really that you can only test so many integration paths
23:58 <jogo> and HP
23:58 <jogo> apmelton: yup
23:58 <jogo> apmelton: with such a tightly integrated system we have an N^2 issue
23:59 <jogo> we have no good way of rolling out changes across all projects too
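(jogo's N^2 remark, made concrete; the project list is illustrative:)

    # Every pair of co-gating projects is an integration path that can break,
    # so the number of paths grows roughly quadratically with project count.
    from itertools import combinations

    projects = ['nova', 'neutron', 'cinder', 'glance',
                'keystone', 'heat', 'ceilometer', 'swift']
    print(len(list(combinations(projects, 2))))  # 8 projects -> 28 pairs; N -> N*(N-1)/2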
