Monday, 2016-06-06

*** clayton has joined #openstack-kolla00:00
openstackgerritRyan Hallisey proposed openstack/kolla-kubernetes: Allow an operator to run an action on all services  https://review.openstack.org/32567900:03
*** tzn has quit IRC00:08
*** d_code has quit IRC00:24
*** zhiwei has joined #openstack-kolla00:24
openstackgerritRyan Hallisey proposed openstack/kolla-kubernetes: Add docs around bootstrapping and using the 'all' flag  https://review.openstack.org/32568100:25
*** fragatin_ has joined #openstack-kolla00:30
*** SiRiuS has quit IRC00:31
*** fragatina has quit IRC00:32
*** fragatin_ has quit IRC00:35
*** salv-orlando has quit IRC00:54
*** salv-orlando has joined #openstack-kolla00:54
*** tzn has joined #openstack-kolla00:58
*** tzn has quit IRC01:04
*** dwalsh has joined #openstack-kolla01:25
*** d_code has joined #openstack-kolla01:35
*** dwalsh has quit IRC01:40
*** v1k0d3n has quit IRC01:53
*** sacharya has joined #openstack-kolla01:59
*** tzn has joined #openstack-kolla02:01
*** tzn has quit IRC02:05
*** sacharya has quit IRC02:19
*** Jeffrey4l_ has joined #openstack-kolla02:24
*** klint has joined #openstack-kolla02:41
*** ozialien10 has quit IRC02:42
*** yuanying has quit IRC02:51
*** salv-orl_ has joined #openstack-kolla03:00
*** salv-orlando has quit IRC03:03
openstackgerritMerged openstack/kolla: Fix URL to Heka documentation in README file  https://review.openstack.org/32559503:14
*** jrist has quit IRC03:16
*** sacharya has joined #openstack-kolla03:16
*** jrist has joined #openstack-kolla03:29
openstackgerritMd Nadeem proposed openstack/kolla-kubernetes: Heat services and pod  https://review.openstack.org/31685003:36
*** d_code has quit IRC03:36
openstackgerritJeffrey Zhang proposed openstack/kolla: Remove the deprecated kolla-build section  https://review.openstack.org/32570903:38
*** dave-mccowan has quit IRC03:41
*** yuanying has joined #openstack-kolla03:46
openstackgerritMerged openstack/kolla-kubernetes: [doc] change Ansible version to exactly 2.0.x in quickstart.  https://review.openstack.org/32236103:48
*** v1k0d3n has joined #openstack-kolla03:48
*** tzn has joined #openstack-kolla03:49
*** tzn has quit IRC03:53
*** d_code has joined #openstack-kolla04:08
*** fragatina has joined #openstack-kolla04:08
*** fragatina has quit IRC04:09
*** fragatina has joined #openstack-kolla04:09
*** d_code has quit IRC04:19
*** d_code has joined #openstack-kolla04:25
Mech422https://coreos.com/blog/torus-distributed-storage-by-coreos.html04:56
*** daneyon has joined #openstack-kolla05:06
*** v1k0d3n has quit IRC05:10
*** daneyon has quit IRC05:11
*** Mech422 has quit IRC05:19
*** Mech422 has joined #openstack-kolla05:21
*** tzn has joined #openstack-kolla05:37
*** tzn has quit IRC05:41
*** tfukushima has joined #openstack-kolla05:51
*** openstackgerrit has quit IRC06:02
*** openstackgerrit has joined #openstack-kolla06:03
Mech422Score! Looks like I fixed my deploy-ceph-by-partition problems :-)06:10
*** Mr_Broke_ has joined #openstack-kolla06:15
*** Mr_Broken has quit IRC06:19
*** sacharya has quit IRC06:33
openstackgerritHui Kang proposed openstack/kolla: Add Kuryr ansible role  https://review.openstack.org/29889406:37
*** v1k0d3n has joined #openstack-kolla06:45
coolsvapmandre, ping06:47
mandrecoolsvap: hi06:47
coolsvapmandre, i am trying to setup the vagrant dev environment, but the "Mounting NFS shared folders..." step seems to be stuck for a long time, any pointers?06:48
Mech422Anyone noticed elasticsearch bombing deploy when 'enable_central_logging' is 'no' ?06:49
*** godleon has joined #openstack-kolla06:49
mandrecoolsvap: not really, maybe have a look at the logs of your nfs server06:50
mandrecoolsvap: can you mount the shares locally?06:51
coolsvapmandre, checking that06:51
*** daneyon has joined #openstack-kolla06:55
*** v1k0d3n has quit IRC06:55
*** cu5 has joined #openstack-kolla06:55
*** Serlex has joined #openstack-kolla06:56
*** daneyon has quit IRC06:59
*** salv-orl_ has quit IRC07:03
*** salv-orlando has joined #openstack-kolla07:03
coolsvapmandre, this seems to be working after i added nfs to the firewall service list07:13
coolsvapwill let you know the result07:13
* coolsvap brb lunch07:13
mandrecoolsvap: makes sense07:13
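(The firewalld change coolsvap describes, as a sketch — assuming a CentOS host; nfs, rpc-bind and mountd are the stock firewalld service names:)

    firewall-cmd --permanent --add-service=nfs
    firewall-cmd --permanent --add-service=rpc-bind
    firewall-cmd --permanent --add-service=mountd
    firewall-cmd --reload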
*** matrohon has joined #openstack-kolla07:17
*** athomas has joined #openstack-kolla07:21
*** tzn has joined #openstack-kolla07:25
*** tzn has quit IRC07:29
*** sacharya has joined #openstack-kolla07:33
*** callahanca has quit IRC07:35
*** sacharya has quit IRC07:38
*** mikelk has joined #openstack-kolla07:39
*** shardy has joined #openstack-kolla07:39
*** v1k0d3n has joined #openstack-kolla07:52
*** v1k0d3n has quit IRC07:56
*** chopmann has joined #openstack-kolla08:04
*** chopmann is now known as chopmann_08:04
*** shardy has quit IRC08:20
*** shardy has joined #openstack-kolla08:23
*** SiRiuS has joined #openstack-kolla08:31
*** chopmann_ has left #openstack-kolla08:40
*** tzn has joined #openstack-kolla08:40
*** mgoddard has joined #openstack-kolla08:50
*** v1k0d3n has joined #openstack-kolla08:53
*** papacz has joined #openstack-kolla08:53
*** v1k0d3n has quit IRC08:57
*** mgoddard has quit IRC09:01
*** dcwangmit01_ has quit IRC09:10
*** tfukushima has quit IRC09:11
*** dcwangmit01 has joined #openstack-kolla09:13
*** sacharya has joined #openstack-kolla09:19
*** sacharya has quit IRC09:23
*** tfukushima has joined #openstack-kolla09:26
*** tzn has quit IRC09:28
*** daneyon has joined #openstack-kolla09:37
*** daneyon has quit IRC09:41
*** v1k0d3n has joined #openstack-kolla09:54
*** v1k0d3n has quit IRC09:58
*** athomas has quit IRC10:05
*** pbourke has quit IRC10:11
*** pbourke has joined #openstack-kolla10:12
*** athomas has joined #openstack-kolla10:19
*** salv-orlando has quit IRC10:22
*** salv-orlando has joined #openstack-kolla10:23
*** tfukushima has quit IRC10:33
*** godleon has quit IRC11:09
*** sacharya has joined #openstack-kolla11:20
*** sacharya has quit IRC11:25
*** daneyon has joined #openstack-kolla11:25
*** rhallisey has joined #openstack-kolla11:26
*** mliima has joined #openstack-kolla11:28
*** daneyon has quit IRC11:30
*** mgoddard has joined #openstack-kolla11:32
*** salv-orlando has quit IRC11:35
mliimamorning11:35
*** salv-orlando has joined #openstack-kolla11:35
*** rmart04 has joined #openstack-kolla11:37
*** salv-orl_ has joined #openstack-kolla11:39
*** coolsvap_ has joined #openstack-kolla11:39
*** eaguilar has joined #openstack-kolla11:42
*** salv-orlando has quit IRC11:42
*** dave-mccowan has joined #openstack-kolla11:46
openstackgerritChristian Berendt proposed openstack/kolla: Cleanup help string of install_type parameter  https://review.openstack.org/32560011:48
*** Jeffrey4l_ has quit IRC11:51
*** Liuqing has joined #openstack-kolla11:54
openstackgerritMerged openstack/kolla: Make container dind unpin old docker relase  https://review.openstack.org/29743411:55
*** salv-orl_ has quit IRC11:58
*** salv-orlando has joined #openstack-kolla11:58
*** JoseMello has joined #openstack-kolla12:00
*** tzn has joined #openstack-kolla12:00
*** tzn has quit IRC12:05
*** Mech422 has quit IRC12:07
*** coolsvap_ has quit IRC12:09
*** salv-orl_ has joined #openstack-kolla12:13
*** salv-orlando has quit IRC12:16
*** eaguilar_ has joined #openstack-kolla12:24
*** tzn has joined #openstack-kolla12:27
*** eaguilar has quit IRC12:28
*** chopmann has joined #openstack-kolla12:32
*** chopmann is now known as chopmann_12:32
*** chopmann_ has quit IRC12:33
*** tzn has quit IRC12:34
*** chopmann_ has joined #openstack-kolla12:34
*** tzn has joined #openstack-kolla12:35
*** jtriley has joined #openstack-kolla12:48
*** chopmann_ has quit IRC12:58
*** v1k0d3n has joined #openstack-kolla12:59
*** klint has quit IRC12:59
*** ppowell has joined #openstack-kolla13:01
*** ayoung has joined #openstack-kolla13:07
openstackgerritMerged openstack/kolla: Cleanup help string of install_type parameter  https://review.openstack.org/32560013:08
*** daneyon has joined #openstack-kolla13:14
*** absubram has joined #openstack-kolla13:17
*** daneyon has quit IRC13:18
*** Liuqing has quit IRC13:20
*** sacharya has joined #openstack-kolla13:20
*** alyson_ has joined #openstack-kolla13:25
*** sacharya has quit IRC13:25
*** cu5 has quit IRC13:35
*** cu5 has joined #openstack-kolla13:35
*** dwalsh has joined #openstack-kolla13:39
*** mbound has joined #openstack-kolla13:42
*** mgoddard_ has joined #openstack-kolla13:44
*** dmk0202 has joined #openstack-kolla13:44
*** mgoddard has quit IRC13:47
*** sdake has joined #openstack-kolla13:51
*** sdake_ has joined #openstack-kolla13:56
*** mgoddard_ has quit IRC13:58
*** inc0 has joined #openstack-kolla13:58
*** sdake has quit IRC13:58
*** mgoddard has joined #openstack-kolla13:58
sdake_morning13:59
inc0hello everybody13:59
inc0hey sdake_ did you get a chance to test out customizations13:59
inc0?13:59
sdake_inc0 i have not begun work on our plan yet14:00
inc0kk so let's unblock mechanism patch and get this merged plz14:00
sdake_inc0 i wanted to try friday but gerrit was on vacation14:00
inc0yeah14:01
sdake_and i was on pto sat/sun :)14:01
*** vhosakot has joined #openstack-kolla14:07
mag009_morning all14:08
inc0hey mag009_14:09
vhosakotmorning!14:09
mliimamorning all14:12
*** mgoddard_ has joined #openstack-kolla14:12
*** mgoddard has quit IRC14:15
*** mbound has quit IRC14:16
*** Mr_Broke_ has quit IRC14:21
*** Mr_Broken has joined #openstack-kolla14:22
*** Mr_Broken has quit IRC14:26
*** vhosakot has quit IRC14:37
*** salv-orlando has joined #openstack-kolla14:38
*** vhosakot has joined #openstack-kolla14:39
*** cfarquhar has quit IRC14:39
*** salv-orl_ has quit IRC14:42
*** cfarquhar has joined #openstack-kolla14:42
*** cfarquhar has joined #openstack-kolla14:42
*** sdake_ has quit IRC14:45
mag009_inc0 I'm having issue with : ceph : Mounting Ceph OSD volumes14:57
mag009_[Errno 13] Permission denied: '/var/lib/ceph'"14:57
mag009_I think we need to add become: yes14:57
*** mgoddard_ has quit IRC14:58
*** tzn has quit IRC14:58
*** tzn has joined #openstack-kolla14:59
inc0well, yeah14:59
inc0or chmod /var/lib/ceph14:59
mag009_i found it weird tho14:59
mag009_i've added become=yes to my ansible run14:59
mag009_btw I think I've found the syntax problem behind the issue I had last Friday..15:00
inc0mag009_, do tell please15:00
mag009_I think it's cleaner with a become: yes, less code to add15:00
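(A minimal sketch of the become fix under discussion — the task and variables are illustrative, not the actual Kolla ceph role; the Ansible-2.0-era mount module used name: for the mount point:)

    - name: Mounting Ceph OSD volumes
      mount:
        name: "/var/lib/ceph/osd/{{ osd_id }}"  # creating this dir needs root
        src: "UUID={{ osd_uuid }}"
        fstype: xfs
        state: mounted
      become: true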
inc0I always use become..15:01
mag009_I'm re-deploying as we speak :)15:01
inc0I like that about kolla "I'm re-deploying as we speak" "OK I'll ask you again in 5min"15:01
mag009_there : roles/neutron/tasks/deploy.yml15:01
mag009_the when statement15:02
mag009_it should be aligned with the ':' and there's a stray space15:02
wirehead_Indeed.  Kolla-Kubernetes is still in the very early days, but I'm far more comfortable dealing with it than I ever was with DevStack. :D15:02
mag009_I think that's why it failed for me last Friday15:02
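(The kind of YAML pitfall mag009_ seems to be describing, sketched — a when: indented under the module becomes a module argument instead of a task-level conditional:)

    # broken: 'when' is parsed as an argument to kolla_docker
    - name: Starting neutron-server container
      kolla_docker:
        action: "start_container"
        when: inventory_hostname in groups['neutron-server']

    # correct: 'when' at task level, aligned with the module keyword
    - name: Starting neutron-server container
      kolla_docker:
        action: "start_container"
      when: inventory_hostname in groups['neutron-server']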
*** daneyon has joined #openstack-kolla15:02
*** mliima has quit IRC15:03
rhalliseywirehead_, sbezverk_ inc0, I added a bunch of new stuff in the queue15:03
inc0hmm, you sure, doesn't it render as single line15:03
rhalliseyfor both kolla & kolla-k8s15:03
inc0kk rhallisey looking at it15:03
rhalliseyinc0, thx15:03
inc0I've seen stuff in kolla, didn't check k8s today yet15:03
inc0rhallisey, did anyone deploy multinode k8s already?15:04
rhalliseynot that I know of15:04
*** dmk0202 has quit IRC15:05
*** daneyon has quit IRC15:06
mag009_how do I push my stuff as early stage ?15:07
wirehead_rhallisey, I've already +2'd some of 'em.15:07
rhalliseywirehead_, thanks!15:07
wirehead_inc0, I'm still making sure that everything actually properly provisions and works before I go farther.15:09
*** tzn has quit IRC15:09
inc0btw wirehead_ you have Polish heritage?15:10
wirehead_Yes.  On my dad's side.15:10
inc0I figured;) You have an unmistakably polish surname15:10
*** zhiwei has quit IRC15:10
wirehead_Well, my wife took my last name and is not Caucasian, which has caused some very amusing situations in the past.15:11
inc0well, I'll try multinode;) stuff happens when you move beyond one server15:11
inc0haha, I can imagine15:11
inc0especially that you don't have a super easy name too15:11
inc0for US people15:11
wirehead_My great grandparents changed the pronunciation but not the spelling at Ellis Island.15:12
wirehead_That didn't fix anything. :D15:12
wirehead_inc0, it's going to be pretty brokey till https://review.openstack.org/#/c/325613/ is merged.15:12
patchbotwirehead_: patch 325613 - kolla-kubernetes - Convert MariaDB to work without HostNetwork=True15:12
Lyncosinc0 is using base image mendatory for 'new' modules ?15:13
*** tzn has joined #openstack-kolla15:13
LyncosI would like to start form alpine15:13
inc0Lyncos, nothing is mandatory if it makes sense, but it would break convention15:14
Lyncosok15:14
inc0I'd rather explore adding alpine as yet another distro altogether15:14
Lyncosok15:14
wirehead_For the most part, Kubernetes is magical and well-behaved single-node apps magically work multi-node style as well.15:14
inc0more work but might prove more beneficial15:14
LyncosAgreed15:14
*** _tzn has joined #openstack-kolla15:14
inc0that being said, there are things that alpine doesn't have yet15:15
*** tzn has quit IRC15:15
inc0we had that discussion before15:15
inc0librbd is a no-no for example15:15
inc0same as galera15:15
wirehead_Yeah, I was going to comment.  I had some 'fun' with Alpine and quickly went back to a more standard distro.15:15
inc0so I don't think it's possible to move full alpine yet15:15
inc0however nice idea that it might sound... it will be ugly during actual dev15:16
inc0so Lyncos if you want to publish something to kolla repo, start from base (or at least clean ubuntu/centos) image15:17
Lyncosok I'll do that way15:17
*** dgonzalez has quit IRC15:17
inc0and we can think how to optimize our images at large15:18
inc0Lyncos, is there any particular reason you want Alpine besides cutting a few hundred megs out of images?15:19
*** dgonzalez has joined #openstack-kolla15:19
Lyncosnot really15:19
LyncosI'm trying to do the felix container.. and I had a working example with alpine ... was trying to be lazy15:19
inc0well locally you can do whatever you like15:20
inc0if you'd like to push it up to kolla, that will require going along with our standards15:20
inc0however it's easier to debug stuff if they're similar15:20
inc0and truth be told, you won't gain anything15:20
inc0really you'll lose space15:20
LyncosI'll do the standard way15:20
inc0as base image has to be pulled anyway15:21
Lyncosnot a big problem15:21
inc0for other containers15:21
inc0so unless we move full alpine, it wont optimize anything:)15:21
*** mliima has joined #openstack-kolla15:21
*** sacharya has joined #openstack-kolla15:21
inc0rhallisey, ad https://review.openstack.org/#/c/325613/ - you need bp for that? That will be requirement for HA15:24
patchbotinc0: patch 325613 - kolla-kubernetes - Convert MariaDB to work without HostNetwork=True15:24
rhalliseyinc0, I just wanted the other blueprint to be included too15:25
inc0in general, for any service managed by k8s for HA, you'll need k8s networking15:25
inc0kk15:25
rhalliseyremoving net=host is a bp15:25
inc0so now when I think about it, what will l3 agents do?15:25
inc0You'll need net-host15:26
*** sacharya has quit IRC15:26
inc0for them15:26
inc0and keepalived15:26
inc0(well router HA has it embedded)15:26
*** cu5 has quit IRC15:26
inc0let me think if that will break with k8s..15:26
inc0shouldn't but k8s won't really do anything for them15:27
inc0so you want to keep vrrp for HA for this one imho15:28
inc0or Pacemaker really as it will need fencing regardless of orchestration15:28
inc0uhh, this one will be harder15:29
*** dwalsh has quit IRC15:35
*** mgoddard has joined #openstack-kolla15:35
sbezverk_inc0 trying to understand why would you need keepalived and haproxy in k8s? I think all this is taken care of by k8s infra, no??15:38
inc0sbezverk_, so in 99% you're right15:38
*** salv-orlando has quit IRC15:38
inc0and you shouldn't require haproxy at all15:39
inc0but for neutron routers you need net-host and floating IP15:39
inc0as gateway15:39
*** salv-orlando has joined #openstack-kolla15:39
inc0I don't think you can solve network gateway with k8s15:39
inc0and even if you could it'll never beat vrrp millisecond-scale speeds15:40
inc0but having keepalived has its cost15:40
*** sacharya has joined #openstack-kolla15:41
inc0you need a deterministic set of hosts15:41
inc0hmm....or maybe not, let me see quickly our keepalived conf15:41
inc0hmm, no you don't need, but you need priority per server15:42
inc0so technically you could let keepalived float even15:42
inc0but k8s would have to put it in lower priority than existing master15:43
inc0in fact, that might be better HA than normal keepalived cluster15:43
inc0now when I think about it15:43
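(A rough keepalived.conf sketch of the priority scheme inc0 is describing — interface, VIP and numbers are made up:)

    vrrp_instance kolla_internal_vip {
        state BACKUP            # everyone starts as BACKUP; priority elects the MASTER
        interface eth0
        virtual_router_id 51
        priority 100            # a restarted node would rejoin with a lower value
        advert_int 1
        virtual_ipaddress {
            10.10.10.254
        }
    }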
*** mikelk has quit IRC15:43
openstackgerritMarc-Andre Gatien proposed openstack/kolla: add become to mount osds getting a permission denied when it try to create /var/lib/ceph  https://review.openstack.org/32599915:43
inc0so consider this, we have 3 hosts in keepalived cluster: A is master (highest priority), B and C15:44
inc0all of them with net-host15:44
inc0if A dies, B will get floating IP15:44
openstackgerritMarc-Andre Gatien proposed openstack/kolla: add become to mount osds getting a permission denied when it try to create /var/lib/ceph  https://review.openstack.org/32599915:44
inc0so if k8s restarts A, you don't want to needlessly swap the IP again back to A (on some other host)15:44
inc0so new A can stand up with lower priority than C15:45
sbezverk_inc0 how do you make sure that neutron will go to the same node as keepalived?15:45
inc0that's doable15:45
inc0however I think neutron itself manages keepalived for routers HA15:45
inc0sbezverk_, you can ensure that all of a pod's containers are on the same host15:45
inc0in fact that's what pod means15:45
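(Illustrating the co-location point with a minimal pod spec — two containers in one pod always land on the same host; the image names follow the kolla convention but are assumptions here:)

    apiVersion: v1
    kind: Pod
    metadata:
      name: neutron-l3
    spec:
      hostNetwork: true
      containers:
      - name: neutron-l3-agent
        image: kolla/centos-binary-neutron-l3-agent
      - name: keepalived
        image: kolla/centos-binary-keepalived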
sbezverk_inc0 ok so you suggest keepalived container is launched under neutron pod?15:46
inc0sbezverk_, not sure, so if we need to manage keepalived, it will be in a pod with the l3 agent15:47
inc0however I think neutron deals with it on its own15:47
*** salv-orl_ has joined #openstack-kolla15:47
inc0you're from Cisco, you speak networking;)15:48
sbezverk_inc0 sure ;-) but in this configuration your provider network can end up all over the place15:48
inc0yeah, we need to figure this one out15:49
rhalliseyhttps://github.com/kubernetes/contrib/tree/master/keepalived-vip15:49
sbezverk_with neutron pod moving back and forth15:49
sbezverk_I just do not see at least now how neutron can always figure out where physically provider network is15:49
*** salv-orlando has quit IRC15:50
*** dwalsh has joined #openstack-kolla15:50
inc0rhallisey, nice, we might very well use this one15:51
inc0sbezverk_, it won't - you need to force k8s to schedule neutron pods on hosts with provider network connectivity15:51
sbezverk_inc0: rhallisey: ok pinning can work, would be interesting to hear from k8s operators how they build clusters with regards to provider network15:53
inc0yeah, it's good practice to limit exposure of nodes to provider network connectivity15:54
sbezverk_inc0 exactly that is why I think they do not expose k8s compute nodes directly to provider network at all and use some sort of proxy15:55
inc0so depends on configuration really15:55
inc0if you use DVR you need provider network for CNodes15:55
inc0or network routable to provider network15:55
inc0without DVR you can limit this to network nodes15:56
sbezverk_inc0 absolutely but we are not talking about a typical openstack topology here15:56
*** sdake has joined #openstack-kolla15:56
sbezverk_I have never seen a real k8s production cluster15:56
inc0without DVR you need provider network only on network nodes15:56
inc0that's the reason we put haproxy on network nodes15:56
sdakemorning take 215:56
sdakeinc0 rhallisey i am submitting a general ops session on kolla15:57
inc0hands-on labs?15:57
sdakei'll list both of you as co-presenters15:57
sdakewe are doing a separate lab15:58
sdakewe got wait listed liast time15:58
inc0sure why not15:58
rhalliseyk15:58
sbezverk_sdake I submitted cinder and iscsi session15:58
mag009_inc0 I'm not using the precheck15:58
sdakere the lab, i expect all the cores to help with that one if we get accepted15:58
inc0btw are we going to have any presence in ops midcycle?15:58
mag009_I'd rather add a task instead15:58
mag009_in the run itself15:58
inc0mag009_, well, you should;)15:58
sdakeinc0 i wont be able to make it15:58
inc0or in inventory15:58
*** dmk0202 has joined #openstack-kolla15:58
inc0so I don't want become=true in playbook really15:58
inc0I'd rather add precheck for this one15:59
inc0and add become=true in inventory15:59
inc0if you add become=true in playbook you'll prevent non-root users from deploying kolla15:59
inc0even if they can prepare their env beforehand15:59
inc0and we need this to be possible15:59
mag009_but even in the precheck you'll need root to chmod+chown16:00
inc0mag009_, precheck will not do chmod+chown, it will validate it16:01
inc0we don't meddle with host16:01
inc0so imagine this - I want kolla to run as non-root16:01
inc0for security reasons16:01
inc0but I can prepare my env beforehand16:01
inc0so I manually chmod this dir16:01
inc0then su to non-root user16:02
mag009_ok but my question is when do you chown/create this /var/lib/ceph ?16:02
mag009_you do it manually ?16:02
wirehead_Gah, of course this discussion happens whilst I am biking to the train.16:02
inc0manually16:02
mag009_outside kolla16:02
mag009_?16:02
inc0correct16:02
inc0also in your case16:02
inc0just put become=true in inventory16:02
rhalliseywirehead_, :)16:02
inc0inline16:02
inc0will work just fine16:02
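(The inline inventory form inc0 means — standard Ansible INI inventory syntax; the host and group names are illustrative:)

    [storage]
    ceph01 ansible_become=true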
mag009_ok16:03
mag009_i'll close that one then.16:03
inc0mind adding a precheck tho?16:03
inc0or want me to do it?16:03
inc0because it's actually valid precheck to have16:03
wirehead_sbezverk_ rhallisey inc0 : I think we’ll have succeeded if most of the services (c.f. Keystone, Heat, et al) are not HostNetwork but Neutron still is.16:04
mag009_i'll do it16:04
inc0thank you16:04
inc0you can use same patchset16:04
mag009_I'll run a full run again, I'm still having issues with the when statement of neutron16:04
inc0interesting16:04
inc0it was working fine for months;)16:04
wirehead_I was doing some experimentation with Kubernetes and the limits of Services and HostNetwork on Friday.16:04
inc0brb16:04
sdakemliima will you be at summit?16:05
mliimabarcelona?16:05
sdakewhat does c.f. mean wirehead_16:06
*** _tzn has quit IRC16:06
*** mgoddard_ has joined #openstack-kolla16:06
*** dmk0202 has quit IRC16:06
*** mgoddard has quit IRC16:06
sdakemliima ya thatss the location16:06
wirehead_Oh, I typed ‘c.f.’ when I meant to type ‘e.g.’16:06
wirehead_c.f. means ‘compare’, versus e.g. meaning ‘for example’.16:06
sdakegot it, know how to use eg and ie correctly :)16:07
mliimadon't know yet sdake16:07
*** dwalsh has quit IRC16:07
wirehead_But, yeah, I was going to poke you two, inc0 and rhallisey, about the scale thing.  Operationally speaking, that’s going to be more complex than just a replica value template.16:09
sdakeyou mean scaling past 100-500 nodes or so?16:10
mliimagoing to the summit is something I want, but it's something i can't confirm today.16:10
rhalliseywirehead_, can you auto scale with horizon?16:10
rhalliseydidn't coreos demonstrate that?16:10
wirehead_AFAICT, we should be able to have a decent chunk of services autoscaled.16:10
rhalliseysdake, 1.2 can scale to 1000 nodes16:11
rhalliseywirehead_, I'd like to have scaling through the dashboard16:11
rhalliseythat would be awesome16:11
sdakenot very well but yes i know its scale limits16:11
sdakewhat I was going to say is openstack has a limit of about 300 nodes16:11
rhalliseygotcha16:11
wirehead_This should be slick as a greased leper armadillo in practice, because if someone’s causing more Keystone traffic but less Horizon traffic, it’ll add more Keystone pods and reduce the Horizon pods.16:11
sdaketo get past 300 nodes multiregion and cells are used16:12
rhalliseywirehead_, nice. That sounds like we need a BP for it16:12
wirehead_Yeah.16:12
rhalliseycan you describe what this looks like in a BP?16:12
rhalliseywith relevant links16:12
wirehead_Totally.16:12
rhalliseywirehead_, thanks :)16:13
rhalliseyso your patch then, how about for now we just template?16:13
rhalliseythen add this feature when it's relevant16:13
sdakenote the default setup of kolla spins up each api service with a certain number of processes16:13
wirehead_OK.16:13
rhalliseybecause adding it now may not make the best sense16:13
sdakejeffrey4l has a patch he is working on or that has merged to hard limit this at 3 or so16:13
sdakeright now it choses the number of cores on the machine as the default16:14
sdakeit being all of openstack16:14
*** callahanca has joined #openstack-kolla16:14
rhalliseygood to know16:14
sdakescaling api has detrimental effects on rabbitmq and mariadb16:14
sdakethe more apis there are, the more sluggish rabbitmq and mariadb become16:15
wirehead_Would we ever exist in a situation where Kolla supports service-specific rabbitmq and mariadb instances?16:15
*** daneyon has joined #openstack-kolla16:15
sdakewirehead_ that is called multiregion and multicell16:15
rhalliseymaybe the one db per service approach could help16:15
sdakeoh service specific16:16
sdakeya that would work for rabbitmq and maybe for mariadb16:16
rhalliseyside car database containers16:16
sdakegah side car16:16
sdakedon't use that word16:16
rhalliseyha16:16
sdakeit reminds me of ruby on rails!16:16
rhalliseythat's their word16:16
rhalliseysymbiotic container16:17
sdakefrom what i know of openstack setup16:17
rhalliseythat's harder to say :/16:17
wirehead_Just call it a facehugger container.16:17
sdakeit should work to have each service in a separate database16:17
rhalliseyhaha16:17
sdakethose ruby people, always used to talk about "we will just solve it with a sidecar"16:18
sdakeright before they blew me off for 3 months16:18
sdakehappened multiple times16:18
*** Mech422 has joined #openstack-kolla16:18
Mech422Morning16:18
sdakehence heat was born :)16:18
sdakehey meh16:18
sdakehey Mech42216:18
rhalliseymaybe it can't be a side car though because that means there will be one db service per pod16:18
wirehead_That was one thing I was wondering, w.r.t. sbezverk_’s BP for services, if we should create a service per database or otherwise make things more divisible.16:18
wirehead_service per service-database.16:18
rhalliseyit would have to be per service vs per pod16:18
Mech422sdake: Morning :-)  I think I fixed my ceph in partition issues...16:19
wirehead_Yeah, sidecar doesn’t work well for DB.  It might work for memcached.16:19
sdakeMech422 nice16:19
Mech422sdake: It appears the root cause was stale partition info since kolla renames partitions and the kernel doesn't update /sys16:19
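(One way to see the mismatch Mech422 describes and ask the kernel to re-read the table — standard gdisk/util-linux tools; the device name is illustrative:)

    sgdisk --print /dev/sdb   # what the on-disk GPT actually says
    partprobe /dev/sdb        # ask the kernel to re-read the partition table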
wirehead_Depends on how sophisticated we want to get.  memcached per sidecar works more towards the bigger case.16:19
rhalliseywirehead_, I'll check the kube objects and see what there is for this16:19
wirehead_And will get in the way for the smaller case.16:20
wirehead_We might use namespaces instead.16:20
wirehead_Yeah, the more I think about it, that’s the correct way.16:20
rhalliseyI suppose this could be a rc=116:20
sdakere services and databases randomly scheduled on hardware16:20
Mech422sdake: patches are here: https://bugs.launchpad.net/kolla/+bug/1589309  but you probably won't like them as it requires running kolla-toolbox with root, so you can use sgdisk to get the 'real' partition names16:21
openstackLaunchpad bug 1589309 in kolla "Problem Bootstrapping ceph from Partitions due to stale part. table" [Undecided,New]16:21
wirehead_If you want to have a per-openstack-service mariadb, I think you’d want to keep the service names fixed but use a different namespace for each one.16:21
sdakeoperators at this point in time universally want to know "where" their controller applications are running16:21
rhalliseysdake, why16:21
*** vhosakot has quit IRC16:21
sdakeMech422 we can run kolla-toolbox as sudo16:21
rhalliseythe idea here would be you wouldn't know16:21
*** ssurana has joined #openstack-kolla16:22
sdakeMech422 do you have the review available - i'll have a look and suggest how to fix it16:22
*** ssurana has quit IRC16:22
Mech422sdake: Oh, I read in the docker man page that sudo is bad for docker?  sudo would work fine for what I do...16:22
sdakerhallisey "why" is complex - and I'd be speculating16:22
rhalliseykk16:22
sdakebut part of it is their training in the past16:22
sdakethey have always done it that way in the past16:22
sdakebut typically controller nodes get special treatment in a rack setup16:23
sdakethey usually go at the TOR (the top of rack)16:23
sdakethey are marked specially so people dont muck with them16:23
sdakevs the compute nodes, which are more expendable ;)16:23
wirehead_Also, debugging issues with config.  It’s helpful to know if Kube is properly distributing pods.16:23
*** vhosakot has joined #openstack-kolla16:24
wirehead_Understanding that I’ve never operated an OpenStack cluster in anger, I kinda want to have the kolla-kubernetes command contain some useful tools like `kolla-kubernetes map` that would display an infra-level view of what’s running.16:24
*** ssurana has joined #openstack-kolla16:25
sdakewirehead_ my comments go beyond that16:26
sdakewirehead_ in a real scenario, i think ops will want their controller workload running on specific gear16:26
*** jmccarthy has quit IRC16:26
*** ssurana has quit IRC16:26
wirehead_Yeah, that’s why I’m kinda wondering if the operational case is going to have a kube cluster for just the controller workload and then compute nodes as a separate thing.16:27
*** jmccarthy has joined #openstack-kolla16:27
*** ssurana has joined #openstack-kolla16:28
sdakewirehead_ its hard to say - as nobody is really using k8s in production all that much16:28
sdakewirehead_ I dont think best practices have been discovered or defined16:28
wirehead_Best Practices are often discovered the hard way. :P16:28
wirehead_Gut feel says that the smallest of the serviceable installs and the developers are going to want no differentiation, whereas the larger installs will want a constrained control plane.16:29
rhalliseywirehead_, ya we would steer controller services away from compute nodes16:29
rhalliseybut what services on what controller nodes will be determined by kuber16:30
wirehead_K8s itself has a flag for if you want it to schedule compute on the control plane or not.16:30
rhalliseyI figured we would use labels in the pods16:30
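(The label mechanics rhallisey is referring to, sketched with a made-up label key:)

    kubectl label node control01 kolla/controller=true

    # and in the controller pods' spec:
    spec:
      nodeSelector:
        kolla/controller: "true"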
sdakewirehead_ which flag you talking about16:31
sdakethe best practice re control plane gear at the top of rack is driven by hurricanes, storms, etc16:32
wirehead_rhallisey: http://kubernetes.io/docs/admin/multiple-schedulers/16:32
*** fragatina has quit IRC16:33
rhalliseynice we can write our own scheduler :)16:33
*** matrohon has quit IRC16:35
sdakeya and the scheduler can schedule to specific gear16:35
wirehead_--register-schedulable=false on the master node kubelet, sdake.  More stuff that landed in 1.116:35
sdakeyour talking about the kubernetes control plane16:36
wirehead_Yeah,16:36
sdakeI am talking about the openstack control plane16:36
wirehead_Yeah.  So, in k8s, they realized that they needed to support both the case where control and compute were intermingled as well as the case where control and compute were segregated.  I would suspect that we will end up in the same situation.16:36
sdakesounds like the way to handle control vs compute separation is via a pluggable scheduler and labeling16:37
wirehead_Yah.16:37
*** vhosakot has quit IRC16:38
wirehead_This actually wraps around to the goals that Craton has in mind.16:38
*** vhosakot has joined #openstack-kolla16:38
inc0back16:44
sdakeback on autoscaling, autoscaling is a cool idea16:45
sdakebut if kubernetes thinks apis need to be scaled, it could be detrimental to the operation of the cloud16:45
sdakebecause rabbitmq and mariadb are not independently scalable16:45
inc0so problem is, as you said, rabbitmq and maria aren't really scalable this way16:46
inc0and they are performance bottlenecks most of the time16:46
inc0at scale, rabbit dies first16:46
sdakeso re various government requirements16:49
sdakeI keep hearing about this password rotation thing and how its a blocker for kolla deployments16:49
inc0that's keystone issue16:49
sdakewe need to be able to change all passwords in the system not just keystone16:50
inc0and maybe you'll need to change passwords.yml and call reconfigure every now and then16:50
sdakeincluding database passwords16:50
sdakereconfigure doesn't do the job currently16:50
inc0if reconfigure doesn't change configs for mariadb, we need to fix it16:50
inc0it's more important than only passwords rotation16:50
inc0we need to be able to do everything16:50
inc0btw. that reminds me, non-ini config overrides16:50
sdakeagree but incremental approach is better than fixing everything16:51
inc0sdake, we just need to make reconfigure work for every service, that's it16:51
sdakeand password rotation is going to end up blocking kolla deployments16:51
*** Mech422 has quit IRC16:52
*** Mech422 has joined #openstack-kolla16:52
inc0sdake, so does reconfigure miss anything?16:52
sdakeglobals.yml and passwords.yml...16:52
inc0really? why?16:53
sdakethink about the external VIP example16:53
sdake(the one on the mailing list)16:53
inc0I mean ansible re-creates all the configs16:53
inc0hmm, let me check one thing16:53
sdakefor a password rotation, first you must reconfigure the database, then all services that use the databases to use the new database passwords16:54
sdakereconfigure as is has no dependency ordering16:54
sdakehowever for password rotation there is an ordering16:54
sdakesame story for vip changing16:54
inc0sdake, not true16:54
inc0it does things in order in playbooks16:54
inc0and mariadb is higher in order of playbooks than services16:55
inc0that being said, it does require maintenance mode as it obviously can't be rolling change16:55
sdakereconfigure passwords = change all passwords, restart mariadb with the new password, change all services' database access, change all services' passwords, reconfigure all services16:55
*** fragatina has joined #openstack-kolla16:55
sdakeit's not a simple reconfigure16:55
inc0it is exactly how it goes with deploy16:56
mliimaclean all and deploy again?16:56
inc0no, why?16:56
sdakeright, one mechanism for password rotation would be to stop the cluster, then rotate the passwords, then start the cluster16:56
inc0well for maria it will require custom SQL query we don't use now16:56
sdakebut some password information itself is stored in the database, so that mechanism wont work16:57
sdakewe definitely dont want to clean all :)16:57
mliimasure :)16:57
inc0well, for keystone16:57
inc0but we won't do keystone calls16:57
inc0that's not our job to do16:57
wirehead_sdake: In an ideal world, the passwords for mariadb would be continually rotated, a la Hashicorp Vault.16:57
inc0we don't do keystone user-update16:57
sdakethe functionality i think we want is replace old passwords.yml with a new one16:58
sdakeand do it with minimal downtime16:58
inc0no, that's not it16:58
inc0so for changing mariadb password you need to call SQL16:58
inc0for changing keystone passwords you need to call keystone16:58
inc0and only after that update configs16:58
inc0and we don't want to do either - calling sql or keystone16:59
inc0that's operators job16:59
sdakewe want kolla to be operable16:59
sdakeif that means we automate rotation for them, thats what it means ;)16:59
inc0operable != replacing operator16:59
sdakepassword changes can be automated17:00
sdakeand if they can be they should be17:00
inc0we don't want to do too much because it will hurt our maintainability and pose a serious security risk17:00
inc0I disagree, I don't want kolla to call running keystone17:00
sdakei dont get either point17:00
sdakecould you expand17:00
inc0we mess something up there and we destroy a running env17:00
sdakekolla already calls keystone all the time17:00
inc0no, it does not, just for bootstrap17:00
sdakeoperator messes something up there and they destroy running env17:00
inc0but that's operator not kolla17:01
inc0also what wirehead_ said, you want proper secret management anyway17:01
sdakesecret management is different from rotation17:01
sdakeorthogonal topics17:01
inc0rotating passwords will require maintenance mode, something we don't do with kolla17:01
sdakea maintenance mode?17:02
inc0would need to support separate logic, also not doing it with kolla now17:02
inc0yeah, you need to turn off everything prior to change17:02
inc0I mean you can keep maria/rabbit running17:02
sdakeyou can do a rolling upgrade of a password change17:02
inc0then turn on keystone and change password there17:02
inc0then turn on services17:02
inc0no, you can't17:02
inc0think about it17:03
inc0unless you have a period of time of both passwords being correct17:03
inc0and that's not how auth works17:03
sdakewell maybe its different than reconfigure17:03
inc0it totally is if you want to do whole thing17:03
sdakebut at least in the us, we have this bunch of laws called sarbanes-oxley17:04
sdakeone of the requirements of SOX is password rotation every 3 months17:04
inc0if you make the change a manual thing then reconfigure will distribute configs17:04
sdakeif you're a public company, you are bound by SOX17:04
inc0and that is what we do17:04
inc0ok, still, not kolla job, operators17:04
inc0and if that's the case, there are already tools in place17:04
sdakei disagree on whose job it is17:05
sdakebut it doesn't matter, because atm we have no technique documented on even how to do it17:05
inc0and afaik it doesn't mean service passwords17:05
inc0only user passwords17:05
inc0and that's not something we deal with at all17:05
sdakeuser passwords are operator's problem ;)17:05
inc0yup, and afaik these are only ones forced to be rotated17:05
sdakehere is what i'm getting at17:06
inc0I might be wrong17:06
sdakeI dont have the faintest idea how to rotate passwords today in kolla17:06
sdakeinc0 definitely not true, all secrets must be rotated not just user passwords17:06
inc0but I've been in ops of financial company17:06
inc0and we didn't rotate db passwords for services every 3 months;)17:06
inc0and we were PCI compliant17:06
inc0sdake, it's exactly same procedure as ops does now17:07
inc0you don't rotate passwords with kolla, you rotate them yourself and kolla only distributes configs17:07
sdakewhat i'm getting at is we should automate the rotation17:09
inc0I disagree17:09
sdakeat minimum we should document how to change passwords17:09
sdakebecause atm nobody has any idea how to rotate passwords17:09
sdake(in our community)17:09
inc0so reason why I don't want to do it is because I don't want kolla to even know how to access db17:09
inc0we're deployment tool not fleet management tool17:09
inc0I have..operators have...17:10
sdakewe are deployment management tool17:10
inc0they do it in the very same way they did it till now...17:10
*** rmart04 has quit IRC17:10
sdakewhich is what?17:11
sdakei suspect most operators have automated rotation ;)17:11
inc0ehh...login to mariadb, change passwords, make config update17:11
sdakesuspect/know at least some have17:11
inc0use keystone client17:11
rhalliseysdake, deployment management tool is debatable17:11
inc0sdake, and it will work just fine17:12
inc0only instead of their previous config distribution thing they will call reconfigure17:12
inc0if they do it manually - fine, if they have tooling - fine17:12
inc0still works17:12
rhalliseywell I guess it depends on the exact definition of deployment management tool17:13
rhalliseymanual vs automatic17:13
inc0in any case, we have higher priorities now17:14
inc0like merge config for non-ini17:14
inc0without that reconfigure doesn't really work anyway17:14
sdakerhallisey here is our mission:17:17
sdakehttps://github.com/openstack/governance/blob/master/reference/projects.yaml#L211217:18
sdakenotice the keyword "operating"17:18
sdakethat implies we do more than deploy a cloud to day0 working function17:18
sdakeso in my mind at least, it isn't debatable ;)17:18
inc0sdake, but you will never replace need for manual work for ops17:20
sdakei dont want to do that inc017:20
inc0btw we need monitoring17:20
inc0we need reconfigure to work 100% of times17:20
inc0we need non-ini configs17:20
sdakei want to replace all the manual junk that requires manual intervention with the infrastructure17:20
sdakereconfigure was meant to solve the /etc/kolla/config dir merging case17:22
sdakeat least one person expects it to handle /etc/kolla/globals.yml changes17:22
sdakewe dont want operators logging in to nodes and having to bounce docker services17:23
sdakejust to do a rotation for example17:23
mliimaI think it's a good idea to do something to automate db password rotation, but rotating db passwords for services every 3 months we can leave manual.17:24
inc0so they won't need to log in to nodes17:24
inc0they need to change passwords.yml17:24
inc0login to mysql17:24
inc0login to keystone17:24
inc0and use reconfigure17:24
inc0no ssh involved at any point17:24
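(The manual workflow inc0 is describing, as a rough sketch — the service user 'nova' and the password are illustrative, and the exact SQL varies by MariaDB version:)

    # 1. edit /etc/kolla/passwords.yml with the new secrets
    # 2. change the DB password by hand (MariaDB 10.x era syntax):
    mysql -u root -p -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('new-secret')"
    # 3. change the service user's password in keystone:
    openstack user set --password 'new-secret' nova
    # 4. let kolla redistribute the updated configs:
    kolla-ansible reconfigure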
*** dwalsh has joined #openstack-kolla17:25
sdakethats good17:29
*** diogogmt has joined #openstack-kolla17:29
sdakecan someone take on documenting how its done then?17:29
wirehead_I think my operational tooling rule for anything built within the past 2 years is that you should be able to cycle all of your credentials with an automated step.  Obviously, some of that is outside of the scope of Kolla, but it’s not all outside our scope.17:32
*** ppowell has quit IRC17:33
sdakeinc0 you missed a good conversation I had with harlowja about godaddy equirements for kolla17:35
*** mark-casey has joined #openstack-kolla17:35
sdakethe missing pieces atm are multi-region support and multi-cell support17:36
sdakepbourke you around17:37
inc0sdake, I'd love to read the log17:40
inc0brb17:40
sdakelet me see if i can find it17:40
sdakehttp://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2016-06-03.log.html#t2016-06-03T23:53:1117:41
*** rahuls has joined #openstack-kolla17:42
*** daneyon_ has joined #openstack-kolla17:44
*** harlowja has joined #openstack-kolla17:48
*** sdake_ has joined #openstack-kolla17:49
*** sdake has quit IRC17:49
*** daneyon_ has quit IRC17:49
*** ppowell has joined #openstack-kolla17:51
*** rmart04 has joined #openstack-kolla17:53
*** sdake_ has quit IRC18:01
Mech422harlowja: your at GoDaddy? you in the PHX offices ?18:03
harlowjanah, sunnyvale offices18:03
harlowjabut yes, to the overall question Mech422 ha18:03
Mech422harlowja: cool - I'm at Limelight in PHX - we seem to trade a lot of people back and forth :-)18:04
harlowjalol18:04
Mech422harlowja: my buddy just left to go to your 'minime' (?) email product18:04
openstackgerritKen Wronkiewicz proposed openstack/kolla-kubernetes: Replication controllers for Keystone, Memcached, RabbitMQ.  https://review.openstack.org/32410618:06
openstackgerritKen Wronkiewicz proposed openstack/kolla-kubernetes: Convert MariaDB to work without HostNetwork=True  https://review.openstack.org/32561318:07
*** ppowell has quit IRC18:08
*** athomas has quit IRC18:08
*** fragatina has quit IRC18:09
harlowjacoolio, i'm not sure what minime is, guess i should look into that, ha18:10
*** mbound has joined #openstack-kolla18:11
*** fragatina has joined #openstack-kolla18:11
*** fragatina has quit IRC18:11
*** rmart04 has quit IRC18:12
inc0I'm back18:12
inc0harlowja, hey, I meant to talk to you, sooo...I'm planning refactoring of build18:12
inc0which basically you started18:12
harlowjainc0 cool18:12
harlowjawhat u gonna do?18:12
*** rmart04 has joined #openstack-kolla18:12
inc0so I was thinking about decoupling dockerfile generation from build18:13
inc0for start18:13
harlowjaya, the build task is too huge imho, lol18:13
inc0break this up into multiple submodules18:13
wirehead_To eat an elephant, you only need to carve off a slice, then repeat.18:13
inc0revisit the idea of copying whole thing to /tmp18:13
inc0lol18:13
harlowjaother idea18:14
harlowjawhich also addresses your 'it looks like it froze' issue18:15
wirehead_OMG, I wasted around 20-30 minutes with that.  It looked like it froze, I canceled it, and then the docker daemon was wedged and thus it really did freeze.  I had to restart the daemon to un-wedge it.18:16
harlowjaarg, its not frozen, the log files :-P18:16
harlowjabut depends on how much u guys want to do with curses18:16
wirehead_It may just be a choice of curses or curses, tho. :D18:17
inc0harlowja, I'm also looking at logs from cells/multiregion18:17
inc0Lyncos, around?18:17
inc0you might need cells too18:18
inc0right now it might require custom playbooks and heavy config lifting18:18
inc0I never deployed multi-cell so I can't say for sure18:18
*** sdake has joined #openstack-kolla18:20
harlowjaso what i was thinking18:22
harlowjais to use http://urwid.org/ (which i've used before)18:22
harlowjasplit the screen into X segments (1 for each thread)18:22
harlowjathen have say the logs go to each segment (as well as to files)18:22
harlowjaso that way u could see what a thread is working on, the output its producing18:22
harlowjabut not have it be all cluttered up18:23
harlowjai've done this before with https://github.com/harlowja/gerrit_view/#czuul18:23
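(A minimal urwid sketch of the split-pane idea harlowja describes — one Text widget per build thread; everything here is illustrative, not the czuul code:)

    import urwid

    # one pane per worker thread; a real build would append log lines to these
    panes = [urwid.Text("thread %d idle" % i) for i in range(4)]
    columns = urwid.Columns([urwid.Filler(p, valign='top') for p in panes])
    urwid.MainLoop(columns).run()  # ctrl-c to exit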
inc0for build? so I think we need stdout to be readable18:23
harlowjajust think of each box there as a thread18:23
*** sdake_ has joined #openstack-kolla18:23
harlowjaya, this would make stdout be readable18:23
harlowjaby having each box (see that link) be a thread + its logs18:23
inc0otherwise our CI will be a bitch and I don't expect people to build by hand18:23
inc0readable => easy for machine parsing18:24
*** sdake has quit IRC18:24
harlowjamachine parsing can just read the individual log files that are created?18:24
inc0I think people will build CI/CD around build pretty soon and will want something easy to parse rather than ncurses-like experience18:24
harlowjahmmm, odd18:25
inc0but let's think about it for a moment18:25
inc0right now it's bunch of stuff, true18:25
harlowjapersonally i like stdout being used for the meaningful messages, not being a dump for all the stuff18:25
inc0well, we can do that with log level18:26
harlowjanot really18:26
inc0which you did partially already18:26
inc0how about we'll create urwid-interface for build, but external to build itself?18:26
inc0we could also put ansible into it18:27
inc0I was actually thinking about an ncurses kolla client ;)18:27
inc0so build.py will throw bunch of stuff18:27
inc0and will retain API18:27
inc0we'll clean it up, but still18:27
inc0and we'll create 2 things18:27
inc01. urwid gui18:27
inc02. jenkins configs to do it automatically18:28
inc0how does that sound?18:28
harlowjahmmm18:28
harlowjaso build.py would be like a low-level thing18:28
harlowjathat u would build 'UI's on top of?18:28
inc0and build.py will become a lib rather than self contained script18:28
harlowjaok, seems fair18:29
inc0from kolla import build18:29
harlowjawhat would the build.py then output18:29
inc0so we can still use python logger18:29
inc0wanna go fancy, make your own handler18:29
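(What "make your own handler" could look like against the stdlib logging API — the logger name and the hand-off are assumptions:)

    import logging

    class BuildUIHandler(logging.Handler):
        """Route build log records to a UI pane instead of raw stdout."""
        def emit(self, record):
            line = self.format(record)
            print(line)  # stand-in for handing the line to a front-end

    LOG = logging.getLogger("kolla.build")  # hypothetical logger name
    LOG.addHandler(BuildUIHandler())
    LOG.error("omg omg my build broke")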
harlowjawould it expose notifications (a listener)18:30
harlowjawell a logger is pretty low-level notifications, lol18:30
inc0yeah18:30
inc0and it's already in stdlib so all we need to do is LOG.exception("omg omg my build broke")18:30
harlowjai was thinking of something more structured18:31
inc0like what?18:31
harlowjahttps://pypi.python.org/pypi/notifier18:31
harlowjahave the build.py emit notifications18:31
harlowjathen have a 'log_ui' that translates those things to log statements18:31
harlowjaor that's an idea18:32
inc0yeah, but that would require a running service18:32
harlowjano18:32
harlowja'Simple in-memory pub-sub.'18:32
harlowjalol18:32
inc0so build.py finishes and memory is freed18:32
harlowjaits gonna be pretty hard to make a decent curses UI with a bunch of log statements18:32
harlowja:-P18:32
inc0true enough18:32
harlowja(or in fact any UI that isn't just a raw dump of the logs, lol)18:32
inc0raw dump of logs with grep will be improvement;)18:33
harlowja:-/18:33
inc0but I hear ya18:33
harlowjaya, that's why if u have at least notifications (like using the above library) u can at least structure them18:33
harlowjaand have build.py emit those kind of things18:33
inc0so what I don't want to end up with is running service with bunch of infrastructure that will have to run once and after that...just hang out there18:33
harlowjanah nah, the notifier lib is just in memory stuff18:34
harlowjanot a full service18:34
harlowjait came from taskflow so that taskflow engine users can do things like18:34
harlowjaengine.notifier.register("ON_START", my_Callback)18:34
harlowjaand then when the internals of the engine emit an "ON_START" event, my callback would be triggered18:35
harlowjau just need to define what those events are18:36
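(A hand-rolled sketch of the in-memory pub-sub pattern harlowja describes — the real notifier/taskflow API may differ:)

    from collections import defaultdict

    class Notifier(object):
        def __init__(self):
            self._listeners = defaultdict(list)

        def register(self, event, callback):
            self._listeners[event].append(callback)

        def notify(self, event, **details):
            for cb in self._listeners[event]:
                cb(event, details)

    def on_start(event, details):
        print(event, details)

    n = Notifier()
    n.register("on_build_start", on_start)
    n.notify("on_build_start", image="keystone")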
inc0yeah, I know, but well, I'm just thinking if we need it for simple thing that build should be;)18:36
harlowjaon_build_start, on_build_progress, on_build_done18:36
inc0and one atomic action18:36
harlowjabuild isn't so simple though18:36
inc0no ha, if something breaks just re-run it and so on18:36
harlowjaits about 3 or 4 stes :-P18:36
harlowja(steps)18:36
harlowjabuild_download_things,18:36
harlowjabuild_download_more_things18:37
harlowjabuild_actually_start18:37
inc0first render templates18:37
harlowjaya, so there are a bunch of steps, if u can split them up into tiny pieces, then it imho becomes more possible to have a useful stdout (that isn't just a log dump)18:37
inc0so let's do that18:38
inc0let's start by splitting18:38
inc0then go for import kolla.build18:38
inc0then come back to the drawing board and see where we are18:38
harlowjaok18:38
inc0let me check if there is bp for that18:38
*** rmart04 has quit IRC18:39
inc0sdake_, around?18:39
sdake_shoot18:40
inc0when we at it, can we take longer look at our blueprints?18:40
*** cu5 has joined #openstack-kolla18:40
inc0so for example I can't see non-ini merge config, which is essential18:40
inc0we need to add bp for refactioring of build18:40
*** ppowell has joined #openstack-kolla18:41
sdake_anyone can add blueprints18:41
inc0it's more about looking at what we have, adding missing ones and reprioritizing it all18:41
inc0harlowja, https://blueprints.launchpad.net/kolla/+spec/build-refactor18:45
inc0let's make a session on next meeting to brainstorm possible improvements18:46
inc0I'll add this to agenda18:46
harlowjacool18:49
inc0so I'll start breaking the code18:50
inc0that didn't come out right.18:50
inc0but it's true nonetheless18:51
harlowjalol18:51
harlowjaok with me18:51
sdake_lets wait on refactor18:52
inc0on what sdake_ ?18:52
sdake_until after our customization refactor is done18:52
sdake_or at least the build.py part is done18:52
inc0hold on18:52
sdake_it may be that way already18:52
inc0these are orthogonal18:53
vhosakotsdake_: what do you think about this ? :) --> http://paste.openstack.org/show/508436/18:53
sdake_they touch the same code base do they not?18:53
inc0well, I can rebase my code to it18:53
sdake_vhosakot hotness dude18:53
inc0easily18:53
sdake_inc0 cool that wfm then - i'm lazy :)18:53
vhosakotsdake_: will push the PS today.... magnum is working now18:53
inc0but really we're waiting for you18:53
inc0in my mind customization is already solved18:54
vhosakotsdake_: magnum needs dynamic variables in jinja2 file18:54
vhosakotI'll push the PS today18:54
vhosakotsorry I took more time as I had to understand magnum and debug all sorts of errors18:54
sdake_i know we are waiting on my prototyping18:55
sdake_working on carving out a block of time to work on it18:55
inc0don't block good work on that plz18:56
inc0we need to refactor build.py18:56
*** mbound has quit IRC18:57
*** sdake has joined #openstack-kolla18:58
*** sdake_ has quit IRC19:00
claytonso we don't use kolla, but we do run openstack in docker and found an issue recently that I think might affect kolla also19:00
inc0clayton, what's that?19:00
sdakeclayton cool, care to share19:00
claytonThe issue we found is that nova-compute does a volume mount of /run, but doesn't mount /run/netns in shared mode19:00
*** fragatina has joined #openstack-kolla19:00
claytonif you run routers/dhcp/metadata on the same node as nova-compute, you'll end up accidentally capturing the netns mounts19:01
claytonwhich will make network namespaces undeletable19:01
claytonuntil the nova-compute container is restarted19:01
inc0hmm...interesting19:01
inc0that will appear on DVRs right?19:01
claytonfor us the fix was to mount /run/libvirt and /run/openvswitch into the nova-compute container instead of /run19:02
claytonyes, or if (like us) you run routers on compute nodes19:02
inc0good catch, thanks19:02
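(The mount change clayton describes, expressed as docker run flags — the image name and the rest of the command line are illustrative:)

    # instead of capturing all of /run (and, with it, /run/netns):
    #   docker run -v /run:/run ... nova_compute
    # mount only what nova-compute actually needs:
    docker run -d --name nova_compute \
      -v /run/libvirt:/run/libvirt \
      -v /run/openvswitch:/run/openvswitch \
      kolla/centos-binary-nova-compute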
claytonwe also ran into an issue on trusty with the 3.13 kernel not properly sharing mounts even with the shared flag19:02
inc0yeah, I believe we have note of it in our docs19:03
claytonit works fine while the container is running, but if you restart the container, the existing namespace mounts aren't created in shared mode19:03
inc0and a workaround19:03
claytonwhich makes it capture the mount.  the issue doesn't appear to exist on the 4.4 kernel that xenial ships with19:03
claytoninc0: have a link for that?19:04
claytonI'm curious what other pitfall I'm going to hit next :)19:04
inc0yeah hold on, not sure if that's the fix for this problem19:04
inc0well if you'19:05
inc0you'd use kolla we could hit pitfalls together19:05
inc0it's always happier to share a suffering19:05
claytonI went to the feedback sessions in austin :)19:05
inc0so we run mount --make-shared /run19:06
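(To verify the propagation flag actually took — useful on the 3.13 kernels being discussed — findmnt can print it:)

    mount --make-shared /run
    findmnt -o TARGET,PROPAGATION /run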
inc0however I don't think that's the fix for the issue at hand19:07
claytonI think that newer versions of iproute2 do that automatically19:07
inc0we will be moving to 16.04 for O19:07
inc0so not sure if we should even care about that really19:08
*** callahanca has quit IRC19:08
claytonwell, shared not working properly on 3.13 is the thing that will at least get us on the 16.04 kernel.  we have other work that has to be done before we can start moving systems to xenial proper19:08
claytonhttp://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/ip/ipnetns.c#n63919:09
claytonthe iproute2 package from trusty has that patch19:10
*** cu5 has quit IRC19:10
*** callahanca has joined #openstack-kolla19:13
*** mliima has quit IRC19:21
*** salv-orl_ has quit IRC19:24
*** salv-orlando has joined #openstack-kolla19:24
*** cu5 has joined #openstack-kolla19:25
*** Serlex has left #openstack-kolla19:25
openstackgerritMichal Jastrzebski (inc0) proposed openstack/kolla: Make build.py importable lib  https://review.openstack.org/32610819:28
openstackgerritMichal Jastrzebski (inc0) proposed openstack/kolla: Make build.py importable lib  https://review.openstack.org/32610819:30
PyroManiWe're working on a new AZ for our own OS cloud at the office and we've started with 16.04 on the host machines. At this moment we run into issues around docker containers and we're questioning ourselves whether we should use Ubuntu 16.04 on the hosts or CentOS 7.19:31
inc0yeah I keep hearing about issues around xenial19:31
*** matrohon has joined #openstack-kolla19:32
inc0CentOS 7 is much more tested by us19:32
inc0I probably should move my dev env to xenial to get this feeling;)19:32
PyroManiWe already downgraded our docker version from the latest in the repo to 1.10.x to match the specs in kolla.19:32
inc0PyroMani, 1.11 works well19:32
inc0no need to downgrade19:32
inc0we should update docs.19:32
inc01.10 is the minimal version tho19:32
claytonwe've had issues with containerd timeouts on 1.11.1.  not scheduled to be fixed until 1.1219:32
PyroManiExactly, same here19:33
claytonPyroMani: our problems were on trusty with the 3.13 kernel, fwiw19:33
PyroManiCurrently we see this: Error response from daemon: Cannot start container ABCD.... : [10] System error: could not synchronise with container process19:33
PyroManiError: failed to start containers: nova_api19:33
claytonso that's not xenial specific19:33
PyroManiTomorrow we're going to investigate this message: https://github.com/openstack/kolla/commit/e31d85e71c292b3af5f9a99b913f1d85fc0bcbad19:34
PyroManiAs the containers refuse to start after a few restarts and we have to reboot the whole host19:35
claytonPyroMani: I've had some luck with manually removing all containers then restarting docker-engine19:35
claytonbut clearly that's not a great solution19:35
openstackgerritMerged openstack/kolla-kubernetes: Replication controllers for Keystone, Memcached, RabbitMQ.  https://review.openstack.org/32410619:36
PyroManiclayton: I agree on that19:36
PyroManiAs far as we can see the issue does not lie in the containers themselves but in the docker daemon on the hosts19:38
openstackgerritKen Wronkiewicz proposed openstack/kolla-kubernetes: Adding documentation for labels  https://review.openstack.org/32411019:40
*** mbound has joined #openstack-kolla19:40
Mech422PyroMani: how is Openstack AZ stuff working for you in general ?  I've gotten the dreaded 'single pane of glass' mgmt. request...19:43
Mech422PyroMani: not sure if we should bother with AZ, or go with independent clusters and some sort of api driven dashboard19:44
*** mliima has joined #openstack-kolla19:45
*** godleon has joined #openstack-kolla19:46
PyroManiMech422: unfortunately, this will be our first self-built AZ as the previous one was built and maintained for us. Our vision on it is that we like the idea of a single pane in which the clusters reside (managed with AZ's).19:50
PyroManiMech422: Also, if we want to perform migrations it's easier to do between AZ's (even live migrations should be possible).19:51
PyroManiMech422: It has its downsides as well as you introduce dependencies between the clusters19:51
Mech422PyroMani: yeah - so far cross AZ migration is 'out of scope' - but I'm sure that will change the day the cluster goes live :-P19:52
mag009_i'm having issue with the kolla-build from master repo19:52
mag009_cinder-base fail19:52
PyroManiYeah, probably within the first month ;)19:52
Mech422PyroMani: my big concern atm is ceph - not sure I want to run a ceph 'cluster' with nodes in Ukraine, Chicago and phoenix19:53
Mech422PyroMani: just seems like latency would be a killer19:53
PyroManiMech422: we deliberately chose to ditch ceph for our first own AZ19:53
mag009_anyone got that : Untar re-exec error: exit status 1: output: archive/tar: invalid tar header19:53
*** mliima has quit IRC19:54
PyroManiMech422: we had plenty of problems with the managed AZ. A single problem in ceph took down the whole AZ crippling numerous hosts19:54
Mech422PyroMani: oh ouch :-(19:54
*** openstackstatus has quit IRC19:55
PyroManiMech422: so for our first AZ we decided to strip quite a lot of the original and build new AZs with additional services / redundancy each time so we can manage certain issues better19:55
PyroManiMech422: for instance, we can have an AZ with almost no redundancy and one with triple redundancy. Then it's up to the application we need to host whether we host it on the redundant platform or the non-redundant one19:57
*** openstackstatus has joined #openstack-kolla19:57
*** ChanServ sets mode: +v openstackstatus19:57
Mech422PyroMani: makes sense - sort of a 'staged rollout' ...19:57
PyroManiMech422: yeah, instead of a dive in the deep.19:57
Mech422PyroMani: ukraine will probably be my 'test bed' - it's got the oldest equipment and the worst connectivity...19:58
Mech422PyroMani: if I can get stuff working there, it should work anywhere :-p19:58
PyroManiMech422: this way you can get a feeling for the core system and its workings.19:59
PyroManiMech422: if the connectivity isn't great I advise against ceph even more :p19:59
Mech422PyroMani: heh19:59
PyroManiCurrently I'm working on a test suite for our own stack19:59
PyroMani(started today :P)20:00
Mech422PyroMani: I've been working on 'pre-scripting' kolla, playbooks to bridge what our provisioning sets up, and what kolla expects20:00
PyroManiCool :) Is there a great difference?20:02
PyroManiI'm used to working with Chef, so for kolla I had to read into Ansible :P20:02
Mech422PyroMani: Umm - we use OVS for base network connectivity - that's a killer...20:03
Mech422we just set up a pop in baghdad, no way I'm going there for hands-on if I screw up networking :-P20:03
PyroManiMech422:  Ah, wouldn't want to do that either xD20:03
Mech422PyroMani: then just basic stuff like we partition/raid our drives, and kolla/ceph want to use full devices20:04
Mech422PyroMani: updating kernels, installing repo's, etc20:04
Mech422PyroMani: oh - I added a kolla 'checkout' play that does the git clone and automatically installs our patches20:05
Mech422PyroMani: thats a small thing, but makes life 10x easier20:05
PyroManiMech422: Haha, I can imagine :)20:06
PyroManiWould the pre scripting work on bare metal using dracs as well? :P20:06
Mech422PyroMani: umm - dunno drac.. we provision images/std. configs via PXE20:07
*** zhiwei has joined #openstack-kolla20:08
Mech422PyroMani: then my playbooks use the pre-configured root login20:08
inc0so we had a session about kolla+bifrost at the summit to address this20:08
Mech422inc0: ?20:09
inc0bifrost - standalone ironic20:09
inc0for bare metal deployment prior to kolla20:10
inc0so idea is to deploy bare ubuntu/centos20:10
Mech422inc0: yeah - thing is, I can't just change the company wide provisioning...20:10
inc0prepare kolla-host scripts to configure host20:10
mag009_hm20:10
inc0yeah, and it's just on paper now too20:10
mag009_kolla is deploying ironic too no ?20:10
mag009_kolla can replace bifrost20:10
inc0not in standalone mode20:10
mag009_right20:10
Mech422inc0: oh - I got the ceph-on-partitions stuff working: https://bugs.launchpad.net/kolla/+bug/158930920:11
openstackLaunchpad bug 1589309 in kolla "Problem Bootstrapping ceph from Partitions due to stale part. table" [Undecided,New]20:11
inc0idea is that you install one node20:11
inc0deploy bifrost and bootstrap all other nodes20:11
inc0Mech422, cool! I'll read through that20:11
inc0thanks20:11
PyroManiinc0: a bit like bootstack does?20:11
Mech422inc0: hmm - i'll have to look at that - we have multiple types of system on the same dhcp/pxe domain - so I'm not sure we can use it - I can't hijack dhcp/pxe20:12
*** zhiwei has quit IRC20:12
sdakeMech422 our plan as outlined by sean-k-mooney :20:12
sdakewe will use bifrost for bare metal bootstrapping of openstack nodes20:12
*** ppowell has quit IRC20:13
inc0PyroMani, I'm not familiar with bootstack20:13
sdakewe will use ironic as containers implemented by jpeeler for bare metal as a service20:13
Mech422sdake: cool :-)20:13
Mech422I think ironic wants to control dhcp/pxe though ?20:13
Mech422unless i'm mistaken ?20:14
sdakeindeed it is mandatory it does so20:14
inc0Mech422, yeah it has its own pxe server20:14
sdakethe containers jpeeler wrote are nearly functional20:14
sdakebut not for standalone mode20:14
inc0it uses it and ipmi-reboots machines20:14
Mech422yeah - I need more of an 'ironic in ansible'... so I can just take the machine after provisioning and jigger it however needed20:14
sdakewhereas bifrost is about standalone mode20:15
sdakeMech422 we need both - sean-k-mooney working on both20:15
inc0Mech422, we call it kolla-host ;) something in the depths of "things to do"20:15
Mech422ahh - very cool...I'll have to look at that20:15
mag009_i've tested ironic and it's far from being perfect in standalone mode20:15
cineramawe're reviewing the larger patch for splitting out the service startups now  and it should be able to land soon20:16
mag009_it really depends on neutron to control dhcp20:16
Mech422I'm currently re-provisioning my cluster with Foreman20:16
sdakecinerama cool that is fantastic news!20:16
inc0Mech422, I use cobbler20:16
cineramaiirc it also needs to be rebased on the current master because we needed to update our boolean handling.20:16
sdakein my lab cobbler is used20:16
sdakebut in my home lab i'm going to be using bifrost once it works :)20:17
Mech422inc0: cobbler does seem more stable than foreman20:17
Mech422but I'm not switching back now :-P20:17
inc0*khe khe*crowbar20:18
inc0Mech422, mind publishing a review with this partition stuff?20:19
sdakecinerama we have a 132 node lab20:19
inc0let's discuss it there20:19
sdakecinerama we will have it for 6 weeks starting iirc 7/2220:19
inc0also if possible I'd love to squeeze in file-based osd too20:19
sdakecinerama it would be helpful to use bifrost on that lab gear during our work20:19
inc0super useful for dev/ci stuff20:19
inc0actually guys yeah, cinerama if you want to help us with bare metal part of testing that'd be great20:20
Mech422inc0: have you used crowbar?  that and razor looked interesting....20:20
cineramasdake, wow really? cool20:20
inc0I used it with canonical maas for a moment, actually it did a good job20:21
inc0sdake, and make it 3 weeks instead of 620:21
inc0we have only 3 weeks20:21
Mech422inc0: the 'hands off' approach sounded awesome for production - but at home, I switch os's on the boxes too often20:21
inc0so we really need to make the most of it20:21
sdakeoh my bad, i thought we had 620:21
Mech422the nice lil web gui makes that easy20:21
inc0Mech422, so the reason we use pyudev is because it's the python library for ceph20:22
inc0I mean we use it, because we don't like screenscraping outputs20:22
Mech422inc0: yeah - but the kernel doesn't update /sys even after running partprobe...20:23
inc0problem is, I didn't find a good alternative in python to list disks, partitions and such20:23
Mech422so poor pyudev gets screwed20:23
Mech422inc0: yeah - I'm not sure there is anything - you really need to re-read the partition table manually as the kernel doesn't20:23
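(A sketch of the manual re-read Mech422 is describing: ask the kernel to re-scan the partition table with the BLKRRPART ioctl, wait for udev, then list partitions via pyudev. The disk path is hypothetical, and this is illustrative, not the fix from the bug report above.)

    import fcntl
    import os
    import subprocess

    import pyudev  # assumes pyudev is installed

    DISK = "/dev/sdb"   # hypothetical target disk
    BLKRRPART = 0x125f  # ioctl: re-read the partition table

    fd = os.open(DISK, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)
    subprocess.check_call(["udevadm", "settle"])  # let udev catch up

    ctx = pyudev.Context()
    for dev in ctx.list_devices(subsystem="block", DEVTYPE="partition"):
        if dev.device_node and dev.device_node.startswith(DISK):
            print(dev.device_node)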
mag009_anyone use ironic in prod env not a lab?20:23
wirehead_Rackspace’s OnMetal product is ironic.20:24
Mech422inc0: re: code review - I have no idea what to do next?20:24
Mech422I got as far as singing up for the openstack.org account20:25
mag009_rackspace is the number 1 committer on ironic20:25
mag009_if i'm not mistaken20:25
Mech422err...signing up...I tried singing but got voted off the project20:25
*** daneyon_ has joined #openstack-kolla20:27
*** alyson_ has quit IRC20:28
inc0hold on Mech422, let me find guide20:28
sdakeharlowja around?20:28
sdakeMech422 got voted off which project?20:29
inc0http://docs.openstack.org/infra/manual/developers.html20:29
Mech422sdake: oh - I was joking about how bad my singing is :-)20:29
sdakeoh\20:30
*** ayoung has quit IRC20:30
Mech422inc0: i was thinking of using Terraform for the bare-metal-to-kolla stuff20:31
Mech422inc0: but the 'bare metal' support is pretty weak20:31
inc0I've heard about that20:31
inc0pretty new project isn't it?20:31
*** daneyon_ has quit IRC20:31
wirehead_It’s been around for at least a few years.20:32
Mech422inc0: yeah - hashicorp's answer to cloudformation/sparkle ?20:32
*** dave-mccowan has quit IRC20:32
*** dave-mccowan has joined #openstack-kolla20:33
Mech422inc0: Oh - did you see: https://coreos.com/blog/torus-distributed-storage-by-coreos.html20:33
inc0interesting20:38
*** jtriley has quit IRC20:40
openstackgerritRyan Hallisey proposed openstack/kolla-kubernetes: Use the Kube endpoint to dictate state instead of etcd  https://review.openstack.org/32550320:40
*** mbound has quit IRC20:41
sdake108f out today20:41
* sdake groans20:41
sdakeat least it's not 114f like last friday20:42
Mech422sdake: looks like the 'corporate' agreement for openstack is broken? https://secure.echosign.com/public/hostedForm?formid=56JUVGT95E78X520:42
inc0Mech422, I wonder if they want to compete with ceph20:42
inc0according to latest user survey ceph has ~60% of prod in openstack20:42
Mech422sdake: heh - it was 120 here yesterday - PHX - where are you ?20:42
Mech422inc0: the 'build object store on top' part sounds like it...20:42
*** jabari has quit IRC20:43
*** jabari has joined #openstack-kolla20:43
Mech422inc0: problem is, its not compatible with VMs (at least not now).  Ceph works for both docker and VM20:43
wirehead_Heh, so 62 in San Francisco, 74 at my apartment.20:43
wirehead_This is why I’m always wearing long-sleeved shirts.20:43
Mech422wirehead_: oh god.. you're in SFO...I'm sooo sorry. I can't believe the rents you guys pay :-P20:44
wirehead_Well, I live 30 miles south of SF.20:44
wirehead_The rents are slightly less bad.20:44
sdakemy mortgage is 740/mo20:44
sdake;-)20:44
Mech422wirehead_: I lived in Cupertino, right behind 1 Infinite Loop :-)  And sunnyvale, right behind a dumpster :-P20:45
*** shardy has quit IRC20:45
inc0sdake, Mech422 do a PHX openstack user group;)20:45
sdakeplus 1600 every 6 months for insurance and taxes20:45
Mech422sdake: Oh! you in phx too ?20:45
sdakeyup20:45
sdakeMech422 didn't know you were20:45
harlowjasdake  sorta, ha20:45
*** mbound has joined #openstack-kolla20:46
sdakeharlowja quick question about your setup and neutron20:46
Mech422yeah - I'm out in mesa - sossaman and broadway20:46
sdakeas in do you use neutron at gd20:46
sdakeif not, what do you use20:46
*** mbound has quit IRC20:46
harlowjaeck20:46
inc0:D20:46
sdakeand if so, how do you get it to scale to your node count20:46
harlowjawe use it20:46
*** mbound has joined #openstack-kolla20:46
harlowjaya, let me see if I can get someone else to answer :-P20:46
Mech422harlowja: have you guys played with Midonet ?20:46
Mech422harlowja: I'm looking at trying to tie that into kolla eventually20:47
claytonsdake: they carry a bunch of local patches last I heard :)20:47
sdakeharlowja we have 3 1k+ node setups that people want to do20:47
inc0that's gonna be interesting20:47
* harlowja getting kris in here, ha20:47
claytonharlowja: figured :)20:47
harlowja:-P20:47
sdakeharlowja so I suspect since it seems a thing, we will be working on multiregion and  cells20:47
inc0Lyncos, mag009_ <- they're playing with Calico20:47
sdakeit would be nice to scrape some tribal knowledge out of the actual deployments at this scale20:47
harlowjadef20:47
Mech422inc0: ohh.. wonder how that compares to midonet20:48
harlowjakris knows all20:48
harlowjalol20:48
*** klindgren_ has joined #openstack-kolla20:48
klindgren_o/20:48
inc0however our main installation is neutron, and cells+neutron is....20:48
*** klindgren_ is now known as klindgren20:48
sdakehey klindgren_20:48
Mech422inc0: nice thing about midonet was it was openstack/neutron + docker compatible20:48
mag009_who asked for calico?20:48
klindgrenharlowja, was saying you had some neutron questions or something20:48
klindgren?20:48
sdakemag009_ only you thus far20:48
harlowjaklindgren knows all20:48
harlowjahe's the man20:48
harlowjalol20:48
klindgrenwell - I wouldn't go that far20:48
sdakemag009_ that said, if it's integrable, we should integrate it into kolla20:48
inc0mag009_, we're talking about networking on 1k+ installations20:49
sdakeklindgren oh you're the cat with all the info about how to run openstack at scale :)20:49
mag009_of course it is20:49
mag009_:)20:49
harlowjaklindgren actually sdake might have some cells questions also (so that kolla  + cells works out)20:49
harlowjai'm just some newb20:49
harlowjalol20:49
mag009_thats what we expect 2k+ servers20:49
Mech422does anyone use the plumgrid stuff ?20:49
inc0heard it's neat20:49
klindgrenI dunno about at scale.  How about at a size and growth rate that makes me concerned :-D20:50
mag009_If I can get past those bugs with ansible 2.120:50
sdakeklindgren atm kolla has been tested at 64 nodes20:50
mag009_i might be able to push my changes...20:50
sdakewe had to tune 3 variables in nova to get it to work20:50
inc0so, we're all here because we want kolla on 1k+, for that we need cells and other stuff20:50
Mech422inc0: yeah - sounds pricey though - wondering if its worth it20:50
*** dwalsh has quit IRC20:50
harlowjaMech422 are u paul bourke ?20:50
sdakekolla works really well at this size20:50
sdakeharlowja no that is pbourke20:50
inc0but we also need to figure out netowkring20:50
harlowjakk20:50
Mech422harlowja: no - I'm Steve...but I answer to hey you :-)20:51
harlowjahey u20:51
harlowjalol20:51
sdakeklindgren the two answers I know about for handling scale are multiregion and cells20:51
sdakeklindgren to do multiregion, basically you setup fernet in keystone, then run one dedicated keystone + one dedicated HA database for keystone20:51
harlowjasdake the other thought i was having with my manager, is to delay the nova-compute container (until further proven)20:51
sdakeand all nodes use that keystone20:51
harlowjause kolla for all the other stuff though20:51
sdakeharlowja ya i hear that one a lot20:52
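(The fernet setup sdake sketches a few lines up boils down to two keystone.conf settings; a minimal sketch, with the key repository at its conventional default path.)

    [token]
    # issue fernet tokens instead of the then-default uuid
    provider = fernet

    [fernet_tokens]
    # conventional key repository location
    key_repository = /etc/keystone/fernet-keys/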
klindgrenk20:52
*** shardy has joined #openstack-kolla20:52
inc0harlowja, so kolla won't be the bottleneck here20:52
harlowjaklindgren though may get a lab we can mess around with this more, right klindgren  :-P20:52
sdakeklindgren is that accurate (multiregion implementation) and if so, do you have configs you can provide us that deploy it that way?20:52
inc0issue is, how to configure stuff so it will work20:52
inc0it's about what we want at the end, kolla will get us there20:52
klindgrenwe dont do multregion20:52
klindgrenwe use cells20:52
inc0klindgren, do you use neutron + cells tho?20:52
claytonwe do multi-region, but not for scaling purposes20:52
*** dcwangmit01 has quit IRC20:53
klindgreneach dc we deploy in is also 100% independent of another DC, from an openstack component perspective20:53
sdakeklindgren how does neutron scale to 1k+ nodes with cells?20:53
klindgrenthey all tie back into the same replicated AD for auth and things20:53
openstackgerritRyan Hallisey proposed openstack/kolla-kubernetes: The Keystone bootstrap job need to run a db sync  https://review.openstack.org/32566520:53
*** dcwangmit01 has joined #openstack-kolla20:53
sdakeAD = microsofts active directory product?20:54
klindgrenso we dont yet have a single place that is at 1k nodes.20:54
klindgrenlargest spot so far is ~750 nodes20:54
inc0klindgren, so it's almost like multi region20:54
Mech422klindgren: yeah - our stuff all ties back to ldap for auth - slooowwww :-/20:54
inc0klindgren, how many cells in this 750?20:55
klindgrenbut have ~700 nodes coming online in the next few quarters20:55
klindgreninc0 - 2 cells right now20:55
inc0and single neutron across these 2?20:55
klindgrenwith a third coming online to handle ironic20:55
sdakeso 350 nodes per cell20:55
klindgrenyes20:55
klindgrensingle neutron20:55
inc0hmm...ok that's cool info20:55
klindgrenI should note that we do networking... differently :-)20:56
inc0so neutron can handle 750 nodes, I don't know when it starts to break20:56
sdakeklindgren could you expand - we are stuck on the networking scalability part20:56
klindgrenwe dont do private networks at all - everything is a provider network20:56
inc0uhh...20:56
inc0I see20:56
harlowja(aka, no sdn)20:56
claytonklindgren: are you guys doing any tenant networks?20:56
claytonnm :)20:56
klindgrenwe do do floating ip's, but we extended neutron to handle that via route injection into the network20:57
klindgrenalso our L2 networks don't extend past the TOR for the HV's20:57
sdakeis tenant networking hard to do with neutron at scale?20:57
klindgrenso depending on the DC we have a max of 44 or 96(? I think) servers per rack20:57
klindgren*shrug* - I don't think virtualized tunnels do anything but virtualize all the problems with L2 and take away a decade+ of tooling/knowledge on how to troubleshoot the issue20:58
harlowjaand make bling bling for someone20:58
harlowjalol20:58
klindgreninternally we run an L3 folded-Clos network.  So we are basically mapping VM's directly onto that network20:58
klindgrensince 99.9% of our users give 0 shits about private networking.20:59
sdakesbezverk_ around?20:59
inc0klindgren, uhh20:59
klindgrensupport for this is getting added into neutron under the routed network spec.  AKA segmented network support.20:59
inc0well, that is an open question then, what do we do with networking if cells land21:00
openstackgerritRyan Hallisey proposed openstack/kolla-kubernetes: Add the KOLLA_KUBERNETES flag to containers  https://review.openstack.org/32567521:00
openstackgerritRyan Hallisey proposed openstack/kolla-kubernetes: The Keystone bootstrap job need to run a db sync  https://review.openstack.org/32566521:00
openstackgerritRyan Hallisey proposed openstack/kolla-kubernetes: Use the Kube endpoint to dictate state instead of etcd  https://review.openstack.org/32550321:00
inc0also, I think we should have separate plays for 1k+ multicell and our normal21:00
klindgrenbut TL;DR - our use of neutron at current is basically assign an IP from the network, and when someone does a floating ip call, inject a route into the network to route traffic for that IP to the vm.21:00
inc0since configs will differ tremendously21:00
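(A toy illustration of the floating-ip-as-route idea klindgren TL;DRs above: publish a /32 host route pointing at the hypervisor instead of NATing on a network node. Addresses are examples, and the real code extends neutron and talks to the fabric, not plain `ip route`.)

    import subprocess

    def inject_floating_ip(floating_ip, hypervisor_ip):
        # "replace" makes the call idempotent if the route already exists
        subprocess.check_call(
            ["ip", "route", "replace", floating_ip + "/32",
             "via", hypervisor_ip])

    inject_floating_ip("203.0.113.10", "10.0.0.21")  # example addresses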
sdakeklindgren do you run the agents on only one node?21:01
klindgrenso in mitaka - everyone has cells21:01
inc0mag009_, do you do multi-cell?21:01
klindgrenwe run the L2-agent everywhere21:01
klindgrenand L3-agent we didn't use to run, but due to some code changes we run it on some node to satisfy some stupid HA thing that was added21:02
klindgrenwe now run it on a single node thats not connected to anything and I dont care if it actually works or not21:02
sdakedo you use ovs, or linuxbridge, or your own thing?21:02
klindgrenas long as it connects to rabbitmq and says its "alive"21:02
klindgrenovs21:02
sdakeklindgren all scale problems revolve around rabbitmq and mariadb21:02
sdakesince they are a single non-horizontally scalable component21:03
*** rhallisey has quit IRC21:03
klindgrenwe would like to move to linuxbridge - because it's simpler and due to security group usage - currently traffic flows through both ovs and linuxbridge anyway21:03
inc0rabbit usually dies first21:03
klindgrenwe haven't had any issues with the DB21:03
inc0yeah with your rather simplistic neutron usage that makes perfect sense21:03
klindgrenwe have issues scaling conductors per cell21:03
sdakeusing mariadb 5?21:03
klindgrenand rabbitmq - ish21:04
sdakecare about ipv6?21:04
klindgren99% sure it's 5.  We don't actually manage those servers.  Our mysql team does.21:04
klindgrenin the future yes, currently - not yet21:04
klindgrenre: ipv621:05
sdakeright21:05
sdakere cells - can you provide configuration files21:05
sdakeso we can automate the deployment thereof21:05
*** dwalsh has joined #openstack-kolla21:05
inc0sdake, hold on to that thought, we need to ensure it works well with neutron21:06
inc0and find out scale limit21:06
klindgrenso the question is cellsv1 or cellsv221:06
inc0it's pointless to implement cells if neutron dies21:06
sdakeis that a question? :)21:06
mag009_we are not using multi-cell21:06
inc0klindgren, cellsv2 even exist in any shape or form?21:06
sdakemag009_ your using multiregion?21:06
klindgrenbecause everyone has cellsv2 in mitaka21:06
klindgrenbut you can only have a single cell21:07
inc0ahh, I need to update my nova21:07
mag009_yes (well... we only have 1 for now)21:07
inc0mag009_, and rabbitmq works?:O21:07
mag009_we only have one L3 setup and it's here in Montreal21:07
sdakei have heard that cells + neutron as a scalability solution isn't ideal21:07
inc0how did you get rabbitmq working for 2k+ computes?21:07
sdakeso another scalability option is multiregion21:07
mag009_... oh no no21:07
mag009_we are not in prod.21:07
mag009_the goal is to get to 2k+21:08
mag009_lol21:08
mag009_we are not even in prod yet21:08
inc0ahh that explains things21:08
Mech422mag009_: your using calico ?21:08
mag009_yes21:08
klindgrenSo we have some patches to make cells work in cellsv1 with close to feature parity with a non-cells setup.  By we I mean a combo between nectar, godaddy and cern.21:08
Mech422mag009_: any thoughts re midonet vs calico ? both looked usable for me...21:08
mag009_the design is not final yet, still a lot of work to do!21:08
sdakeklindgren i have heard cells v2 is in a massive rewrite atm21:08
klindgrenOn cells v1, the nova->neutron thing that doesn't work out of the box is the vif plugging communication.21:08
Mech422mag009_: midonet just seemed 'easier' with the kuryr and neutron plugins21:08
sdaketo make it work - based upon that forked work you talked about above (the cern functionality)21:09
klindgrenafaik cellsv1 to cellsv2 has always been a massive change.21:09
klindgrenOne passes messages between rabbitmq layers21:09
klindgrenand the other connects directly to the cell database to do things21:10
sdakecells v1 is database as a messaging layer solution?21:10
mag009_midonet and calico are not quite the same21:10
sdakeDB for messaging is an anti-pattern fwiw :)21:10
klindgrencells v1 = api requests -> rmq message on api cell -> nova-cells -> correct child cell rabbitmq -> picked up by nova-cells in child -> nova-stuffs.21:11
wirehead_cells v1 is… um.21:11
klindgrenthen some up/down message passing for events and the like21:11
mag009_calico is a pure L3 implementation which depends on a bgp client; in our case we chose quagga instead of bird21:11
mag009_so we have zero L2 in our infra.. everything is L321:11
mag009_its what we wanted21:11
sdakeWTB sbezverk_21:11
mag009_we use cumulus for our TOR21:11
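(A toy bgpd.conf shape for the per-host BGP setup mag009_ describes, where a compute node peers with its TOR and advertises VM addresses as /32s. ASNs and addresses are hypothetical, not their actual config.)

    ! hypothetical ASNs and addresses
    router bgp 65001
     bgp router-id 10.0.0.21
     neighbor 10.0.0.1 remote-as 65000
     network 203.0.113.10/32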
sdakeklindgren cellsv2 then goes directly to db?21:12
sdakeand there is a db per cell?21:12
claytonthere is a db per cell21:12
klindgrenplus a "cell0"21:13
inc0sdake, do you remember this nova-api db?21:13
Mech422mag009_: we're running Arista TORs, don't think i need to worry about L2 internally...21:13
sdakeinc0 yes i do21:13
klindgrencell0 = things that didn't get scheduled to a cell21:13
inc0that was because of cells21:13
Mech422mag009_: does calico let you offload the forwarding at the edges like midonet ?21:13
inc0nova api is singular21:13
inc0Mech422, my understanding is that with calico edge would be another AS21:14
sdakeclayton and a rabbitmq per cell?21:14
Mech422inc0: yeah - I mean does calico require dedicated 'network' nodes, or can the hosts/edge route things directly to the vm..21:14
claytonshared rabbitmq I assume.  I've only seen stuff about extra databases21:15
Mech422inc0: without having to go thru a 'middleman' box21:15
inc0clayton, I don't think that would make sense as rabbitmq is what dies first21:15
klindgrenI need to look at v2 - but under v1 - yes each cell has a rabbitmq and all the nova-computes/conductors/metadata for that cell connect to the child cells rabbitmq21:15
claytontalk to the nova guys about that then :)21:15
klindgrenonly thing on the nova side that connects to the api cell's rabbitmq is nova-cells21:15
inc0yeah, I'm not sure about how it works really;)21:16
mag009_each compute route directly to the vm21:16
mag009_the bgp client is installed on the compute21:16
sdakeclayton i did a bit - to russell21:16
klindgrenSo we looked at calico.  Basically take provider networks.  Remove any layering anywhere.21:16
sdakeclayton he basicaly said with cells you run rabbitmq and mariadb in each celel21:16
klindgrenThen use bgp per host to advertise to the network when IP's should be routed to it.21:17
klindgrenbasically there is no "translation"21:17
Mech422mag009_: I looked at juniper contrail last year too - but at the time, it seemed much more difficult to get going21:17
sdakethe cell uses rabbitmq and mariadb as a central point which reaches capacity at the cell limits21:17
klindgrenor edge layer or anything like that.21:17
sdakeclayton did you say you were using cells?21:18
Mech422klindgren: Did you happen to look at midonet ?21:18
klindgrenno - some people wanted us to look at plumgrid internally21:18
Mech422that sounds nice - but pricey ?21:18
klindgrenso I sat through a couple presentations of that21:18
sdakeis plumgrid and midonet a competitor to neutron?21:19
inc0sdake, nope, drivers21:19
sdakecool21:19
mag009_Mech422: I personally didn't look at midonet so I don't know21:19
Mech422klindgren: I'm not a networking guy, so I'm sort of lost as to the diff. between calico and midonet...21:20
sdakei think an initial easy answer to scalability is multi-region21:20
Mech422it seems like both let you route packets directly from your 'edge' to the VM without going thru a neutron 'router' ?21:20
*** mark-casey has quit IRC21:21
Lyncosmidonet seems to use network overlays to emulate L2 .. which calico doesn't do21:21
Mech422Lyncos: umm...sorry, my network fu is weak... is that good or bad ? :-P21:22
inc0Mech422, you know BGP?21:22
Mech422inc0: no - I know what its supposed to do, but not anything about how it works21:22
sdakeklindgren so if kolla goes and develops cellsv2, that is going to be wholly unsuitable for godaddy then - because gd is on cells v1?21:22
LyncosMech422  Network overlays add a bit of overhead .. there is a cost to 'emulating l2' in an L3 environment21:22
inc0so basically with bgp you think of networking differently21:22
inc0you forget about L2 really21:22
inc0it's protocol used in internet itself21:23
LyncosMech422: and having L2 over L3 adds a layer of complexity21:23
inc0if you want vm connecting to vm, both have ip's21:23
Mech422right - border gateway protocol for exchanging routes ? like OSPF ?21:23
inc0not exactly21:23
LyncosWe are using BGP21:23
klindgrensdake, roughly correct.21:23
inc0bgp doesn't know the full route21:24
Lyncosshould work with OSPF also I think21:24
inc0only next hop21:24
Mech422inc0: ahh - thanks :-)21:24
inc0it can have multiple next hops and weights them21:24
klindgrenI think we could take that work and fairly trivially extend it to work in the cellsv1 use case21:24
sdakeklindgren is there a blocker to migrating to cellsv2?21:25
Mech422Lyncos: so both calico and midonet use BGP, and midonet has some 'shared state' in zookeeper for loadbalancing, firewalling...21:25
LyncosCalico is using ETCD21:25
Lyncosfor that matter21:25
klindgrensdake, yea - it doesn't support multiple cells yet :-)21:25
Mech422Lyncos: is there a practical difference between them? or only if you need to scale to Ks of nodes ?21:25
Lyncosload balancing is done with pure ECMP21:25
sdakecellsv2 doesn't work with more than one cell?21:25
sdakeklindgren you have got to be kidding21:25
klindgrensdake, nope - currently it supports 1 and only 1 cell21:25
LyncosMech422: Calico scales very well thru pods21:26
LyncosI think with our current design we can have up to 20 racks per pod21:26
Lyncossomething like that21:26
Lyncoswhen you want to scale you must add a pod for every ~20 racks (depending on your design)21:26
inc0Lyncos, pod == AS?21:27
Lyncosinc0 no21:27
*** vhosakot has quit IRC21:27
LyncosAS are disabled on our setup I think21:27
inc0ok21:27
Lyncoswe are using a modified version of quagga (I think)21:27
Mech422Lyncos: Cool... Does calico support both neutron and 'native' docker networking ? I'd sort of like to use the same thing everywhere rather then a bunch of different stacks21:27
LyncosMech422: You can use Calico with Kubernetes21:28
Lyncosbut not sure how it integrates when you are using both21:28
sdakeklindgren if cellsv2 only works with 1 cell21:28
sdakewhy is it in the code base?21:28
sdakeklindgren if you know21:28
inc0sdake, it's wip21:28
inc0it obviously has to work with more21:28
sdakeklindgren not complaining at you, just trying to understand what the purpose is21:28
mag009_inc0: redeploying with brand new images latest version from master repo21:29
klindgrenbecause next release they are hoping to have multiple cells support21:29
mag009_better work this time21:29
inc0mag009_, good luck;)21:29
LyncosWe don't need luck we are using Kolla21:29
Lyncosshould just work :_)21:29
sdakemag009_ are you running into trouble21:29
inc0hahahahahah that's a good one Lyncos21:29
inc0it's software21:29
inc0software is broken21:29
klindgrenbasically it was this.  Cellsv1 is "terrible" and not supported.  But people need this functionality.  So let's do cells v2.  First let's fix up some of the internals to better support cells.21:29
sdakeactually kolla is quite stable considering how many sheer lines of code it works with21:29
mag009_sdake: yes with ansible 2.121:29
klindgrenThen let's make it so everyone can have and is using cells by default.  Then let's make it so people can have more than one cell21:30
Mech422Lyncos: Hmm - can you use calico with just a stand-alone docker? I'm sure I'm gonna have devs doing one-offs21:30
sdakeklindgren thanks that seems logical21:30
sdakeso in mitaka, cells are default to on21:30
LyncosMech422:  if you use docker straight with L3 .. I guess you'll have to setup your 'host' machine to advertise your routes21:30
mag009_sdake: it fails at neutron/tasks/deploy.yml - the when statement21:31
mag009_for my 3 ceph servers for some reason..21:31
sdakemag009_ haven't seen that21:31
Mech422Lyncos: I mean can I install calico on a machine with just docker and no kube and have it play nice with the other boxes ?21:31
sdakemag009_ master is not really the best thing  to evaluate with imo :)21:31
kklimondawhich is the "recommended" install type for deployment, source or binary?21:31
LyncosMech422: yes21:31
LyncosMech422: but you have no 'automation'21:32
Lyncosbut it works21:32
sdakekklimonda doesn't matter imo21:32
Lyncoswe are doing it for baremetal machines21:32
inc0kklimonda, I use source, causes the least headache for ubuntu at least21:32
sdakekklimonda both work - unless you want ubuntu - then i'd definitely go with source21:32
inc0for centos both work stable in general21:32
mag009_sdake: sure, agreed - I can try with a tagged version to see, but I'm pretty sure it will do the same. i'll give it a shot next21:32
inc0writing is hard:/21:32
Mech422Lyncos: ahh - cool..I'll have to read up on it more then :-)  I'd like something that can work in neutron for VMs, with OpenStack docker driver and 'stand-alone' docker machines, if possible :-)21:32
sdakeya centos + stable/liberty or stable/mitaka are good with binary or source21:32
kklimondaok, thanks :)21:33
sdakemag009_ note stable/mitaka and stable/liberty are on ansible 1.9.421:33
LyncosMech422: we have a different topology than the standard calico one21:33
mag009_there's a tag 2.0.021:33
sdakemag009_ there should be a 2.0.1 tag21:33
mag009_yes that one21:33
sdakeit was just released today21:33
LyncosMech422: I'm trying to find the info again... I'm not the one who worked the most on the network part21:34
mag009_I want to keep up with you guys.. thats why I'm using the master21:34
Mech422Lyncos: Heh - you've given me plenty to think about - I'm gonna have to look at calico more...21:34
LyncosReally I think calico fits our needs.. but there are some limitations21:35
Lyncoslike21:35
mag009_the key is also the quagga version of cumulus21:35
LyncosNo multicast, no layer2 (means keepalive won't work), stuff like that21:35
mag009_which allowed us to automate our switches and compute21:35
Lyncoswhen you want an exception you can install openvswitch and do vxlan21:35
Mech422Lyncos: hmm - I think our stuff uses multi-cast...but I might be mistaken21:35
Mech422Lyncos: Our first use case is qa environments that can create private networks mirroring production21:36
Mech422Lyncos: so lack of layer-2 might be a problem?21:36
LyncosYes21:36
Lyncosyou need to think about it21:37
*** dwalsh has quit IRC21:37
Mech422Lyncos: I need to beat our NetEng/NetArch guys up to help me, but getting zero traction with them21:37
LyncosAlso our topology doesn't account for losing a top of rack.. that means your apps must be resilient21:37
Mech422I don't think they like the whole 'software defined network' thing :-P21:37
LyncosMech422: welcome to the real world21:37
LyncosThey are Cisco centric type of network guys ?21:38
Mech422Lyncos: Arista/Juniper21:38
LyncosSame thing21:38
Lyncos:-)21:38
LyncosWe are running cumulus linux switches by the way21:38
Mech422Lyncos: I think I need L2 - we do a lot of 'failover' and 'autoscaling' services..21:38
Mech422Lyncos: (We're a CDN)21:39
LyncosMech422: then you can check maybe with openvswitch21:39
Lyncosmaybe you can add an overlay21:39
Mech422Lyncos: yeah - we use openvswitch and vxlans currently21:39
Lyncosshould  still work in L321:39
klindgrensdake, I need to look to see what the rabbitmq setup is for cells v2.  But I think we would only have 2 changes to support cells v1 vs cells v2.21:39
LyncosFailover and HA is a bit different in L3.... but you can still use L2 stuff if you go with an overlay like ovs and vxlans21:40
Mech422Lyncos: I was looking for something tht could play nice with all 4 configs (OS+VM,OS+Docker, standalone docker, standalone VM)21:40
*** absubram has quit IRC21:40
Mech422Lyncos: since I'm not a network guy and didn't want to have to figure out 4 solutions :-P21:40
LyncosI know calico are working on a baremetal version of calico21:40
LyncosMech422: if you are a CDN ... I guess you don't need very low latency ?21:41
klindgren1.) being to start the nova-cells service and configure it to use either the cells.json or the database config (currently we use the database config), which tells cellsv1 nodes how to talk to each other.  You list the api cell and its rabbitmq servers, and the child cells and their rabbitmq servers.21:41
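(The cells.json klindgren refers to looked roughly like this in the cells v1 era; the names, URLs and rabbit hosts here are placeholders, not a real deployment.)

    {
        "parent": {
            "name": "parent",
            "api_url": "http://api.example.com:8774",
            "transport_url": "rabbit://rabbit-api.example.com",
            "weight_offset": 0.0,
            "weight_scale": 1.0,
            "is_parent": true
        },
        "cell1": {
            "name": "cell1",
            "api_url": "http://api.example.com:8774",
            "transport_url": "rabbit://rabbit-cell1.example.com",
            "weight_offset": 0.0,
            "weight_scale": 1.0,
            "is_parent": false
        }
    }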
Lyncos* Since you are21:41
Lyncosmy english not very good :-)21:41
Mech422Lyncos: actually - latency is killer for us - but we do production on BSD boxes...21:41
*** vhosakot has joined #openstack-kolla21:41
LyncosBSD Box21:41
Lyncosdamn21:41
Mech422Lyncos: we actually have 2-3 fulltime FBSD kernel committers on staff to tweak our network stacks21:41
LyncosVXLAN will add latency21:41
Lyncosfreebsd is not as bad as Openbsd21:42
Mech422Lyncos: yeah - this is just for QA though21:42
Lyncosok21:42
LyncosI'm just telling21:42
Mech422so the vxlan overhead is ok21:42
Mech422Lyncos: thanks :-)21:42
klindgren2.) possibly use/deploy another rabbitmq at the top level cell21:42
Mech422Lyncos: but it does sound like if I don't have some sort of L2 support, _someone_ is gonna whine21:42
LyncosMech422: Probably21:43
LyncosMech422: we are ready to live with this21:43
LyncosMech422: for us it is an enabler to be in public clouds if needed21:43
Lyncosso we make sure our stack only supports L3, which is what is used on the internet :-)21:43
Mech422Lyncos: makes sense :-)21:44
klindgrensdake, if you care we did a presentation/blog on how to move from non-cells to cellsv1 configuration: http://www.dorm.org/blog/converting-to-openstack-nova-cells-without-destroying-the-world/21:44
Mech422blah - I probably need IPV6 too21:44
klindgrenhttps://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/moving-to-nova-cells-without-destroying-the-world21:44
* Mech422 tries to figure out which NetEng he can blackmail into helping with this21:44
LyncosMech422: find one that knows linux/unix21:45
Lyncos:-)21:45
LyncosOthers won't see the benefit of it21:45
Mech422Lyncos: yeah.. like I said, our guys basically don't trust it :-P21:46
Mech422Lyncos: then again, I wouldn't trust me to do NetEng either :-P21:46
LyncosMech422: We ended up installing linux on the switches and managing them ourselves21:46
Mech422which means, they should really help, just to keep me from screwing it up :-P21:46
Lyncosbecause we only have one net guy who knows what SDN is21:47
*** salv-orl_ has joined #openstack-kolla21:47
Mech422Lyncos: Hmm - isn't that how linux got on servers to begin with? someone snuck it in to run apache/etc :-D21:47
Lyncos;)21:47
Lyncosthe cool thing is that we configure our switches with Ansible21:47
Lyncos:-)21:48
Lyncosinc0 that one was for you :-)21:48
inc0yeah, I know21:48
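(Since cumulus switches run Linux, switch config via Ansible can be as plain as templating the quagga config and bouncing the service; a toy playbook shape under that assumption, not Lyncos's actual roles - the template and group names are hypothetical.)

    # toy playbook: push BGP config to linux-based switches
    - hosts: switches
      become: true
      tasks:
        - name: render quagga BGP config  # quagga.conf.j2 is hypothetical
          template:
            src: quagga.conf.j2
            dest: /etc/quagga/Quagga.conf
          notify: restart quagga
      handlers:
        - name: restart quagga
          service:
            name: quagga
            state: restarted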
Mech422Lyncos: oh nice - our Arista switches supposedly have APIs to allow stuff like that...21:48
inc0man I really like what you did in your network;) will need to reproduce it in some lab21:48
Lyncosyeah but it's not the same thing21:48
Mech422Lyncos: but I'm not allowed to touch them - my technotard field makes them fail just by walking by21:48
Lyncosinc0 we can go to intel lab whenever you want21:49
inc0maybe even publish stuff so it can be used with trunk kolla21:49
Lyncosour lab amounts to my own PC21:49
*** JoseMello has quit IRC21:49
*** salv-orlando has quit IRC21:49
LyncosWhen we go live with our stuff we want to expose what we did to the world21:49
inc0cool21:50
Lyncosbecause I think what we are doing is really 201621:50
LyncosWe are coming from a legacy world21:50
LyncosMech422: these are the kind of switches we are using: http://www.qct.io/-c77c75c15921:51
LyncosJust do the maths and you'll never look back21:52
Lyncosit is like 200% less expensive than cisco (and I say 200% just because there are cisco dudes here)21:52
*** sdake_ has joined #openstack-kolla21:52
inc0I had quantas as storage servers21:53
inc0I like quanta21:53
Mech422Lyncos: oh nice :-)21:53
Lyncoswe bought Quanta because baremetal switches are all using the same chipset and were less expensive than Dell or HP21:53
Mech422Lyncos: I'll have to remember these next time lab budget rolls around...21:53
Lyncosprice for a 40gbit spine switch is ~$8k21:54
Lyncos32 ports I think21:54
*** sdake has quit IRC21:54
inc0:O21:54
Lyncosahh.. this is in Canadian $21:54
mag009_at least you have a budget !21:54
mag009_-_-21:54
inc0I'll buy one for my home!21:54
Lyncosinc0: yeah.. it shows that you have better pay at Intel than us21:55
inc0so I can put my raspberry pi on 40gig21:55
Lyncos:-)21:55
inc0but 8k per switch is nothing21:56
LyncosI agree21:56
LyncosCanadian Dollars21:56
Lyncos:-)21:56
inc0they're not that far off from USD21:56
inc0anyway21:56
inc0I'm going off guys, have a good one21:56
LyncosCAN$ is very low right now21:56
Lyncossee you later21:57
*** SiRiuS has quit IRC21:58
Mech422inc0: real quick - did you want that ceph patch against master?21:58
*** inc0 has quit IRC22:01
*** Lyncos has left #openstack-kolla22:03
*** ayoung has joined #openstack-kolla22:04
mag009_alright time to go home22:06
mag009_see you tomorrow22:06
*** shardy has quit IRC22:08
*** mbound_ has joined #openstack-kolla22:11
*** mbound has quit IRC22:14
*** daneyon_ has joined #openstack-kolla22:15
*** sdake has joined #openstack-kolla22:15
*** sdake_ has quit IRC22:18
*** daneyon_ has quit IRC22:20
openstackgerritJoshua Harlow proposed openstack/kolla: Stop using a global logger for all the things  https://review.openstack.org/32188422:20
*** mbound_ has quit IRC22:20
*** mbound has joined #openstack-kolla22:21
*** zhiwei has joined #openstack-kolla22:24
*** salv-orlando has joined #openstack-kolla22:24
*** salv-orl_ has quit IRC22:25
openstackgerritMerged openstack/kolla-kubernetes: Adding documentation for labels  https://review.openstack.org/32411022:27
openstackgerritMerged openstack/kolla-kubernetes: Allow an operator to run an action on all services  https://review.openstack.org/32567922:28
*** zhiwei has quit IRC22:28
openstackgerritMerged openstack/kolla-kubernetes: Add docs around bootstrapping and using the 'all' flag  https://review.openstack.org/32568122:28
*** sdake has quit IRC22:36
*** absubram has joined #openstack-kolla22:41
openstackgerritVikram Hosakote proposed openstack/kolla: Fix Magnum trustee issues  https://review.openstack.org/32616322:42
*** cu5 has quit IRC22:50
*** cu5_ has joined #openstack-kolla22:52
*** cu5_ has quit IRC22:53
*** diogogmt has quit IRC22:55
*** sdake has joined #openstack-kolla23:00
*** harlowja has quit IRC23:01
*** sdake_ has joined #openstack-kolla23:03
*** sdake has quit IRC23:05
*** cu5 has joined #openstack-kolla23:12
*** godleon has quit IRC23:19
*** salv-orlando has quit IRC23:23
*** salv-orlando has joined #openstack-kolla23:24
*** vhosakot has quit IRC23:32
*** vhosakot has joined #openstack-kolla23:42
*** diogogmt has joined #openstack-kolla23:44
*** vhosakot has quit IRC23:46
*** matrohon has quit IRC23:50
*** fragatin_ has joined #openstack-kolla23:58
*** sacharya has quit IRC23:58
*** ssurana has quit IRC23:59
