*** eernst has quit IRC | 00:13 | |
*** tmhoang has quit IRC | 02:04 | |
*** pcaruana has joined #kata-dev | 04:22 | |
*** sameo has joined #kata-dev | 05:22 | |
*** lpetrut has joined #kata-dev | 06:01 | |
*** noahm has quit IRC | 06:25 | |
*** sameo has quit IRC | 06:48 | |
*** tmhoang has joined #kata-dev | 06:57 | |
*** sgarzare has joined #kata-dev | 07:02 | |
*** sameo has joined #kata-dev | 07:29 | |
*** jodh has joined #kata-dev | 07:36 | |
*** gwhaley has joined #kata-dev | 07:58 | |
*** davidgiluk has joined #kata-dev | 08:01 | |
*** sgarzare has quit IRC | 09:11 | |
*** sgarzare has joined #kata-dev | 10:28 | |
*** gwhaley has quit IRC | 11:00 | |
*** gwhaley has joined #kata-dev | 12:11 | |
*** igordc has joined #kata-dev | 12:59 | |
*** devimc has joined #kata-dev | 13:21 | |
kata-irc-bot2 | <eric.ernst> i used to think setting up k8s was hard, then I tried hacking in my own changes/components. But, it's starting to work: | 13:45 |
```
eernst@eernstworkstation:~/$ kubectl describe runtimeclasses
Name:         kata-qemu
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"node.k8s.io/v1beta1","handler":"kata-qemu","kind":"RuntimeClass","metadata":{"annotations":{},"name":"kata-qemu"},"overhead...
API Version:  node.k8s.io/v1beta1
Handler:      kata-qemu
Kind:         RuntimeClass
Metadata:
  Creation Timestamp:  2019-05-28T13:51:48Z
  Resource Version:    1148
  Self Link:           /apis/node.k8s.io/v1beta1/runtimeclasses/kata-qemu
  UID:                 ca6fe15c-91d9-4096-9658-1bc402f52308
Overhead:
  Pod Fixed:
    Cpu:     250m
    Memory:  100
```
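For context, a sketch of the manifest behind that output, reconstructed from the last-applied-configuration annotation shown above. The memory overhead quantity is truncated in the paste ("100"); 100Mi below is an assumption.
```sh
# Reconstructed from the annotation in the paste above; memory value assumed.
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu
overhead:
  podFixed:
    cpu: 250m
    memory: 100Mi
EOF
```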
*** lpetrut has quit IRC | 14:30 | |
*** tmhoang has quit IRC | 14:48 | |
*** lpetrut has joined #kata-dev | 14:59 | |
*** dklyle has joined #kata-dev | 14:59 | |
*** lpetrut has quit IRC | 15:38 | |
*** devimc has quit IRC | 15:53 | |
*** sameo has quit IRC | 15:56 | |
*** devimc has joined #kata-dev | 16:49 | |
*** sgarzare has quit IRC | 16:53 | |
*** jodh has quit IRC | 17:00 | |
*** sameo has joined #kata-dev | 17:03 | |
*** irclogbot_2 has quit IRC | 17:17 | |
*** irclogbot_2 has joined #kata-dev | 17:17 | |
kata-irc-bot2 | <mvedovati> hi @salvador.fuentes, can you clarify what's the intended use of kata-jenkins-staging.eastus2.cloudapp.azure.com ? I did not understand it well during the previous call | 17:21 |
kata-irc-bot2 | <salvador.fuentes> Hi @mvedovati, it will serve as staging area so anyone can create or modify jobs before moving them to the production jenkins. | 17:26 |
kata-irc-bot2 | <mvedovati> @salvador.fuentes, thanks, that makes sense | 17:31 |
*** igordc has quit IRC | 17:34 | |
*** igordc has joined #kata-dev | 17:55 | |
*** igordc has quit IRC | 17:55 | |
*** igordc has joined #kata-dev | 17:55 | |
gwhaley | ah, there is davidgiluk .... (hmm, dinner time David!) | 18:43 |
gwhaley | David, VM-ish question ... err, how many VMs (qemu-kvm's) do you think I should reasonably be able to launch in parallel - any limits etc.? | 18:43 |
davidgiluk | gwhaley: you mean all starting at the same time? | 18:43 |
gwhaley | coz, I'm looking at parallel launch of katas under kubernetes, and if we make a big 'deployment', then yes, it will spawn them off pretty much all at once | 18:44 |
gwhaley | actually in batches of 10 I think, but pretty quickly in succession.... | 18:44 |
gwhaley | guess what, it's not right happy when I ask it to do 500 :-) | 18:44 |
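A minimal sketch of the kind of mass launch gwhaley describes; the deployment name, image, and command are illustrative assumptions, and only runtimeClassName: kata-qemu and the 500-replica count come from the conversation.
```sh
# Hypothetical stress deployment: 500 pods, each backed by a Kata VM.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kata-stress
spec:
  replicas: 500
  selector:
    matchLabels:
      app: kata-stress
  template:
    metadata:
      labels:
        app: kata-stress
    spec:
      runtimeClassName: kata-qemu
      containers:
      - name: sleeper
        image: busybox
        command: ["sleep", "3600"]
EOF
```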
davidgiluk | gwhaley: There shouldn't be anything too special; it comes down to how many cores you have and if you pile on too many you'll hit timeouts in guest kernels etc | 18:44 |
gwhaley | What I think ultimately happens is the deploy will be slow enough that we start triggering some grpc timeouts - which probably sit at 60s or so | 18:44 |
gwhaley | yeah, I think we hit timeouts in the k8s stack and/or our own go code. I'll dig and dig... | 18:45 |
davidgiluk | gwhaley: Just fill up a box with 56 core platinums, a load of RAM and a few snazzy SSDs and it'll be fine :-) | 18:46 |
gwhaley | heh - well - I have 88 cores and 377G of RAM ;-) - but, am already running in a minikube, so am already nested virt, and limited to 'only' 64 cores and I gave it 300G of RAM ;-) | 18:46 |
*** lpetrut has joined #kata-dev | 18:47 | |
davidgiluk | gwhaley: So the hard question is how to calculate what's a safe limit | 18:47 |
davidgiluk | gwhaley: Because you need to be able to answer the question for a piddly little 4 core or a giant like that | 18:48 |
gwhaley | davidgiluk: yeah, I know :-( my 'other' box is set to 2-core right now ;-) | 18:48 |
davidgiluk | gwhaley: How many cores are you giving each VM? | 18:48 |
gwhaley | what I really want to see is what is the scalability limit/bottleneck we are hitting - I suspect just pure timeout due to 'busy machine', but, there might be some linear race/lock limitation somewhere as well. | 18:49 |
gwhaley | kata default, so I think 1cpu and 2G of RAM | 18:49 |
davidgiluk | gwhaley: IMHO you're probably OK at say adding 0.25 CPU for each VM for overhead, and say 512MB RAM and making sure you don't top the totals | 18:50 |
davidgiluk | gwhaley: Nesting however....hmm that varies a lot | 18:50 |
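A back-of-the-envelope version of davidgiluk's rule of thumb, plugged with gwhaley's minikube numbers from above (64 cores, 300G RAM, 1 vCPU + 2G per Kata VM); a sketch, not a measured limit, and it ignores the nesting caveat.
```sh
# Rough capacity estimate; all figures from the conversation above.
cores=64; ram_gb=300
cpu_per_vm=1.25   # 1 guest vCPU + 0.25 host overhead per VM
ram_per_vm=2.5    # 2G guest RAM + 0.5G host overhead per VM
echo "CPU-bound limit: $(echo "$cores / $cpu_per_vm" | bc) VMs"   # ~51
echo "RAM-bound limit: $(echo "$ram_gb / $ram_per_vm" | bc) VMs"  # 120
```
So on that box the CPU budget, not RAM, runs out first, well short of the 500 pods requested.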
gwhaley | I need to see if k8s has any inbuilt mechanisms to rate limit as well already. If we have limits, then we should try to at least document them and go the path of 'least surprise'. | 18:50 |
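As far as I know there is no user-facing knob to rate-limit initial pod creation for a Deployment; one manual workaround is to ramp the replica count up in stages. A sketch only: the step sizes, timeout, and the kata-stress name (from the hypothetical deployment above) are all assumptions.
```sh
# Scale up in stages, letting each batch settle before the next,
# instead of requesting all 500 replicas at once.
for n in 50 100 200 350 500; do
  kubectl scale deployment/kata-stress --replicas="$n"
  kubectl rollout status deployment/kata-stress --timeout=120s
done
```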
gwhaley | thx. - /me scribbles those on pad.. | 18:50 |
davidgiluk | gwhaley: but watch out for IO then | 18:51 |
*** lpetrut has quit IRC | 18:51 | |
gwhaley | davidgiluk: so, I reckon I hit a k8s 'thundering herd' timeout storm doing the large parallel launch - this should be fun to chase and graph tomorrow.... | 19:07 |
davidgiluk | yeh very hard problem I guess | 19:08 |
*** eernst has joined #kata-dev | 19:13 | |
*** davidgiluk has quit IRC | 19:13 | |
*** gwhaley has quit IRC | 19:22 | |
*** eernst has quit IRC | 19:25 | |
*** pcaruana has quit IRC | 19:30 | |
*** tmhoang has joined #kata-dev | 21:29 | |
*** devimc has quit IRC | 22:33 | |
*** sameo has quit IRC | 23:19 |