Wednesday, 2020-08-26

03:17 *** sapd1_x has joined #openstack-containers
03:25 *** rcernin has quit IRC
03:26 *** rcernin_ has joined #openstack-containers
04:07 *** dave-mccowan has quit IRC
04:18 *** jmlowe has quit IRC
04:29 *** ykarel has joined #openstack-containers
04:29 *** jmlowe has joined #openstack-containers
04:37 *** ykarel has quit IRC
04:38 *** ykarel has joined #openstack-containers
04:41 *** ykarel has quit IRC
04:44 *** ykarel has joined #openstack-containers
04:46 *** ykarel has quit IRC
04:48 *** ykarel has joined #openstack-containers
04:54 *** ykarel has quit IRC
04:56 *** ykarel has joined #openstack-containers
05:08 *** ykarel has quit IRC
05:10 *** ykarel has joined #openstack-containers
05:17 *** ykarel has quit IRC
05:18 *** ykarel has joined #openstack-containers
05:28 *** ykarel has quit IRC
05:31 *** ykarel has joined #openstack-containers
06:49 *** vishalmanchanda has joined #openstack-containers
08:10 *** mgoddard has quit IRC
08:26 *** mgoddard has joined #openstack-containers
08:27 *** flwang1 has joined #openstack-containers
08:41 *** nikparasyr has joined #openstack-containers
08:47 <flwang1> brtknr: spyros: around?
08:48 <brtknr> flwang1: morning!
08:48 *** iokiwi has quit IRC
08:48 <flwang1> brtknr: got a good sleep?
08:48 *** dioguerra has joined #openstack-containers
08:48 <brtknr> flwang1: ~3 hours
08:49 *** iokiwi has joined #openstack-containers
08:49 <brtknr> i had the day off yesterday
08:49 <flwang1> brtknr: oh, no
08:49 <flwang1> let's see if Spyros will join us, otherwise, we can cancel and talk later
08:50 <brtknr> but they are both getting better I think, we went to get tested for Covid... waiting for the result later today
08:51 <brtknr> I am pretty sure it's just a common cold as me and my partner have no symptoms and the babies have never been ill before
08:51 *** strigazi has joined #openstack-containers
08:52 <brtknr> apart from when they had their vaccines
08:54 <flwang1> oh, god bless you
08:54 <flwang1> i believe you will be ok
08:54 <flwang1> strigazi: hello there
08:55 <strigazi> flwang1: hello o/
08:56 <flwang1> strigazi: good to see you
08:56 <flwang1> team, add your topics on https://etherpad.opendev.org/p/magnum-weekly-meeting
09:00 <openstackgerrit> Diogo Guerra proposed openstack/magnum master: Add CT observations field to the database and API  https://review.opendev.org/737840
09:01 <flwang1> #startmeeting magnum
09:01 <openstack> Meeting started Wed Aug 26 09:01:01 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:01 *** openstack changes topic to " (Meeting topic: magnum)"
09:01 <openstack> The meeting name has been set to 'magnum'
09:01 <flwang1> #topic roll call
09:01 *** openstack changes topic to "roll call (Meeting topic: magnum)"
09:01 <flwang1> o/
09:01 <brtknr> o/
09:01 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Very-WIP: Drop hyperkube  https://review.opendev.org/748141
09:01 <dioguerra> o/
09:01 <strigazi> o/
09:02 <flwang1> thank you for joining, guys
09:03 <flwang1> shall we go through the topic list?
09:03 <strigazi> +1
09:04 <flwang1> ok, recently we (catalyst cloud) had a security review done by a 3rd party and we got some good comments to improve
09:05 <flwang1> hence you probably saw there is a patch i proposed to separate the CA for k8s, etcd and front-proxy, though we discussed this a long time ago
09:05 *** sapd1_x has quit IRC
09:06 <flwang1> the patch https://review.opendev.org/746864 has been rebased on the ca rotate patch and tested locally
09:06 <strigazi> We knew that before, right? That each node could contact etcd.
09:06 <flwang1> strigazi: yep
09:06 <flwang1> each node can use the kubelet cert to access etcd
09:06 <flwang1> to be more clear ^
09:07 <strigazi> or kube-proxy
09:07 <flwang1> yes
09:07 <strigazi> or any cert signed by the CA
09:07 <flwang1> so, please review that one asap, hopefully we can get it in this V cycle
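[Editor's note: the separation discussed above roughly corresponds to giving kube-apiserver distinct trust roots for client auth, etcd, and the front proxy. A minimal sketch of the relevant upstream kube-apiserver flags; the flag names come from Kubernetes itself, while the file paths are only an illustrative layout, not magnum's actual one.]

```shell
# Illustrative kube-apiserver invocation with per-component CAs.
# A cert signed by the etcd CA can no longer authenticate to the k8s API,
# and a kubelet cert can no longer reach etcd.
kube-apiserver \
  --client-ca-file=/etc/kubernetes/certs/kubernetes-ca.crt \
  --etcd-cafile=/etc/kubernetes/certs/etcd-ca.crt \
  --etcd-certfile=/etc/kubernetes/certs/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/certs/apiserver-etcd-client.key \
  --requestheader-client-ca-file=/etc/kubernetes/certs/front-proxy-ca.crt \
  --proxy-client-cert-file=/etc/kubernetes/certs/front-proxy-client.crt \
  --proxy-client-key-file=/etc/kubernetes/certs/front-proxy-client.key
```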
09:08 <strigazi> I have a question though
09:08 <flwang1> sure
09:08 <flwang1> i'm listening
09:08 <strigazi> We need this patch. I'm not against it by any means, it helps a lot.
09:09 <strigazi> The trust and trustee are in all nodes anyway. So one can get whatever certs they need, this is known, right?
09:09 <strigazi> And
09:09 <strigazi> What about serving the service account keypair via the API?
09:10 <strigazi> That's it
09:10 <flwang1> re the trust and trustee, yep. that's a good point, we can try to limit the request in the future to only allow it to come from a master node
09:11 <flwang1> though i don't know how yet
09:11 <strigazi> So RBAC on the magnum API
09:11 <strigazi> I see different trustees or application creds as a solution.
09:11 <flwang1> some silly (home-made) RBAC probably
09:11 <flwang1> strigazi: that also works
09:12 <strigazi> Why silly, it can't get better than this
09:12 <strigazi> different policy per role
09:12 <flwang1> sorry, i mean, openstack doesn't really have a good rbac design yet
09:13 <strigazi> This point needs discussion. Maybe a spec. I just wanted to mention it.
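[Editor's note: "different policy per role" on an OpenStack API would normally be expressed through oslo.policy. A hypothetical policy.yaml sketch; the rule names below are illustrative and may not match magnum's real policy targets.]

```yaml
# Hypothetical oslo.policy fragment: restrict certificate operations on the
# magnum API to dedicated roles, so worker-node credentials cannot request
# arbitrary signed certs. Rule names here are illustrative only.
"certificate:create": "role:k8s_master and project_id:%(project_id)s"
"certificate:get": "role:k8s_master and project_id:%(project_id)s"
```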
09:13 <strigazi> For the 2nd point?
09:14 <strigazi> Since we pass type in the API we could serve the service account RSA keypair
09:14 <flwang1> yep, it's a very good point. thanks for the reminder
09:15 <flwang1> strigazi: can you explain more about the benefit of serving the service account rsa keypair by api?
09:15 <strigazi> Add a new master NG
09:16 <strigazi> ATM we can't access the RSA keypair. It is a hidden (as it should be) parameter in the heat stack
09:16 <flwang1> master NG for what? resizing?
09:17 <strigazi> resizing is not blocked by this
09:17 <strigazi> adding a new one
09:17 <strigazi> master-ng-B
09:17 <flwang1> i can't see the point of having another master NG
09:17 <flwang1> for worker nodes, NG makes good sense
09:18 <strigazi> I want to use bigger master nodes, how do I do this?
09:18 <flwang1> but what's the value of multiple master NGs
09:18 <flwang1> nova resizing?
09:19 <strigazi> That's an option, but you see my point.
09:19 <flwang1> sure
09:19 <flwang1> very good comments, i appreciate it
09:20 <flwang1> next one?
09:20 <strigazi> +1
09:20 <flwang1> #topic PodSecurityPolicy and Calico
09:20 *** openstack changes topic to "PodSecurityPolicy and Calico (Meeting topic: magnum)"
09:20 <strigazi> What did they break again?
09:20 <flwang1> i'm still working on this, maybe you guys can help confirm
09:21 <flwang1> after using the --admission-controller-list label with PodSecurityPolicy, the calico pod can't start, so the cluster can't be started
09:22 <flwang1> with this post http://blog.tundeoladipupo.com/2019/06/01/Kubernetes,-PodSecurityPolicy-and-Kubeadm/ i think we need a dedicated psp for calico if we want to enable PodSecurityPolicy
09:22 <strigazi> we have a very privileged PSP for calico, it doesn't work?
09:23 <strigazi> calico is using a PSP already
09:23 <flwang1> strigazi: good to know. i haven't reviewed the code yet
09:23 <flwang1> i will do another test, and it would be nice if you guys can help test it as well.
09:23 <flwang1> PSP is becoming a common requirement for enterprise users
09:24 <flwang1> EKS is enabling it by default
09:24 <flwang1> btw, i just found our default admission list in the code is very old, should we update it?
09:24 <strigazi> sure
09:24 <strigazi> At CERN we have it in our CT
09:25 <flwang1> right
09:25 <strigazi> https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/calico-service-v3-3-x.sh#L18
09:25 <flwang1> i will propose a patch based on the default list from v1.16.x, sounds ok?
09:26 *** k_mouza has joined #openstack-containers
09:27 *** ykarel_ has joined #openstack-containers
09:27 <strigazi> sure
09:27 <strigazi> maybe for V we do the list for 1.19?
09:28 <flwang1> strigazi: can do
09:29 <strigazi> cool
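[Editor's note: for reference, an updated admission plugin list would be set through a cluster template label along these lines. The magnum label name is `admission_control_list`; the plugin set below is a sketch in the spirit of the v1.16-era defaults plus PodSecurityPolicy, and should be checked against the upstream admission-controllers documentation for the target release.]

```
openstack coe cluster template create ... \
    --labels admission_control_list="NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurityPolicy"
```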
09:29 <flwang1> re the calico psp, maybe i missed something, but at line https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/calico-service.sh#L30
09:29 *** ykarel has quit IRC
09:29 <strigazi> yeap, that's it
09:29 <flwang1> i can see this role name, but it seems we didn't create the role?
09:29 <strigazi> we do
09:30 <strigazi> https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/kube-apiserver-to-kubelet-role.sh#L125
09:30 *** ykarel_ has quit IRC
09:31 <flwang1> ok, got it, so we are using magnum.privileged as the psp
09:31 <strigazi> yes
09:31 <strigazi> same as GKE was doing (at least when I checked)
09:32 <flwang1> ok, i will test this again
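[Editor's note: granting a PSP to calico's service account generally takes the shape below. This is a generic sketch of the RBAC "use" pattern, not magnum's actual manifests; `magnum.privileged` is the PSP name from the discussion, the other names are illustrative.]

```yaml
# Generic sketch: allow calico's service account to "use" a privileged PSP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-magnum-privileged
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["magnum.privileged"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-magnum-privileged
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
```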
09:32 <flwang1> let's move on?
09:32 <strigazi> +1
09:32 <flwang1> #topic Add observations field to CT
09:32 *** openstack changes topic to "Add observations field to CT (Meeting topic: magnum)"
09:33 <flwang1> dioguerra: ^
09:33 *** ykarel has joined #openstack-containers
09:34 <dioguerra> Hey, so the idea of this is to add a new field visible to the user where we can add observations
09:34 <dioguerra> The idea came so that we could label the CT with DEPRECATED/PRODUCTION/BETA or something similar
09:35 <flwang1> does the field have to be an enum?
09:35 <flwang1> or can the list be defined by the admin via a config option?
09:35 <flwang1> sorry, i haven't read the code yet
09:35 <dioguerra> no, i made it so it is free text (other admins might want to add other observations like HA, Multimaster)
09:36 <dioguerra> you agree?
09:37 <flwang1> i'm not sure, if it's kind of like a label or tag, then if i want to have several tags for the CT, i have to do something like AAA-BBB-CCC, is it?
09:39 <dioguerra> it's not a label, it's a new field with free text
09:39 <dioguerra> so --observations <something>
09:39 *** rcernin_ has quit IRC
09:39 <flwang1> i understand
09:40 <flwang1> i'm just saying a free text field makes it like a free tag
09:40 <brtknr> It's basically a tag to filter cluster templates right? "observations" is a bit of a mouthful. Can we just call it "tags"?
09:40 *** k_mouza has quit IRC
09:41 <brtknr> does the current implementation allow the user to filter using this field?
09:41 <dioguerra> it is not a filter (although you might use it for that), it's just a ref http://paste.openstack.org/show/797160/
09:42 <dioguerra> brtknr: no
09:43 <flwang1> hmm... i understand this is neater than putting HA or Prod into the cluster template name, but I expect to make it more useful to deserve having a dedicated db field
09:44 <flwang1> dioguerra: i'm not saying i don't like the idea. actually it will be useful, i probably need a bit of time to think about it.
09:45 <dioguerra> well, the idea of doing filtering with the field crossed my mind. We can add it now if you would like, or later...
09:46 <brtknr> dioguerra: i think that would be the thing which would add value to this proposal
09:46 <flwang1> shall we move to the next topic?
09:47 <jakeyip> will it be easier to filter using tags instead of free text?
09:47 <flwang1> we have 15 mins left
09:47 <jakeyip> databases don't match well on TEXT
09:47 <jakeyip> sorry, cont' please
09:49 <dioguerra> in our usecase we usually only have some visible templates (3 to 4) so filtering is not really required. everything else is hidden
09:49 <dioguerra> jakeyip: a tag would be better for the DB yes. but that restricts what you want to put.
09:50 <jakeyip> do we have a description field?
09:50 <flwang1> i would suggest putting the discussion into the patch
09:50 <flwang1> not here
09:50 <flwang1> move on?
09:50 <jakeyip> +1
09:51 <flwang1> #topic Drop hyperkube https://review.opendev.org/748141
09:51 *** openstack changes topic to "Drop hyperkube https://review.opendev.org/748141 (Meeting topic: magnum)"
09:51 <flwang1> strigazi: ^
09:51 <flwang1> tell us more?
09:51 <strigazi> k8s community and some ex-openstack members thought we should not have too much fun with hyperkube and dropped it.
09:52 <strigazi> I have a solution there that gets a tarball with kubelet, kubectl, kubeadm and kube-proxy (90mb)
09:52 <strigazi> it works, running the kubelet from a bin
09:52 <strigazi> and the rest of the components from their respective images.
09:53 <strigazi> All good so far, now the problems
09:53 <flwang1> kubeadm?
09:53 <strigazi> flwang1: well, it is there
09:53 <strigazi> flwang1: can't skip it
09:53 <strigazi> even the kube-proxy binary, we don't need it
09:53 <flwang1> sounds like another breaking change
09:53 <strigazi> we need only kubelet and kubectl
09:54 <strigazi> flwang1: which one? kubeadm?
09:54 <brtknr> hmm why does k8s.gcr.io make other binaries available in containers but not kubelet I wonder
09:55 <strigazi> flwang1: I just mention what is in the tarball. Is it clear?
09:55 <flwang1> strigazi: i wonder if the new change will allow old clusters to be upgraded
09:55 <brtknr> i suppose that is why the PS is "Very WIP"
09:55 <strigazi> flwang1: there is literally nothing we can do for clusters that reference k8s.gcr.io/hyperkube.
09:56 <strigazi> flwang1: if your clusters use <my-registry>/hyperkube, we can build a hyperkube
09:57 *** k_mouza has joined #openstack-containers
09:57 <dioguerra> i need to go o/
09:57 <strigazi> brtknr: does it matter? They decided to stop building it. (the reason is CVEs in the debian base image)
09:58 <strigazi> brtknr: flwang1: hello?
09:58 <flwang1> strigazi: i appreciate the work, my only concern is we need to work out a design that makes sure old clusters can be upgraded
09:59 <strigazi> flwang1: what is your situation? (regarding "there is literally nothing we can do for clusters that reference k8s.gcr.io/hyperkube" && "if your clusters use <my-registry>/hyperkube, we can build a hyperkube")
09:59 <flwang1> we're getting hyperkube from dockerhub/catalystcloud
10:00 <strigazi> pro account i guess
10:00 <flwang1> now i'm trying to build hyperkube for v1.19.x
10:00 <strigazi> the free account won't cut it any more
10:00 <strigazi> it's relatively easy. But let me rephrase
10:01 <strigazi> Shall we move to the binary for V, so that we don't have to maintain a new image?
10:01 <strigazi> brtknr: ^^ flwang1 ^^
10:02 <flwang1> strigazi: moving to the binary is our goal i think, and we don't have a choice
10:02 <strigazi> flwang1: easy to build, hard to maintain.
10:03 <flwang1> you mean maintain the binary?
10:03 <strigazi> flwang1: no, the image
10:03 <flwang1> i see
10:03 <brtknr> I'm okay with that, I echo flwang's concern that existing clusters can also be upgraded, which should be possible.
10:04 <flwang1> yep, but again, as a public cloud with a GAed service, we can't break the upgrade path
10:04 <flwang1> though we should be able to do magic in upgrade-k8s.sh
10:05 <flwang1> at least, the good thing is we don't have to replace the operating system
10:05 <strigazi> The upstream project broke it. So for V we do the binary, hoping they won't break it again.
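[Editor's note: the binary approach under discussion amounts to fetching upstream's node-binaries tarball instead of pulling a hyperkube image. A minimal illustration; the KUBE_TAG value and extract path are assumptions, while the URL pattern follows upstream's published release layout.]

```shell
# Derive the upstream node-binaries tarball URL for a given kube tag.
# KUBE_TAG is an illustrative stand-in for the cluster label value.
KUBE_TAG="v1.19.0"
TARBALL_URL="https://dl.k8s.io/${KUBE_TAG}/kubernetes-node-linux-amd64.tar.gz"
echo "${TARBALL_URL}"
# On a real node one would then extract only the needed binaries, e.g.:
#   curl -sSL "${TARBALL_URL}" \
#     | tar -xz -C /srv/magnum kubernetes/node/bin/kubelet kubernetes/node/bin/kubectl
```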
10:05 <flwang1> that's a good excuse for us, but we can't use it for our public cloud customers unfortunately :(
10:06 <flwang1> strigazi: i will review your patch and see how we can resolve the upgrade issue
10:09 <flwang1> strigazi: brtknr: anything else?
10:09 <strigazi> I'm good
10:10 <brtknr> strigazi: any reason not to use binaries for everything?
10:11 <brtknr> there is also a server binaries tarball i assume is for the master node
10:11 <strigazi> brtknr: they are 300mb and I think it is more secure and elegant running in containers.
10:12 <flwang1> ok, let me end the meeting first
10:12 <flwang1> #endmeeting
10:12 *** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"
10:12 <openstack> Meeting ended Wed Aug 26 10:12:27 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
10:12 <flwang1> thank you guys
10:12 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-08-26-09.01.html
10:12 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-08-26-09.01.txt
10:12 <openstack> Log:            http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-08-26-09.01.log.html
10:14 <strigazi> brtknr: also, kube-proxy is easier to run in containers since they do some iptables magic in the image.
10:14 <strigazi> brtknr: what are the arguments for using binaries?
10:17 <brtknr> strigazi: i assumed it might involve less image download
10:17 <brtknr> 300 mb on master, 100 mb on workers
10:19 <flwang1> strigazi: btw, is CERN enabling PodSecurityPolicy in your template?
10:19 <strigazi> brtknr: it won't, the server tar contains all bins and images.
10:20 <brtknr> strigazi: yeah, 300mb is less than what is currently downloaded isn't it?
10:21 <strigazi> brtknr: no, the hyperkube image is less than 300mb compressed.
10:23 <brtknr> strigazi: i see
10:26 <brtknr> it would be good to avoid downloading kube-proxy twice
10:27 <strigazi> I couldn't make kube-proxy run outside the image. As a matter of fact, kube-proxy should be a ds.
10:28 <strigazi> brtknr: because of the iptables magic I mentioned
10:29 <brtknr> ds=daemonset?
10:29 <brtknr> i was wondering if we should run all the kube services as static pods?
10:30 <strigazi> it is a good-ish move
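[Editor's note: running kube-proxy as a DaemonSet ("ds"), as suggested above, generally takes the shape below. This is a trimmed, generic sketch of the pattern; the image tag, ConfigMap name, and paths are illustrative, not magnum's actual manifests.]

```yaml
# Generic sketch of kube-proxy as a DaemonSet in kube-system.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      hostNetwork: true
      tolerations:
      - operator: Exists          # run on masters too
      containers:
      - name: kube-proxy
        image: k8s.gcr.io/kube-proxy:v1.19.0
        command: ["kube-proxy", "--config=/var/lib/kube-proxy/config.conf"]
        securityContext:
          privileged: true        # needed for the iptables manipulation
        volumeMounts:
        - name: kube-proxy-config
          mountPath: /var/lib/kube-proxy
      volumes:
      - name: kube-proxy-config
        configMap:
          name: kube-proxy
```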
11:48 <openstackgerrit> Diogo Guerra proposed openstack/magnum master: Add CT observations field to the database and API  https://review.opendev.org/737840
13:17 *** dave-mccowan has joined #openstack-containers
13:22 *** sapd1_x has joined #openstack-containers
13:54 *** kevko has quit IRC
15:05 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: ci: Log in to DockerHub using docker_login  https://review.opendev.org/747867
15:25 *** sapd1_x has quit IRC
15:55 *** ykarel is now known as ykarel|away
16:00 *** ykarel|away has quit IRC
16:30 *** nikparasyr has left #openstack-containers
16:41 *** k_mouza has quit IRC
16:44 *** k_mouza has joined #openstack-containers
16:48 *** k_mouza has quit IRC
17:04 *** sapd1_x has joined #openstack-containers
17:22 *** k_mouza has joined #openstack-containers
17:26 *** k_mouza has quit IRC
17:47 *** sapd1_x has quit IRC
17:51 <openstackgerrit> Vishal Manchanda proposed openstack/magnum-ui master: [DNM] Test magnum-ui with horizon jobs on focal  https://review.opendev.org/747169
17:56 *** k_mouza has joined #openstack-containers
18:01 *** k_mouza has quit IRC
18:15 <openstackgerrit> Vishal Manchanda proposed openstack/magnum-ui master: [DNM] Test magnum-ui with horizon jobs on focal  https://review.opendev.org/747169
19:57 *** k_mouza has joined #openstack-containers
20:01 *** k_mouza has quit IRC
20:37 *** vishalmanchanda has quit IRC
21:33 *** k_mouza has joined #openstack-containers
22:06 *** k_mouza has quit IRC
22:40 *** rcernin has joined #openstack-containers

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!