Wednesday, 2019-04-17

*** sapd1_x has joined #openstack-containers  00:11
*** absubram has quit IRC  00:16
*** sapd1_x has quit IRC  00:30
*** ricolin has joined #openstack-containers  00:56
<jakeyip> I think it's ok I got kubeconfig to work. Never mind me I think I was just confused.  00:58
*** threestrands has joined #openstack-containers  01:24
*** hongbin has joined #openstack-containers  01:37
*** itlinux has joined #openstack-containers  02:09
<openstackgerrit> Feilong Wang proposed openstack/magnum master: [fedora_atomic] Support auto healing for k8s  https://review.openstack.org/631378  02:47
*** itlinux has quit IRC  02:52
*** itlinux has joined #openstack-containers  02:54
<openstackgerrit> Feilong Wang proposed openstack/magnum master: [k8s] Master nodes auto healing  https://review.openstack.org/653270  03:16
*** ykarel|away has joined #openstack-containers  03:19
*** chhagarw has joined #openstack-containers  03:24
*** chhagarw has quit IRC  03:28
*** hongbin has quit IRC  03:38
*** itlinux has quit IRC  03:43
*** chhagarw has joined #openstack-containers  04:09
*** ykarel|away has quit IRC  04:27
*** udesale has joined #openstack-containers  04:37
*** ykarel|away has joined #openstack-containers  04:43
*** sapd1 has quit IRC  04:50
*** ramishra has joined #openstack-containers  04:50
<openstackgerrit> Feilong Wang proposed openstack/magnum master: [k8s] Master nodes auto healing  https://review.openstack.org/653270  04:54
*** absubram has joined #openstack-containers  05:01
*** absubram has quit IRC  05:06
*** absubram has joined #openstack-containers  05:10
*** ykarel|away is now known as ykarel  05:10
*** ivve has joined #openstack-containers  05:13
*** janki has joined #openstack-containers  05:49
*** absubram has quit IRC  06:01
*** pcaruana has joined #openstack-containers  06:11
*** chhagarw has quit IRC  06:22
*** chhagarw has joined #openstack-containers  06:22
*** udesale has quit IRC  06:37
*** udesale has joined #openstack-containers  06:38
*** udesale has quit IRC  06:39
*** udesale has joined #openstack-containers  06:44
*** udesale has quit IRC  06:46
*** udesale has joined #openstack-containers  06:56
*** yolanda_ has joined #openstack-containers  07:07
*** flwang1 has joined #openstack-containers  07:34
*** gbialas has joined #openstack-containers  07:36
* flwang1 is so excited that strigazi finally +2ed his autoscaling patch  07:38
*** gbialas has quit IRC  07:40
*** ykarel is now known as ykarel|lunch  07:43
*** alisanhaji has joined #openstack-containers  07:54
*** ttsiouts has joined #openstack-containers  08:06
*** rcernin has quit IRC  08:13
*** ttsiouts has quit IRC  08:17
*** ttsiouts has joined #openstack-containers  08:17
*** ttsiouts has quit IRC  08:22
*** ttsiouts has joined #openstack-containers  08:28
*** rcernin has joined #openstack-containers  08:28
*** ykarel|lunch is now known as ykarel  08:31
<openstackgerrit> Merged openstack/magnum master: [fedora_atomic] Support auto healing for k8s  https://review.openstack.org/631378  08:36
<openstackgerrit> Merged openstack/magnum stable/stein: Support multi DNS server  https://review.openstack.org/652784  08:48
<openstackgerrit> Merged openstack/magnum master: Fix registry on k8s_fedora_atomic  https://review.openstack.org/652096  08:48
*** udesale has quit IRC  08:51
<flwang1> strigazi: around for a chat?  09:02
<flwang1> just ping me when you're available  09:03
<strigazi> flwang1: ping  09:04
<flwang1> strigazi: pong  09:04
<flwang1> strigazi: thank you for approving my patch :)  09:04
<strigazi> flwang1 is so excited that strigazi finally +2ed his autoscaling patch, works great!!!  09:04
<strigazi> flwang1: I'll override the v1.0 image in dockerhub with Thomas' fix  09:05
<strigazi> makes sense?  09:05
<flwang1> i need to resolve the conflicts before backporting to stable/stein  09:05
<strigazi> or v2.0  09:05
<flwang1> strigazi: better override  09:05
<strigazi> I prefer tags to be immutable, but I don't mind for something that is not merged  09:05
<strigazi> I mean, not merged on the magnum side  09:06
<flwang1> strigazi: it's happening now  09:06
<flwang1> so just do it  09:06
<flwang1> as for auto healing, i have another idea i'd like to get your comments on  09:06
<strigazi> before getting to that, I think I commented on all your patches, did I miss something?  09:07
<openstackgerrit> Merged openstack/magnum master: Dropping the py35 testing  https://review.openstack.org/652396  09:07
<flwang1> strigazi: i'd like to get a general opinion on this https://review.openstack.org/#/c/643225/  09:08
<flwang1> and this https://review.openstack.org/#/c/648590/ if you can  09:08
<strigazi> pushed openstackmagnum/cluster-autoscaler:v1.0  09:11
<flwang1> strigazi: cool  09:11
<strigazi> not it has the fix https://github.com/kubernetes/autoscaler/pull/1873  09:11
<flwang1> why?  09:11
<flwang1> shouldn't we release the latest version otherwise the scale down doesn't work  09:12
<strigazi> s/not it has the fix https://github.com/kubernetes/autoscaler/pull/1873/NOW it has the fix https://github.com/kubernetes/autoscaler/pull/1873/  09:13
<strigazi> all good, right?  09:14
<flwang1> i'm happy now :)  09:14
<strigazi> https://review.openstack.org/#/c/648590 +2  09:16
*** ttsiouts has quit IRC  09:17
<flwang1> thanks  09:18
*** ttsiouts has joined #openstack-containers  09:18
<strigazi> flwang1 for https://review.openstack.org/#/c/643225/  09:18
<flwang1> https://review.openstack.org/#/c/643225/  thoughts?  09:18
<strigazi> hehe, i was faster :)  09:18
<flwang1> well, you win  09:19
<strigazi> I left a comment. After rebasing, it works fine. Thanks! we just need the docs change  09:19
<strigazi> patch looks fine  09:19
*** ttsiouts_ has joined #openstack-containers  09:19
<flwang1> strigazi: the background is, we (catalyst cloud) would like out-of-box keystone auth support just like GKE  09:19
*** ttsiouts has quit IRC  09:19
<flwang1> users don't have to deal with the policy, they just use it  09:19
<strigazi> I just have a related question. Do you have a role (not the admin one) that can do role assignments?  09:20
*** udesale has joined #openstack-containers  09:20
<flwang1> strigazi: at project level, we do  09:20
<flwang1> we're using Adjutant to do that  09:21
<flwang1> Adjutant helps us do the work that needs the admin role  09:21
<flwang1> for example, sign up  09:21
<flwang1> and inviting new users and assigning roles for them  09:21
<flwang1> for this case, we will create those new roles in keystone and add them into Adjutant, then the project admin can assign those roles to project members  09:22
<flwang1> strigazi: does that make sense?  09:22
<strigazi> will the project admin use the openstack cli?  09:23
<strigazi> yes it will  09:24
<flwang1> strigazi: at this moment, i'm not too sure if Adjutant can support the cli for that  09:24
<flwang1> but for the dashboard, we do  09:25
<strigazi> makes sense, we have something built in-house a long time ago. anyway  09:25
<strigazi> did I miss any other patch  09:25
<flwang1> i will rebase the patch and update the user guide doc  09:25
<strigazi> ?  09:25
<flwang1> no, you made my day  09:25
<flwang1> this is the happiest day for me in the last 3 months  09:26
<strigazi> I hope you are exaggerating :)  09:26
<strigazi> but in context, makes some sense  09:26
<flwang1> :D  09:27
<flwang1> before talking about rolling upgrade, give me 5 mins about the master node auto healing?  09:27
<strigazi> +1  09:28
<strigazi> I mean, go on  09:28
<flwang1> thanks  09:29
<openstackgerrit> Feilong Wang proposed openstack/magnum master: [k8s] Master nodes auto healing  https://review.openstack.org/653270  09:29
<flwang1> ^ this is my general idea  09:30
<flwang1> mostly here https://review.openstack.org/#/c/653270/3/magnum/service/periodic.py@126  09:30
<strigazi> flwang1: it makes some sense  09:32
<strigazi> I think with the template changes in upgrade, it *may* work  09:32
<flwang1> strigazi: exactly, this work may need your work in the upgrade patch  09:33
<strigazi> I'm not sure I agree with the octavia status check  09:33
<strigazi> it should ask kubernetes  09:33
<flwang1> strigazi: it's the master, my friend  09:33
<flwang1> we can't get the health status for each master  09:33
<strigazi> correct, if the request times out  09:33
<strigazi> but not through octavia, not everyone has it :)  09:34
<flwang1> we can only get the overall api status or the etcd api status  09:34
<flwang1> strigazi: i understand that  09:34
<flwang1> is there any other way to get the status for each master? so that i can make it pluggable?  09:34
*** threestrands has quit IRC  09:36
<strigazi> I mean, we can ask the api_address directly, but we need to handle the cases, fip enabled and so on  09:36
<flwang1> strigazi: yep, i thought that as well  09:36
<flwang1> can we use the ca certs in the magnum db to talk to the etcd api?  09:37
<strigazi> flwang1 unfortunately yes, we must change that  09:37
<flwang1> and will it still work if we generate different ca certs for api, etcd and front-proxy?  09:38
<strigazi> I think I mentioned it in the past, we need a new ca for it  09:38
<flwang1> i know  09:38
<flwang1> there is a security hole  09:38
<flwang1> it's on my list  09:38
<strigazi> ok  09:38
<flwang1> if i can make a base function like _give_me_the_broken_master(), are you happy with the overall design?  09:39
<strigazi> +1  09:39
<flwang1> cool  09:39
<flwang1> deal  09:39
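[A rough sketch of the `_give_me_the_broken_master()` idea discussed above: probe each master's kube-apiserver health endpoint and collect the ones that fail. The function name, the `/healthz` probe, and the injectable `probe` hook are illustrative assumptions, not code from the Magnum patch.]

```python
import urllib.request


def find_broken_masters(master_addresses, probe=None, timeout=5):
    """Return the masters whose API health check fails.

    ``probe`` can be injected for testing or for non-default setups
    (e.g. clusters without floating IPs, as mentioned in the chat).
    """
    def default_probe(address):
        try:
            url = "https://%s/healthz" % address
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                # kube-apiserver answers a bare "ok" body on /healthz
                return resp.read() == b"ok"
        except OSError:
            return False

    probe = probe or default_probe
    return [addr for addr in master_addresses if not probe(addr)]
```

[The periodic task could then feed this list to the resize/replace logic; the magnum-conductor side would need the per-master CA/cert handling flwang1 and strigazi discuss above.]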
*** lpetrut has joined #openstack-containers  09:40
<flwang1> following that  09:40
<flwang1> actually, based on our current resize api, worker node auto healing can be done in a similar way. i'd like to call it auto_healing_mode=hard, because the user can always keep the same node count  09:41
<flwang1> by checking the health_status_reason, we can know which worker is down, and then we can call the resize api to replace the node  09:41
<strigazi> what is soft?  09:41
<flwang1> soft is our current way with the CA  09:42
<strigazi> ok  09:42
<openstackgerrit> Merged openstack/magnum master: [fedora atomic] Allow traffic between k8s workers  https://review.openstack.org/652931  09:42
<flwang1> what do you think?  09:42
<strigazi> hard > soft  09:42
<flwang1> you like the 'hard' (rock) way?  09:43
<strigazi> yes, I don't know if it will conflict with the CA  09:44
<flwang1> strigazi: it won't  09:44
<flwang1> the new label 'auto_healing_mode' can manage that  09:45
<strigazi> if a node is NotReady, the CA won't do anything?  09:45
<strigazi> I'll be back in a few mins <5  09:46
<flwang1> assume auto_healing_enabled=True, if auto_healing_mode=hard, magnum-conductor will take over, otherwise CA+Draino will take over  09:46
<strigazi> back  09:48
<strigazi> flwang1: ok, but  09:48
<strigazi> if auto_scaling_enabled=True and auto_healing_mode=hard ?  09:48
<flwang1> if auto_scaling_enabled=True and auto_healing_mode=hard, that's the magnum-conductor way  09:49
<flwang1> if auto_healing_enabled=False, no auto healing for either master or node  09:49
<flwang1> as long as we have containerized masters, we won't have to care about the master  09:50
<flwang1> at that moment, the auto_healing_mode may make more sense  09:50
<strigazi> if auto_scaling_enabled=True and auto_healing_mode=hard, that's the magnum-conductor way. OK. but what does the CA do when it sees a node as not ready?  09:51
<flwang1> strigazi: AFAICT, the CA won't automatically remove a bad node  09:54
<flwang1> that's the work that needs to be done by Draino  09:54
<flwang1> but in this case, Draino won't be installed  09:54
<strigazi> draino just drains, right?  09:54
<flwang1> cordon and drain  09:54
<strigazi> And then the CA scales down because it is empty, right?  09:55
<flwang1> after the drain, the node will be empty, and the CA will remove the empty node  09:55
<flwang1> yes  09:55
<strigazi> ok  09:55
<strigazi> actually soft and hard can "co-exist" in a way. It is whoever comes first, conductor or CA  09:56
<flwang1> strigazi: it could be, but that would be messy and hard to debug  09:56
<flwang1> but technically, yes  09:56
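[The dispatch scheme flwang1 lays out above can be summed up in a few lines. The label names mirror the labels proposed in the chat (`auto_healing_enabled`, `auto_healing_mode`); the helper itself is a hypothetical illustration, not code from the patch.]

```python
def pick_healer(labels):
    """Decide which component repairs unhealthy nodes, per the chat above."""
    if not labels.get("auto_healing_enabled", False):
        return None                      # no auto healing for master or worker
    if labels.get("auto_healing_mode") == "hard":
        return "magnum-conductor"        # replace bad nodes via the resize API
    return "cluster-autoscaler+draino"   # soft: draino drains, CA removes the empty node
```

[So "hard" and "soft" never race: exactly one healer owns the cluster for a given label combination, which is the debuggability point flwang1 makes about not letting them co-exist.]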
*** gbialas has joined #openstack-containers  09:58
<flwang1> so far, i think you generally like the idea?  09:58
<strigazi> sounds good  09:59
<flwang1> cool, i will polish it  09:59
<flwang1> and that does need your upgrade patch  09:59
<flwang1> what's the status of the last part?  09:59
<strigazi> I'm finishing the software deployment of the master and we are good. The templates are a bit messy but it almost works. I don't have anything else to do today  10:01
<strigazi> I have only a question  10:02
<strigazi> Lixian tried to change something similar, shall we change the template version to newton?  10:03
<strigazi> to have conditions  10:03
<flwang1> strigazi: personally i think we should  10:03
<strigazi> now?  10:03
<flwang1> i don't think anyone can use magnum without upgrading heat to queens  10:03
<strigazi> raelly?  10:04
<strigazi> really?  10:04
<flwang1> fine  10:04
<strigazi> due to the agent?  10:04
<flwang1> the agent issue is a big problem  10:04
<strigazi> fyi, we upgraded heat to rocky yesterday  10:04
<flwang1> but if they don't care about region, then it could be  10:04
<flwang1> but  10:04
<flwang1> i still think sticking to a very old heat version means we're losing more than we gain  10:05
<flwang1> we can send an email to the community to survey if any company is using Magnum with an old heat (older than newton)  10:06
<flwang1> if there is nobody, we're safe to go  10:06
<strigazi> or queens  10:07
<strigazi> ok, will you send an email?  10:07
<flwang1> strigazi: i can do that  10:08
<flwang1> and you can just focus on your work  10:08
<strigazi> ok  10:08
<strigazi> have you tested my patches for 1.14?  10:08
<strigazi> conformance passes for calico and flannel  10:08
<flwang1> you mean the calico one? or the upgrade one?  10:08
<strigazi> coredns/calico/1.14  10:09
*** strigazi has quit IRC  10:09
<flwang1> i reviewed them separately, but didn't test them together  10:10
<flwang1> i do have some questions  10:10
*** strigazi has joined #openstack-containers  10:10
<flwang1> 1. for the coreDNS, https://review.openstack.org/#/c/652058/2/magnum/drivers/common/templates/kubernetes/fragments/core-dns-service.sh@74  10:10
<flwang1> so with the new coreDNS version, we don't have to provide the network cidr to coreDNS?  10:11
<strigazi> flwang1: ok, easy, other than that, all good?  10:11
<strigazi> flwang1: V  10:12
<strigazi> flwang1: https://github.com/coredns/deployment/blob/master/kubernetes/deploy.sh#L7  10:12
<strigazi> I think both work  10:13
<flwang1> ok, i will take a look  10:13
<strigazi> I can try passing the cidr like before  10:13
*** livelace has joined #openstack-containers  10:13
<flwang1> 2. for calico v3.3, how does calico know the etcd server to save data?  10:13
<flwang1> i didn't see how the etcd address is passed in as before  10:14
*** jchhatbar has joined #openstack-containers  10:14
<strigazi> it uses custom resource definitions, no more etcd. It uses the kubernetes api directly  10:14
<flwang1> fantastic  10:15
<strigazi> flannel does the same, one of the reasons I wanted to make it a cni  10:15
<flwang1> then we can remove this info https://review.openstack.org/#/c/652059/4/magnum/drivers/common/templates/kubernetes/fragments/calico-service.sh@10  10:16
*** janki has quit IRC  10:16
<strigazi> I forgot about that  10:16
<strigazi> sure  10:16
<flwang1> if you can propose a new patchset, i'm happy to test it tomorrow and +2  10:17
<strigazi> ok  10:17
<flwang1> btw, i will take leave from this Friday 19th Apr to the 28th  10:17
<strigazi> just a question,  10:17
<flwang1> for camping  10:17
<flwang1> no laptop  10:17
<flwang1> sure  10:17
<strigazi> flwang1 proper holidats  10:17
<strigazi> flwang1 proper holidays  10:18
<flwang1> yep, Easter holidays  10:18
<flwang1> i do need a break, otherwise i will be broken soon  10:18
<strigazi> CALICO_IPV4POOL_IPIP upstream is on, in our case it is off  10:19
<strigazi> we use CALICO_IPV4POOL_NAT_OUTGOING, the upstream example config is not using it https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml  10:20
<strigazi> any reason you did it like this for v2?  10:20
<flwang1> strigazi: i can't remember the details, TBH  10:21
<flwang1> i'm ok to follow the community way  10:21
<flwang1> i mean the default yaml from calico  10:22
<strigazi> ok, in any case, conformance passes. I'll check if the upstream config works too.  10:22
<strigazi> let's check tomorrow same time, or you are off already?  10:22
<strigazi> let's check tomorrow same time, or you will be off already?  10:22
*** janki has joined #openstack-containers  10:23
<flwang1> are all your k8s nodes in the same l2 network?  10:24
<flwang1> strigazi: i can meet with you tomorrow at the same time  10:24
<flwang1> like 9:00 UTC  10:25
<flwang1> but won't stay long  10:25
<strigazi> short and efficient is better  10:25
<flwang1> since we will leave very early on Friday  10:25
<flwang1> strigazi: cool  10:25
*** jchhatbar has quit IRC  10:25
<flwang1> i will dig into CALICO_IPV4POOL_IPIP and leave comments on your calico patch  10:25
<strigazi> I'm going home for holidays at the same time but not really  10:25
<flwang1> haha  10:25
<flwang1> now, tough question  10:26
<strigazi> I have to do four presentations for the summit the week after  10:26
<flwang1> after i'm back from holiday, can i see a new patch set for the upgrade patch?  10:26
<flwang1> strigazi: ah, i see  10:26
<strigazi> yes  10:26
<flwang1> good luck and i believe you will be fine  10:26
<flwang1> strigazi: cool, then i give you 10-NO-flwang-days  10:27
<flwang1> i will start to bug you from the 29th  10:27
<strigazi> Well, see you tomorrow  10:27
<flwang1> see you, thanks for making my day  10:27
<strigazi> cheers  10:28
<openstackgerrit> Feilong Wang proposed openstack/magnum stable/stein: Fix registry on k8s_fedora_atomic  https://review.openstack.org/653379  10:32
*** gbialas has quit IRC  10:32
*** livelace has quit IRC  10:33
<openstackgerrit> Feilong Wang proposed openstack/magnum stable/stein: [fedora_atomic] Support auto healing for k8s  https://review.openstack.org/653380  10:37
*** ttsiouts_ has quit IRC  10:37
*** ttsiouts has joined #openstack-containers  10:38
<openstackgerrit> Feilong Wang proposed openstack/magnum stable/stein: [fedora_atomic] Support auto healing for k8s  https://review.openstack.org/653380  10:39
*** ttsiouts has quit IRC  10:42
*** gbialas has joined #openstack-containers  11:06
*** ttsiouts has joined #openstack-containers  11:08
*** gbialas has quit IRC  11:09
<openstackgerrit> Merged openstack/magnum master: Allow admin update cluster/template in any project  https://review.openstack.org/648590  11:17
*** jchhatbar has joined #openstack-containers  11:20
*** ttsiouts has quit IRC  11:20
*** ttsiouts has joined #openstack-containers  11:21
*** janki has quit IRC  11:22
*** ttsiouts has quit IRC  11:25
*** strigazi has quit IRC  11:25
*** strigazi has joined #openstack-containers  11:25
*** ttsiouts has joined #openstack-containers  11:27
*** ttsiouts has quit IRC  11:43
*** ttsiouts has joined #openstack-containers  11:44
*** ttsiouts_ has joined #openstack-containers  11:46
*** ttsiouts has quit IRC  11:47
*** strigazi has quit IRC  11:50
*** ttsiouts_ has quit IRC  11:51
*** strigazi has joined #openstack-containers  11:51
*** strigazi has quit IRC  11:52
*** strigazi has joined #openstack-containers  11:52
*** strigazi has quit IRC  11:55
*** strigazi has joined #openstack-containers  11:55
*** pcaruana has quit IRC  12:30
*** pcaruana has joined #openstack-containers  12:53
<openstackgerrit> Spyros Trigazis proposed openstack/magnum master: k8s: set functional jobs to voting  https://review.openstack.org/627756  13:00
*** ykarel is now known as ykarel|afk  13:36
*** ykarel|afk has quit IRC  13:41
<openstackgerrit> Diogo Guerra proposed openstack/magnum master: [k8s] Set traefik to stable version v1.7.10  https://review.openstack.org/645632  14:16
*** itlinux has joined #openstack-containers  14:19
*** itlinux has quit IRC  14:22
*** ykarel|afk has joined #openstack-containers  14:24
*** ykarel|afk is now known as ykarel  14:25
<openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Revert "ci: Disable functional tests"  https://review.openstack.org/642873  14:26
*** lpetrut has quit IRC  14:28
<strigazi> mnaser: I think I figured out the issue you had with flannel, security groups, see: https://review.openstack.org/#/c/652931/3  14:52
<mnaser> strigazi: that might make sense then  14:53
<mnaser> merging that will fix it though, I think right?  14:53
*** jchhatbar has quit IRC  14:53
*** jchhatbar has joined #openstack-containers  14:54
<strigazi> We merged this, will cherry-pick to stein  14:54
<mnaser> strigazi: great, I can redeploy and let you know  14:56
<strigazi> mnaser: use vxlan as backedn  14:57
<strigazi> mnaser: use vxlan as backend  14:57
*** udesale has quit IRC  14:57
<strigazi> --labels flannel_backend=vxlan  14:57
<mnaser> I guess it defaults to dup  14:58
<mnaser> udp  14:58
<strigazi> yes  14:58
<openstackgerrit> Mohammed Naser proposed openstack/magnum stable/stein: [fedora atomic] Allow traffic between k8s workers  https://review.openstack.org/653459  14:58
<mnaser> strigazi: to make your life easier ^  14:59
*** udesale has joined #openstack-containers  14:59
*** udesale has quit IRC  14:59
*** udesale has joined #openstack-containers  14:59
<strigazi> mnaser: when it was failing for you, was it udp or vxlan?  14:59
<mnaser> strigazi: it was the default, the cni0 interface never went up  15:00
<mnaser> apparently though sometimes it never goes up unless the node has interfaces attached  15:00
<strigazi> udp then  15:00
*** lpetrut has joined #openstack-containers  15:01
*** udesale has quit IRC  15:06
*** absubram has joined #openstack-containers  15:17
*** absubram has quit IRC  15:21
*** absubram has joined #openstack-containers  15:29
*** jchhatbar has quit IRC  15:33
*** jchhatbar has joined #openstack-containers  15:33
*** itlinux has joined #openstack-containers  15:35
*** jchhatbar has quit IRC  15:36
*** jchhatbar has joined #openstack-containers  15:36
*** jchhatbar has quit IRC  15:40
*** lpetrut has quit IRC  15:42
*** ykarel is now known as ykarel|away  16:05
*** dims has quit IRC  16:07
*** lpetrut has joined #openstack-containers  16:09
*** ramishra has quit IRC  16:26
<openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Update coredns from upstream manifest and to 1.3.1  https://review.openstack.org/652058  16:51
*** dims has joined #openstack-containers  16:54
*** dims has quit IRC  16:59
*** dims has joined #openstack-containers  17:01
*** ricolin has quit IRC  17:04
*** chhagarw has quit IRC  17:43
*** chhagarw has joined #openstack-containers  17:48
*** ivve has quit IRC  17:52
*** chhagarw has quit IRC  18:23
*** chhagarw has joined #openstack-containers  18:23
*** alisanhaji has quit IRC  18:24
*** lpetrut has quit IRC  18:41
*** lpetrut has joined #openstack-containers  18:42
*** chhagarw has quit IRC  18:55
*** ykarel|away has quit IRC  19:02
<openstackgerrit> Merged openstack/magnum stable/stein: [fedora atomic] Allow traffic between k8s workers  https://review.openstack.org/653459  19:23
<openstackgerrit> Merged openstack/magnum master: [k8s] Set traefik to stable version v1.7.10  https://review.openstack.org/645632  19:23
*** lpetrut has quit IRC  19:40
*** absubram has quit IRC  20:35
*** pcaruana has quit IRC  20:43
*** logan- has quit IRC  21:34
*** logan- has joined #openstack-containers  21:37
*** absubram has joined #openstack-containers  21:46
*** hongbin has joined #openstack-containers  21:52
*** itlinux has quit IRC  23:26
*** hongbin has quit IRC  23:38
*** itlinux has joined #openstack-containers  23:44
*** itlinux has quit IRC  23:44
*** absubram has quit IRC  23:50

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!