Tuesday, 2018-12-18

*** itlinux has quit IRC00:02
*** PagliaccisCloud has quit IRC00:09
*** dave-mccowan has joined #openstack-containers00:13
*** PagliaccisCloud has joined #openstack-containers00:15
*** dave-mccowan has quit IRC00:18
*** absubram has quit IRC00:38
openstackgerritFeilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s  https://review.openstack.org/56178300:42
*** flwang has quit IRC01:05
openstackgerritYang Le proposed openstack/magnum-ui master: Update mailing list from openstack-dev to openstack-discuss  https://review.openstack.org/62574401:12
*** hongbin has joined #openstack-containers01:42
openstackgerritFeilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s  https://review.openstack.org/56178301:57
openstackgerritMerged openstack/magnum master: [k8s] Cluster creation speedup  https://review.openstack.org/62372402:22
*** hongbin_ has joined #openstack-containers02:28
*** hongbin has quit IRC02:29
*** ykarel|away has joined #openstack-containers02:46
*** ricolin has joined #openstack-containers02:48
openstackgerritLingxian Kong proposed openstack/magnum stable/rocky: Add Octavia python client for Magnum  https://review.openstack.org/62576602:52
openstackgerritLingxian Kong proposed openstack/magnum stable/rocky: Delete Octavia loadbalancers for fedora atomic k8s driver  https://review.openstack.org/62576702:53
openstackgerritLingxian Kong proposed openstack/magnum stable/rocky: [k8s] Cluster creation speedup  https://review.openstack.org/62576802:54
*** ykarel|away is now known as ykarel03:05
*** hongbin has joined #openstack-containers03:20
*** hongbin_ has quit IRC03:21
openstackgerritFeilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s  https://review.openstack.org/56178303:22
*** rcernin has quit IRC03:25
*** rcernin has joined #openstack-containers03:27
*** rcernin has quit IRC03:28
*** rcernin has joined #openstack-containers03:28
openstackgerritFeilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s  https://review.openstack.org/56178303:30
*** ramishra has joined #openstack-containers03:32
*** hongbin has quit IRC03:43
*** udesale has joined #openstack-containers04:11
*** ykarel is now known as ykarel|afk04:15
*** ykarel|afk has quit IRC04:19
*** Bhujay has joined #openstack-containers04:20
*** Bhujay has quit IRC04:21
*** Bhujay has joined #openstack-containers04:21
*** Bhujay has quit IRC04:22
*** Bhujay has joined #openstack-containers04:23
*** PagliaccisCloud has quit IRC04:31
*** PagliaccisCloud has joined #openstack-containers04:36
*** ykarel|afk has joined #openstack-containers04:39
*** ykarel|afk is now known as ykarel04:39
*** janki has joined #openstack-containers04:50
*** PagliaccisCloud has quit IRC05:01
*** itlinux has joined #openstack-containers05:45
openstackgerritMerged openstack/magnum-ui master: Update http link to https link  https://review.openstack.org/62364405:48
*** PagliaccisCloud has joined #openstack-containers06:06
*** yolanda has joined #openstack-containers06:07
openstackgerrityfzhao proposed openstack/magnum master: Update mailinglist from dev to discuss  https://review.openstack.org/62579706:26
*** PagliaccisCloud has quit IRC06:27
*** fragatina has joined #openstack-containers06:45
*** ykarel is now known as ykarel|lunch07:45
*** jonaspaulo_ has joined #openstack-containers07:51
*** ykarel|lunch is now known as ykarel08:51
openstackgerritSpyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack  https://review.openstack.org/57747709:02
strigazibrtknr: ^^09:05
brtknrI saw, just reading this: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/openstack-kubernetes-integration-options.md09:06
brtknrstrigazi: which agrees with your change09:06
brtknrstrigazi: although "openstack.org/standalone-cinder" also appears to be supported09:07
brtknrstrigazi: not sure how much longer "kubernetes.io/cinder" will be around09:08
strigazibrtknr: we should use csi-cinder but we can live with kubernetes.io/cinder until we use csi. For Catalyst the cloud provider is the most important feature afaik. At CERN we will mostly disable it. CPO is still a moving target.09:16
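For context on the two provisioners being weighed above, a StorageClass using the legacy in-tree `kubernetes.io/cinder` provisioner looks roughly like this (a sketch; the class name and the `availability` parameter value are illustrative, not from the log):

```shell
# Write a StorageClass manifest for the legacy in-tree Cinder
# provisioner (kubernetes.io/cinder). A csi-cinder setup would use a
# different provisioner name once CSI is adopted.
cat > sc-cinder.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-standard           # illustrative name
provisioner: kubernetes.io/cinder
parameters:
  availability: nova              # Cinder AZ; illustrative value
EOF

# Apply with: kubectl apply -f sc-cinder.yaml
grep 'provisioner:' sc-cinder.yaml
```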
*** rcernin has quit IRC09:18
*** Bhujay has quit IRC09:24
brtknrstrigazi: just spinning up devstack to test the changes... especially standalone-cinder and why the file must be called cloud-config09:45
*** fragatina has quit IRC09:48
*** _fragatina_ has joined #openstack-containers09:48
*** _fragatina_ has quit IRC09:48
*** fragatina has joined #openstack-containers09:48
strigazibrtknr: dims said so :) http://eavesdrop.openstack.org/irclogs/%23openstack-containers/%23openstack-containers.2018-12-17.log.html#t2018-12-17T17:28:4709:51
brtknrstrigazi: *bangs head* argh09:53
strigazibrtknr: did you find the line of code?10:00
*** Bhujay has joined #openstack-containers10:02
*** Bhujay has quit IRC10:03
*** Bhujay has joined #openstack-containers10:04
openstackgerritFeilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s  https://review.openstack.org/56178310:05
*** Bhujay has quit IRC10:05
*** Bhujay has joined #openstack-containers10:05
*** Bhujay has quit IRC10:06
*** Bhujay has joined #openstack-containers10:07
openstackgerritSpyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack  https://review.openstack.org/57747710:08
mkufhi there, i'm trying to deploy a kubernetes cluster in queens. the master node gets deployed and cloudinit finishes and calls wc-notify successfully but no minions get deployed. any idea what might cause this behaviour?10:09
*** flwang1 has joined #openstack-containers10:09
flwang1strigazi: did you test pvc?10:09
*** ttsiouts has joined #openstack-containers10:10
strigaziflwang1: this https://github.com/kubernetes/cloud-provider-openstack/blob/master/examples/persistent-volume-provisioning/cinder/cinder-in-tree-full.yaml10:12
flwang1strigazi: great10:12
flwang1is the PS9 ready for testing?10:13
*** salmankhan has joined #openstack-containers10:16
strigaziflwang1: and this https://github.com/kubernetes/cloud-provider-openstack/blob/master/examples/volumes/cinder/cinder-web.yaml10:16
strigaziflwang1: ^^ s/tcp/TCP/10:16
strigaziflwang1: for cinder-in-tree-full.yaml the volume is created but not mounted.10:16
flwang1strigazi: hmm...10:18
flwang1we probably need a fix in CPO, lingxian mentioned this with me today10:19
strigazi  Warning  FailedScheduling  6s (x14 over 61s)  default-scheduler  0/2 nodes are available: 1 node(s) had no available volume zone, 1 node(s) had taints that the pod didn't tolerate.10:21
strigaziprobably this ^^10:21
strigaziflwang1: 1 node(s) had no available volume zone10:21
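The "1 node(s) had no available volume zone" event above typically means the Cinder availability zone of the provisioned volume does not match the Nova AZ of any schedulable node. One common workaround (an assumption here, not something decided in the log) is the `ignore-volume-az` option in the cloud provider's `[BlockStorage]` section:

```shell
# Sketch of a cloud-config workaround for "no available volume zone":
# ignore-volume-az tells the OpenStack cloud provider not to require
# the Cinder AZ to match the Nova AZ of the node.
cat > cloud-config <<'EOF'
[Global]
auth-url=https://keystone.example.com:5000/v3   # placeholder endpoint

[BlockStorage]
ignore-volume-az=true
EOF

grep 'ignore-volume-az' cloud-config
```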
flwang1strigazi: got, i will double check with lxkong tomorrow10:22
flwang1today i mainly working on the keystone patch and draino10:23
strigaziflwang1: without the problem detector? is this done?10:30
flwang1without the node problem detector, it's still working by checking the condition of the node10:32
flwang1based on my understanding, NPD just gives you more room to define the detection policies, needs more digging10:32
flwang1but i'm pretty sure draino can work without NPD10:33
lxkongstrigazi: to make pvc work, you need this https://github.com/kubernetes/cloud-provider-openstack/pull/40510:34
flwang1what draino can do is checking the condition of the node and then cordon the node, and after  a while, drain it10:34
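The cordon-then-drain sequence flwang describes is the same thing an operator would do by hand with kubectl; draino just automates it with a configurable wait. A sketch of the manual equivalent (the node name is hypothetical):

```shell
# Manual equivalent of what draino automates: mark the node
# unschedulable, wait a grace period, then evict its pods.
cat > drain-node.sh <<'EOF'
#!/bin/sh
NODE="$1"                       # hypothetical node name passed in
kubectl cordon "$NODE"          # step 1: stop scheduling new pods here
sleep 600                       # step 2: draino waits a configurable period
kubectl drain "$NODE" --ignore-daemonsets --delete-local-data  # step 3: evict
EOF
chmod +x drain-node.sh

grep 'kubectl cordon' drain-node.sh
```

Usage would be `./drain-node.sh k8s-minion-0` against a live cluster; the script is only written out here, not executed.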
strigazilxkong: flwang1 this is why we abandoned the cloud provider.10:35
flwang1sorry?10:36
flwang1strigazi: what have we abandoned?10:36
lxkongafter that pr is merged and a few more role definition fixed, we are good to go10:36
lxkongi see in that cloud provider patch, just give the cloud-controller-manager a cluster-admin role which is not recommended10:37
strigaziwe, at cern, stopped using teh cloud proider because of all these issues.10:37
strigazilxkong: do you want to share what is recommended?10:38
strigazilxkong: and push a patch here I guess https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/10:39
lxkonghttps://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/cluster/addons/rbac/cloud-controller-manager-roles.yaml10:39
lxkongand https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml10:39
lxkongi also have a fix for the role https://github.com/kubernetes/cloud-provider-openstack/pull/40410:39
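The alternative to `cluster-admin` in the files lxkong links is a dedicated ClusterRole bound to the cloud-controller-manager service account. A minimal sketch (names follow the upstream convention; the rule list is abbreviated, see the linked files for the full set):

```shell
# Sketch of the RBAC alternative to cluster-admin discussed above:
# a scoped ClusterRole plus a binding to the cloud-controller-manager
# service account (rules abbreviated for illustration).
cat > ccm-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:cloud-controller-manager
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["services", "services/status"]
  verbs: ["get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
- kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
EOF

grep 'kind: ClusterRoleBinding' ccm-rbac.yaml
```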
strigaziIs there any documentation for these decisions? :)10:40
*** udesale has quit IRC10:40
lxkongwhat decision?10:40
strigaziWhy the recommended roles/rolebindings of the openstack cloud provider are different than the cloud-controller-manager?10:42
strigazilxkong: is there a doc saying it is not recommended to use cluster-admin for the cloud-provider?10:43
strigaziflwang1: lxkong what do you want to do? wait for lxkong's patch to be released and merge the CPO patch?10:45
lxkongi don't think there is an official definition for the role/rolebindings that a ccm should use. Using cluster-admin is totally ok, but that gives the service too broad permission.10:45
flwang1strigazi: any other thing depend on this patch?10:46
strigaziapart from the time I wasted on CPO, I don't think so10:47
flwang1strigazi: haha10:48
flwang1i totally understand your anger10:48
flwang1i don't mind merging it now, but i'd like to see comments, docs and release notes to highlight this10:49
strigazifair enough, when the fix is out we can just change the version10:50
strigaziWell, no I think we should point to the CPO release page.10:50
strigaziWe can say that the default CPO is using this version -> for features/bugs people should read the CPO release notes, thoughts?10:51
flwang1strigazi: can we just use latest CPO? until we figure out a stable one?10:51
*** salmankhan has quit IRC10:52
strigaziflwang1: no, I think the cpo tag should match the kube_tag. v1.11.x is compatible with v0.2.010:52
strigaziflwang1: and then users can pick the tag they want, no?10:53
flwang1strigazi: again, we need a matrix ;)10:53
flwang1to be clear, i agree the cpo version should be always consistent with the k8s version10:54
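The tag-matching rule strigazi states (k8s v1.11.x pairing with CPO v0.2.0) would surface to operators as Magnum cluster template labels. A sketch of what that pairing could look like; `kube_tag` is a real Magnum label, but the CPO tag label name used here is hypothetical:

```shell
# Pairing a Kubernetes release with a compatible
# cloud-provider-openstack tag via Magnum labels.
# cloud_provider_tag is a hypothetical label name for illustration.
KUBE_TAG="v1.11.6"
CPO_TAG="v0.2.0"   # compatible with v1.11.x per the discussion above

LABELS="kube_tag=${KUBE_TAG},cloud_provider_tag=${CPO_TAG}"
echo "openstack coe cluster template create ... --labels ${LABELS}"
```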
strigazido we? CPO-k8s compatibility should be documented in magnum?10:55
flwang1not only cpo-k8s, but also other services we're supporting10:56
strigaziflwang1: we have a compatibility matrix, CPO is not there.11:07
flwang1ok, that's not a big problem at this moment11:07
flwang1if you can add a comment and a release note, i'm happy to approve it11:08
strigaziflwang1: what comment? that dynamic volume provisioning doesn't work?11:15
flwang1yep11:15
flwang1we need a TODO or FIXME to highlight the PVC doesn't work11:16
strigaziout of curiosity you are going to maintain a fork of k8s?11:16
flwang1strigazi: we're trying our best to avoid to do that11:16
*** ttsiouts has quit IRC11:17
flwang1however, we need to build a ci/cd of k8s in case there is any bug Google doesn't care about but Catalyst Cloud does11:17
strigaziflwang1: what kind of ci/cd ?11:18
*** ttsiouts has joined #openstack-containers11:18
strigaziflwang1:  the people involved on CPO have openlab testing11:18
flwang1like cherrypick a PR into our k8s repo, then trigger a build to build all the k8s images and run tests.....11:18
strigaziflwang1: everything should be tested there; PVC is not, since it was discovered yesterday, correct?11:18
flwang1not for CPO, for k8s11:19
strigazifor the external IPs thing?11:20
strigaziflwang1: you carry more patches?11:20
flwang1strigazi: yep, e.g. the missing ip one11:21
flwang1that one has been tagged into 1.11.611:21
*** zufar has joined #openstack-containers11:21
flwang1so we won't carry any patch now11:21
*** brtknr has quit IRC11:22
*** ttsiouts has quit IRC11:22
dimsstrigazi : o/11:24
openstackgerritSpyros Trigazis proposed openstack/magnum master: k8s_build: Build kubernetes v1.11.6 containers  https://review.openstack.org/62588411:26
strigaziflwang1: ^^11:26
strigazidims: hi11:26
flwang1strigazi: lovely, thanks11:27
openstackgerritSpyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack  https://review.openstack.org/57747711:30
strigaziflwang1: I think I have everything here ^^11:30
flwang1strigazi: thank you11:31
flwang1strigazi: i can't test it now, probably tomorrow, if i test it OK, then I will approve it11:32
flwang1it's 0:32 now11:32
strigaziflwang1: sounds good!11:33
flwang1have to go off, see you tomorrow11:34
strigaziflwang1: Good Night!11:34
*** jmlowe has quit IRC11:37
*** jmlowe has joined #openstack-containers11:38
mkufi'm trying to deploy a kubernetes cluster in queens. the master node gets deployed, cloudinit finishes and calls wc-notify successfully but no minions get deployed. any idea what might cause this behaviour?11:44
*** Bhujay has quit IRC11:46
strigazimkuf: share in paste.openstack.org the output of openstack stack resource list -n2 <stack_id>11:47
*** ttsiouts has joined #openstack-containers11:47
mkufstrigazi: http://paste.openstack.org/show/737549/11:51
*** Bhujay has joined #openstack-containers11:52
strigazimkuf: in the master node, /var/log/cloud-init-output.log and journalctl -u heat-container-agent --no-pager11:53
*** Bhujay has quit IRC11:53
*** Bhujay has joined #openstack-containers11:53
*** salmankhan has joined #openstack-containers11:54
*** Bhujay has quit IRC11:54
*** Bhujay has joined #openstack-containers11:55
*** Bhujay has quit IRC11:56
*** Bhujay has joined #openstack-containers11:56
*** Bhujay has quit IRC11:57
*** Bhujay has joined #openstack-containers11:58
*** Bhujay has quit IRC11:59
mkufstrigazi: cloud-init-output.log: http://paste.openstack.org/show/737538/ and journalctl -u heat-container-agent --no-pager: http://paste.openstack.org/show/737550/11:59
*** Bhujay has joined #openstack-containers11:59
mkufstrigazi: huh, looking at the journalctl output, it seems pretty obvious why it's not working. it's using the internal endpoint for heat, instead of the public one.11:59
*** Bhujay has quit IRC12:00
*** Bhujay has joined #openstack-containers12:01
strigazimkuf: +112:01
*** PagliaccisCloud has joined #openstack-containers12:07
mkufstrigazi: where does heat-container-agent get its config from? is it possibly some misconfig in heat.conf?12:08
*** zufar has quit IRC12:09
*** janki has quit IRC12:17
*** udesale has joined #openstack-containers12:27
*** salmankhan has quit IRC12:27
*** ramishra has quit IRC12:27
*** salmankhan has joined #openstack-containers12:28
strigazimkuf: in master do atomic containers list --no-trunc12:36
strigazimkuf: for the heat agent you should have queens-stable12:36
*** ramishra has joined #openstack-containers12:37
mkufstrigazi: looks like rawhide(?) http://paste.openstack.org/show/737554/12:44
*** brtknr has joined #openstack-containers12:46
*** jonaspaulo_ has quit IRC12:56
*** brtknr has quit IRC13:00
*** brtknr has joined #openstack-containers13:01
*** brtknr has quit IRC13:12
strigazimkuf: you need queens-stable; there was a bug in os-collect-config that we fixed13:12
*** brtknr has joined #openstack-containers13:13
strigazimkuf: it is in stable but we haven't released. I'll do a release https://github.com/openstack/magnum/blob/stable/queens/magnum/drivers/common/templates/kubernetes/fragments/start-container-agent.sh13:15
openstackgerritSpyros Trigazis proposed openstack/magnum stable/queens: functional: retrieve cluster to get stack_id  https://review.openstack.org/62473213:20
openstackgerritSpyros Trigazis proposed openstack/magnum stable/queens: functional: use vexxhost-specific nodes with nested virt  https://review.openstack.org/62460813:20
openstackgerritSpyros Trigazis proposed openstack/magnum stable/queens: functional: add body for delete_namespaced_service in k8s  https://review.openstack.org/62590713:20
openstackgerritSpyros Trigazis proposed openstack/magnum stable/queens: functional: use default admission_control_list values  https://review.openstack.org/62590813:20
*** ramishra has quit IRC13:22
strigazimkuf: I'll try to do a release today or tmr for queens13:22
*** ramishra has joined #openstack-containers13:22
*** brtknr has quit IRC13:25
*** brtknr has joined #openstack-containers13:25
*** brtknr has quit IRC13:29
*** brtknr has joined #openstack-containers13:29
dimsstrigazi : left a detailed note in https://review.openstack.org/#/c/577477/8 ( cc lxkong )13:37
dimslxkong : you can use the notes in ^^ to update the doc in cloud-provider-openstack and then we can cut a release13:37
strigazidims: thank you, I really appreciate it13:43
strigazidims: this diff [0] covers already what you described in your comment if you have time to check.13:46
strigazi[0] https://review.openstack.org/#/c/577477/7..1013:46
mkufstrigazi: sweet, i'll keep an eye out for it. :) thanks13:47
dimsstrigazi : done14:13
*** ykarel is now known as ykarel|away14:18
*** PagliaccisCloud has quit IRC14:20
*** brtknr has quit IRC14:27
*** brtknr has joined #openstack-containers14:27
*** ykarel|away has quit IRC14:36
*** ttsiouts has quit IRC14:45
*** ttsiouts has joined #openstack-containers14:46
*** ttsiouts has quit IRC14:50
*** Bhujay has quit IRC14:53
*** ramishra_ has joined #openstack-containers14:54
*** ykarel has joined #openstack-containers14:55
*** ramishra has quit IRC14:56
*** ttsiouts has joined #openstack-containers14:58
*** itlinux has quit IRC15:00
brtknrstrigazi: isnt that the kube-controller-manager complaining?15:02
*** munimeha1 has joined #openstack-containers15:05
strigazibrtknr: the kube-controller-manager complains in the logs and at the "same" time reports ok xD15:06
strigazibrtknr: isn't that amazing?15:07
brtknrstrigazi: lol, i think there has been much confusion over nothing.... we are all mostly in agreement here!15:07
brtknrdims: i think we are trying to say the same thing re where --cloud-config arg is required... kube-controller-manager and cloud-controller-manager... others use --cloud-provider=external arg so dont require --cloud-config= arg15:08
dimscool +115:09
brtknrexcept kube-controller-manager needs it for cinder!15:09
strigazidims: all good, thanks15:10
dimsbrtknr : just to be clear ... kubelet does use the file, just not from --cloud-config, but hard coded path :)15:10
dimsi asked to remove --cloud-config from kubelet command line as it is not picked up from the command line but from the hard coded path15:11
dims(so if someone tries to change the path and will be surprised that it is not picked up from there)15:11
dimsthats the other twist15:11
brtknrdims: oh really? how has it worked without the hard coded path all this time? since it has been /etc/kubernetes/kube_openstack_config... sorry im just trying to understand15:12
brtknrdims: for kubelet15:12
*** hongbin has joined #openstack-containers15:14
dimsbrtknr : because we are using the in-tree cloud provider before15:15
dimskubelet with --cloud-provider=openstack and --cloud-config=/etc/kubernetes/kube_openstack_config15:16
brtknrdims: ok, i thought --cloud-provider=external would be enough...15:19
dimsbrtknr : when we switch to csi or external provisioner, we won't need it. this crutch is because we need the "in-tree cinder provider" to still work15:21
brtknrok, im cool, thanks :) i understand15:21
brtknrdims: how is the progress with external provisioner for cinder?15:22
dimsbrtknr : "Standalone-cinder external provisioner" works for rbd/iscsi based providers15:24
*** ramishra_ has quit IRC15:27
brtknrdims: i see, thanks. i imagine that includes softiron ceph based cinder?15:28
dimsbrtknr: someone has to try it ... dunno15:28
brtknrstrigazi: hmm etcd is having trouble starting on my devstack checkout of the latest patch15:37
strigazibrtknr: which patch?15:38
brtknrstrigazi: http://paste.openstack.org/show/737576/15:39
brtknrhttps://review.openstack.org/#/c/577477/1015:39
strigazibrtknr: some timeout in cloud-init-output.log I suppose15:40
brtknrJob for flanneld.service failed because a timeout was exceeded.15:41
strigazihigher in the script15:42
brtknrhttp://paste.openstack.org/show/737578/15:44
brtknrFailed running /var/lib/cloud/instance/scripts/part-01115:45
strigaziwhat is in ^^?15:46
brtknressentially, my USER_TOKEN is coming back empty15:50
brtknr    USER_TOKEN=`curl $VERIFY_CA -s -i -X POST -H "$content_type" -d "$auth_json" $url \15:50
brtknr        | grep -i X-Subject-Token | awk '{print $2}' | tr -d '[[:space:]]'`15:50
brtknrwait, i think i understand the problem15:54
brtknrstrigazi: my private-subnet cidr is the same as my host15:55
strigaziso you can't reach keystone15:56
strigazibrtknr: correct?15:57
brtknrstrigazi: yep15:57
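The `USER_TOKEN` fragment brtknr quoted above extracts the Keystone token from the `X-Subject-Token` response header. The parsing step can be exercised on its own, without reaching a real Keystone (the sample headers below are fabricated):

```shell
# The quoted fragment pipes the Keystone response headers through
# grep/awk/tr to pull out X-Subject-Token. Same parsing, exercised
# against made-up sample headers instead of a live curl call:
parse_token() {
    grep -i X-Subject-Token | awk '{print $2}' | tr -d '[[:space:]]'
}

SAMPLE_HEADERS='HTTP/1.1 201 Created
X-Subject-Token: gAAAAABexample
Content-Type: application/json'

USER_TOKEN=$(printf '%s\n' "$SAMPLE_HEADERS" | parse_token)
echo "$USER_TOKEN"
```

In brtknr's case the token came back empty because the curl call itself never reached Keystone (the subnet CIDR clash), not because the parsing failed.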
*** itlinux has joined #openstack-containers16:04
*** itlinux_ has joined #openstack-containers16:08
*** itlinux has quit IRC16:12
*** dave-mccowan has joined #openstack-containers16:13
brtknrstrigazi: are you using octavia for lbaas?16:17
brtknror just neutron?16:17
*** dave-mccowan has quit IRC16:18
strigazibrtknr: at CERN nothing, something in-house with our DNS and haproxy. devstack -> octavia16:19
*** dave-mccowan has joined #openstack-containers16:21
*** fragatina has quit IRC16:22
*** dave-mccowan has quit IRC16:22
*** fragatina has joined #openstack-containers16:24
*** dave-mccowan has joined #openstack-containers16:26
*** fragatina has quit IRC16:41
*** fragatina has joined #openstack-containers16:41
*** fragatina has quit IRC16:45
*** fragatina has joined #openstack-containers16:45
*** udesale has quit IRC16:49
*** sapd1 has joined #openstack-containers16:52
*** fragatina has quit IRC17:03
*** fragatina has joined #openstack-containers17:04
*** fragatina has quit IRC17:06
*** fragatina has joined #openstack-containers17:07
*** fragatina has quit IRC17:09
*** fragatina has joined #openstack-containers17:09
*** fragatina has quit IRC17:10
*** fragatina has joined #openstack-containers17:11
*** ttsiouts has quit IRC17:14
*** ttsiouts has joined #openstack-containers17:15
*** fragatina has quit IRC17:16
*** fragatina has joined #openstack-containers17:16
*** ricolin has quit IRC17:17
*** ttsiouts has quit IRC17:19
*** fragatina has quit IRC17:22
*** fragatina has joined #openstack-containers17:22
*** fragatina has quit IRC17:24
*** fragatina has joined #openstack-containers17:24
*** fragatina has quit IRC17:25
*** fragatina has joined #openstack-containers17:26
*** fragatina has quit IRC17:30
*** fragatina has joined #openstack-containers17:31
*** fragatina has quit IRC17:34
*** fragatina has joined #openstack-containers17:35
*** fragatina has quit IRC17:37
*** salmankhan has quit IRC17:37
*** fragatina has joined #openstack-containers17:37
*** fragatina has quit IRC17:39
*** fragatina has joined #openstack-containers17:40
*** fragatina has quit IRC17:45
*** fragatina has joined #openstack-containers17:46
*** fragatina has quit IRC17:47
*** fragatina has joined #openstack-containers17:47
*** jmlowe has quit IRC17:49
*** fragatina has quit IRC17:51
*** fragatina has joined #openstack-containers17:51
*** fragatina has quit IRC17:56
*** fragatina has joined #openstack-containers17:56
*** fragatina has quit IRC17:57
*** fragatina has joined #openstack-containers17:57
brtknrAny reason this shouldnt be cherrypicked to queens?17:58
brtknrhttps://review.openstack.org/#/c/623724/1417:58
*** fragatina has quit IRC17:58
*** fragatina has joined #openstack-containers17:59
*** fragatina has quit IRC18:00
*** fragatina has joined #openstack-containers18:01
brtknrstrigazi: ^18:01
*** fragatina has quit IRC18:01
*** fragatina has joined #openstack-containers18:02
*** fragatina has quit IRC18:02
*** fragatina has joined #openstack-containers18:03
*** fragatina has quit IRC18:06
*** jmlowe has joined #openstack-containers18:06
*** fragatina has joined #openstack-containers18:06
*** sapd1 has quit IRC18:08
*** fragatina has quit IRC18:10
*** fragatina has joined #openstack-containers18:11
mnaserhey all18:16
*** PagliaccisCloud has joined #openstack-containers18:20
*** itlinux_ has quit IRC18:21
*** jmlowe has quit IRC18:21
*** itlinux has joined #openstack-containers18:21
*** jmlowe has joined #openstack-containers18:22
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544818:22
*** ykarel has quit IRC18:24
*** jmlowe has quit IRC18:36
*** fragatina has quit IRC18:38
*** jmlowe has joined #openstack-containers18:43
*** jmlowe has quit IRC18:53
*** flwang1 has quit IRC18:58
*** jmlowe has joined #openstack-containers19:21
*** salmankhan has joined #openstack-containers19:34
*** salmankhan has quit IRC19:39
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544819:46
*** jmlowe has quit IRC19:53
openstackgerritMohammed Naser proposed openstack/magnum master: containers: clean-up build code  https://review.openstack.org/62599620:03
*** jmlowe has joined #openstack-containers20:11
*** flwang has joined #openstack-containers20:41
*** fragatina has joined #openstack-containers20:49
flwangstrigazi: do we have meeting today?20:54
strigaziflwang: yes20:58
flwangcool20:58
strigazi#startmeeting containers21:00
openstackMeeting started Tue Dec 18 21:00:03 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: containers)"21:00
openstackThe meeting name has been set to 'containers'21:00
strigazi#topic Roll Call21:00
*** openstack changes topic to "Roll Call (Meeting topic: containers)"21:00
strigazio/21:00
cbrumm_o/21:00
schaneyo/21:00
eanderssono/21:01
flwango/21:02
strigazi#topic Announcements21:02
*** openstack changes topic to "Announcements (Meeting topic: containers)"21:02
strigaziI'd like to make a small announcement and thank mnaser, thanks to vexxhost we have a good ci for magnum after a long time. Take a look here:21:02
strigazihttps://review.openstack.org/#/c/577477/21:03
mnaser:D -- i hope to bring it to voting soon21:03
mnaserand im working here to get conformance tests passing in gates -- https://review.openstack.org/#/c/625448/ (ill let you finish the meeting and announcement =])21:03
cbrumm_nice21:03
*** kosa777777 has joined #openstack-containers21:03
*** PagliaccisCloud has quit IRC21:04
strigazidevstack and a k8s cluster ready in 48'. Compared to the OpenStack CI I'm used to, this is super fast; we can do even better I guess.21:04
strigazithanks mnaser21:05
strigazi#topic Stories/Tasks21:05
*** openstack changes topic to "Stories/Tasks (Meeting topic: containers)"21:05
strigaziLast week I picked Jim's patch and finally made almost everything work: https://review.openstack.org/#/c/577477/ flwang and others you can review.21:06
flwangstrigazi: i tested it yesterday before you add more comments21:07
flwangand it works except the known pvc issue21:07
strigaziAlso last friday/saturday, I finished building all containers in the ci, here is a patch to build v1.11.6 https://review.openstack.org/#/c/625884/21:08
cbrumm_thumbs up21:09
*** hongbin has quit IRC21:09
strigazifinally from me, this branch is to make the ci in queens green https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:stable/queens+topic:62413221:09
*** hongbin has joined #openstack-containers21:09
strigaziMaybe we miss one more, I'll have a look.21:09
strigaziThat's it from me21:10
flwangon my side, i continually polished the keystone auth patch21:11
flwangand it's ready for another review21:12
flwangand help testing the CPO patch and the resource clean up patch21:12
flwangand I really love the speed up patch from mnaser and lxkong21:12
strigaziLooks ready to me, I'll test after the meeting for keystone auth.21:12
mnaser:D thanks for lxkong to polishing up my work21:13
flwangmeanwhile, i'm working on the auto healing feature with NPD, draino, autoscaler21:13
flwangi have played draino a lot, and next will be NPD, then I will start to integrate those three21:13
flwangthat's all21:13
cbrumm_I'd love to hear your thoughts on those flwang21:14
schaneyWe are also starting to look into those, hopefully we'll have more to discuss in starting in the new year21:14
flwangcbrumm_: we can discuss offline21:14
flwangschaney: sure thing21:15
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544821:16
*** jakeyip has joined #openstack-containers21:16
strigaziDoes anyone have something else to bring up?21:18
cbrumm_None here, we're deploying magnum to production but we won't be opening it up for use until later. Right now we're just trying to get to the new year.21:19
jakeyiphi all sorry am late21:19
strigazicbrumm_: +!21:19
flwangcbrumm_: mind letting us know your company?21:19
strigazicbrumm_: +121:19
cbrumm_Blizzard21:20
flwangoh, no21:20
flwangi think you already got Magnum on prod, no?21:20
cbrumm_Not prod, been in dev for months21:20
colin-also here lurking, nothing to introduce today21:20
flwangah, i see. sorry, i thought you're in Blizzard21:20
colin-we are (including cbrumm)21:20
schaneyI as well21:21
flwangcolin-: yep, i know21:21
eanderssono/21:21
flwanghah21:21
flwangLOL21:21
flwangjakeyip: anything you want to discuss?21:21
jakeyipwow that's a bunch of people. hi Blizzard21:21
jakeyipnot from me, lurking in the background21:22
colin-testing out 1.13.1 and trying to get ipvs happy with the change they /q flwang21:22
colin-fixed in the patch, that i sent you in PM flwang  (sorry for typo)21:22
strigazicolin-: is it in gerrit?21:22
*** rcernin has joined #openstack-containers21:22
colin-no, built from upstream to see if it was likely to introduce any issues with the control plane. nothing distressing yet but did see a heat-agent trace i will need to investigate21:23
flwangcolin-: great21:23
strigazifor k8s 1.13.1?21:23
colin-yes21:24
strigaziShall we go to 1.13.x? it works as far as I'm concerned and conformance tests are passing.21:24
colin-i'm in favor of it, personally, given the CVE21:25
strigazitested in magnum master devstack and in prod at cern.21:25
colin-did you have to change anything?21:25
strigazicolin-: 1.11.5 covers the CVE21:25
colin-that's fair, just eager for the 1.13 goodies i suppose21:26
strigazicolin-: the scheduler tries to generate certs. I forced it to listen only on localhost and serve insecurely21:26
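The scheduler configuration strigazi describes, localhost-only and insecure so no serving certs are generated, corresponds to flags along these lines (a sketch; flag availability shifted across 1.13.x releases, so treat these as illustrative rather than authoritative):

```shell
# Sketch of kube-scheduler flags implied by the discussion above:
# bind the insecure endpoint to localhost only and disable the secure
# endpoint so the scheduler does not need to generate serving certs.
cat > scheduler-flags.txt <<'EOF'
--address=127.0.0.1
--port=10251
--secure-port=0
EOF

grep -- '--address=127.0.0.1' scheduler-flags.txt
```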
strigaziRegarding, the next meeting, I'll manage to be around only on the 9th of January. I won't manage for the next two tuesdays.21:28
cbrumm_Same for us21:28
strigaziflwang? next meeting on the 9th of January 2019?21:29
colin-i'll take a look at the scheduler startup for that behavior, thanks strigazi21:29
flwangstrigazi: 9th works for me21:29
flwangi will take leave for next 2 weeks21:29
jakeyiphehe, same here :)21:30
flwangjakeyip: celebrate the summer Xmas :D21:30
jakeyipgoing to be hot hot hot21:30
jakeyipbefore everyone leaves I have something not terribly urgent that I would like to ask21:31
strigazijakeyip: go for it21:31
jakeyipI've been talking to flwang about a use case having floating IPs only for masters so that we can save some v4 addresses. from what we've found it doesn't work. it's not terribly urgent but wondering if this is something magnum is willing to support?21:32
colin-interesting, what is the objective for it?21:32
strigazijakeyip: yes, it is. We lack manpower at the moment though.21:33
flwangstrigazi: i think you have a patch for that?21:33
strigazicolin-: I think, usually fips == expensive public ipv421:33
flwangor somebody else, but I'm pretty sure there is a patch somewhere21:33
jakeyipcolin-: well we would rather have a cluster that only uses private network addresses internally, but has FIP so users can use kubectl from outside21:34
strigaziflwang: yes, I do, needs more work though, not just a rebase.21:34
flwangstrigazi: i see and i think the hard part is the backward compatibility21:34
jakeyipstrigazi: can you point me to it? I don't mind having a look and seeing what I can do21:34
flwangjakeyip: if you have bandwidth, please just take it and we're happy to review21:35
strigazihttps://review.openstack.org/#/c/395095/21:35
jakeyipthanks I'll take a look and see what can be done21:35
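For background on why this needs new work: the floating-IP setting in magnum today is all-or-nothing for the whole cluster, not per node role. A hedged CLI sketch (names and image are illustrative; the flag is from python-magnumclient):

```shell
# Current behavior (sketch): floating IPs are toggled cluster-wide, so
# "FIPs for masters only" isn't expressible yet. Template name, image,
# and network are illustrative.
openstack coe cluster template create k8s-private \
  --coe kubernetes \
  --image fedora-atomic-27 \
  --external-network public \
  --floating-ip-disabled   # disables FIPs for ALL nodes, masters included
```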
strigazicool21:36
colin-understood21:37
*** rcernin has quit IRC21:37
*** fragatina has quit IRC21:37
strigaziShall we end the meeting? Anything else to discuss?21:38
cbrumm_I think we're good21:39
strigazicool, thanks everyone. See you next year! Happy holidays21:41
jakeyiphappy holidays!21:41
strigazi#endmeeting21:42
*** openstack changes topic to "OpenStack Containers Team"21:42
openstackMeeting ended Tue Dec 18 21:42:03 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:42
openstackMinutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-18-21.00.html21:42
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-18-21.00.txt21:42
openstackLog:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-18-21.00.log.html21:42
flwangstrigazi: thank you!21:42
flwangcbrumm_: let me know if you want to discuss auto healing21:42
cbrumm_Ok flwang, will need to be later though. The rest of my day is pretty busy.21:44
flwangcbrumm_: no problem21:45
flwangwe've just started to investigate, so just ping me when you want to discuss21:45
flwangstrigazi and CERN are also working on that21:45
cbrumm_that's like us, we're just installing it to see if we want to extend it. Big question is what do we want to test for?21:46
flwangautomatically replace a bad node21:46
cbrumm_sure, but what makes a node bad? How do you detect that it's a node you want to kill?21:47
cbrumm_out of the box NPD looks like it only checks for kernel error messages. I think I'm more concerned with things like node offline, docker daemon stuck, things like that21:48
jakeyipsorry for interrupting - is this autohealing on magnum / heat side? e.g. bring up a new VM if you kill the old one?21:49
flwangjakeyip: on top of k8s21:50
cbrumm_heat autoscaling: as long as we remove bad VMs, the autoscaling will replace them.21:50
flwangnot magnum/heat, but autoscaler will call magnum/heat to do node replace21:50
flwangcbrumm_: that's a very good question21:50
flwangin my testing, NPD just detects some errors and sends an event21:51
flwangbut draino will directly check the condition of a node and then cordon/drain it21:51
flwangso technically, for simple case, draino can work without NPD21:51
cbrumm_true, don't need both21:51
flwangafter the node is drained, autoscaler can replace it by calling the magnum api (may need a new api endpoint)21:52
flwangand for NPD, i think we probably need to extend it if we want to cover more scenarios21:52
cbrumm_exactly21:52
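What draino automates is essentially this manual sequence (standard kubectl commands; the node name is illustrative):

```shell
# Manual equivalent of draino's behavior when a node condition matches:
# cordon (mark unschedulable), then drain (evict pods). Node name illustrative.
kubectl cordon k8s-cluster-minion-0
kubectl drain k8s-cluster-minion-0 --ignore-daemonsets --delete-local-data
# afterwards the autoscaler (or an operator) can delete the VM so that
# heat/magnum replaces it, as discussed above
```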
cbrumm_that's what I'm interested in, a set of tests that we can all share and contribute to21:53
cbrumm_right now we're not even sure what scenarios we want to test for21:53
cbrumm_that's work we have planned for Jan21:54
flwanghttps://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/#kernel-monitor21:54
flwangcorrect, we need to figure out a minimum set of conditions we (magnum) care about21:55
flwangand then configure draino to monitor it21:55
cbrumm_Yep, exactly21:55
flwanghttps://github.com/kubernetes/node-problem-detector/blob/v0.1/config/kernel-monitor.json21:56
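Extending NPD beyond the stock kernel monitor would mean adding a config alongside the kernel-monitor.json linked above. A hypothetical sketch for a docker-daemon monitor — the schema is modeled on kernel-monitor.json, and all names and patterns here are illustrative; check the NPD docs for your version's exact schema:

```shell
# Hypothetical NPD monitor config for docker-daemon health (illustrative
# names/patterns; verify the schema against your NPD release).
cat > docker-monitor.json <<'EOF'
{
  "plugin": "journald",
  "pluginConfig": {
    "source": "dockerd"
  },
  "conditions": [
    {
      "type": "DockerHung",
      "reason": "DockerHealthy",
      "message": "docker daemon is responding"
    }
  ],
  "rules": [
    {
      "type": "permanent",
      "condition": "DockerHung",
      "reason": "DockerUnresponsive",
      "pattern": "level=error .*docker daemon.*not responding.*"
    }
  ]
}
EOF
# sanity-check that the file is well-formed JSON
python3 -m json.tool docker-monitor.json >/dev/null && echo "valid JSON"
```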
cbrumm_I'm thinking we might need other problem daemons besides KernelMonitor21:59
cbrumm_something besides a log scraper21:59
cbrumm_I have to go, we'll chat about this more later22:00
flwangsure, ttyl22:00
cbrumm_bye all22:00
mnasercbrumm_: awesome re blizzard22:00
mnasercan we know WHAT runs on magnum? :D22:00
mnaserstrigazi, flwang: i've been thinking of running conformance tests or functional tests across different versions in master22:01
mnaserthat way our users can always deploy latest 1.11, 1.12, 1.13 (and we know it works)22:01
mnaserand we also can add something to test master (but non voting)22:01
flwangmnaser: i like the idea22:02
flwangfor example, we're running on 1.11.x which has been certified, and we don't want to move very fast with the k8s version22:02
flwangso if we can have a gate job for those quite popular k8s versions, that would be fantastic22:03
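The conformance runs discussed here are typically driven with Sonobuoy; the log doesn't name the tool, so that is an assumption. A sketch (subcommand names from older Sonobuoy releases):

```shell
# Hedged sketch: running the upstream Kubernetes conformance suite against
# an existing cluster, assuming Sonobuoy is the driver (not stated in the log).
sonobuoy run --mode=Conformance --wait
results=$(sonobuoy retrieve)   # downloads the results tarball
sonobuoy e2e "$results"        # summarize the e2e pass/fail results
```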
*** rcernin has joined #openstack-containers22:04
*** munimeha1 has quit IRC22:05
colin-sure mnaser! many docker containers :D22:10
mnasercolin-: oh pfft22:10
colin-sorry, too easy22:10
mnaseri was hoping the next time i'd log in to bnet22:11
mnaseri'm launching into magnum-space22:11
mnaser...or maybe when i open my phone.. ha.22:11
mnaserflwang: yep!  that's my goal once i get conformance going22:11
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544822:12
flwangmnaser: cool22:14
mnaseri think i'm close to getting the conformance tests to run on every commit22:16
mnaserwhich is *amazing* D:22:16
openstackgerritMohammed Naser proposed openstack/magnum master: containers: clean-up build code  https://review.openstack.org/62599622:35
*** rcernin has quit IRC22:37
*** rcernin has joined #openstack-containers22:41
*** rcernin has quit IRC22:43
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544822:43
*** rcernin has joined #openstack-containers22:45
*** itlinux has quit IRC22:56
*** yolanda has quit IRC23:55
*** dave-mccowan has quit IRC23:56

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!