Tuesday, 2017-12-19

*** oikiki has quit IRC00:01
*** rcernin has quit IRC00:09
*** rcernin_ has joined #openstack-containers00:09
*** marst has quit IRC00:35
*** rcernin_ has quit IRC00:50
*** hieulq has quit IRC00:53
*** hieulq has joined #openstack-containers00:54
*** oikiki has joined #openstack-containers01:01
*** kiennt26 has joined #openstack-containers01:03
*** yuanying has quit IRC01:14
*** openstackgerrit has joined #openstack-containers01:22
openstackgerritOpenStack Proposal Bot proposed openstack/magnum master: Updated from global requirements  https://review.openstack.org/52840601:22
*** vishwanathj has joined #openstack-containers01:30
*** linkmark has quit IRC01:33
*** oikiki has quit IRC01:33
*** dardelean_ has joined #openstack-containers01:49
*** dardelean_ has quit IRC01:54
*** dardelean_ has joined #openstack-containers02:01
*** penick has quit IRC02:07
*** rcernin has joined #openstack-containers02:11
*** AlexeyAbashkin has joined #openstack-containers02:38
*** AlexeyAbashkin has quit IRC02:43
*** ramishra has joined #openstack-containers02:43
*** ramishra has quit IRC03:15
*** ramishra has joined #openstack-containers03:43
*** dardelean_ has quit IRC03:49
*** dave-mccowan has quit IRC03:51
*** dpawar has joined #openstack-containers03:52
*** flwang1 has quit IRC04:17
*** fragatina has quit IRC04:20
*** fragatina has joined #openstack-containers04:20
*** janonymous has joined #openstack-containers04:22
*** ricolin has joined #openstack-containers04:31
*** janki has joined #openstack-containers04:43
*** ykarel has joined #openstack-containers04:44
*** kiennt26 has quit IRC04:54
*** hieulq has quit IRC04:54
*** kiennt26 has joined #openstack-containers04:55
*** hieulq has joined #openstack-containers04:55
*** yamamoto has joined #openstack-containers04:57
*** chhavi has joined #openstack-containers05:03
*** yamamoto has quit IRC05:07
*** chhavi has quit IRC05:07
*** akhil_jain has quit IRC05:07
*** yamamoto has joined #openstack-containers05:08
*** janki has quit IRC05:09
*** janki has joined #openstack-containers05:10
*** yamamoto has quit IRC05:27
*** yamamoto has joined #openstack-containers05:30
*** penick has joined #openstack-containers05:32
*** yamamoto has quit IRC05:36
*** penick_ has joined #openstack-containers05:36
*** penick has quit IRC05:36
*** chhavi has joined #openstack-containers05:39
*** chhavi has quit IRC05:45
*** chhavi has joined #openstack-containers05:46
*** penick_ has quit IRC06:12
*** kiennt26 has quit IRC06:22
*** kiennt26 has joined #openstack-containers06:22
*** janki has quit IRC06:22
*** yamamoto has joined #openstack-containers06:35
*** yamamoto has quit IRC06:41
*** yolanda has joined #openstack-containers06:47
*** dardelean_ has joined #openstack-containers06:49
*** dardelean_ has quit IRC06:54
*** armaan has quit IRC06:55
*** armaan has joined #openstack-containers06:55
*** mjura has joined #openstack-containers06:59
*** jchhatbar has joined #openstack-containers07:00
*** dpawar has quit IRC07:06
*** dsariel has quit IRC07:17
*** dpawar has joined #openstack-containers07:23
*** magicboiz has quit IRC07:25
*** rcernin has quit IRC07:31
*** dpawar has quit IRC07:35
*** dpawar has joined #openstack-containers07:35
*** yamamoto has joined #openstack-containers07:45
*** AlexeyAbashkin has joined #openstack-containers07:52
*** rcernin has joined #openstack-containers08:02
*** hieulq has quit IRC08:10
*** hieulq has joined #openstack-containers08:11
*** yolanda__ has joined #openstack-containers08:34
gokhanhi ykarel, are you there? I applied https://review.openstack.org/#/c/525662/ and https://review.openstack.org/#/c/447687/ and then I got a 'user data too large' error08:34
gokhanykarel, http://paste.openstack.org/show/629285/08:35
*** yolanda has quit IRC08:36
openstackgerritRicardo Rocha proposed openstack/magnum master: [kubernetes] add ingress controller  https://review.openstack.org/52875608:45
*** mdnadeem has joined #openstack-containers08:47
armaanhello folks, i just installed magnum in an ocata setup and i am getting this error on the master: "kubelet[1950]: Error: unknown flag: --config", which stopped the Kubernetes Kubelet Server. Any idea what could be the reason?08:49
armaanmorning folks, I installed magnum in both Ocata and Pike, and in both versions the Kubernetes services die on the master nodes for some reason.08:50
openstackgerritRicardo Rocha proposed openstack/magnum master: [kubernetes] add ingress controller  https://review.openstack.org/52875608:50
armaanShould i file a bug for this error?08:51
*** magicboiz has joined #openstack-containers09:00
*** magicboiz has quit IRC09:04
*** magicboiz has joined #openstack-containers09:05
armaangokhan: hi, did you use OSA to deploy magnum?09:12
*** flwang1 has joined #openstack-containers09:31
*** armaan has quit IRC09:33
*** armaan has joined #openstack-containers09:34
flwang1strigazi: around?09:40
strigaziflwang1 hi09:43
flwang1strigazi: sorry, are you in holiday?09:43
strigaziflwang1: no, I'm at the office09:44
flwang1strigazi: cool, have a moment?09:44
flwang1need your comments on the features i'm working on09:44
strigaziflwang1: I'm looking in the affinity patch09:45
*** armaan has quit IRC09:45
*** armaan has joined #openstack-containers09:46
strigaziflwang1: Are there cases that setting anti-affinity might not work?09:46
flwang1strigazi: yes, for example, if there are only 3 hosts available and the user wants to create a cluster with 4 nodes, it will fail09:47
flwang1actually, my question is if we should leave the option to end user, or just hardcode it09:48
strigaziflwang1 or a conf in magnum.conf09:49
flwang1or in the middle, make it configurable with magnum.conf09:49
flwang1yes09:49
strigaziflwang1 What works for you?09:49
flwang1for the first step, i like to add a config in magnum.conf09:49
strigaziflwang1 If we pass "" to heat, will it work?09:50
flwang1and see if we can get any interest from ops before adding a new argument09:50
flwang1IIRC, the default policy for server group is anti-affinity09:51
flwang1but i'm not sure if heat can support an empty policy for server group09:51
flwang1ricolin: ^09:51
strigaziflwang1 btw, adding a label field doesn't change the api09:51
strigazibut I prefer an option in magnum conf09:52
flwang1strigazi: i saw that, but I'm not sure if it's a 'label'09:52
flwang1a label means it's a metadata/attr of coe cluster, but for this case, it's not09:52
flwang1personally, i think we can start with a baby step and see how things going09:53
flwang1if we do have users want to have an argument, we can easily add it later09:54
*** mgoddard has joined #openstack-containers09:54
strigaziflwang1: sure, I agree. We just need to decide which default won't complicate things for new magnum deployments09:54
flwang1strigazi: yes, that's why i use soft-anti-affinity09:55
strigaziflwang1: ok, so soft-anti-affinity works in all cases?09:57
strigaziflwang1: if so, have a look in this patch: to see how to pass a parameter from magnum.conf to heat https://review.openstack.org/#/c/525662/09:59
flwang1https://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/soft-affinity-for-server-group.html09:59
flwang1strigazi: cool, thanks for sharing10:00
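The approach discussed above (a server group whose policy can be fed from magnum.conf, defaulting to soft-anti-affinity) can be sketched as a Heat fragment. This is illustrative only; the parameter and resource names here are not taken from the merged patch.

```yaml
# Sketch, not the merged change: expose the affinity policy as a template
# parameter so a magnum.conf option can feed it in, defaulting to the soft
# policy from the Nova spec linked above.
parameters:
  nodes_affinity_policy:
    type: string
    default: soft-anti-affinity
resources:
  nodes_server_group:
    type: OS::Nova::ServerGroup
    properties:
      policies: [{get_param: nodes_affinity_policy}]
```

With `soft-anti-affinity`, Nova spreads instances across hosts on a best-effort basis, so a 4-node cluster on 3 hypervisors still schedules instead of failing outright.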
*** kiennt26 has quit IRC10:02
strigaziflwang1 for monitoring, we need a kind of policy on when to mark the cluster unhealthy and which nodes are down10:02
*** hishh has joined #openstack-containers10:02
flwang1strigazi: i'm thinking another way after some investigation10:02
flwang1or i would say another additional info10:03
flwang1because i failed to figure out a policy for deciding whether the cluster is healthy based on the number of 'failed' nodes10:04
flwang1for example, if 2 minion nodes are unhealthy but all master nodes are good, can we say the cluster is unhealthy?10:05
flwang1it would be nice if we can be consistent with how k8s 'calculate' this10:05
strigaziflwang1 even if a single node is down, the cluster is not perfectly healthy10:06
strigaziflwang1: we can say in the status field how many and which nodes are down10:07
flwang1strigazi: yep10:08
flwang1that's what i'm trying to do, instead of telling if the cluster is healthy or not10:09
flwang1just tell them how many nodes are down and the total10:09
*** salmankhan has joined #openstack-containers10:09
flwang1does that make sense?10:09
strigaziflwang1 what do you mean instead? Isn't it the same?10:09
flwang1initially, i just want to put HEALTHY and UNHEALTHY, but now i'm thinking if we can just say 1 down / 10 total, or something like that10:11
strigaziflwang1  was actually thinking both10:12
strigazior only unhealthy actually10:12
strigaziso when you do cluster list, you will see that a cluster is UNHEALTHY10:12
flwang1you mean 'adding' a field when showing a cluster?10:13
flwang1i will +1 for that idea10:13
flwang1and we  can even add the component status as well10:13
flwang1like scheduler, etcd, controller-manager, etc10:14
strigaziflwang1 right now we have status and status_reason10:15
flwang1strigazi: i see. but unfortunately, it's useless after the cluster is created10:17
flwang1strigazi: i think the discussion related to how to position magnum10:17
strigaziflwang1: if everything is ok, we don't change anything. If something is down or 'problematic' we change the status_reason to says what is the problem10:17
flwang1if something is wrong, will the 'status' be updated?10:18
strigaziflwang1 We can have a new status which would be UNHEALTHY (hence the upper case letters)10:19
flwang1strigazi: we should if we go for that idea, because current status is useless IMHO10:20
flwang1that 'status' positions magnum like a deployment tool, not a service10:20
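The "N down / M total" formatting flwang1 suggests above can be sketched in shell. The `kubectl get nodes` style input is inlined here as sample data with hypothetical node names, since the real check would read it from the cluster.

```shell
# Inlined stand-in for `kubectl get nodes --no-headers` output
# (hypothetical node names; a real script would query the cluster)
nodes="k8s-master-0  Ready
k8s-minion-0  Ready
k8s-minion-1  NotReady"

total=0; down=0
while read -r name status; do
  total=$((total + 1))
  if [ "$status" = "NotReady" ]; then
    down=$((down + 1))
  fi
done <<EOF
$nodes
EOF
echo "$down down / $total total"
```

The resulting string could be surfaced in status_reason while the status field itself flips to UNHEALTHY, matching the two ideas discussed above.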
gokhanhi armaan, yes I used OSA pike10:21
strigaziflwang1 good point10:21
armaangokhan: Is it working for you?10:21
flwang1if magnum is providing a coe management service, magnum need to monitor the cluster status, reflect it to end user and even auto healing10:21
*** fragatin_ has joined #openstack-containers10:22
gokhanarmaan, yep it is working but I made lots of changes10:22
*** fragatina has quit IRC10:22
strigaziflwang1 yes, that is the goal, hence this blueprint: https://blueprints.launchpad.net/magnum/+spec/cluster-healing10:22
flwang1strigazi: if magnum want to do auto healing, that means magnum is providing a service10:23
flwang1that's the thing i'm happy to see10:23
strigazigokhan can you share your changes summarized somewhere?10:23
armaangokhan: Would it be possible for you to share the changes with me? I have been trying to make it work for a few days now10:23
openstackgerritSpyros Trigazis (strigazi) proposed openstack/magnum master: Update kubernetes dashboard to v1.8.1  https://review.openstack.org/50746510:25
gokhanstrigazi, ok I can share, but before that, for certificates we must apply these patches: https://review.openstack.org/#/c/525662/ and https://review.openstack.org/#/c/447687/ and then I get a 'user data too large' error.10:27
gokhanstrigazi, if I don't apply these patches I need to move my CA manually10:28
gokhanarmaan, are you using heat wih ssl ?10:29
strigazigokhan Ok, in master I fixed the problem with user_data with this patch: https://review.openstack.org/#/c/468816/10:29
armaangokhan: ssl termination happens at the haproxy10:29
*** dardelean_ has joined #openstack-containers10:30
gokhanarmaan, ok I get it. it is like my environment. did you also use a self-signed cert?10:30
gokhanstrigazi, ok, it means I also need this patch :) would it be possible to cherry-pick these patches to pike?10:31
strigaziwe will cherry-pick10:31
armaangokhan: I think so, my user_variables.yml have this https://pastebin.com/jKtcg4JP10:32
flwang1strigazi: i have a question about the certs in magnum10:32
armaangokhan: The master spawns but the Kubernetes API never gets started, and the minion never spawns at all10:33
flwang1strigazi: i got a problem: when creating a cluster, there are no certs under /etc/kubernetes/certs10:34
flwang1which cause etcd can't start10:34
flwang1strigazi: any idea? or how can i debug it?10:34
strigaziEveryone has the same problem with apis behind tls :(10:34
strigaziThis is the first patch we start to carry10:35
strigaziflwang1 check in /var/log/cloud-init-output.log10:35
strigaziif part004 failed it means that the node failed to get the ca from magnum10:36
gokhanstrigazi, ok thanks and last question where I can place openstack_cafile in magnum.conf ?10:36
gokhanarmaan, ok, your environment is like mine. is it pike or ocata?10:37
strigaziin [drivers] openstack_ca_file10:37
gokhanarmaan, on ocata you can not create a kubernetes cluster, because it uses a previous kubernetes version and we can not define a ca-file on it. so you must use pike or upgrade kubernetes in the heat templates10:38
armaangokhan: Pike10:38
gokhanstrigazi, ok thanks.10:39
armaangokhan: Oh, you mean for Ocata we need to change the magnum heat templates10:40
flwang1strigazi: cool, thanks a lot10:40
gokhanarmaan, it is great :) Firstly you need to fix your auth url in the heat templates, because for the auth url it uses the keystone public endpoint, and magnum can not work with versionless keystone; magnum needs keystone v310:41
*** syedarmani has joined #openstack-containers10:41
gokhanarmaan, can you ssh master node ?10:41
*** jchhatbar is now known as janki10:41
armaangokhan: Yes, i can ssh into master node10:41
strigazithe above problem is solved in magnum pike10:41
gokhanstrigazi, maybe the bug is in openstack ansible, I am not sure.10:42
strigaziit is in ansible10:43
gokhanarmaan, can you share your /var/log/cloud-init-output.log?10:44
gokhanarmaan, and share output of openstack endpoint list | grep keystone10:44
gokhanarmaan, follow this http://eavesdrop.openstack.org/irclogs/%23openstack-containers/%23openstack-containers.2017-12-13.log.html10:46
armaangokhan: cloud-init https://pastebin.com/eFN3zZJS10:46
armaangokhan: keystone endpoint example https://pastebin.com/2nzuj3ag10:49
gokhanarmaan, look for AUTH_URL in /etc/sysconfig/heat-params. is it the same as https://tky1.cloud.com:5000 ?10:52
gokhanarmaan, you need to change it to https://tky1.cloud.com:5000/v310:52
armaangokhan: AUTH_URL="https://tky1.cloud.com:5000/v3/10:53
armaangokhan: do i need to change my keystone endpoint and add /v310:54
strigaziarmaan: gokhan or add redirection10:54
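The heat-params check being discussed can be expressed as a quick sanity test. This is purely illustrative; the hostname is the one from the paste above, and the check only inspects the string.

```shell
# Magnum needs a versioned (v3) Keystone auth URL; a versionless endpoint
# breaks the node-side scripts, as gokhan notes above.
AUTH_URL="https://tky1.cloud.com:5000/v3"
case "$AUTH_URL" in
  */v3 | */v3/) verdict="ok: keystone v3 pinned" ;;
  *)            verdict="bad: versionless keystone, append /v3" ;;
esac
echo "$verdict"
```

On a real master node the value would come from `source /etc/sysconfig/heat-params` instead of being assigned inline.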
*** yamamoto has quit IRC10:55
*** yamamoto has joined #openstack-containers10:55
*** yamamoto has quit IRC10:56
gokhanarmaan, no you don't need to. your problem is in /var/lib/cloud/instance/scripts/part-005. this is different from my case.10:56
*** yamamoto has joined #openstack-containers10:56
gokhanarmaan, it seems tky1.citycloud.com is unreachable from the openstack instance10:56
*** fragatina has joined #openstack-containers10:57
*** fragatin_ has quit IRC10:57
gokhanarmaan, /var/log/cloud-init-output.log is great for finding problems. strigazi, I don't have enough information about /var/lib/cloud/instance/scripts/part-005. can you help armaan?11:00
armaangokhan: I can ping the public endpoint of keystone but curl does not work11:04
armaannevermind, i can curl as well11:05
gokhanarmaan, "curl -v https://tky1.cloud.com:5000/v3/" result is ok ?11:06
gokhanarmaan, can you share your  /var/lib/cloud/instance/scripts/part-005 ?11:08
gokhanarmaan, you need -k option in curl lines11:09
*** yamamoto has quit IRC11:09
*** hishh has quit IRC11:11
armaangokhan: Yep, curl works fine. Here is the /var/lib/cloud/instance/scripts/part-005 file https://pastebin.com/yh3sttyx11:12
*** yamamoto has joined #openstack-containers11:13
gokhanarmaan, can you run /var/lib/cloud/instance/scripts/part-005?11:17
*** yamamoto has quit IRC11:18
armaangokhan:  Failed to connect to tky1.cloud.com port 9511: No route to host11:20
*** AlexeyAbashkin has quit IRC11:22
armaangokhan: I have to leave office now. thanks a lot for your help here. I will let you know if could debug my issue :)11:28
*** armaan has quit IRC11:28
*** armaan has joined #openstack-containers11:28
gokhanarmaan, your tky1.cloud.com is unreachable and I can't find the reason for it. maybe you need to consult the magnum cores. if you want, I can share my openstack_user_config.yml and user_variables.yml for OSA11:30
flwang1strigazi: here is the log i got from cloud init http://paste.openstack.org/show/629298/11:30
flwang1part005 failed11:30
*** armaan has quit IRC11:33
*** armaan has joined #openstack-containers11:33
flwang1strigazi: i'm using pike, so i think it's matching the result https://github.com/openstack/magnum/blob/stable/pike/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml#L47411:34
armaangokhan: awesome, could you please share the user_variables.yml? i think that will be helpful for me :)11:34
gokhanarmaan, my environment is HA multinode with ceph11:34
armaangokhan: I have the same environment, HA multinode with ceph11:35
gokhanarmaan, this is openstack user config yml file http://paste.openstack.org/show/629295/11:35
gokhanarmaan, and this is user_variables.yml file http://paste.openstack.org/show/629299/11:37
*** AlexeyAbashkin has joined #openstack-containers11:38
*** magicboiz has quit IRC11:42
*** magicboiz has joined #openstack-containers11:44
*** armaan has quit IRC11:45
*** yolanda__ is now known as yolanda11:50
*** dardelean_ has quit IRC11:59
flwang1rochapor1o: ping12:01
strigaziflwang1 Check if you can contact the magnum, keystone and heat apis from the node; there is either a problem finding a route or a problem with certs12:05
flwang1/var/lib/cloud/instance/scripts/part-003: line 5: [: 15c5c48e-ba4e-41c1-9ee9-b0ff0f38a592_SIZE: integer expression expected Cloud-init v. 0.7.9 running 'modules:final' at Tue, 19 Dec 2017 02:39:25 +0000. Up 550.14 seconds. 2017-12-19 02:39:39,264 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-005 [1]12:05
flwang115c5c48e-ba4e-41c1-9ee9-b0ff0f38a592_SIZE  looks weird12:06
*** yamamoto has joined #openstack-containers12:06
flwang1https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/configure-etcd.sh#L512:06
strigaziflwang1 did you pass --docker-volume-size?12:06
flwang1yes, with 5GB12:06
strigaziflwang1 but you didn't pass etcd_volume_size12:07
strigaziflwang1 openstack stack show <your stack> | grep etcd_volume_size12:08
flwang1strigazi: where to pass in the etcd_volume_size?12:08
strigazihttps://docs.openstack.org/magnum/pike/user/index.html#etcd-volume-size12:09
flwang1ubuntu@feilong-dev-conf0:~$ openstack stack show k8scluster-ssxntk7cu4m6 | grep etcd_volume_size12:09
flwang1|                       | etcd_volume_size: '0'12:09
flwang1so when setting the docker-volume-size, i have to set the etcd_volume_size as well?12:10
strigaziNo, they are optional12:10
strigaziyou don't have to pass anything12:10
*** yamamoto has quit IRC12:10
flwang1ok, so what's the problem?12:10
strigazigrep ETCD_VOLUME /etc/sysconfig/heat-params12:10
flwang1[root@k8scluster-ssxntk7cu4m6-master-0 fedora]# grep ETCD_VOLUME /etc/sysconfig/heat-params12:11
flwang1ETCD_VOLUME="15c5c48e-ba4e-41c1-9ee9-b0ff0f38a592"12:11
flwang1ETCD_VOLUME_SIZE="15c5c48e-ba4e-41c1-9ee9-b0ff0f38a592_SIZE"12:11
strigaziflwang1 oh no, there is the problem12:12
flwang1is it because i'm using a grandpa heat?12:13
strigazithis is a small bug, but it is not related to your problem12:13
flwang1does that mean i can skip it?12:14
strigaziflwang1 give me a sec to check12:14
flwang1ok12:14
*** salmankhan has quit IRC12:15
strigaziflwang1: yes, it is a problem of grandpa heat, but we can address it.12:15
strigazihowever, the problem that you have is different12:15
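The unsubstituted `_SIZE` placeholder in the pasted heat-params also explains the earlier part-003 warning ("integer expression expected"): `[ "$x" -gt ... ]` errors out on a non-integer. A minimal reproduction follows; the guard shown is a sketch, not the exact fragment code.

```shell
# Value as it actually landed in /etc/sysconfig/heat-params in the paste
# above (an unsubstituted Heat placeholder instead of a number)
ETCD_VOLUME_SIZE="15c5c48e-ba4e-41c1-9ee9-b0ff0f38a592_SIZE"

# A pattern match tolerates the bad value where an arithmetic test would
# print "integer expression expected" and fail.
case "$ETCD_VOLUME_SIZE" in
  ''|*[!0-9]*) action="skip" ;;
  *)           action="provision" ;;
esac
echo "etcd volume: $action (size='$ETCD_VOLUME_SIZE')"
```

As strigazi says, this is a cosmetic bug on old Heat; etcd still runs without a dedicated volume when the size is unset.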
flwang1strigazi: do you know what's the problem i'm facing ;)12:16
strigaziflwang1: yes, you can not connect to the openstack api from the nodes.12:17
flwang1strigazi: really?12:17
flwang1let me double check12:17
strigaziflwang1: set -x &&  /var/lib/cloud/instance/scripts/part-00512:18
*** salmankhan has joined #openstack-containers12:18
flwang1i think you're right12:20
flwang1we may need another change for the network in our CI12:21
flwang1strigazi: thank you very much. but i don't really understand why the connection issue will make the etcd fail12:23
*** dpawar has quit IRC12:24
strigaziflwang1 the node generates a csr and asks the magnum API to sign it12:24
strigazithen the certs are use to secure etcd and the kubernetes api12:25
strigazithen the certs are used to secure etcd and the kubernetes api12:25
strigaziCheck what part-005 does12:25
flwang1ah, i see. that makes much sense12:25
flwang1thanks a lot12:25
strigaziit gets the CA from magnum and then asks magnum to sign it12:25
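The CSR flow strigazi describes can be sketched with openssl. This is a simplified, hypothetical version of what part-005 does; the CN and paths are illustrative, and the real script authenticates with the trust credentials from /etc/sysconfig/heat-params before calling Magnum's certificates API.

```shell
set -e
workdir=$(mktemp -d)
# 1. generate a key and a CSR for this node (CN is illustrative)
openssl genrsa -out "$workdir/node.key" 2048 2>/dev/null
openssl req -new -key "$workdir/node.key" -subj "/CN=kube-node" \
    -out "$workdir/node.csr"
# 2. the real part-005 then fetches the cluster CA and POSTs the CSR to
#    Magnum's certificates API to get it signed (sketched, not executed):
#      GET  $MAGNUM_URL/v1/certificates/$CLUSTER_UUID      -> CA cert
#      POST $MAGNUM_URL/v1/certificates  {"cluster_uuid": ..., "csr": ...}
subject=$(openssl req -in "$workdir/node.csr" -noout -subject)
echo "csr ready, $subject"
```

This also makes the failure mode above concrete: if the node cannot reach the Magnum API, step 2 never completes, no certs land in /etc/kubernetes/certs, and etcd refuses to start.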
strigaziyou are welcome12:26
flwang1btw, what's the difference between make-cert.sh and make-cert-client.sh ?12:26
*** dardelean_ has joined #openstack-containers12:27
strigazimake-cert.sh is executed on the master node, make-cert-client.sh on the worker nodes12:27
strigaziafter rbac the two of them are becoming more different12:27
flwang1got it12:28
flwang1thanks12:28
*** dpawar has joined #openstack-containers12:31
*** yamamoto has joined #openstack-containers12:38
*** yamamoto has quit IRC12:39
*** janki has quit IRC13:11
*** janki has joined #openstack-containers13:11
*** rcernin has quit IRC13:20
*** dpawar has quit IRC13:25
*** dsariel has joined #openstack-containers13:30
*** armaan has joined #openstack-containers13:34
*** dardelean_ has quit IRC13:35
*** dardelean_ has joined #openstack-containers13:36
gokhanping strigazi ,13:37
strigazigokhan: hi13:38
gokhanstrigazi, I applied the three patches for certs but I got an error on notify heat13:38
gokhan[fedora@test-zx3ewgo7w5lg-master-0 ~]$ /usr/local/bin/wc-notify13:38
gokhan#!/bin/bash -v13:38
gokhanuntil curl -sf "http://127.0.0.1:8080/healthz"; do13:38
gokhan    echo "Waiting for Kubernetes API..."13:38
gokhan    sleep 513:38
gokhandone13:38
gokhanokcurl -i -X POST -H 'X-Auth-Token: gAAAAABaOQ9PcCrsmsEzmHsB9VT5O-YYRIMD-6Q77w1Yqg8UyKK7juGbMo9_oAHmgLsFSuo2LL6sxRIWqbID3otmhVsg9ui8TQyoenGAWYUBOxDY2mRLAAjg64pKALnED9nq5lHm5JSnzWVBLibINrmN8z4IUV2wqXT6lxBxnaCBZ6dGSYY18S8' -H 'Content-Type: application/json' -H 'Accept: application/json' https://safircloud.b3lab.org:8004/v1/a32d6d8183d6416687c8a5bfcb9b9b85/stacks/test-zx3ewgo7w5lg-kube_masters-sg27lkigtb2k-0-gfyxrzsksv4u/b329ac5a-248d-4edf-966d-f13:38
gokhan09730167130/resources/master_wait_handle/signal True -k --data-binary '{"status": "SUCCESS"}'13:39
gokhanHTTP/1.1 200 OK13:39
gokhanContent-Type: application/json13:39
gokhanContent-Length: 413:39
gokhanx-openstack-request-id: req-93957cef-3211-402e-9374-d54d6833fb8013:39
gokhannullcurl: (6) Could not resolve host: True13:39
strigazigokhan you should use paste.openstack.org :)13:39
strigaziwhat is the problem? it returned 200, no?13:40
*** yamamoto has joined #openstack-containers13:40
*** dardelean_ has quit IRC13:40
gokhanstrigazi, yep it is 200 but wc-notify.service is failed13:40
gokhanstrigazi, is this not problem ?13:41
strigazigokhan it says inactive dead or failed?13:42
strigazigokhan: journalctl -u wc-notify.service --no-pager13:43
gokhanstrigazi, it is failed http://paste.openstack.org/show/629316/13:44
*** armaan has quit IRC13:45
openstackgerritSpyros Trigazis (strigazi) proposed openstack/magnum master: Update kubernetes dashboard to v1.8.1  https://review.openstack.org/50746513:45
*** dave-mccowan has joined #openstack-containers13:45
strigazigokhan I don't think it is a problem, it sent the signal and got 200. I don't know why systemd says failed13:47
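One hedged reading of the paste above: in the expanded wc-notify command the bare token `True` appears right after the signal URL, and curl's last line is "Could not resolve host: True". curl treats every non-option argument as a URL, so the signal request can return 200 while curl still exits 6 on the bogus second "URL", which is enough for systemd to mark the oneshot unit failed. A toy illustration of how the arguments parse (not the real template):

```shell
# Arguments as they appear in the pasted command (URL shortened to an
# illustrative host); note the bare "True" after the signal URL.
set -- -i -X POST "https://heat.example.invalid/signal" True -k
urls=0
for arg in "$@"; do
  case "$arg" in
    -*)   ;;                      # an option flag
    POST) ;;                      # value consumed by -X
    *)    urls=$((urls + 1)); echo "curl URL argument: $arg" ;;
  esac
done
echo "total URL arguments: $urls"
```

If this reading is right, quoting or dropping the stray token in the template would make the unit exit cleanly.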
*** yamamoto has quit IRC13:48
*** ykarel has quit IRC13:48
*** dsariel has quit IRC13:53
gokhanstrigazi, because of the failure of wc-notify.service the master node is stuck13:56
strigazigokhan what do you mean stucks?13:57
strigazigokhan what is the flavor of the vm?13:57
gokhanstrigazi, it has been about 50 minutes, and it can not spawn the minion nodes13:57
strigaziopenstack stack resource list -n 2 <stack_id> | grep -v COMPLETE13:58
gokhanstrigazi, it is medium, 2 vcpu, 4GB RAM13:58
strigazidisk?13:59
gokhanstrigazi, http://paste.openstack.org/show/629318/14:01
gokhanstrigazi, disk 40 GB14:02
gokhanstrigazi, I have to go now I will be back later14:06
*** dardelean_ has joined #openstack-containers14:06
*** ramishra has quit IRC14:47
*** jmlowe has joined #openstack-containers14:51
*** dardelean_ has quit IRC14:54
*** marst has joined #openstack-containers15:10
*** kiennt26 has joined #openstack-containers15:11
*** dardelean_ has joined #openstack-containers15:11
*** livelace has joined #openstack-containers15:23
*** kiennt26 has quit IRC15:27
*** chhavi has quit IRC15:30
strigaziHello everyone, the magnum meeting will start in 25' in #openstack-meeting-alt15:35
*** slunkad_ has quit IRC15:36
*** ykarel has joined #openstack-containers15:37
*** slunkad has joined #openstack-containers15:39
*** slunkad has quit IRC15:43
*** dardelean_ has quit IRC15:51
*** oikiki has joined #openstack-containers15:55
*** slunkad has joined #openstack-containers15:59
*** dardelean_ has joined #openstack-containers16:00
strigazislunkad: are you there?16:03
openstackgerritKirsten G. proposed openstack/magnum master: Add enable_pull_coe_data configuration parameter  https://review.openstack.org/52909816:03
slunkadslunkad: yes16:03
*** mjura has quit IRC16:03
slunkadstrigazi: sorry forgot about the meeting16:05
*** livelace has quit IRC16:06
*** dardelean_ has quit IRC16:06
*** dardelean_ has joined #openstack-containers16:06
*** dardelean_ has quit IRC16:07
*** ykarel has quit IRC16:11
*** jmlowe has quit IRC16:13
*** dardelean_ has joined #openstack-containers16:14
*** ricolin has quit IRC16:19
strigazioikiki: hi16:29
oikikistrigazi: hi!16:29
strigaziDid you manage to deploy devstack with barbican?16:29
oikikiI just got in from a redeye and am now utc-516:30
oikikii submitted a patch to #116:30
*** janki has quit IRC16:31
oikikii had some issues getting devstack running again16:32
oikikibut im going to work on getting barbican running today16:32
oikikiafter i do my cherry pick16:32
strigazioikiki: ok, do you need any help?16:33
oikikii dont think so im going to give it a try after i get a bit of sleep16:33
strigazioikiki: ok cool16:34
oikikicould you explain why the cherry pick?16:34
strigazioikiki: magnum deployments may use stable/pike or stable/ocata, so we need to make the patch available for those releases16:35
strigazioikiki: now, your patch is in the master branch16:35
strigazioikiki: and it will be available with the next release in February16:35
strigazithat would be stable/queens16:36
oikikistrigazi: ah i understand now16:36
strigaziok16:36
strigazioikiki: do you prefer a different time to sync?16:36
oikikino im ok16:36
strigaziok16:36
*** armaan has joined #openstack-containers16:37
strigazioikiki: If you need anything, send me an email16:37
oikikiso my commit was in master, we will make it available in stable/pike, and eventually it will be part of the next release, stable/queens16:37
strigazioikiki it will be available in both releases16:37
strigaziyes16:37
oikikistrigazi: and I just cherry pick in gerrit16:38
oikikineat!16:38
strigazioikiki: if the patch applies cleanly you can use just gerrit16:38
oikikistrigazi: so when i cherry pick in gerrit i will see any conflicts there to fix?16:39
strigazioikiki if there are conflicts, gerrit will say it can't do the cherry-pick16:40
strigaziin this case16:40
strigaziyou need to clone the magnum repo16:40
strigazicheckout stable/pike16:40
strigazibranch off stable/pike16:40
strigazigit checkout -b <a-branch-name>16:41
strigaziand then you can do git review -x 447687 or git fetch https://git.openstack.org/openstack/magnum refs/changes/87/447687/38 && git cherry-pick FETCH_HEAD16:42
*** penick has joined #openstack-containers16:42
strigaziwhich will cherry-pick the patch16:42
strigazithen you do git status and git will tell you how to solve the conflicts16:42
strigazimakes sense?16:42
oikikigotcha!16:42
oikikii'll give it a try thank you!16:44
strigazioikiki you are welcome16:44
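strigazi's manual fallback can be walked through end to end with a throwaway repo. All names here are illustrative; in practice you would substitute stable/pike on the real magnum clone and `git review -x 447687` (or the fetch + cherry-pick pair quoted above).

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.invalid
git config user.name dev
echo base > file && git add file && git commit -qm "base"
git branch stable/pike                 # pretend stable branch
echo fix >> file && git add file && git commit -qm "fix on master"
fix_sha=$(git rev-parse HEAD)
git checkout -q stable/pike
git checkout -qb backport-fix          # topic branch, per the advice above
git cherry-pick "$fix_sha" >/dev/null  # stand-in for: git review -x 447687
echo "cherry-picked: $(git log -1 --format=%s)"
```

If the pick conflicts, git stops mid cherry-pick and `git status` lists the conflicted paths, exactly as described above.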
*** salmankhan has quit IRC16:46
oikikistrigazi: also I am utc-5 this week and next week (if you are around, I know it's the holidays)16:46
strigaziThis is east coast in the US right?16:48
oikikiyep16:48
strigaziAre you going to be online tmr morning?16:49
oikikiyep16:49
oikiki:)16:49
strigazicool, I'll be online as well16:49
oikikigreat!16:49
armaangokhan: We had to add an iptables rule for port 9511, and then curl started working and both master and minion came up. On the master node all services are up except for the dashboard http://paste.openstack.org/show/629329/16:52
armaangokhan: Do we have to open any port to make dashboard work as well16:52
*** basvanveen has joined #openstack-containers16:53
armaanstrigazi: Hello!16:53
strigaziarmaan hi16:54
armaanstrigazi: I was wondering if you have any suggestion about this: http://paste.openstack.org/show/629329/16:54
*** basvanve_ has joined #openstack-containers16:54
strigaziarmaan kube-system-namespace is not a problem16:55
strigaziarmaan: kubernetes creates it by itself16:55
strigaziarmaan: you are in pike right?16:55
armaanstrigazi: ok and what do you think about the dashboard?16:55
armaanstrigazi: Yep, Pike.16:56
strigazikube 1.7 ?16:56
armaanstrigazi: Kubernetes v1.6.716:56
strigaziin pike? how is this possible?16:57
strigaziarmaan: in pike we pull kuberentes in  containers16:57
*** basvanveen has quit IRC16:57
strigaziand the default is 1.7.416:57
*** salmankhan has joined #openstack-containers16:58
strigazisomething didn't run, anyway, you can add the dashboard manually or patch magnum to use dashboard 1.6.316:59
*** basvanve_ has quit IRC16:59
armaanstrigazi: I am using this tag of OSA https://github.com/openstack/openstack-ansible/tree/16.0.517:00
armaanwhich is Pike.17:00
armaanmagnum     4301   4198  0 Dec18 ?        00:00:13 /openstack/venvs/magnum-16.0.5/bin/python17:00
armaanthis is from within the magnum container17:00
strigaziok17:00
strigaziarmaan: on the master node what is in /var/log/cloud-init-output.log ?17:01
*** AlexeyAbashkin has quit IRC17:01
armaanstrigazi: http://paste.openstack.org/show/629333/17:02
strigaziline 130 says apiserver v1.7.417:03
strigazioutput of kubectl version17:04
armaan# kubelet --version17:04
armaanKubernetes v1.6.717:04
strigaziarmaan these are the binaries installed in the qcow17:04
strigazithey are not used17:04
strigazido:17:04
strigaziatomic containers list17:05
armaanok17:05
armaanstrigazi: atomic container list http://paste.openstack.org/show/629334/17:05
strigazi--no-trunc17:05
armaanstrigazi: http://paste.openstack.org/show/629335/17:06
strigazisee 1.7.4 :)17:06
strigazithat is good17:07
armaanstrigazi: cool :)17:07
armaannow i know how to find out the binary in use17:07
*** dardelean_ has quit IRC17:08
armaanback to dashboard, any ideas on how to troubleshoot it?17:08
strigaziI don't have a fix to deploy the right dashboard automatically yet; I'm working on it. What you can do, in a running cluster, is delete the existing dashboard and deploy kubernetes dashboard 1.6.317:09
armaanstrigazi: do you happen to know any document or article on how to do that?17:10
strigaziwith kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard-no-rbac.yaml17:10
strigaziarmaan to delete the old one, check:17:11
strigazikubectl -n kube-system get deployment17:11
strigazikubectl -n kube-system get cm17:11
strigazikubectl -n kube-system get svc17:11
strigaziand delete everything about the old dashboard17:11
armaanstrigazi: thanks a million for these commands. Could you please recommend any book which can help me learn these things?17:12
armaanstrigazi: http://paste.openstack.org/show/629337/17:14
armaanstrigazi: kubectl delete kubernetes-dashboard?17:15
strigazitry the kubernetes tutorials, and I think there is a free MOOC from the linux foundation. https://kubernetes.io/docs/tutorials/17:15
strigazikubectl --namespace kube-system delete deployment kubernetes-dashboard17:16
strigazikubectl --namespace kube-system delete service kubernetes-dashboard17:16
strigazikubectl --namespace kube-system delete serviceaccount kubernetes-dashboard17:16
strigazikubectl --namespace kube-system delete configmap kubernetes-dashboard17:16
armaanstrigazi: awesome, thanks. there is a edx course as well. nice!17:16
strigazikubectl --namespace get <a kube object> # with this you can check what is there to delete17:17
strigaziautocomplete work very well17:17
strigazikubectl completion bash > kube-complete.sh && source kube-complete.sh17:18
strigazikubectl get namespace # see your namespaces17:18
strigazikubectl --namepace kube-system get <tab> <tab> :)17:18
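The cleanup-and-redeploy steps above can be collected into one sketch. Nothing here is executed against a cluster: the commands are only assembled and printed, the namespace and the object name kubernetes-dashboard are taken from the discussion, and on a live master you would inspect with `kubectl -n kube-system get <kind>` first and then run the printed commands one by one:

```shell
# Assemble (but do not run) the dashboard cleanup + redeploy commands.
ns="kube-system"
dash_yaml="https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard-no-rbac.yaml"

cmds=""
for kind in deployment service serviceaccount configmap; do
  cmds="$cmds
kubectl --namespace $ns delete $kind kubernetes-dashboard"
done
cmds="$cmds
kubectl create -f $dash_yaml"

echo "$cmds"
```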
*** dsariel has joined #openstack-containers17:22
<armaan> strigazi: :) :) http://paste.openstack.org/show/629340/  17:22
<strigazi> armaan: better not to run this as root  17:23
<strigazi> the fedora user has access to the correct kubectl version  17:23
<strigazi> do: whereis kubectl  17:24
<armaan> roger  17:24
<armaan> strigazi: as the fedora user: http://paste.openstack.org/show/629341/  17:25
<strigazi> here is the kubectl you need: /var/usrlocal/bin/kubectl  17:27
<armaan> strigazi: sorry for being a noob here, I am new to k8s.  17:28
<strigazi> Magnum provides a cluster-config command to configure your client and talk to Kubernetes from your laptop  17:28
<strigazi> you can do: openstack coe cluster config <cluster>  17:29
<strigazi> magnum will set up your credentials, and you can talk to Kubernetes from anywhere you can reach the master node  17:29
<strigazi> https://docs.openstack.org/magnum/latest/install/launch-instance.html#provision-a-kubernetes-cluster-and-create-a-deployment  17:30
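The laptop-side workflow strigazi describes can be sketched as below. The command is not executed here (it needs the openstack CLI with the magnum plugin, valid OpenStack credentials, and network reachability to the master node), and the cluster name is hypothetical; it is only assembled and printed:

```shell
# Sketch: fetch client credentials for a Magnum cluster from your laptop.
cluster="my-k8s-cluster"      # hypothetical cluster name
workdir=$(mktemp -d)          # the config and cert files land in the current directory
cd "$workdir"

cfg_cmd="openstack coe cluster config $cluster"
echo "$cfg_cmd"               # when run, prints an `export KUBECONFIG=...` line
# export KUBECONFIG=$workdir/config
# kubectl version             # kubectl now talks to the cluster from your laptop
```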
<armaan> strigazi: thanks for sharing the link. I tried to install kubectl but something is strange here: http://paste.openstack.org/show/629347/  17:34
<strigazi> chmod +x kubectl && ./kubectl version  17:35
<armaan> damn :/  17:36
<armaan> stupid mistake  17:36
<armaan> :(  17:36
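The "stupid mistake" was the missing execute bit on the downloaded binary. The failure and the fix can be reproduced with a stand-in script instead of the real kubectl download:

```shell
# Reproduce the "permission denied" and its fix with a fake downloaded binary.
workdir=$(mktemp -d)
cd "$workdir"
printf '#!/bin/sh\necho "Client Version: fake"\n' > kubectl  # stand-in for the download
./kubectl 2>/dev/null || echo "not executable yet"           # fails: no execute bit
chmod +x kubectl                                             # the step that was missing
./kubectl                                                    # now runs
```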
<armaan> strigazi: I am wondering, what runs on port 8080? http://paste.openstack.org/show/629348/  17:41
<armaan> strigazi: in our prod environment we run Swift on 8080  17:41
<armaan> ohh, /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080  17:44
<strigazi> armaan: only on 127.0.0.1  17:44
<strigazi> armaan: I have to go  17:45
<strigazi> see you tomorrow  17:45
* strigazi is away  17:45
<armaan> strigazi: thanks a million for your help  17:45
<armaan> strigazi: I appreciate it a lot!  17:45
*** mdnadeem has quit IRC17:52
*** janonymous has quit IRC18:02
*** dsariel has quit IRC18:09
*** harlowja has joined #openstack-containers18:15
*** dsariel has joined #openstack-containers18:17
*** AlexeyAbashkin has joined #openstack-containers18:22
*** armaan has quit IRC18:24
*** AlexeyAbashkin has quit IRC18:27
*** dsariel has quit IRC18:27
*** AlexeyAbashkin has joined #openstack-containers18:46
*** armaan has joined #openstack-containers18:57
*** AlexeyAbashkin has quit IRC18:58
*** basvanveen has joined #openstack-containers19:06
*** dardelean_ has joined #openstack-containers19:08
*** basvanveen has quit IRC19:11
*** ricolin has joined #openstack-containers19:11
*** flwang1 has quit IRC19:15
*** ricolin_ has joined #openstack-containers19:38
*** basvanveen has joined #openstack-containers19:38
*** ricolin has quit IRC19:40
*** openstack has joined #openstack-containers19:43
*** ChanServ sets mode: +o openstack19:43
*** basvanveen has quit IRC19:43
*** dardelean_ has joined #openstack-containers19:46
*** dardelean_ has quit IRC19:51
*** itlinux has quit IRC19:52
*** itlinux has joined #openstack-containers19:52
*** ricolin_ has quit IRC19:53
*** salmankhan has quit IRC19:53
*** fragatina has quit IRC19:56
*** dardelean_ has joined #openstack-containers20:20
*** syedarmani has quit IRC20:22
*** dardelean_ has quit IRC20:24
*** armaan has quit IRC20:27
*** jmlowe has joined #openstack-containers20:28
*** syedarmani has joined #openstack-containers20:34
*** AlexeyAbashkin has joined #openstack-containers20:37
*** armaan has joined #openstack-containers20:38
*** AlexeyAbashkin has quit IRC20:41
*** flwang1 has joined #openstack-containers20:44
<oikiki> I think strigazi is away. Is anyone able to answer a quick question re: cherry-picking, for an OpenStack intern?  20:50
*** penick has quit IRC20:53
<oikiki> I am cherry-picking my commit that added verify_ca to stable/pike  20:59
<oikiki> I added one line, but the cherry-pick is picking up another change where someone added master_flavor to master before my commit  20:59
<oikiki> http://paste.openstack.org/show/629373  20:59
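oikiki's situation can be reproduced in a throwaway repo: `git cherry-pick` applies only the named commit's own diff, so an earlier, unrelated change on master can appear in the review diff as context lines without being part of the backport. A self-contained sketch (file name, branch name, and line contents are made up; `stable` stands in for stable/pike):

```shell
# Toy repo: master gets two commits (master_flavor, then verify_ca);
# stable is cut before both; we backport only the verify_ca commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
printf 'a\nb\nc\nd\ne\nf\n' > params.txt
git add params.txt && git commit -qm 'base'
git branch stable                                     # cut stable at the base

printf 'a\nmaster_flavor\nb\nc\nd\ne\nf\n' > params.txt
git commit -qam 'add master_flavor'                   # someone else's earlier change
printf 'a\nmaster_flavor\nb\nc\nd\ne\nverify_ca\nf\n' > params.txt
git commit -qam 'add verify_ca'                       # the commit to backport
fix=$(git rev-parse HEAD)

git checkout -q stable
git cherry-pick -x "$fix"                             # -x records the original SHA
grep verify_ca params.txt                             # our line was backported
grep -q master_flavor params.txt || echo "master_flavor was not dragged along"
```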
*** oikiki has quit IRC20:59
*** oikiki has joined #openstack-containers21:00
*** armaan has quit IRC21:16
*** armaan_ has joined #openstack-containers21:17
*** penick has joined #openstack-containers21:20
*** armaan_ has quit IRC21:28
*** armaan has joined #openstack-containers21:29
*** salmankhan has joined #openstack-containers21:30
*** salmankhan has quit IRC21:34
*** dardelean_ has joined #openstack-containers21:35
*** dardelean_ has quit IRC21:40
<openstackgerrit> Kirsten G. proposed openstack/magnum stable/pike: Add verify_ca configuration parameter  https://review.openstack.org/529166  21:46
*** armaan has quit IRC21:48
*** armaan has joined #openstack-containers21:48
<armaan> cd /var/lib/cloud/  21:54
*** armaan has quit IRC22:05
*** armaan has joined #openstack-containers22:06
*** pcichy has joined #openstack-containers22:08
*** pcichy has quit IRC22:09
*** pcichy has joined #openstack-containers22:09
*** penick has quit IRC22:19
*** dardelean_ has joined #openstack-containers22:19
*** oikiki has quit IRC22:21
*** dardelean_ has quit IRC22:23
*** flwang1 has quit IRC22:24
*** penick has joined #openstack-containers22:25
*** rcernin has joined #openstack-containers22:28
*** dardelean_ has joined #openstack-containers22:29
*** pcichy has quit IRC22:38
*** penick has quit IRC22:40
*** marst has quit IRC22:41
*** penick has joined #openstack-containers22:43
*** oikiki has joined #openstack-containers22:57
*** flwang1 has joined #openstack-containers23:02
*** flwang1 has quit IRC23:08
*** flwang1 has joined #openstack-containers23:14
*** dardelean_ has quit IRC23:19
*** oikiki has quit IRC23:21
*** itlinux has quit IRC23:33
*** syedarmani has quit IRC23:37
*** flwang1 has quit IRC23:37
*** penick has quit IRC23:39
*** syedarmani has joined #openstack-containers23:39
*** syedarmani has joined #openstack-containers23:40
*** flwang1 has joined #openstack-containers23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!