Tuesday, 2017-12-12

<openstackgerrit> Chandan Kumar proposed openstack/magnum master: Remove intree magnum tempest plugin  https://review.openstack.org/526618  06:21
<openstackgerrit> Merged openstack/python-magnumclient master: inline comment typo fix  https://review.openstack.org/494179  06:26
<openstackgerrit> Spyros Trigazis (strigazi) proposed openstack/magnum master: [doc-migration] Consolidate install guide  https://review.openstack.org/526926  08:11
<openstackgerrit> Spyros Trigazis (strigazi) proposed openstack/magnum master: [doc-migration] Consolidate install guide  https://review.openstack.org/526926  08:14
-openstackstatus- NOTICE: Our CI system Zuul is currently not accessible. Wait with approving changes and rechecks until it's back online. Currently waiting for an admin to investigate.  08:47
<openstackgerrit> Merged openstack/magnum master: The os_distro of image is case sensitive  https://review.openstack.org/526964  09:02
-openstackstatus- NOTICE: Zuul is back online, looks like a temporary network problem.  09:07
<strigazi_> savvas_: what is the range of your private network?  09:17
*** strigazi_ is now known as strigazi  09:17
<strigazi> flwang: ping  09:19
<strigazi> ykarel: Can you have a look? it is ready: https://review.openstack.org/#/c/525662/  09:34
<ykarel> strigazi, Ok will check  09:37
<gokhan> strigazi, ykarel__ I tried to create kubernetes and swarm clusters again and got a timeout error. I can not ssh to any nodes.  11:17
<gokhan> I am sharing logs from the console  11:17
<gokhan> for swarm http://paste.openstack.org/show/628706/  11:17
<gokhan> for kubernetes http://paste.openstack.org/show/628707/  11:17
<gokhan> I am on the pike branch  11:18
<gokhan> and used this image: https://ftp-stud.hs-esslingen.de/pub/Mirrors/alt.fedoraproject.org/atomic/stable/Fedora-Atomic-26-20171030.0/CloudImages/x86_64/images/Fedora-Atomic-26-20171030.0.x86_64.qcow2  11:18
<ykarel__> gokhan, looking at the console logs it seems the vms are not able to reach the metadata server; are you able to successfully boot an instance using nova?  11:38
*** ykarel__ is now known as ykarel  11:38
<gokhan> ykarel, yes I can boot. I think the problem is with the fedora image.  11:42
<gokhan> ykarel, when I create the fedora image, I am using --property os_distro='fedora-atomic'  11:43
<gokhan> could this be the problem?  11:43
<ykarel> this is required by magnum  11:43
<ykarel> gokhan, we are using the following image in the CI jobs: Fedora-Atomic-26-20170723.0.x86_64  11:45
<ykarel> https://github.com/openstack/magnum/blob/stable/pike/magnum/tests/contrib/gate_hook.sh#L88-L89  11:46
<gokhan> ykarel, ok now I am trying this image  11:48
<ykarel> gokhan, also, have you used this patch: https://review.openstack.org/#/c/524151/1  11:48
<gokhan> ykarel, yes I used that patch and also this one: https://review.openstack.org/#/c/518700/  11:50
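[Editor's note: for reference, the image registration gokhan describes would look roughly like the following. The image name and file path are examples; the `os_distro` property is the part magnum actually checks.]

```shell
# Register the Fedora Atomic image with the os_distro property magnum
# requires (name and file path are illustrative, not from the log):
openstack image create Fedora-Atomic-26-20170723.0 \
  --disk-format qcow2 \
  --container-format bare \
  --property os_distro='fedora-atomic' \
  --file Fedora-Atomic-26-20170723.0.x86_64.qcow2
```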
<ykarel> gokhan, Ok  11:54
<savvas_> strigazi: I've tried the default, where it creates 10.0.0.0/24, and an existing private range 10.0.35.0/24  12:15
<gokhan> ykarel, again the same metadata warning: http://paste.openstack.org/show/628716/  12:17
<gokhan> ykarel, when I boot this image, I don't get this metadata warning: http://paste.openstack.org/show/628718/  12:22
<gokhan> ykarel, when magnum tries to boot this image, I see this warning  12:22
<ykarel> gokhan, strange, can you try creating a cluster with a fixed-network and fixed-subnet (the same ones you used during the nova boot)?  12:26
<gokhan> ykarel, ok I am trying now  12:28
<gokhan> ykarel, now I didn't get the metadata warning: http://paste.openstack.org/show/628719/  12:34
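[Editor's note: ykarel's suggestion spelled out as CLI calls for a Pike-era deployment. All names here (template, image, network, subnet, flavor, keypair) are examples; the point is pinning `--fixed-network`/`--fixed-subnet` to a network that is known to reach the metadata server.]

```shell
# Put the known-good network/subnet on the cluster template, then create
# the cluster from it (all resource names are illustrative):
magnum cluster-template-create --name k8s-atomic \
  --image fedora-atomic-26 --keypair mykey \
  --external-network public \
  --fixed-network private --fixed-subnet private-subnet \
  --dns-nameserver 8.8.8.8 --flavor m1.small \
  --network-driver flannel --coe kubernetes

magnum cluster-create --name k8s-test \
  --cluster-template k8s-atomic --node-count 1
```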
<gokhan> ykarel, I restarted the master node, and now I can ssh  12:56
<gokhan> ykarel, this is the cloud-init output log: http://paste.openstack.org/show/628723/  12:57
<ykarel> gokhan, looks like you are facing the same issue you faced earlier; the issue is with the network created by magnum  13:23
<gokhan> ykarel, I think this is different, because I can not see any docker bridge  13:46
<openstackgerrit> Murali Annamneni proposed openstack/magnum master: Enables MySQL Cluster Support for Magnum  https://review.openstack.org/465746  14:04
-openstackstatus- NOTICE: We're currently seeing an elevated rate of timeouts in jobs and the zuulv3.openstack.org dashboard is intermittently unresponsive, please stand by while we troubleshoot the issues.  14:38
<openstackgerrit> Spyros Trigazis (strigazi) proposed openstack/magnum master: k8s_fedora: Add RBAC configuration  https://review.openstack.org/527103  15:04
<openstackgerrit> Spyros Trigazis (strigazi) proposed openstack/python-magnumclient master: Make cluster-config rbac compatible for kubebernetes  https://review.openstack.org/527428  15:30
<strigazi> Hi everyone, magnum meeting in #openstack-meeting-alt in 23 mins  15:36
<openstackgerrit> Spyros Trigazis (strigazi) proposed openstack/magnum master: Remove intree magnum tempest plugin  https://review.openstack.org/526618  16:27
<flwang1> strigazi: ping  16:34
<strigazi> flwang1: hi  16:35
<strigazi> flwang1: you just missed the meeting  16:36
<flwang1> strigazi: have a moment for some quick questions?  16:36
<strigazi> flwang1: yes  16:36
<flwang1> strigazi: oh, it's 5:36 am here :)  16:36
<flwang1> strigazi: 1. about the monitoring  16:36
<strigazi> flwang1: oh, sorry about that :( we can set a time to sync if you want  16:37
<strigazi> flwang1: tell me  16:37
<flwang1> we'd like to charge our customers a small extra fee for upgrade and maintenance of clusters, but the question is  16:37
<flwang1> that's alright, i can read the log :)  16:38
<flwang1> the question is: currently, based on the code, it seems magnum only monitors the lb of the cluster  16:38
<flwang1> does magnum plan to monitor the health of the cluster itself?  16:38
<flwang1> or is that out of the scope of magnum, based on its current goals?  16:39
<strigazi> flwang1: for kubernetes we have two types of monitoring  16:39
<strigazi> flwang1: one is the kubernetes-dashboard and the other is a stack based on prometheus, node-exporter, cadvisor and grafana  16:40
<flwang1> but can the admin user access them? or does only the tenant user (owner) have access?  16:40
<strigazi> flwang1: for swarm we have a contributor working on porting the above stack from kube to swarm  16:41
<flwang1> in other words, can the admin user help the tenant user take care of the clusters, so that the tenant user doesn't have to pay any attention to the clusters but just uses them?  16:42
<strigazi> flwang1: At the moment, only the owner of the cluster can get the credentials to talk to the cluster as its admin. We can change it to allow the operator to have this kind of access, but there are privacy concerns there  16:43
<flwang1> in order to make it like a k8s 'service'  16:43
<strigazi> you need to have monitoring that only the openstack operator can access?  16:43
<flwang1> both would be great  16:43
<flwang1> i haven't dug into that part; is it possible to generate another ops credential?  16:44
<flwang1> which would be transparent to the owner (tenant user); i'm not too sure at the moment  16:45
<flwang1> otherwise, if we (the cloud provider) can't help monitor the cluster, it's not really 'k8s as a service'; does that make sense?  16:46
<strigazi> it does  16:46
<flwang1> we can discuss this more if you think it's reasonable  16:48
<flwang1> 2. this is related to the 1st question: remote access  16:48
<strigazi> The easiest thing to do is allow the operator to get the certs of the cluster and be able to act on it, but in this case you can see the secrets of the cluster  16:48
<strigazi> with RBAC we can limit that by adding a new role. I also think that if the user has his role configured properly, even the admin won't be able to see secrets etc  16:49
<strigazi> what about 2?  16:50
<flwang1> strigazi: when there is a failure/error in a node/instance, the admin doesn't have permission to debug it  16:51
<flwang1> what's the best practice at CERN?  16:51
<flwang1> because the admin user can't login to the instance without credentials  16:52
<strigazi> flwang1: yes, the admin can't access the node. We ask the user to add our ssh-key to the vms and grant permission to access the kubernetes-api  16:52
<flwang1> ha  16:52
<flwang1> that's the 'best' we can do :D  16:53
<strigazi> a lot of access means less privacy for the user  16:53
<flwang1> yep, i totally understand  16:54
<strigazi> The perfect balance is:  16:54
<flwang1> i think it's because of the original design  16:54
<flwang1> it's not a service, technically  16:54
<strigazi> access to the nodes, and access only for monitoring to the cluster API  16:55
<flwang1> strigazi: agree  16:55
<strigazi> this is very doable and not too difficult to do  16:55
<strigazi> but,  16:56
<flwang1> i'm not really keen to resolve the 2nd question now  16:56
<flwang1> because i totally understand the privacy concern  16:56
<flwang1> but the 1st, i think, is a valid requirement  16:56
<strigazi> if the user doesn't implement best practices he is more at risk  16:56
<strigazi> it is the same with, let's say, a db-on-demand service  16:57
<flwang1> and we could do it with proper role/permission settings  16:57
<strigazi> the operator can see your data, but if you encrypt it he can't  16:57
<flwang1> true  16:57
<strigazi> we can add ways to allow you to select what you want to give access to  16:59
<strigazi> eg pass the operator's ssh-key  16:59
<flwang1> strigazi: you're talking about the 2nd?  16:59
<flwang1> yep, i understand  16:59
<strigazi> yes, and for the 1st  17:00
<flwang1> but there is no difference between this and asking the user to add it manually  17:00
<strigazi> change the policy to allow the admin user to do cluster-config  17:00
<strigazi> flwang1: we can differentiate, eg access only to the master nodes  17:01
<strigazi> the end result regarding access level will be the same  17:01
<flwang1> strigazi: personally, i'd like to see an argument when creating the cluster  17:01
<strigazi> flwang1: easy to implement  17:02
<flwang1> --allow-admin-login or something like that  17:02
<strigazi> we are talking about ssh access, right?  17:02
<flwang1> yes  17:02
<strigazi> we can have three options: NO, master, all  17:03
<flwang1> because i'm not sure if we should silently inject keys  17:03
<flwang1> exactly  17:03
<strigazi> So the user should know that the admin will have root access  17:04
<flwang1> yes  17:04
<strigazi> if he implements best practices he can limit what the operator can see, even with root access  17:05
<flwang1> i think so  17:05
<flwang1> but it may need more tests  17:05
<strigazi> do you have any other concerns/questions?  17:07
<flwang1> yes, a small one  17:07
<flwang1> at the summit, you mentioned there is a server group created for the nodes  17:08
<strigazi> in heat it is a resource group  17:08
<flwang1> would you mind pointing me to where the code is, so that I can set the affinity policy?  17:08
<flwang1> ah, yes, it's a resource group, i can see it  17:08
<strigazi> https://github.com/openstack/magnum/blob/master/magnum/drivers/k8s_fedora_atomic_v1/templates/kubecluster.yaml#L544  17:09
<flwang1> you mean heat will create a server group automatically?  17:09
<strigazi> https://github.com/openstack/magnum/blob/master/magnum/drivers/k8s_fedora_atomic_v1/templates/kubeminion.yaml#L407  17:09
<strigazi> flwang1: They will be independent servers that are on the same network and have the same flavor and image  17:10
<strigazi> I think it is not a server group  17:10
<flwang1> ok, so we need to create a server group  17:10
<flwang1> and set the default policy to 'anti-affinity'  17:11
<strigazi> flwang1: the server group is a parameter to the server, right?  17:12
<flwang1> yes  17:12
<strigazi> ok, we can modify the cluster template slightly  17:13
<strigazi> not very difficult as well  17:14
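[Editor's note: a sketch of the change being discussed. Nova server groups carry the affinity policy, so the proposed work would start from something like the following; the group name is an example, and wiring the group id into magnum's heat templates is exactly the part that does not exist yet.]

```shell
# Create a nova server group whose policy spreads members across hosts;
# the name "k8s-minions" is illustrative:
openstack server group create --policy anti-affinity k8s-minions

# The resulting group id would then have to be passed to the node servers
# (e.g. as an OS::Nova::ServerGroup resource / scheduler_hints parameter
# in the heat templates linked above), which magnum does not do today.
```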
<strigazi> savvas_: hi  17:15
<flwang1> cool, do you think we need a bp/spec?  17:15
<flwang1> i can take this  17:15
<savvas_> hi strigazi  17:16
<strigazi> flwang1: not a spec; a bp or bug  17:16
<savvas_> thanks for getting back to me  17:16
<flwang1> strigazi: just do it? ;)  17:16
<strigazi> flwang1: bugs have comments, open a bug and go for it  17:17
<strigazi> flwang1: just to track what we do  17:17
<strigazi> savvas_: tell me  17:17
<flwang1> cool  17:17
<flwang1> i have another thing i wanna double check  17:18
<savvas_> alrighty. So I am still where I was yesterday, haven't made any progress. I have tried various cluster deployments (mostly kubernetes) on OpenStack Ansible's integration of Magnum  17:18
<strigazi> savvas_: magnum version?  17:18
<savvas_> my first and foremost problem is that the Atomic VMs that are being created as part of the process don't seem to have network connectivity, partially or throughout the entire process  17:18
<strigazi> flwang1: what is it?  17:18
<flwang1> strigazi: when creating a cluster, there are scripts in the instance that need to call heat to notify it of their status, right?  17:18
<flwang1> strigazi: like https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/wc-notify-master.sh#L14  17:19
<savvas_> strigazi: 2.7.0  17:19
<strigazi> savvas_: can you boot a standalone vm with fedora-atomic?  17:19
<strigazi> flwang1: yes  17:19
<strigazi> savvas_: 2.7.0 is the client version  17:19
<flwang1> so we need the instance to be able to access the api node, right?  17:19
<strigazi> flwang1: yes  17:20
<strigazi> savvas_: ok, maybe pike  17:20
<savvas_> ye it is definitely pike  17:20
<savvas_> I am running checkout stable/pike on OA  17:20
<strigazi> savvas_: can you boot a standalone vm with fedora atomic 26?  17:20
<savvas_> booting one now, I got atomic 25 and 27 on my install  17:21
<strigazi> savvas_: try 26, it is better for pike  17:21
<savvas_> pings right away, an Atomic 27 VM  17:21
<strigazi> flwang1: the vms in general need to be able to access the openstack apis: keystone, heat, magnum, and for the k8s cloud provider, nova and cinder too  17:23
<savvas_> can access the VM via SSH as well, the ssh key is on there, so deploying a regular VM with Fedora Atomic 27 shows no issues  17:23
<strigazi> savvas_: is docker running when you login?  17:23
<strigazi> systemctl status docker  17:23
<savvas_> yes sir  17:24
<strigazi> what I would do is create a cluster and monitor nova, then try to login as soon as you see that a vm is running  17:25
<strigazi> then monitor /var/log/cloud-init-output.log to see when you lose connectivity  17:26
<strigazi> I suspect that the flannel/docker configuration breaks the network  17:26
<strigazi> but  17:26
<savvas_> I don't get any connectivity straight out of the gate  17:26
<strigazi> try fedora atomic 26  17:26
<savvas_> I monitor the spawn process and as soon as it gets an IP assigned, I start pinging  17:27
<savvas_> and monitor the console output  17:27
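[Editor's note: strigazi's debugging loop, written out as commands. The server name filter, login user, and key path are examples, not from the log.]

```shell
# Watch nova until the first cluster vm goes ACTIVE (name filter is an
# example), then ssh in right away and follow cloud-init's output to see
# at which fragment connectivity is lost:
watch -n 5 'openstack server list --name k8s'

ssh -i ~/.ssh/id_rsa fedora@<vm-ip> \
  'sudo tail -f /var/log/cloud-init-output.log'
```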
<savvas_> I am downloading Atomic 26 now as well  17:27
<strigazi> https://download.fedoraproject.org/pub/alt/atomic/stable  17:28
<savvas_> hehe thx, ye I noticed they aren't too fond of publicly listing other releases  17:29
<savvas_> but I had found that  17:29
<strigazi> When you are in the vm, keep a copy of /var/lib/cloud/instance/user_data.txt  17:30
<savvas_> stack creating now  17:30
<flwang1> strigazi: for the 2nd issue, should we create a bp/spec to track the discussion in case we lose it?  17:30
<strigazi> this is what is executed, in that order, when the vm boots  17:31
<savvas_> strigazi: will do  17:31
<savvas_> that is, if I get access at all  17:31
<strigazi> flwang1: Create a bug  17:32
<flwang1> strigazi: great  17:32
<flwang1> strigazi: and as for the 1st, i'm not too sure if we have a conclusion  17:33
<flwang1> do you think we should merge it with the 2nd?  17:33
<flwang1> or is it a separate track?  17:33
<strigazi> flwang1: for the first, we need to give the operator access to the cluster API, right?  17:34
<flwang1> strigazi: yes  17:34
<flwang1> so that ops can monitor the cluster's health  17:34
<savvas_> strigazi: I don't know how long I should wait, but it is at the login prompt already, and sometime during the boot process it says that eth0 isn't ready; apart from that I can't catch anything  17:35
<savvas_> but I have no network, so I can't access the VM  17:35
<savvas_> I don't think atomic has a predefined user/pass for console access  17:35
<flwang1> but it would be nice if it can work with the 2nd; otherwise, after detecting that something bad happened, ops still can't do anything  17:35
<strigazi> flwang1: so open a bug to capture what we can do, what kind of policy we need to add for the operator to access the api  17:36
<flwang1> strigazi: cool  17:36
<strigazi> flwang1: also  17:36
<strigazi> flwang1: we have some periodic tasks that talk to the API  17:36
<flwang1> yep, i see there is a basic monitor  17:37
<flwang1> can we leverage that api?  17:37
<strigazi> a thought is to check that the cluster has all its nodes in ready status  17:37
<flwang1> i mean the task  17:37
<savvas_> strigazi: I do see it resizing the atomic volume now though, this is something that was failing on the other versions  17:37
<strigazi> flwang1: and add new statuses to mark it as healthy or unhealthy  17:38
<flwang1> strigazi: yep, sounds good for stage 1  17:40
<flwang1> i will take a look  17:40
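[Editor's note: the periodic-task idea (mark the cluster healthy only when every node reports Ready) can be sketched as a plain shell filter over `kubectl get nodes --no-headers`-style output. The sample output below is fabricated for the demo; a real task would fetch it from the cluster's API.]

```shell
# Fabricated sample of `kubectl get nodes --no-headers` output:
nodes='k8s-master-0   Ready     master    1d        v1.7.4
k8s-minion-0   Ready     <none>    1d        v1.7.4
k8s-minion-1   NotReady  <none>    1d        v1.7.4'

# Collect every node whose STATUS column is not exactly "Ready".
not_ready=$(echo "$nodes" | awk '$2 != "Ready" {print $1}')

if [ -z "$not_ready" ]; then
  echo "HEALTHY"
else
  echo "UNHEALTHY: $not_ready"
fi
```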
<strigazi> flwang1: do you still have the heat template I gave you for testing a couple of days ago? savvas_ can benefit from it  17:41
<flwang1> strigazi: i'm allll good now. thank you sooo much for your time, very helpful  17:41
<flwang1> strigazi: yes, i have  17:41
<strigazi> flwang1: you are very welcome  17:41
<savvas_> That would be wonderful  17:41
<flwang1> savvas_: wait a sec  17:41
<savvas_> :)  17:41
<savvas_> strigazi: am I right to assume that the network should be available even though cloud-init scripts are still running? There's nothing in there which locks up the network until it is done?  17:42
<flwang1> savvas_: https://gist.github.com/openstacker/26e31c9715d52cc502397b65d3cebab6  17:42
<strigazi> savvas_: it is a heat template that creates only the vm, the networks and the ports that magnum usually creates  17:42
<strigazi> savvas_: exactly  17:42
<savvas_> Reason I am asking is because it does seem to go further with Atomic 26 than with the other 2 releases. It actually resized and recreated the docker pool volume and now says configuring kubernetes (master), for a couple of minutes now  17:42
<strigazi> as soon as the vm is running and sshd is up you should be able to login  17:42
<savvas_> ok ye, then it is still as messed up as before  17:43
<savvas_> thanks for sharing flwang1  17:43
<flwang1> savvas_: no problem  17:43
<savvas_> could this be a bug in our release?  17:43
<savvas_> I've seen it reported recently by another OA user  17:44
<strigazi> savvas_: magnum release?  17:44
<savvas_> https://bugs.launchpad.net/magnum/+bug/1720816  17:44
<openstack> Launchpad bug 1720816 in Magnum "magnum create cluster "create_in_progress" and changes to "create_failed" after timeout" [Undecided,New]  17:44
<savvas_> symptoms are similar  17:45
<strigazi> savvas_: I don't think so, since they can login  17:45
<savvas_> ah ye, just noticed that in the comments  17:45
<savvas_> I don't have that luxury lol  17:45
<strigazi> how did you boot your vm that worked?  17:45
<savvas_> from horizon  17:46
<savvas_> not on volume  17:46
<strigazi> in a private network?  17:46
<savvas_> ye same private network  17:46
<savvas_> worked like a charm, 30 secs and it was up and running  17:46
<savvas_> docker works etc  17:46
<strigazi> so you are able to login  17:47
<savvas_> yes, when I create a VM manually with the Atomic image I am able to access it just fine  17:47
<savvas_> other networking aspects of the cloud and this particular private network also work  17:47
<strigazi> ok, try to create a swarm-mode cluster  17:47
<savvas_> it is just when I create a magnum cluster through the CLI or horizon that it doesn't work  17:47
<savvas_> Are volume and devicemapper settings mandatory?  17:48
<strigazi> you can try overlay  17:48
<strigazi> savvas_: do you use the --docker-volume-size param?  17:49
<strigazi> if yes, try without  17:49
<savvas_> with it, it fails right away  17:49
<savvas_> I think it has something to do with how I deployed magnum  17:49
<savvas_> should've added cinder variables  17:49
<savvas_> so I am trying to deploy on image rather than volume  17:49
<strigazi> try swarm-mode with overlay and WITHOUT docker-volume-size  17:50
<savvas_> running already  17:50
*** absubram has quit IRC17:51
*** oikiki has joined #openstack-containers17:51
<strigazi> I need to go now, I'm sorry about that, we can continue tmr morning (morning for you)17:51
<strigazi> if you can login to a swarm-mode node17:51
<savvas_> that's fine, I need to go as well, appreciate the help17:51
<strigazi> it means that the kube config breaks something17:52
<savvas_> I am usually online around 1PM CET, so I'll try you then if you're available17:52
<strigazi> savvas_: I'll be around 14:30 CET17:52
<savvas_> yeah, the docker vm is spawning now, let's see if that does anything17:52
<strigazi> I have a meeting before17:52
<savvas_> ok, that works perfect for me, appreciate the support17:53
<strigazi> savvas_: you can also try this https://review.openstack.org/#/c/524151/ 17:53
<strigazi> see you tmr17:53
<savvas_> talk to you tmr, bb17:54
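[Editor's note: strigazi's debugging suggestion above (a swarm-mode cluster with the overlay storage driver and WITHOUT --docker-volume-size, so Docker uses local image storage instead of a Cinder-backed devicemapper pool) corresponds roughly to the following magnum CLI invocation. This is a sketch for the Pike-era client; the template/cluster names, image, keypair, flavors, and external network are placeholders that must match your cloud.]

```shell
# Template without --docker-volume-size: no Cinder volume is attached,
# and the overlay driver stores containers on the instance's root disk.
openstack coe cluster template create swarm-overlay-test \
    --coe swarm-mode \
    --image fedora-atomic-26 \
    --keypair mykey \
    --external-network public \
    --flavor m1.small \
    --master-flavor m1.small \
    --docker-storage-driver overlay

# Create a minimal one-master, one-node cluster from the template
openstack coe cluster create swarm-test \
    --cluster-template swarm-overlay-test \
    --master-count 1 \
    --node-count 1

# Watch the Heat-driven provisioning; a healthy run ends in CREATE_COMPLETE
openstack coe cluster show swarm-test -c status
```

[If login to a swarm-mode node works while the Kubernetes cluster stays unreachable, that isolates the problem to the kube-specific cloud-init configuration, which is the comparison strigazi was after.]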
*** absubram has joined #openstack-containers17:59
*** mgoddard has quit IRC18:09
*** jmlowe has joined #openstack-containers18:12
*** penick has joined #openstack-containers18:12
*** dardelean_ has joined #openstack-containers18:13
*** penick has quit IRC18:14
*** AlexeyAbashkin has joined #openstack-containers18:14
*** lpetrut has joined #openstack-containers18:16
*** penick has joined #openstack-containers18:16
*** AlexeyAbashkin has quit IRC18:18
*** ricolin_ has quit IRC18:21
*** dardelean_ has quit IRC18:27
*** dardelean_ has joined #openstack-containers18:27
*** dardelean_ has quit IRC18:29
*** lpetrut has quit IRC18:32
*** AlexeyAbashkin has joined #openstack-containers18:34
*** AlexeyAbashkin has quit IRC18:38
*** lpetrut has joined #openstack-containers18:48
*** adisky__ has quit IRC18:50
*** armaan has quit IRC18:59
*** armaan has joined #openstack-containers19:00
*** armaan has quit IRC19:03
*** armaan has joined #openstack-containers19:04
*** dardelean_ has joined #openstack-containers19:08
<DimGR> we are many ("eimaste poloi", translated from Greek)19:11
*** mgoddard has joined #openstack-containers19:12
*** dardelean_ has quit IRC19:13
*** flwang1 has quit IRC19:16
*** salmankhan has quit IRC19:34
*** mdnadeem has joined #openstack-containers19:44
*** fragatina has quit IRC19:44
*** masuberu has joined #openstack-containers19:48
*** masber has quit IRC19:51
*** dave-mccowan has quit IRC19:56
*** mgoddard has quit IRC19:58
*** fragatina has joined #openstack-containers20:12
-openstackstatus- NOTICE: The zuul scheduler has been restarted after lengthy troubleshooting for a memory consumption issue; earlier changes have been reenqueued but if you notice jobs not running for a new or approved change you may want to leave a recheck comment or a new approval vote20:14
*** vijaykc4 has quit IRC20:16
*** absubram has quit IRC20:21
*** lpetrut has quit IRC20:30
*** savvas has joined #openstack-containers20:33
*** openstackgerrit has quit IRC20:34
*** sahilsinha_ has quit IRC20:35
*** armaan has quit IRC20:35
*** armaan has joined #openstack-containers20:35
*** savvas_ has quit IRC20:35
*** mgariepy has quit IRC20:35
*** vkmc has quit IRC20:36
*** penick has quit IRC20:38
*** mgoddard has joined #openstack-containers20:39
*** sahilsinha has joined #openstack-containers20:39
*** oikiki has quit IRC20:40
*** mgariepy has joined #openstack-containers20:40
*** Oku_OS-away has quit IRC20:45
*** Oku_OS-away has joined #openstack-containers20:45
*** vkmc has joined #openstack-containers20:45
*** vkmc has quit IRC20:46
*** vkmc has joined #openstack-containers20:46
*** oikiki has joined #openstack-containers20:48
*** flwang1 has joined #openstack-containers20:58
*** mgoddard has quit IRC20:59
*** dave-mcc_ has joined #openstack-containers20:59
*** linkmark has quit IRC21:20
*** penick has joined #openstack-containers21:23
*** jappleii__ has joined #openstack-containers21:23
*** jappleii__ has quit IRC21:24
*** jappleii__ has joined #openstack-containers21:25
*** jappleii__ has quit IRC21:26
*** jappleii__ has joined #openstack-containers21:27
*** jappleii__ has quit IRC21:27
*** jappleii__ has joined #openstack-containers21:28
*** chhavi__ has joined #openstack-containers21:30
*** chhavi__ has quit IRC21:34
*** AlexeyAbashkin has joined #openstack-containers21:34
*** AlexeyAbashkin has quit IRC21:38
*** AlexeyAbashkin has joined #openstack-containers21:57
*** rcernin has joined #openstack-containers21:58
*** AlexeyAbashkin has quit IRC22:02
*** flwang has quit IRC22:06
*** flwang has joined #openstack-containers22:19
*** jmlowe has quit IRC22:55
*** armaan has quit IRC23:03
*** armaan has joined #openstack-containers23:04
*** absubram has joined #openstack-containers23:05
*** marst has quit IRC23:08
*** penick has quit IRC23:12
*** jmlowe has joined #openstack-containers23:21
*** chandankumar has quit IRC23:22
*** chandankumar has joined #openstack-containers23:23
*** chandankumar has quit IRC23:29
*** jmlowe has quit IRC23:33
*** chandankumar has joined #openstack-containers23:34
*** penick has joined #openstack-containers23:36
*** jmlowe has joined #openstack-containers23:53
*** absubram has quit IRC23:54
*** oikiki has quit IRC23:55

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!