Thursday, 2018-06-21

<flwang> strigazi: you probably mentioned that before, but i forgot  00:04
<itlinux> not sure that's the case, imdigitaljim, but I will check  02:44
<mago_> Hi team, strigazi, FYI I found a way of "protecting" my public clusters using the policy file, as suggested by georgem1.  07:22
<mago_> I replaced the default value ("clustertemplate:delete": "rule:deny_cluster_user") with "clustertemplate:delete": "rule:admin_or_owner". This seems to be working very well.  07:23
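
For reference, the override mago_ describes is a one-line change to Magnum's policy file (typically /etc/magnum/policy.json; the exact path is deployment-specific):

    {
        "clustertemplate:delete": "rule:admin_or_owner"
    }
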
<mago_> I have another feature request already :D (don't know if this is the right place to talk about it, or if I should also/instead fill in some feature request form; please let me know)  07:45
<mago_> It would be great if we could specify two different images: one for the k8s master and one for the nodes  07:46
<mago_> In the context of enabling GPUs in our k8s cluster: as we configure GPU passthrough on the image directly, booting up a k8s master node with this image "wastes" a GPU  07:47
<strigazi> mago_: https://storyboard.openstack.org/#!/project/1032 Why do you need a special image?  10:11
<strigazi> mago_: We should be able to do it without a special one; we are working on it  10:11
<strigazi> mago_: see: https://storyboard.openstack.org/#!/story/2002576  10:11
<strigazi> mago_: for the CT use case, please create a story in https://storyboard.openstack.org/#!/project/1032  10:12
<openstackgerrit> Merged openstack/magnum stable/queens: Strip signed certificate  https://review.openstack.org/574167  10:42
<mago_> strigazi, you're totally right, I made a stupid confusion. I'm working with olivier and I'm using the images he built. I just forgot for a moment that we use flavors with 'pci_passthrough' properties to attach the GPU card. Sorry about that ... must have forgotten my coffee this morning.  10:49
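
For context, flavor-based GPU passthrough is usually configured on the Nova side with a PCI alias; a minimal sketch (the alias name "gpu" and the flavor name are illustrative and must match the PCI alias defined in nova.conf):

    openstack flavor set g1.xlarge --property "pci_passthrough:alias"="gpu:1"
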
<flwang1> strigazi: meeting in mins?  16:56
<strigazi> flwang1: yes  16:58
<strigazi> #startmeeting containers  16:59
<openstack> Meeting started Thu Jun 21 16:59:24 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.  16:59
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  16:59
*** openstack changes topic to " (Meeting topic: containers)"  16:59
<openstack> The meeting name has been set to 'containers'  16:59
<strigazi> #topic Roll Call  16:59
*** openstack changes topic to "Roll Call (Meeting topic: containers)"  16:59
<strigazi> o/  16:59
<imdigitaljim> o/  16:59
<cbrumm> o/  16:59
<flwang1> o/  17:00
<strigazi> Thanks for joining the meeting imdigitaljim cbrumm flwang1  17:00
<strigazi> #topic Announcements  17:00
*** openstack changes topic to "Announcements (Meeting topic: containers)"  17:00
<jslater> o/  17:01
<ARussell> o/  17:01
<strigazi> We can have this meeting for 3-4 weeks to see if it works for people; then we decide whether to have two meetings, alternate, or change to this hour on Thursdays or Tuesdays  17:02
<strigazi> hello jslater and ARussell  17:02
<strigazi> #topic Blueprints/Bugs/Ideas  17:02
*** openstack changes topic to "Blueprints/Bugs/Ideas (Meeting topic: containers)"  17:02
<flwang1> jslater: ARussell: are you also from blizzard?  17:02
<cbrumm> Yes, they both are.  17:03
<jslater> correct  17:03
<flwang1> wow, big team here  17:03
<cbrumm> Jim and jslater are two of my engineers. ARussell is my PM  17:03
*** colin_ has joined #openstack-containers  17:03
*** colin_ is now known as colin-  17:03
<cbrumm> and colin who just joined  17:03
<strigazi> Usually in this section of the meeting we report what we are working on and ask for help/feedback  17:03
<strigazi> cbrumm: excellent :)  17:04
<colin-> sorry for being late  17:04
<strigazi> colin-: no worries  17:04
<imdigitaljim> we're working on changes with Magnum to enable cloud-controller-manager to work with OpenStack.  17:04
<flwang1> good to see all you folks  17:04
<strigazi> imdigitaljim: cool, we have a story, just a moment  17:05
<imdigitaljim> so far we've had to add back the kube-proxy to the master, which should maybe be optional rather than simply removed  17:05
<strigazi> #link https://storyboard.openstack.org/#!/story/1762743  17:05
<strigazi> imdigitaljim: yes, we went that direction because for almost two years we didn't have any use case for proxy and kubelet on the master nodes  17:06
<strigazi> We can put them back.  17:06
<imdigitaljim> we've also explored the idea of adding another k8s feature-gate (labels) to have post-cluster-creation scripts/manifests for more customization  17:07
<strigazi> Add extra software deployments?  17:08
<flwang1> imdigitaljim: gerrit gate?  17:08
<imdigitaljim> we have a number of different customers with varying requirements, and adding it to magnum would be inefficient/ineffective  17:09
<strigazi> We investigated this option for some time but we had only software configs available  17:09
<strigazi> imdigitaljim: so at CERN, we do this already but we patch magnum  17:09
<cbrumm> is that patch in your public gitlab?  17:10
<strigazi> we add extra labels to enable/disable CERN-specific features  17:10
<flwang1> strigazi: if we can host those scripts/yaml somewhere (on github), this could be doable?  17:10
<imdigitaljim> but simply being able to do --labels extra_manifests="mycode.sh"  17:10
<strigazi> can you see this? https://gitlab.cern.ch/cloud-infrastructure/magnum  17:10
<imdigitaljim> yeah we have access here  17:10
<cbrumm> Yes, thank you  17:10
<strigazi> imdigitaljim: not like this, but we want what you just mentioned too  17:11
<imdigitaljim> yeah, we're not set on that design, just the idea of having the feature  17:11
<strigazi> for example we added cephfs this week https://gitlab.cern.ch/cloud-infrastructure/magnum/commit/42d35211c540c9b86eb91fd1c042505dbddbfcef  17:12
<imdigitaljim> why not have a label that links to a single file with additional parameters to source?  17:12
<strigazi> imdigitaljim: flwang1: I think the extra script is better saved in the magnum DB, or as a config option  17:12
<imdigitaljim> so you can add n parameters on a single label  17:13
<flwang1> strigazi: i've also realized that using a label for each new feature config is hard to manage sometimes  17:13
<strigazi> imdigitaljim: we can have a generic field, yes  17:14
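
To make the idea concrete: Magnum labels are free-form key/value pairs, so the proposed hook might be requested like this at cluster creation (a sketch only; extra_manifests is the hypothetical label name imdigitaljim suggested, and nothing consumes it in Magnum today):

    openstack coe cluster create my-cluster \
        --cluster-template k8s_template \
        --labels extra_manifests="https://example.com/post-create.sh"
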
<colin-> how has ceph been doing since you added it? curious about the reliability of attaches/detaches/unexpected losses  17:14
<strigazi> one more json  17:14
<strigazi> colin-: scale tests are planned for july  17:14
<strigazi> we will try to crash our testing cephfs, basically :)  17:15
<strigazi> We just finished the implementation of cephfs-csi, so we can't tell for cephfs yet. Ceph, however, is going very well  17:16
<imdigitaljim> does anyone here use octavia with magnum currently?  17:16
<strigazi> flwang1: ^^  17:16
<cbrumm> good to hear, we have a storage team that might be looking at ceph in the future  17:16
<strigazi> we don't have octavia  17:16
<imdigitaljim> we've got plans to hit it (hopefully by the end of july ;D)  17:17
<cbrumm> since it has support in the cloud controller already, I'm not sure there will be much magnum work for that  17:18
<strigazi> so we have 10 PB out of 15 PB in use on two ceph clusters  17:18
<cbrumm> might have to just add a switch for octavia vs neutron  17:18
<flwang1> strigazi: we're using octavia  17:19
<flwang1> we just released it yesterday in one region  17:19
<colin-> curious, is CERN using another LoadBalancer provider integrated with the in-tree Kubernetes/OpenStack libraries? Perhaps a hardware device+driver?  17:19
<imdigitaljim> flwang1: good to know :)  17:19
<flwang1> now we're going to upgrade heat, and then deploy magnum  17:19
<imdigitaljim> flwang1: i might need to pick your brain later  17:19
<strigazi> colin-: we use traefik and an in-house DNS lbaas  17:19
<colin-> strigazi: got it, thanks  17:19
<strigazi> colin-: so some nodes are labeled to run traefik  17:20
<strigazi> and these nodes use the same alias  17:20
<imdigitaljim> what are the plans to upgrade the heat templates in magnum to something like queens, and do some refactoring?  17:20
<strigazi> the alias is created manually, not very dynamic  17:20
<colin-> sounds interesting :)  17:20
<strigazi> imdigitaljim: what do you mean by upgrade, what changes do you need?  17:21
<imdigitaljim> we could use later heat template versions and the new features that come with them  17:21
<imdigitaljim> to clean up some workflows  17:21
<imdigitaljim> https://docs.openstack.org/heat/latest/template_guide/hot_spec.html  17:22
<strigazi> let's start the discussion to see which version works  17:23
<strigazi> in queens, is only filter new?  17:23
<imdigitaljim> mostly right now we use Juno  17:23
<imdigitaljim> 2014-10-16  17:23
<imdigitaljim> so maybe we shouldn't be 7 versions behind  17:24
<imdigitaljim> there's a lot more than filter ;)  17:24
<strigazi> The minimum we should go to is Newton, which is not EOL  17:24
<flwang1> i think newton is eol now  17:24
<strigazi> ok, I see the diff  17:25
<flwang1> can we have an etherpad to track these ideas?  17:25
<strigazi> pike has quite a lot of things  17:25
<imdigitaljim> i was hoping for pike  17:26
<flwang1> so that we can plan this work in Stein  17:26
<imdigitaljim> (pike at minimum)  17:26
<strigazi> flwang1: we have the log of the meeting; we can add stories  17:26
<imdigitaljim> for queens+  17:26
<flwang1> ok, fair enough  17:26
<strigazi> #action add story to upgrade the heat-template version to pike or queens  17:27
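
For reference, the version bump under discussion is the heat_template_version declaration at the top of each Heat template; Heat accepts either dates or release names (a sketch):

    # what the magnum templates use today (Juno-era HOT, per the discussion above):
    heat_template_version: 2014-10-16
    # what is being proposed:
    heat_template_version: pike    # or: queens
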
<strigazi> #action add story for post-create deployments  17:27
<strigazi> we have a story for the cloud-provider already  17:28
<flwang1> thanks strigazi  17:28
<imdigitaljim> kube-proxy/kubelet optionally included for the master  17:28
<imdigitaljim> ?  17:28
<strigazi> imdigitaljim: let's add them back  17:28
<flwang1> imdigitaljim: if you're using calico, we have already added kubelet on the master  17:28
<strigazi> flwang1: thoughts?  17:28
<flwang1> strigazi: i'm OK with that ;)  17:28
<imdigitaljim> all CNIs should be separated from these  17:29
<flwang1> imdigitaljim: what's the network driver you're using, btw?  17:29
<imdigitaljim> we use calico, so we didn't have the kubelet problem  17:29
<cbrumm> Calico  17:29
<strigazi> #action add story to put kubelet/proxy back on the master nodes  17:29
<flwang1> imdigitaljim: good to know you're using calico :D  17:29
<imdigitaljim> https://review.openstack.org/#/c/576623/  17:29
<imdigitaljim> strigazi: that was where the confusion came from here  17:30
<imdigitaljim> we should generally decouple master components from the CNI  17:30
<strigazi> flannel doesn't need the proxy on the master, but if the master node wants to ping a service it is required  17:31
<flwang1> strigazi: so let's get them back?  17:32
<strigazi> We can have a script per CNI; flwang1 started it  17:32
<strigazi> flwang1: yes  17:32
<imdigitaljim> yeah, that sounds great  17:32
<strigazi> I'll add the story  17:32
<flwang1> strigazi: then my next question is, should it be backported to queens?  17:32
<flwang1> given we removed them in queens  17:33
<strigazi> flwang1: we can  17:33
<strigazi> We can ask on the ML  17:33
<strigazi> but it also depends on what we need  17:34
<strigazi> cern, catalyst, blizzard, other users  17:34
<strigazi> I think it's good; the use cases for it have increased quite a lot  17:34
<flwang1> cool cool  17:35
<imdigitaljim> sidenote: we also have someone here working on magnum support for gophercloud  17:35
<strigazi> cern doesn't have a big stake in it, we cherry-pick what we need anyway  17:36
<strigazi> imdigitaljim: \o/  17:36
<flwang1> imdigitaljim: i already have a patch for gophercloud to support magnum  17:36
<strigazi> imdigitaljim: to add the client?  17:36
<flwang1> just doing some testing now  17:36
<flwang1> strigazi: yes  17:36
<flwang1> so that users can use terraform to talk to the magnum api  17:36
<strigazi> all I hear is autoscaling  17:36
<flwang1> and i'm also working on supporting magnum in ansible/shade  17:37
<flwang1> imdigitaljim: ^  17:37
<cbrumm> flwang1: I'll make sure our guy contacts you about gophercloud  17:37
<flwang1> cbrumm: no problem  17:37
<strigazi> please cc me on github  17:37
<cbrumm> ansible/shade is good, we will likely need terraform too  17:37
<flwang1> strigazi: for what? the gophercloud work?  17:38
<strigazi> yes, do you have PRs?  17:38
<flwang1> not yet, only in my local  17:38
<imdigitaljim> yeah, let me know and i can connect you with our engineer on gophercloud  17:38
<flwang1> i will probably start submitting today or early next week  17:38
<cbrumm> good to know, we don't want to duplicate effort  17:39
<flwang1> #link magnum support in gophercloud https://github.com/gophercloud/gophercloud/issues/1003  17:39
<imdigitaljim> the user on that link, JackKuei, is working with us  17:40
<cbrumm> Yeah, JackKuei is our guy at the bottom of that  17:40
<colin-> flwang1: for your Calico deployment, are you using the iptables or ipvs method?  17:40
<strigazi> flwang1: you talked to him already  17:41
<flwang1> imdigitaljim: nice, i will interlock with him  17:41
<imdigitaljim> oh, just a curious question: do you use systemd or cgroupfs, and why?  17:41
<mordred> flwang1: ++ magnum support in ansible/shade  17:41
<imdigitaljim> related here  17:41
<imdigitaljim> https://review.openstack.org/#/c/571583/  17:41
<flwang1> mordred: hah, good to see you here  17:42
* mordred lurks everywhere  17:42
<strigazi> imdigitaljim: systemd, because it's the default for docker in fedora  17:42
<flwang1> mordred: i'm still waiting for your answer about the upgrade  17:42
<flwang1> colin-: can we talk offline? i need to check the config  17:42
<strigazi> imdigitaljim: we will move to cgroupfs with this option  17:42
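
For context, the systemd-vs-cgroupfs choice is the cgroup driver that kubelet and the container runtime must agree on; roughly (a sketch; flag and file path as commonly used in this era, verify against your versions):

    # kubelet flag:
    --cgroup-driver=systemd          # or: cgroupfs
    # docker side, /etc/docker/daemon.json:
    { "exec-opts": ["native.cgroupdriver=systemd"] }
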
<colin-> for sure  17:42
<imdigitaljim> i haven't heard from brtknr, but i'd like to get this pushed through  17:43
<mordred> flwang1: oh - I responded in channel this morning ... but maybe the irc gods were not friendly to us  17:43
<mordred> flwang1: http://eavesdrop.openstack.org/irclogs/%23openstack-sdks/latest.log.html#t2018-06-21T11:42:24  17:43
<flwang1> mordred: oh, ok, i'm on my laptop now, i will check on my workstation later  17:43
<flwang1> thank you!  17:44
<mordred> flwang1: sweet  17:44
<imdigitaljim> strigazi: would i be able to complete this story on his behalf? https://review.openstack.org/#/c/571583/  17:44
<flwang1> mordred: i will start the coding work soon, pls review it  17:44
<strigazi> I don't think brtknr will mind; you can push a patchset, I tested PS3  17:45
<flwang1> imdigitaljim: i would suggest holding off a bit, because i think brtknr is still active  17:45
<strigazi> you can leave a comment in gerrit  17:45
<strigazi> the patch is in a good state  17:45
<imdigitaljim> flwang1: sounds good  17:46
<imdigitaljim> the patch would technically break ironic  17:47
<imdigitaljim> i was just gonna add a compatibility portion to it  17:47
<strigazi> Do you use it? the ironic driver?  17:47
<imdigitaljim> no, i just didn't know if that was a concern  17:48
<strigazi> We use the fedora-atomic one with ironic  17:48
<imdigitaljim> or would/could we consider reducing some of our support for other drivers?  17:48
<flwang1> colin-: re your calico question, we're using iptables  17:48
<strigazi> We want to remove the ironic driver  17:48
<flwang1> colin-: IIRC, the ipvs mode is still in beta?  17:48
<imdigitaljim> strigazi: +1  17:48
<strigazi> folks, let's start wrapping up and plan for next week  17:49
<colin-> flwang1: good to know, thanks for checking!  17:49
<cbrumm> Thank you all for having this meeting at this time  17:49
<imdigitaljim> well, great meeting! thanks for accommodating us with this meeting time, we really appreciate it  17:50
<flwang1> mordred: read your comments, sounds like a good idea, i will go in that direction, thanks a lot  17:50
<strigazi> cbrumm: thank you for joining, it was great to see you all  17:50
<colin-> nice chatting with you folks, will idle here more often  17:50
<colin-> nice to be back on irc  17:50
<strigazi> colin-: :)  17:51
<flwang1> good to see you guys  17:51
<flwang1> it's 5:51 AM here, very dark  17:51
<flwang1> waves from NZ  17:51
<mordred> flwang1: woot!  17:51
<imdigitaljim> o/  17:51
<cbrumm> enjoy your friday flwang1  17:51
<mordred> flwang1: it's also probably a better idea to focus on the copy of the shade code in the openstacksdk repo ...  17:52
<strigazi> flwang1 is a hero  17:52
<flwang1> strigazi: i need a medal  17:52
<mordred> flwang1: the ansible modules use it now - and I'm hoping to work soon on making shade use the code in sdk - so less churn on your part  17:52
* mordred hands flwang1 a medal  17:52
<strigazi> ok, I think it is better to summarize the meeting in an email  17:52
<strigazi> there's quite some content  17:52
<flwang1> mordred: so do you mean implement the code in openstacksdk instead of in shade? sorry if it's a newbie question  17:53
<strigazi> flwang1: definitely, I can bring you a CERN helmet in Berlin :)  17:53
<flwang1> strigazi: logged  17:54
<flwang1> and beer  17:54
<strigazi> blizzard folks, you should pass by here and pick up yours ;)  17:55
<imdigitaljim> :D  17:55
<strigazi> flwang1: deutsche beer  17:55
<flwang1> can I get a logo of blizzard?  17:55
<strigazi> last action:  17:55
<strigazi> #action strigazi to summarize the meeting in a ML mail  17:56
<imdigitaljim> flwang1: you mean like this? https://bit.ly/2MdWCOn  17:57
<flwang1> thank you! haha  17:57
<strigazi> Let's end the meeting then :) Thank you all!  17:58
<strigazi> #endmeeting  17:58
*** openstack changes topic to "OpenStack Containers Team"17:58
openstackMeeting ended Thu Jun 21 17:58:28 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-06-21-16.59.html17:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-06-21-16.59.txt17:58
openstackLog:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-06-21-16.59.log.html17:58
* strigazi will be back in ~12hrs  18:00
<flwang1> strigazi: ttyl  18:00
<strigazi> flwang1: see you, thanks again for being here this early  18:01
<flwang1> strigazi: no problem  18:01
<georgem1> last update on my failed attempts to deploy Magnum on Pike and Ubuntu 16.04: the cluster creation now fails with "CREATE aborted (Task create from ResourceGroup "kube_masters" Stack "k8s_cluster-hzojxlidkpas" [85ec8154-d607-45da-a290-570339dc21f4] Timed out)"  18:37
<georgem1> I changed the "auth_uri" in [keystone_authtoken] to point to the public endpoint, and "/etc/sysconfig/heat-params" inside the kube-master has a reachable address for the WAIT_CURL; I tested it and it gets a 200 OK  18:38
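
For comparison, the section georgem1 is tuning would look roughly like this in magnum.conf (a sketch for a Pike-era deployment; the hostnames are placeholders):

    [keystone_authtoken]
    auth_uri = https://keystone.public.example.com:5000/v3
    auth_url = http://keystone.internal.example.com:35357/v3
    auth_type = password
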
<imdigitaljim> you can't do k8s on ubuntu with magnum  18:49
<imdigitaljim> georgem1: ^  18:49
<imdigitaljim> unless you're making your own driver  18:49
<imdigitaljim> mesos has an ubuntu driver  18:51
<imdigitaljim> k8s has a fedora atomic driver  18:51
<imdigitaljim> and swarm has a fedora atomic driver  18:52
<georgem1> imdigitaljim: I created the cluster template using this command: magnum cluster-template-create --name k8s_template --image-id fedora-atomic --keypair-id magnum_key --external-network-id ext-net --dns-nameserver 8.8.8.8 --flavor-id c2.large --master-flavor-id c2.small --docker-volume-size 5 --network-driver flannel --coe kubernetes --floating-ip-enabled  18:52
<imdigitaljim> "Magnum on Pike and Ubuntu 16.04"  18:53
<imdigitaljim> your image is Ubuntu?  18:53
<flwang1> imdigitaljim: he installed devstack on ubuntu  18:53
<georgem1> no, the OpenStack runs on Ubuntu  18:53
<georgem1> not devstack, this is a production cluster  18:54
<georgem1> 2600 cores  18:54
<imdigitaljim> sure  18:54
<flwang1> georgem1: ok, cool  18:54
<georgem1> cloud-init inside the master node failed first at "Failed running /var/lib/cloud/instance/scripts/part-004"  18:55
<imdigitaljim> oh ok, i misunderstood, sorry!  18:55
<flwang1> you can run it manually to verify  18:55
<georgem1> which should create "/etc/kubernetes/kube_openstack_config", and indeed it didn't exist  18:55
<imdigitaljim> flwang1: we really should put -x on all the scripts  18:55
<georgem1> but if I run the script "/var/lib/cloud/instance/scripts/part-004" manually, "/etc/kubernetes/kube_openstack_config" gets created just fine  18:55
<flwang1> imdigitaljim: and enough logs for those scripts  18:56
<georgem1> so I'm not sure what the problem is  18:56
<georgem1> do you have a working magnum config that I can look at? sanitized?  18:56
<flwang1> georgem1: then i would suggest changing the scripts before you deploy any cluster  18:56
<flwang1> add -x and enough logs for whatever you want to check  18:56
<flwang1> such as the user, group, etc  18:56
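
A sketch of the debugging approach being suggested, run on the kube-master node (the part-004 path comes from the error above; the second path is cloud-init's standard output capture):

    # re-run the failing cloud-init fragment with tracing enabled:
    sudo bash -x /var/lib/cloud/instance/scripts/part-004
    # inspect what happened on the first boot:
    sudo less /var/log/cloud-init-output.log
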
<flwang1> georgem1: currently, devstack is very stable; you can install a devstack env to compare  18:57
<georgem1> I have a feeling something is wrong in my config; the only instructions I found and used were for Ocata  18:57
<georgem1> sure, I can do that  18:58
<georgem1> thanks for the tip  18:58
<georgem1> do you have a local.conf for it? with all the dependencies?  18:58
<flwang1> georgem1: this is a typical magnum.conf installed by devstack: http://paste.openstack.org/show/724052/  18:58
<flwang1> georgem1: you want a local.conf?  18:59
<georgem1> sure, if you have one handy  18:59
<georgem1> the magnum.conf you sent should at least confirm whether my config is sane, thanks for it  19:00
<flwang1> georgem1: http://paste.openstack.org/show/724053/  19:00
<flwang1> no problem, happy to help  19:01
