Tuesday, 2018-09-25

*** ricolin has joined #openstack-containers00:17
*** ricolin has quit IRC00:19
*** ricolin has joined #openstack-containers00:19
*** ricolin has quit IRC00:21
*** ricolin has joined #openstack-containers00:21
*** ricolin has quit IRC00:27
*** ricolin has joined #openstack-containers00:27
*** ricolin has quit IRC00:33
*** ricolin has joined #openstack-containers00:33
*** ricolin has quit IRC00:35
*** ricolin has joined #openstack-containers00:35
*** hongbin has joined #openstack-containers01:00
*** imdigitaljim has quit IRC01:10
openstackgerritmelissaml proposed openstack/magnum master: Update the URL in HACKING.rst  https://review.openstack.org/60488001:58
*** Bhujay has joined #openstack-containers02:29
*** dave-mccowan has quit IRC02:29
*** Bhujay has quit IRC02:30
*** Bhujay has joined #openstack-containers02:30
openstackgerritFeilong Wang proposed openstack/magnum-ui master: Support api-version when building client  https://review.openstack.org/60495502:39
*** ramishra has joined #openstack-containers03:18
*** edisonxiang has joined #openstack-containers03:21
*** ykarel|away has joined #openstack-containers03:39
*** ricolin has quit IRC03:41
*** ricolin has joined #openstack-containers03:41
*** Bhujay has quit IRC03:41
*** ricolin has quit IRC03:43
*** ricolin has joined #openstack-containers03:43
*** ykarel|away is now known as ykarel03:49
*** udesale has joined #openstack-containers03:57
*** rcernin has quit IRC04:24
*** hongbin has quit IRC04:33
*** Bhujay has joined #openstack-containers04:37
*** ricolin has quit IRC04:37
*** rcernin has joined #openstack-containers04:38
openstackgerritFeilong Wang proposed openstack/magnum-ui master: Fix cluster update  https://review.openstack.org/60496604:48
openstackgerritFeilong Wang proposed openstack/magnum-ui master: Display master_flavor_id and flavor_id when updating cluster  https://review.openstack.org/60496704:48
*** ykarel has quit IRC04:51
*** ykarel has joined #openstack-containers05:07
*** rcernin_ has joined #openstack-containers05:17
*** lbragstad has quit IRC05:18
*** rcernin has quit IRC05:19
*** pcaruana has joined #openstack-containers05:41
*** ricolin has joined #openstack-containers05:52
*** belmoreira has joined #openstack-containers05:58
*** Bhujay has quit IRC06:00
*** Bhujay has joined #openstack-containers06:14
*** Bhujay has quit IRC06:15
*** Bhujay has joined #openstack-containers06:16
*** Bhujay has quit IRC06:32
*** Bhujay has joined #openstack-containers06:33
*** lpetrut has joined #openstack-containers06:54
*** strigazi has joined #openstack-containers06:55
*** lpetrut has quit IRC06:56
*** strigazi has quit IRC06:56
*** lpetrut has joined #openstack-containers06:56
*** strigazi has joined #openstack-containers06:56
*** rcernin_ has quit IRC07:05
*** ykarel is now known as ykarel|lunch07:25
*** mattgo has joined #openstack-containers07:28
*** serlex has joined #openstack-containers07:45
*** eyalb has joined #openstack-containers07:49
openstackgerritMerged openstack/magnum stable/rocky: Use existing templates for cluster-update command  https://review.openstack.org/60486408:10
openstackgerritPanFengyun proposed openstack/magnum master: Specify storage driver in /etc/sysconfig/docker-storage  https://review.openstack.org/60500208:26
*** ttsiouts has joined #openstack-containers08:27
brtknrstrigazi: hmm how often does magnum poll heat for status update on queens? my heat status is CREATE_COMPLETE but magnum still reports CREATE_IN_PROGRESS08:30
strigazibrtknr: https://github.com/openstack/magnum/blob/master/magnum/service/periodic.py#L11208:31
strigazibrtknr: see https://bugs.launchpad.net/magnum/+bug/1746510 and https://github.com/openstack/magnum/commit/cf8468394027ffb1db420a72312b6a9f59b7838108:32
openstackLaunchpad bug 1746510 in Magnum "Kubernetes client is incompatible with evenlet and breaks the periodic tasks" [Undecided,In progress] - Assigned to Feilong Wang (flwang)08:32
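[For context: the periodic task linked above polls Heat and mirrors the stack status back onto the cluster record. A minimal sketch of that idea using python-heatclient — not Magnum's actual code; "cluster" here is a hypothetical object with stack_id/status/save:

    from heatclient import client as heat_client

    def sync_cluster_status(session, cluster):
        # Fetch the Heat stack backing this cluster and mirror its status.
        heat = heat_client.Client('1', session=session)
        stack = heat.stacks.get(cluster.stack_id)
        if stack.stack_status != cluster.status:
            cluster.status = stack.stack_status
            cluster.save()

If this loop stops running — e.g. the conductor's periodic job crashes, as in the bug above — Heat can reach CREATE_COMPLETE while Magnum keeps reporting CREATE_IN_PROGRESS.]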
*** Dobroslaw has joined #openstack-containers08:35
*** ykarel|lunch is now known as ykarel08:35
strigazibrtknr: have you checked the conductor? is it crashing?08:37
brtknrYes, the logs do not show anything anomalous08:37
strigazido you see requests from magnum to heat?08:38
strigaziin the heat api log08:38
brtknrI am not sure what the request looks like08:39
strigazibrtknr: are you using httpd?08:39
strigazilike this: /v1/c197dee4-64da-452a-9a96-a28d79ef4c38/stacks/d1b49000-3cd9-4adc-b354-b9304bec00d708:40
strigazior in master/rocky like this: /v1/c197dee4-64da-452a-9a96-a28d79ef4c38/stacks/d1b49000-3cd9-4adc-b354-b9304bec00d7?resolve_outputs=False08:40
brtknr2018-09-25 09:39:00.425 25 INFO eventlet.wsgi.server [-] 10.60.253.1,192.168.7.2 - - [25/Sep/2018 09:39:00] "GET / HTTP/1.1" 300 307 0.00087908:41
brtknr2018-09-25 09:39:14.869 25 INFO eventlet.wsgi.server [req-bf03b842-798a-4e63-b807-86a86381e333 - 5638e8577bc84379baba4bfb66177086-d524b080-58f6-480c-b9d8-3f9ddcb - 4811be4349784d5b9b89005228fbd4f1 4811be4349784d5b9b89005228fbd4f1] 10.60.253.13,192.168.7.2 - - [25/Sep/2018 09:39:14] "GET08:41
brtknr/v1/cfa75d82627a413886fd7ce20fd2813c/stacks/k8s-fa27-ww7gmvyso7dx-kube_masters-6amvdqkykrvd-0-z274zn264nr6/8ccefc66-d3a4-426b-8ac5-b4f2a6dbdc04/resources/kube-master/metadata HTTP/1.1" 200 56985 0.12219108:41
brtknr2018-09-25 09:39:30.424 27 INFO eventlet.wsgi.server [-] 10.60.253.1,192.168.7.2 - - [25/Sep/2018 09:39:30] "GET / HTTP/1.1" 300 307 0.00081808:41
strigaziif you restart the conductor and the status is updated it means that something is crashing the conductor08:41
brtknr2018-09-25 09:39:46.939 29 INFO eventlet.wsgi.server [req-803de1d8-4b21-49b3-ad52-feff9b5094aa - 5638e8577bc84379baba4bfb66177086-d524b080-58f6-480c-b9d8-3f9ddcb - 4811be4349784d5b9b89005228fbd4f1 4811be4349784d5b9b89005228fbd4f1] 10.60.253.13,192.168.7.2 - - [25/Sep/2018 09:39:46] "GET08:41
brtknr/v1/cfa75d82627a413886fd7ce20fd2813c/stacks/k8s-fa27-ww7gmvyso7dx-kube_masters-6amvdqkykrvd-0-z274zn264nr6/8ccefc66-d3a4-426b-8ac5-b4f2a6dbdc04/resources/kube-master/metadata HTTP/1.1" 200 56985 0.12620908:41
brtknr2018-09-25 09:40:00.427 28 INFO eventlet.wsgi.server [-] 10.60.253.1,192.168.7.2 - - [25/Sep/2018 09:40:00] "GET / HTTP/1.1" 300 307 0.00081208:41
brtknr2018-09-25 09:40:18.992 29 INFO eventlet.wsgi.server [req-d33c5837-2905-4142-8c40-8181ff8da89d - 5638e8577bc84379baba4bfb66177086-d524b080-58f6-480c-b9d8-3f9ddcb - 4811be4349784d5b9b89005228fbd4f1 4811be4349784d5b9b89005228fbd4f1] 10.60.253.13,192.168.7.2 - - [25/Sep/2018 09:40:18] "GET08:41
brtknr/v1/cfa75d82627a413886fd7ce20fd2813c/stacks/k8s-fa27-ww7gmvyso7dx-kube_masters-6amvdqkykrvd-0-z274zn264nr6/8ccefc66-d3a4-426b-8ac5-b4f2a6dbdc04/resources/kube-master/metadata HTTP/1.1" 200 56985 0.29949408:41
brtknr2018-09-25 09:40:30.457 26 INFO eventlet.wsgi.server [-] 10.60.253.1,192.168.7.2 - - [25/Sep/2018 09:40:30] "GET / HTTP/1.1" 300 307 0.00086008:41
brtknr2018-09-25 09:40:51.028 27 INFO eventlet.wsgi.server [req-8b088766-f80e-4a11-be67-e0dc9f2ec6d5 - 5638e8577bc84379baba4bfb66177086-d524b080-58f6-480c-b9d8-3f9ddcb - 4811be4349784d5b9b89005228fbd4f1 4811be4349784d5b9b89005228fbd4f1] 10.60.253.13,192.168.7.2 - - [25/Sep/2018 09:40:51] "GET08:41
brtknr/v1/cfa75d82627a413886fd7ce20fd2813c/stacks/k8s-fa27-ww7gmvyso7dx-kube_masters-6amvdqkykrvd-0-z274zn264nr6/8ccefc66-d3a4-426b-8ac5-b4f2a6dbdc04/resources/kube-master/metadata HTTP/1.1" 200 56985 0.13048408:41
brtknr2018-09-25 09:41:00.485 29 INFO eventlet.wsgi.server [-] 10.60.253.1,192.168.7.2 - - [25/Sep/2018 09:41:00] "GET / HTTP/1.1" 300 307 0.00084808:41
brtknri can see queens style requests, wasnt sure whether these requests are coming from magnum08:42
strigazibrtknr: maybe paste.openstack.org was a better option08:42
strigazibrtknr: the /metadata is not what we look for08:42
brtknrstrigazi: sorry! http://paste.openstack.org/show/730687/08:42
strigazibrtknr: these are coming from the heat agent08:44
strigaziif you restart the conductor and the status is updated it means that something is crashing the conductor08:44
*** mannamne has joined #openstack-containers08:46
*** ttsiouts has quit IRC08:46
brtknryes, restarting the conductor fixed the issue08:47
*** ttsiouts has joined #openstack-containers08:47
*** ttsiouts has quit IRC08:51
strigazibrtknr: so disable send_cluster_metrics https://github.com/openstack/magnum/commit/cf8468394027ffb1db420a72312b6a9f59b7838109:04
brtknrdoes this apply to magnum/queens as well? our magnum deployment has been working fine, it was only yesterday this happened, we havent upgraded to rocky yet09:06
strigazibrtknr: I don't know why it wasn't happening before. queens was affected, it is in the reno.09:07
strigazibrtknr: probably you changed something, kubernetes clusters are causing this09:07
*** ttsiouts has joined #openstack-containers09:07
*** ttsiouts has quit IRC09:10
*** ttsiouts has joined #openstack-containers09:10
brtknrso are you saying that swarm clusters shouldnt be affected?09:11
brtknrstrigazi: ^09:11
brtknrthat they ought to be reporting their statuses correctly?09:12
strigazibrtknr: no, if you have a k8s cluster in _COMPLETE then magnum will try to query the cluster for pods and due to the bug mentioned above the periodic job will crash09:12
strigazibrtknr: even if you create swarm cluster and you have any number of k8s clusters in _COMPLETE it will happen09:13
*** salmankhan has joined #openstack-containers09:15
*** janki has joined #openstack-containers09:20
brtknrso set set_cluster_metrics=False in /etc/magnum/magnum.conf?09:36
brtknrstrigazi: ^09:36
strigaziin [drivers] send_cluster_metrics=False09:37
strigazibrtknr: ^^09:37
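[That is, in /etc/magnum/magnum.conf on the conductor host — a sketch of exactly the option named above; restart the conductor after changing it:

    [drivers]
    send_cluster_metrics = False
]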
*** suanand has joined #openstack-containers09:49
*** janki has quit IRC10:07
*** ricolin has quit IRC10:19
*** Bhujay has quit IRC10:31
*** Bhujay has joined #openstack-containers10:32
*** ttsiouts has quit IRC10:35
*** rcernin_ has joined #openstack-containers10:44
*** eyalb has quit IRC10:48
*** janki has joined #openstack-containers10:50
*** rcernin_ has quit IRC10:52
*** ttsiouts has joined #openstack-containers11:04
*** janki has quit IRC11:07
*** pcaruana has quit IRC11:15
*** udesale has quit IRC11:17
*** mattgo has quit IRC11:35
*** eyalb has joined #openstack-containers11:52
*** mattgo has joined #openstack-containers12:02
*** ttsiouts has quit IRC12:09
*** ttsiouts has joined #openstack-containers12:21
*** Bhujay has quit IRC12:31
*** Bhujay has joined #openstack-containers12:32
*** Bhujay has quit IRC12:33
*** Bhujay has joined #openstack-containers12:33
*** lpetrut has quit IRC12:33
*** lpetrut has joined #openstack-containers12:36
*** ramishra has quit IRC12:50
*** ramishra has joined #openstack-containers12:54
*** lbragstad has joined #openstack-containers12:59
*** ricolin has joined #openstack-containers13:00
*** belmoreira has quit IRC13:05
*** suanand has quit IRC13:08
*** lbragstad has quit IRC13:09
*** ttsiouts has quit IRC13:27
*** belmoreira has joined #openstack-containers13:29
*** hongbin has joined #openstack-containers13:57
*** ttsiouts has joined #openstack-containers14:05
*** eyalb has quit IRC14:10
*** ykarel is now known as ykarel|away14:57
*** Bhujay has quit IRC15:03
*** lpetrut has quit IRC15:04
*** mattgo has quit IRC15:07
*** ttsiouts has quit IRC15:08
*** ttsiouts has joined #openstack-containers15:10
*** udesale has joined #openstack-containers15:12
*** serlex has quit IRC15:17
*** ykarel|away has quit IRC15:22
brtknrstrigazi: that worked like a treat, thanks :)15:38
*** dave-mccowan has joined #openstack-containers15:38
*** ttsiouts has quit IRC15:40
*** ttsiouts has joined #openstack-containers15:43
*** belmoreira has quit IRC15:52
*** ykarel|away has joined #openstack-containers15:59
*** ykarel|away is now known as ykarel15:59
*** pcaruana has joined #openstack-containers16:07
*** dave-mccowan has quit IRC16:13
*** ttsiouts has quit IRC16:20
*** ttsiouts has joined #openstack-containers16:20
*** ttsiouts has quit IRC16:25
*** udesale has quit IRC16:27
*** ykarel is now known as ykarel|away17:01
*** ramishra has quit IRC17:16
*** edisonxiang has quit IRC17:30
*** salmankhan has quit IRC17:33
*** ykarel has joined #openstack-containers17:49
*** ykarel|away has quit IRC17:49
*** mannamne has quit IRC18:10
*** eandersson has joined #openstack-containers18:45
*** ykarel has quit IRC19:05
*** salmankhan has joined #openstack-containers19:48
*** salmankhan has quit IRC19:53
*** janki has joined #openstack-containers20:12
*** ricolin has quit IRC20:29
*** pcaruana has quit IRC20:43
*** ttsiouts has joined #openstack-containers20:53
strigazi#startmeeting containers21:00
openstackMeeting started Tue Sep 25 21:00:07 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: containers)"21:00
openstackThe meeting name has been set to 'containers'21:00
strigazi#topic Roll Call21:00
*** openstack changes topic to "Roll Call (Meeting topic: containers)"21:00
strigazio/21:00
ttsioutso/21:00
cbrummo/21:00
colin-hello21:00
colin-jim is otw21:00
strigaziit seems that flwang is not here21:01
strigazicolin-: cool21:01
strigaziagenda:21:02
strigazi#link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-09-25_2100_UTC21:02
*** imdigitaljim has joined #openstack-containers21:02
imdigitaljimo/21:02
strigazi#topic Stories/Tasks21:02
*** openstack changes topic to "Stories/Tasks (Meeting topic: containers)"21:02
strigaziimdigitaljim: hello21:02
strigaziI have put  4 items in the agenda21:02
strigaziThe 1st one, Fix cluster update command, is merged in rocky: https://storyboard.openstack.org/#!/story/1722573 Patch in review: https://review.openstack.org/#/c/600806/ \o/21:03
cbrummvery nice21:03
strigazino more broken stacks cause one char changed in the templates21:04
strigaziAnd actually I want to mention the 4th one:21:04
strigaziscale cluster as admin or other user in the same project21:04
strigazi#link https://storyboard.openstack.org/#!/story/200264821:04
strigaziWe have discussed this before,21:05
strigaziand I think our only option is pass the public key as a string.21:05
strigaziplus the patch from imdigitaljim to not pass a keypair at all21:05
imdigitaljimyeah this story wont be an issue for us21:05
strigaziimdigitaljim: cbrumm you are not using keypairs at all21:05
strigazi?21:06
imdigitaljimcorrect21:06
strigazionly sssd?21:06
imdigitaljimyeah21:06
imdigitaljimkeypair is less secure as well21:06
imdigitaljimsince if anyone gets access to said key21:06
strigazidoes this make sense to go upstream?21:06
strigaziit is a ds right?21:06
imdigitaljimits fine to support it but we should consider the option for without21:06
strigaziwe could have a recipe with some common bits21:07
strigaziwithout sssd?21:07
imdigitaljimyeah that would be good, an option that works as you need it to and an option that will not worry about it at all for usages like sssd21:07
imdigitaljimyup21:07
imdigitaljimive noticed this issue occur in other cases too btw21:08
strigazilike?21:08
*** janki has quit IRC21:08
imdigitaljimnot with keys but just policy control flow21:08
strigazioh, right21:09
imdigitaljimwe have a current issue where admin/owner A creates cluster in tenant A for user B, the user B cannot create a config file (using CLI/API) for that cluster because they are neither admin/owner21:09
imdigitaljimand user B belongs to tenant A as well21:09
strigazithat is fixable in the policy file21:10
imdigitaljimwe would like any users of tenant A be able to generate a config for clusters of tenant A21:10
imdigitaljimnot in its current state21:10
imdigitaljimits an API enforcement issue where our issue sits21:10
strigaziwe have it, wihtout any other change21:10
strigazione sec21:10
imdigitaljimmaybe share the policy, perhaps we're missing something :D21:11
strigazi    "certificate:create": "rule:admin_or_owner or rule:cluster_user",21:12
strigazi    "certificate:get": "rule:admin_or_owner or rule:cluster_user",21:12
imdigitaljimwhat is your cluster_user rule21:12
strigazi    "admin_or_user": "is_admin:True or user_id:%(user_id)s",21:12
strigazi    "cluster_user": "user_id:%(trustee_user_id)s",21:12
imdigitaljimthats what we have21:12
strigazialso:     "admin_or_owner": "is_admin:True or project_id:%(project_id)s",21:13
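[Assembled, the /etc/magnum/policy.json fragment quoted above looks like this:

    {
        "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
        "admin_or_user": "is_admin:True or user_id:%(user_id)s",
        "cluster_user": "user_id:%(trustee_user_id)s",
        "certificate:create": "rule:admin_or_owner or rule:cluster_user",
        "certificate:get": "rule:admin_or_owner or rule:cluster_user"
    }
]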
*** canori02 has joined #openstack-containers21:13
canori02o/21:13
strigazihey canori0221:13
imdigitaljimyeah thats what we have, i think theres a condition that doesnt get met somewhere and it fails the policy21:14
imdigitaljimill have to find it, sorry it was a couple weeks ago21:14
strigaziimdigitaljim: that is our policy, works for brtknr too21:14
imdigitaljimyeah, id like for it to work too :)21:15
strigaziimdigitaljim: I'll double check in devstack too21:15
strigaziok, I have two more21:16
strigaziThis patch requires a first pass [k8s] Add vulnerability scanner https://review.openstack.org/#/c/59814221:16
strigaziit was done by an intern, in the past months at CERN21:16
strigaziit is a scanner to scan all images in a running cluster21:17
strigazicombined with a clair server21:17
strigaziYou can have a look and give some input21:17
imdigitaljimoh excellent21:17
*** canori02 has quit IRC21:18
strigaziThe first iteration works only for public images, in subsequent steps we can enhance it to work for private registries too21:18
imdigitaljimgreat!21:19
colin-yeah that could be really useful21:19
imdigitaljimlooks good on everything but ill have some comments for the shell file21:19
strigazinice :) The last item, from me and ttsiouts, is about nodegroups. Nodegroups patches: https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:magnum_nodegroups21:20
*** canori02 has joined #openstack-containers21:20
imdigitaljimyeah can we discuss that21:20
imdigitaljimim not sure what thats about/whats its purpose21:20
strigaziWe need to dig the spec and bring it up to date, but these patches are a kickstart21:20
ttsioutsimdigitaljim: I'm drafting a spec for this21:20
imdigitaljimi couldnt follow21:20
imdigitaljimoh ok great thanks21:20
ttsioutscool21:21
ttsioutsI'll try to have it upstream asap21:21
strigaziatm the clusters are homogeneous, one AZ one flavor21:21
imdigitaljimoh is it for cluster node groups21:21
imdigitaljimi understand21:21
colin-strigazi: is this to provide the option to support different types of minions in the cluster?21:22
colin-distinctly21:22
strigaziyes21:22
imdigitaljimi think i have some other thoughts too for the WIP21:22
colin-neat21:22
strigaziFrom our side,21:22
strigaziis to have minimum two groups of nodes21:23
strigazione for master one for minion21:23
strigaziand then add as you go, like in GKE21:23
strigaziin gke they call them nodepools21:23
strigaziwe don't have a strong opinion on the master nodegroups, but I think it is the most straightforward option atm21:24
strigaziimdigitaljim: do you have some quick input21:24
strigaziwe can take the details in the spec21:24
imdigitaljimyeah a couple questions21:25
imdigitaljimso, is this intended to be runtime nodegroups or determined at creation time?21:25
strigazithe first two nodegroups will be created at creation time and then the user will add more21:26
strigazilike now21:26
strigaziwhen you create a cluster21:26
strigazithe heat stack  has two resource groups, one for master one for minions21:26
strigazithis can be the minimum21:27
colin-could you add a nodegroup to a cluster that was created without it?21:27
colin-at a later time?21:27
strigazithen you call POST cluster/UUID/nodegroups and you add more21:27
colin-interesting21:27
imdigitaljimfor this design i was thinking something more clever with leveraging heat more21:28
imdigitaljimhttps://docs.openstack.org/heat/latest/template_guide/hot_spec.html21:28
strigazicolin-: it could be possible, but I'm not sure what the benefit is, IMO, for this use case21:28
imdigitaljimif we update the minimum heat we could have a repeat for the # of pools21:28
strigaziimdigitaljim: this is what we want to do ^^21:28
imdigitaljimso like 1-N pools, and provide the data through template (for now)21:28
strigaziimdigitaljim: not many stacks21:28
imdigitaljimpools/resourcegroups21:29
strigazi a shallow nested  stack21:29
imdigitaljimyeah21:29
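[Roughly, the existing templates already do this with one OS::Heat::ResourceGroup per role; a hypothetical HOT fragment to illustrate the idea — not Magnum's real template, which is more involved:

    resources:
      kube_minions:
        type: OS::Heat::ResourceGroup
        properties:
          count: {get_param: number_of_minions}
          resource_def:
            type: kubeminion.yaml

Nodegroups would generalize this from a fixed master/minion pair to 1-N pools.]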
imdigitaljimso where do all these controllers come into play21:29
imdigitaljimi dont see why these would be necessary to accomplish node pools21:30
strigazicolin-: for this use case we could have the concept of external groups or smth21:30
colin-ok21:30
strigaziimdigitaljim: in the end it would be one stack. But end users, who don't know about heat, need a way to express this21:31
strigaziimdigitaljim: we need a route in the api21:31
strigaziimdigitaljim: otherwise we need to do CRUD operations in a field or many fields in the cluster21:32
strigazihave a nodegroup field that describes those pools/groups21:33
imdigitaljimoh is this part for the feedback for api/cli/ on what exists?21:33
imdigitaljimfeedback/usage via cli/api?21:34
strigaziI think I got the question and I'll say yes21:34
strigazi:)21:34
imdigitaljimlet me sit on it a little longer21:34
imdigitaljimand maybe if you can answer those questions from ricardo21:35
strigaziok21:35
imdigitaljimbut if its that then i can better judge the PR :)21:35
imdigitaljimbut i do think i understand what these PR's are now21:35
strigazi:)21:35
imdigitaljimyeah21:36
imdigitaljimnow i see21:36
imdigitaljimcool beans21:36
imdigitaljimlooks about right21:36
imdigitaljimill keep following it21:36
imdigitaljimthanks for clarifying!21:36
strigazi:)21:36
strigazittsiouts++21:37
ttsiouts:)21:37
strigazioh, I would like to add two more things21:37
strigazione is, for imdigitaljim21:37
strigaziDo you have experience on rebooting cluster nodes?21:38
imdigitaljimyeah21:38
imdigitaljimsomewhat21:38
strigaziour experience is pretty unpleasant with flannel21:38
cbrummwe've played a lot with killing and creating minions, rebooting is generally fine too21:38
imdigitaljim^21:39
imdigitaljimand also killing LB's and recovering21:39
strigaziwith the current model of flannel, 30% of the nodes lose network21:39
strigaziI hope that the self-hosted flannel works better21:40
imdigitaljimyeah i feel like it would21:40
imdigitaljimi think you guys are doing the right thing switching to a self-hosted flannel imho21:40
strigazicbrumm: imdigitaljim your experience is with calico hosted on k8s, right?21:40
imdigitaljimor join us with calico21:40
cbrummyeah21:40
imdigitaljimyeah21:40
colin-did you guys already consider that strigazi ?21:41
imdigitaljimwe're using latest calico 3.3.921:41
colin-must have at some point21:41
cbrummcalico has "just worked" for us21:41
strigaziwe stuck with what we know, no other reason so far21:41
colin-understood21:41
imdigitaljimcbrumm+121:41
strigazibut we must give it a go21:42
colin-it's nice not to deal with any layer 2 matters i have to say21:42
strigaziwe also have tungsten waiting in the corner and we kind of wait for it21:42
colin-been a relief for me personally from an operator perspective to use calico only21:42
strigazicolin-: you use calico for vms too?21:42
cbrummcolin is with us21:43
colin-as much imdigitaljim and cbrumm  do :)21:43
strigazioh, right :)21:43
strigaziit is close to midnight here, sorry :)21:44
strigazithe last thing is for people interested in Fedora CoreOS21:45
strigaziI promised the FCOS team to try systemd-portable services for kubelet and dockerd/containerd21:46
strigaziBut I didn't have time so far, if anyone wants to help, is more than welcome21:46
strigaziI'm fetching the pointer21:46
cbrummnot sure we'll have time to try it out21:47
imdigitaljimnot sure we can aid with that yet but keep a finger on them for a minimal image ;)21:47
cbrummmight, but our timeline is tight21:47
strigazi#link https://github.com/systemd/systemd/blob/master/docs/PORTABLE_SERVICES.md21:47
imdigitaljimstrigazi: ill catch up on the literature21:48
strigaziThe goal is to run the kubelet as a portable systemd service21:48
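[From the linked doc, attaching and running a prebuilt portable image would look roughly like this — the image name is hypothetical:

    portablectl attach /var/lib/portables/kubelet_1.11.raw
    systemctl enable --now kubelet.service
]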
imdigitaljimoh i see21:49
strigaziI just wanted to share it with you21:49
imdigitaljimits super similar to the atomic install model already21:49
imdigitaljimyeah21:49
imdigitaljimill read up some more21:49
strigazimaybe canori01 is interested too21:49
strigaziimdigitaljim:  and should work in many distros (? or !)21:50
imdigitaljimyeah21:50
imdigitaljimits same pattern/benefits of containers21:50
imdigitaljimjust rebranded/ slightly different21:50
strigaziplus maintained by the systemd team21:51
cbrummI think this is the right thing to look into21:51
imdigitaljimi can see kubelet being done fairly easily21:51
imdigitaljimbut dockerd/containerd would be much more complicated21:51
cbrummWe'll all want to make sure it works well, but its the correct starting place21:52
strigaziimdigitaljim: would it though? we managed to run dockerd in a syscontainer already21:52
strigazilet's see21:52
imdigitaljimperhaps21:53
imdigitaljimmaybe im thinking of something more complicated21:53
imdigitaljimand not this context21:53
imdigitaljimbut ill check it out21:53
imdigitaljimdo you have the dockerd in a syscontainer?21:53
imdigitaljimdoes it look like the dind project?21:53
strigaziyes, for swarm, but we look to use it for k8s too21:53
strigaziimdigitaljim: no, not like dind21:54
strigaziimdigitaljim: https://gitlab.cern.ch/cloud/docker-ce-centos/21:54
imdigitaljimoh ok21:55
imdigitaljimcool21:55
imdigitaljimand this works for you already?21:55
strigaziyes21:56
strigazifor swarm for a year or so21:56
imdigitaljimi just personally dont have intimate knowledge of the dockerd requirements but if you've got it already it should be cake!21:56
strigazifor k8s we didn't put a lot of effort, but for some tests it was fine21:56
strigaziimdigitaljim:  the only corner case can be mounting weird dirs on the host21:57
imdigitaljimyeah21:57
imdigitaljimthats where my complexities were concerned21:57
strigaziimdigitaljim: our mount points are pretty much standard21:57
imdigitaljimweird dirs/weird mounts21:57
imdigitaljim/weird permissions21:58
colin-interesting idea, would be curious to see how it's implemented for k8s and how kubelet reacts21:58
strigaziimdigitaljim:  we have tested mounting cinder volumes too21:58
imdigitaljimanyways yeah we'll keep an eye on it and catch up21:59
strigaziimdigitaljim: colin- if dockerd and kubelet share the proper bind mounts it "Just Works"21:59
colin-nice22:00
colin-good to remember that does still happen in real life :)22:00
colin-(sometimes)22:00
strigazi:)22:00
imdigitaljim'proper' :P22:00
imdigitaljimis the complexity22:00
imdigitaljimbut yeah22:00
cbrummneed to go, bye everyone22:01
strigaziwe are an hour in22:01
strigazicbrumm: thanks22:01
strigazilet's wrap then22:01
strigaziThanks for joining the meeting everyone22:02
colin-ttyl!22:02
ttsioutsbye!22:02
strigazi#endmeeting22:02
*** openstack changes topic to "OpenStack Containers Team"22:02
openstackMeeting ended Tue Sep 25 22:02:36 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:02
openstackMinutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-09-25-21.00.html22:02
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-09-25-21.00.txt22:02
openstackLog:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-09-25-21.00.log.html22:02
imdigitaljimo.22:03
imdigitaljimo/22:03
colin-o-22:04
colin-o\22:04
*** ttsiouts has quit IRC22:05
*** ttsiouts has joined #openstack-containers22:06
strigaziI'm lost, o\ is closing o/ ?22:07
strigazi\m/ is always a good option too :)22:07
strigaziI need to sleep, have a nice day imdigitaljim colin-22:07
*** canori02 has quit IRC22:08
imdigitaljimnight!22:08
*** ttsiouts has quit IRC22:10
openstackgerritFeilong Wang proposed openstack/magnum-ui master: Display master_flavor_id and flavor_id when updating cluster  https://review.openstack.org/60496722:52
openstackgerritFeilong Wang proposed openstack/magnum-ui master: Fix cluster update  https://review.openstack.org/60496622:52
colin-out of curiosity, what network resources is everybody using in their clouds to provide floating ip resources?22:54
*** mannamne has joined #openstack-containers22:56
*** rcernin has joined #openstack-containers23:07
*** dave-mccowan has joined #openstack-containers23:18
*** hongbin has quit IRC23:27
*** mannamne has quit IRC23:33
openstackgerritFeilong Wang proposed openstack/magnum-tempest-plugin master: Support k8s testing  https://review.openstack.org/60432323:57
