Wednesday, 2019-10-09

*** goldyfruit_ has joined #openstack-containers00:03
*** goldyfruit_ has quit IRC00:56
guilhermespyeah I was going to do that brtknr but just saw the PR is in the gate already :)01:02
openstackgerritMerged openstack/magnum master: k8s_atomic: Run all syscontainer with podman  https://review.opendev.org/68574901:10
*** ykarel|away has joined #openstack-containers01:45
*** goldyfruit_ has joined #openstack-containers02:05
*** ricolin has joined #openstack-containers02:21
*** goldyfruit_ has quit IRC02:35
*** ramishra has joined #openstack-containers02:37
guilhermespbrtknr: do we expect that, with that change, we will keep compatibility with later k8s versions? https://github.com/openstack/magnum/commit/3674b3617a770bd71d09e23137ff96f90eb1241a02:55
guilhermespjust asking because we already have cluster templates set to deploy v1.14 and v1.15 and some existent clusters running02:55
guilhermespuowww just applied that commit03:38
guilhermespv1.16.0 created successfully, and faster than I usually see03:38
guilhermesp7 minutes against an average of 15 minutes03:39
*** ramishra has quit IRC03:43
*** udesale has joined #openstack-containers03:55
*** ykarel|away has quit IRC04:01
*** ykarel|away has joined #openstack-containers04:22
*** ykarel|away is now known as ykarel04:22
guilhermespand conformance ok https://github.com/cncf/k8s-conformance/pull/74204:50
*** ramishra has joined #openstack-containers04:59
guilhermespand yeah, don't take into account what I asked above about compatibility, it clearly works with older versions, as fast as the cluster deployment with v1.1604:59
*** dave-mccowan has quit IRC05:12
*** namrata has joined #openstack-containers06:39
*** pcaruana has joined #openstack-containers07:10
brtknrguilhermesp: sorry I was sleeping! Thanks for testing, good to hear it’s faster, it’s a single hyperkube image vs multiple docker images previously.... and they’re all published by google so must be good :) with regards to api compatibility, I assume we will need to make necessary adaptations in v1.17 but should be good for 1.1607:26
brtknrBecause they’re google images, i bet v1.16.1 is already available too07:26
*** namrata has quit IRC07:31
*** ivve has joined #openstack-containers07:41
*** ykarel is now known as ykarel|lunch07:44
*** ramishra_ has joined #openstack-containers08:27
*** ramishra has quit IRC08:29
*** ykarel|lunch is now known as ykarel08:34
brtknrflwang: strigazi: meeting in 20 mins?08:38
*** flwang1 has joined #openstack-containers08:43
strigaziyes08:43
flwang1hello08:43
brtknrhi both :)08:45
brtknrguilhermesp: dioguerra: you guys also there?08:45
brtknralso ttsiouts but he's offline on irc08:46
brtknrjakeyip: ^08:46
brtknrmight write an irc bot to notify all the people active in the channel in the past week08:47
brtknr:P08:47
flwang1:)08:51
jakeyipI'm here08:55
strigaziflwang1: you start the meeting?08:59
flwang1#startmeeting magnum08:59
openstackMeeting started Wed Oct  9 08:59:33 2019 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.08:59
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.08:59
*** openstack changes topic to " (Meeting topic: magnum)"08:59
openstackThe meeting name has been set to 'magnum'08:59
flwang1#topic roll call08:59
*** openstack changes topic to "roll call (Meeting topic: magnum)"08:59
flwang1o/08:59
strigazio/08:59
flwang1brtknr: ?09:01
brtknroo/09:01
flwang1#topic fcos driver09:02
*** openstack changes topic to "fcos driver (Meeting topic: magnum)"09:02
flwang1strigazi: do you wanna give us an update?09:02
strigazithe patch is in working state, I want to only add UTs09:02
jakeyipo/09:02
strigaziworks with 1.15.x and 1.16.x09:02
brtknrstrigazi: when I tested it yesterday, I was having issues with the $ssh_cmd09:03
strigaziThe only thing I want to check before merging is selinux09:03
strigazibrtknr: works for me09:03
strigazibrtknr: can you be more specific?09:03
brtknrwhy do we use $ssh_cmd in some places and not others09:03
strigaziis this the problem?09:04
brtknr+ ssh -F /srv/magnum/.ssh/config root@localhost openssl genrsa -out /etc/kubernetes/certs/kubelet.key09:04
brtknrssh: connect to host 127.0.0.1 port 22: Connection refused09:04
strigazisshd down?09:04
brtknrthis is in the worker node after which it fails to join the cluster09:04
brtknrwhen I tried the command manually, it worked09:05
brtknralso when i restarted the heat-container-agent manually, it joined the cluster09:05
strigaziI have After=sshd.service09:05
strigaziI can add requires too09:05
brtknrwondering if heat-container-agent is racing against sshd09:05
strigazitransient error? I never saw this after adding After=sshd.service09:06
flwang1strigazi: it would be nice to have After=sshd.service09:06
strigaziwe do have it!!!09:06
strigazihttps://review.opendev.org/#/c/678458/8/magnum/drivers/k8s_fedora_coreos_v1/templates/user_data.json@7509:06
flwang1then good09:06
flwang1i haven't got time to test your new patch09:07
strigazihm, no, it's only in configure-agent-env.service09:07
strigazibut09:07
strigaziheat-container-agent.service has After=network-online.target configure-agent-env.service09:08
strigaziand configure-agent-env.service has After=sshd.service09:08
flwang1so it would work09:08
strigaziI'll add Requires as well09:08
strigazianyway, that is a detail09:09
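The unit relationships being discussed look roughly like this (a sketch based on the fragments quoted above; After= is what is confirmed in the linked review, Requires= is the proposed addition):

```ini
# configure-agent-env.service -- sketch of the ordering discussed above.
# After= only orders startup; the proposed Requires= additionally makes
# this unit fail if sshd.service cannot be started.
[Unit]
After=sshd.service
Requires=sshd.service

# heat-container-agent.service then picks up the chain transitively:
# [Unit]
# After=network-online.target configure-agent-env.service
```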
flwang1strigazi: re UT, i'm happy with having limited UT for this new driver given the gate is closing09:09
strigazithe important parts are two09:09
strigazione, the ignition patch is not working :(09:10
flwang1???09:10
flwang1your mean my heat patch?09:10
strigaziI missed testing the last PS09:10
strigaziyes09:10
flwang1why?09:10
flwang1what's the problem?09:10
strigazihttps://review.opendev.org/#/c/683723/5..13/heat/engine/clients/os/nova.py@45409:10
flwang1is it a regression issue?09:10
strigazi/var/lib/os-collect-config/local-data must be a directory09:11
strigazinot a file09:11
strigazisee https://github.com/openstack/os-collect-config/blob/master/os_collect_config/local.py#L7009:11
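The constraint in the linked collector boils down to roughly this (a simplified sketch, not the actual os-collect-config code; `check_local_data` is a hypothetical name):

```python
import os

# Simplified sketch of the local collector constraint discussed above:
# the configured path must be a directory containing metadata files;
# a plain file at that path makes collection fail.
def check_local_data(path="/var/lib/os-collect-config/local-data"):
    if not os.path.isdir(path):
        raise ValueError("%s is not a directory" % path)
    return sorted(os.listdir(path))
```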
flwang1shit09:11
flwang1do you have a fix ?09:12
strigazinot a proper one09:12
strigaziif we write a file in /var/lib/os-collect-config/local-data our agent doesn't work still09:12
*** ttsiouts has joined #openstack-containers09:13
strigazibut at least os-collect-config is not throwing an exception09:13
ttsioutso/ sorry I was late..09:13
strigazimy fix was to just copy the file to /var/lib/cloud/data/cfn-init-data09:14
strigaziI don't know why the local collector is not working09:14
flwang1can we propose a patch to still use the two 'old' file paths?09:14
strigaziyes, a million times09:14
strigazior the heat team can help us using the local collector which will require us to patch our agent09:16
strigaziso action is to patch heat again with the old files?09:16
flwang1ok, i will take this09:16
flwang1i will try the current way first09:17
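The stopgap mentioned above (writing the metadata to the old cloud-init path as well) could look like this (a hedged sketch; `write_cfn_init_data` is a hypothetical name, not actual Heat code):

```python
import json
import os

# Hedged sketch of the stopgap discussed above: in addition to the new
# os-collect-config location, write the metadata to the old cloud-init
# path (/var/lib/cloud/data/cfn-init-data) so the agent keeps working.
def write_cfn_init_data(metadata, base="/var/lib/cloud/data"):
    os.makedirs(base, exist_ok=True)
    path = os.path.join(base, "cfn-init-data")
    with open(path, "w") as f:
        json.dump(metadata, f)
    return path
```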
strigaziFYI, FCOS team is happy to patch ignition as well09:17
strigaziI'm testing that too09:17
strigazithey wrote the patch already09:17
flwang1ok, let's do it in parallel09:17
flwang1just in case09:17
strigaziyeap09:17
strigaziand the second part for fcos driver is:09:18
strigaziwill we enable selinux? conformance passes (I think), I'll post the result to be sure09:18
flwang1if we can enable selinux, why not?09:18
strigaziI don't know how it will work with the storage plugins09:19
strigaziflwang1: you can test cinder if you use it09:19
strigazinot sure if cinder works with 1.1609:20
flwang1strigazi: you mean test cinder with 1.16 with selinux enabled? or are they 2 different cases09:20
strigazione09:21
strigazitest cinder with 1.16 with selinux enabled09:21
strigazidoes it work with 1.15?09:21
flwang1iirc, in 1.16, k8s won't use the build-in cinder driver, right?09:21
strigaziexcellent :)09:22
strigaziso nothing to test?09:22
strigaziwill we use csi?09:22
flwang1i don't know, i need to confirm with CPO team09:22
flwang1i will let you guys know09:22
strigaziwe get off-topic, but leave a comment in the review for cinder09:23
strigaziwe go with selinux if conformance passes.09:23
strigazithat's it for me09:23
flwang1cool, thanks09:23
strigazinext? NGs?09:25
flwang1ttsiouts: ng?09:26
ttsioutsflwang1: strigazi: sure09:26
flwang1i can see there are still several patches for NG, are we still going to get them in train?09:26
strigaziwe need them, they are fixes really09:27
ttsioutsflwang1: yes09:27
ttsioutsflwang1: the NG upgrade is the main thing I wanted to talk about09:27
flwang1ok09:27
ttsioutsthe WIP patch https://review.opendev.org/#/c/686733/ does what it is supposed to do09:28
ttsioutsit upgrades the given NG09:29
ttsioutsthe user has to be really careful as to what labels to use09:30
flwang1ttsiouts: ok, user can upgrade a specific ng with the current upgrade command, right?09:30
ttsioutsflwang1: exactly09:30
ttsioutsflwang1: by defining --nodegroup <nodegroup>09:30
flwang1what do you mean 'what labels to use'?09:30
ttsioutsflwang1: for example if the availability zone label is in the cluster template, and it is different than the one set in the NG09:31
ttsioutsflwang1: it will cause the nodes of the NG to be rebuilt09:32
brtknrttsiouts: i thought all other labels get ignored apart from kube_tag?09:32
strigazithe heat_container_agent_tag can be destructive too09:32
flwang1brtknr: it should be, that's why i asked09:32
ttsioutshmm.. I tried to remove the labels that could cause such things here: https://review.opendev.org/#/c/686733/1/magnum/drivers/k8s_fedora_atomic_v1/driver.py@12109:33
ttsioutsflwang1: brtknr: I am not sure they are ignored. But I can test again09:34
flwang1hmm... i'd like to test as well to understand it better09:34
strigazithe happy path scenarios work great09:35
ttsioutsstrigazi: yes09:35
strigazicorner cases like changing one of the labels ttsiouts linked in the patch can be destructive09:35
strigazior a bit rebuildy09:35
ttsioutsyes this is why I wanted to raise this here.09:36
ttsioutsso we are on the same page09:36
flwang1ok09:36
flwang1thanks09:36
flwang1anything else?09:38
brtknrI would rather prevent upgrade taking place if the specific subset of labels do not match09:38
strigaziand we need these for train09:38
brtknre.g. force those specific labels to match09:38
brtknrttsiouts: ^09:38
strigazifor U yes09:38
strigazifor T a bit late?09:38
ttsioutsbrtknr: the logic is that the non-default NGs get upgraded to match the cluster09:40
ttsioutsbrtknr: there is a validation in the api that checks that the user provided the same CT as the cluster has (only for non-default NGs)09:41
brtknrstrigazi: Yes, I am happy to develop it further later... perhaps add accompanying notes for bits that are "hacks" for this release... we don't want code to silently ignore...09:41
ttsioutsbrtknr: If we have one CT the labels cannot match to all cluster NGs09:42
brtknrttsiouts: i meant, matching label during upgrades09:43
brtknryou have CT-A-v1 which has AZ-A09:43
brtknrthen you want that to upgrade nodegroup from CT-A-v1 to CT-A-v2 which has AZ-B09:44
brtknrI'd rather the upgrade wasn't allowed in this situation09:44
brtknras it might be error on part of the user...09:44
strigazithis doesn't make sense09:44
strigaziI have ng-1-az-A and ng-1-az-B09:45
strigaziI want to upgrade both09:45
strigazithe CT in the cluster can be only one.09:45
strigazithe rule above would never allow me to upgrade one of the two NGs09:46
ttsioutsbrtknr: check here: https://review.opendev.org/#/c/686733/1/magnum/api/controllers/v1/cluster_actions.py@15309:47
strigazifor U we will put it in the NG spec with details.09:47
strigazifor T we can say that only kube_tag will be taken into account09:47
strigazisounds reasonable?09:48
brtknrim happy with only kube_tag for T09:48
strigaziso we add only kube_tag in https://review.opendev.org/#/c/686733 ?09:49
strigazittsiouts: brtknr flwang1 ^^09:49
flwang1i'm ok09:49
ttsioutsthis is the safest for now09:49
strigaziand safest09:49
brtknryes, does it still require stripping things out?09:49
brtknror would it be better to only select kube_tag during upgrade?09:50
ttsioutsbrtknr: we could use the code that checks the validity of the versions and pass only the kube_tag09:51
brtknrit might be better to use "upgradeable labels" rather than "skip these labels"09:51
brtknrfor clarity09:51
brtknrwe can expand the list of "upgradeable labels" as we are able to upgrade more things09:52
flwang1we're running out time09:52
brtknrdoes that make sense?09:52
strigazilet's keep it simple for T09:52
strigazipass only kube_tag, end of story09:53
strigaziadd it in reno too, and we are clear09:53
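The Train behaviour agreed above amounts to an allowlist like the following (illustrative sketch, not actual magnum code; `UPGRADEABLE_LABELS` and `merge_labels` are hypothetical names):

```python
# Illustrative sketch of the decision above: during a nodegroup upgrade,
# only allowlisted labels (just kube_tag for Train) are taken from the
# new cluster template; all other labels keep their current values, so
# nodes are not rebuilt over e.g. a changed availability zone.
UPGRADEABLE_LABELS = {"kube_tag"}

def merge_labels(current_labels, new_template_labels):
    merged = dict(current_labels)
    for key in UPGRADEABLE_LABELS & set(new_template_labels):
        merged[key] = new_template_labels[key]
    return merged
```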
brtknrsounds good09:53
brtknrlets start backporting things09:54
flwang1#topic backport for train09:54
*** openstack changes topic to "backport for train (Meeting topic: magnum)"09:54
brtknrbfv+ng+podman+fcos09:54
strigazibasically current master plus ng and fcos?09:55
brtknryes09:55
strigaziflwang1: sounds good?09:56
flwang1yep, good for me09:57
strigazianything else for the topic/meeting?09:58
flwang1i'm good09:58
flwang1actually, i'm sleepy ;)09:58
brtknrthats all09:58
strigazion time09:58
flwang1cool, thank you, guys09:59
flwang1#endmeeting09:59
*** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"09:59
openstackMeeting ended Wed Oct  9 09:59:32 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)09:59
openstackMinutes:        http://eavesdrop.openstack.org/meetings/magnum/2019/magnum.2019-10-09-08.59.html09:59
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/magnum/2019/magnum.2019-10-09-08.59.txt09:59
openstackLog:            http://eavesdrop.openstack.org/meetings/magnum/2019/magnum.2019-10-09-08.59.log.html09:59
brtknrstrigazi: are you planning to fix fcos flannel for T?10:00
flwang1brtknr: he did i think10:00
flwang1see the new patchset10:00
strigazibrtknr: I did, plus selinux support10:01
strigazibasically run flannel as root10:01
flwang1i'm off now, ttyl10:01
* strigazi goes for lunch10:01
brtknrflwang1: sleep well!10:01
*** ttsiouts has quit IRC10:11
*** lpetrut has joined #openstack-containers10:39
*** rcernin has quit IRC10:48
*** udesale has quit IRC11:22
*** ramishra_ has quit IRC11:30
*** ttsiouts has joined #openstack-containers11:32
*** dave-mccowan has joined #openstack-containers11:49
*** ramishra has joined #openstack-containers11:49
*** goldyfruit_ has joined #openstack-containers12:12
*** goldyfruit___ has joined #openstack-containers12:14
*** goldyfruit_ has quit IRC12:17
*** goldyfruit___ has quit IRC12:19
*** ttsiouts has quit IRC12:24
*** ttsiouts has joined #openstack-containers12:25
*** ttsiouts_ has joined #openstack-containers12:28
*** ttsiouts has quit IRC12:28
guilhermespah sorry brtknr I'm here now :)13:03
*** lsimngar has joined #openstack-containers13:06
*** lsimngar has quit IRC13:08
*** goldyfruit___ has joined #openstack-containers13:28
*** goldyfruit___ has quit IRC13:29
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: Optimize cluster list api  https://review.opendev.org/68755613:30
*** pcaruana has quit IRC13:31
*** goldyfruit has joined #openstack-containers13:32
*** pcaruana has joined #openstack-containers13:33
*** ianychoi has quit IRC13:41
*** udesale has joined #openstack-containers13:45
*** sapd1_ has joined #openstack-containers13:53
*** johanssone has quit IRC13:56
*** ivve has quit IRC13:56
*** sapd1 has quit IRC13:56
*** ioni has quit IRC13:56
*** _ioni has joined #openstack-containers13:56
*** ivve has joined #openstack-containers13:56
*** openstackstatus has quit IRC13:58
*** johanssone has joined #openstack-containers13:58
*** _ioni is now known as ioni13:59
*** kgz has quit IRC14:00
*** ioni has left #openstack-containers14:00
*** ioni has joined #openstack-containers14:00
*** kgz has joined #openstack-containers14:07
*** lpetrut has quit IRC14:09
brtknrstrigazi: strange, I can create v1.16.1 clusters but not v1.16.014:10
brtknrfor coreos14:10
brtknrjust repeating in case it was a fluke14:10
*** udesale has quit IRC14:41
strigazithis can't be14:48
strigazibrtknr: try with calico, not flannel, just in case14:48
*** udesale has joined #openstack-containers14:54
brtknrstrigazi: https://seashells.io/p/7H5RTJKM15:03
brtknrstrigazi: okay second time, calico works fine15:03
*** ivve has quit IRC15:03
brtknrflannel has pods in kube-system namespace stuck in ContainerCreating phase15:03
*** lsimngar_ has joined #openstack-containers15:04
brtknrand this is what I see when I describe them https://seashells.io/p/skmduwdt15:04
brtknrI suspect I'm not using the latest PS15:04
brtknreven though I am...15:04
strigazibrtknr: it is selinux now, before it was kernel module.15:04
brtknrI'm testing sha 20e0803ddf6edb4f7b7419b0cf7d12e5e6614b8b15:05
strigaziI mentioned it this morning, I haven't fixed selinux yet15:05
strigaziI want to find the proper way15:05
strigaziI can send you a patch with something that works15:05
strigazione sec15:05
brtknrstrigazi: Oh right, I thought selinux was fixed, my misunderstanding then15:07
*** udesale has quit IRC15:14
*** lsimngar_ has quit IRC15:16
*** ramishra has quit IRC15:21
*** lpetrut has joined #openstack-containers15:21
*** ryn_eq has joined #openstack-containers15:22
*** ykarel is now known as ykarel|afk15:26
strigazibrtknr: the first flannel issue was fixed. Now there is a second one15:26
openstackgerritSpyros Trigazis proposed openstack/magnum master: [WIP] Support Fedora CoreOS 30  https://review.opendev.org/67845815:29
strigazibrtknr: this should work ^^ if I don't have any typos15:29
strigazibrtknr: Ill try to make it work without running as root.15:30
* strigazi is going home15:30
*** ttsiouts_ has quit IRC15:33
*** ttsiouts has joined #openstack-containers15:33
*** lpetrut has quit IRC15:35
*** ttsiouts has quit IRC15:37
*** openstackgerrit has quit IRC15:52
*** ykarel|afk is now known as ykarel16:35
*** ramishra has joined #openstack-containers16:37
strigazibrtknr: have you tried what I sent you? it works fine16:53
strigazibrtknr: have you tried this? https://review.opendev.org/#/c/678458/9/magnum/drivers/common/templates/kubernetes/fragments/flannel-service.sh16:53
brtknrstrigazi: i have no doubts it will work, as it is an allow-everything policy...16:54
strigazibrtknr: even before it had NET_ADMIN so not much changed...16:56
strigazibrtknr: you said in atomic it works. I want to point out that in atomic selinux is in permissive.16:58
brtknrstrigazi: Hmm I didnt realise16:59
brtknrstrigazi: I’m updating my dev stack env to 18.04, will test your change when it’s ready17:05
strigaziubuntu 18.04?17:06
*** ramishra has quit IRC17:27
*** ykarel_ has joined #openstack-containers17:32
*** ykarel_ has quit IRC17:33
*** ykarel has quit IRC17:35
*** ricolin has quit IRC17:41
brtknrstrigazi: yes17:59
brtknrstrigazi: been using 16.04 so far18:00
strigazibrtknr: https://github.com/coreos/flannel/issues/120218:33
*** flwang1 has quit IRC18:46
*** pcaruana has quit IRC19:07
*** openstackgerrit has joined #openstack-containers19:53
openstackgerritSpyros Trigazis proposed openstack/magnum master: [WIP] Support Fedora CoreOS 30  https://review.opendev.org/67845819:53
strigazibrtknr: flwang for flannel this works, although spc_t means super privileged container19:54
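Requesting the spc_t domain for a single container looks like this in a Kubernetes manifest (a minimal sketch of the approach under discussion, not necessarily the exact change in the patch):

```yaml
# Minimal sketch: run only the flannel container under the spc_t SELinux
# type ("super privileged container", effectively unconfined) instead of
# putting the whole host into permissive mode.
spec:
  containers:
  - name: flannel
    securityContext:
      seLinuxOptions:
        type: spc_t
```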
*** pcaruana has joined #openstack-containers20:33
flwangbut the super privileged permission only for the flannel container, right?20:40
flwangstrigazi: ^20:40
*** pcaruana has quit IRC20:40
*** lpetrut has joined #openstack-containers20:41
flwangstrigazi: are you around?20:51
*** lpetrut has quit IRC20:52
*** lpetrut has joined #openstack-containers21:13
*** lpetrut has quit IRC21:30
*** goldyfruit has quit IRC21:32
*** goldyfruit has joined #openstack-containers21:38
brtknrflwang: strigazi: i suppose it's better than having selinux set to permissive21:56
*** rcernin has joined #openstack-containers22:14
brtknrstrigazi: nonetheless it’s working for me22:57
brtknrflwang: is it working for you?22:57
*** flwang has quit IRC22:57
*** openstackstatus has joined #openstack-containers23:18
*** ChanServ sets mode: +v openstackstatus23:18
*** goldyfruit has quit IRC23:42

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!