Thursday, 2019-02-14

00:13 *** hongbin has quit IRC
01:01 *** sapd1 has joined #openstack-containers
01:30 *** ricolin has joined #openstack-containers
02:07 *** hongbin has joined #openstack-containers
02:07 *** ArchiFleKs has quit IRC
02:22 *** ArchiFleKs has joined #openstack-containers
03:40 *** ykarel|away has joined #openstack-containers
03:48 *** ykarel|away is now known as ykarel
04:05 *** ricolin has quit IRC
04:09 *** ramishra has joined #openstack-containers
04:36 *** udesale has joined #openstack-containers
04:57 *** ykarel has quit IRC
05:02 *** ricolin has joined #openstack-containers
05:11 *** ykarel has joined #openstack-containers
05:23 *** suanand has joined #openstack-containers
05:44 *** hongbin has quit IRC
06:00 *** ramishra_ has joined #openstack-containers
06:01 *** ramishra has quit IRC
06:12 *** _fragatina has quit IRC
06:17 *** ramishra_ is now known as ramishra
06:22 *** _fragatina has joined #openstack-containers
06:37 *** _fragatina has quit IRC
06:48 *** _fragatina has joined #openstack-containers
07:11 *** janki has joined #openstack-containers
07:13 *** janki has quit IRC
07:13 *** janki has joined #openstack-containers
07:42 *** ykarel is now known as ykarel|lunch
07:57 *** _fragatina has quit IRC
08:08 *** ramishra has quit IRC
08:17 <brtknr> everyone: what approaches are people taking for configuring minions with multiple flavors?
08:18 *** ramishra has joined #openstack-containers
08:29 *** ykarel|lunch is now known as ykarel
08:51 <ricolin> strigazi, flwang: guys, what is the best way to delete a single node from a Magnum cluster?
09:04 *** ign0tus has joined #openstack-containers
09:30 *** Horrorcat has joined #openstack-containers
09:32 <strigazi> ricolin: apart from scaling down by one node?
09:34 *** FlorianFa has joined #openstack-containers
09:55 *** ign0tus has quit IRC
10:12 <Horrorcat> hi folks. I’m having trouble spawning a k8s cluster using Magnum. I’m using the following template: <http://paste.debian.net/hidden/f1e9611a/>. Magnum and Heat are on the Rocky release; everything else is Pike. I watched the logs of both Magnum and Heat during cluster creation and was not able to spot any errors, except for the occasional database reconnect.
10:12 <Horrorcat> The cluster spawns (CREATE_COMPLETE), but it is unusable because the minions have the node.cloudprovider.kubernetes.io/uninitialized taint.
10:12 <Horrorcat> From my research, this means that Magnum failed to do a step during initialisation. Is that correct?
10:13 <Horrorcat> If it is, where do I need to look to figure out *why* it didn’t do that step?
10:13 *** ign0tus has joined #openstack-containers
10:15 <Horrorcat> I also checked the cloud-init logs on both master and minion, didn’t find anything suspicious there either.
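
A quick way to confirm which taint is blocking scheduling (a sketch using standard kubectl; the node name is illustrative):

    # list every node together with its taints
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
    # or inspect a single minion
    kubectl describe node my-cluster-minion-0 | grep -A3 Taints
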
10:19 *** ArchiFleKs has quit IRC
10:29 *** sapd1 has quit IRC
10:30 *** ArchiFleKs has joined #openstack-containers
10:43 <Horrorcat> I also get `No resources found.` from `kubectl get pods --all-namespaces`
10:51 *** mkuf has quit IRC
10:58 *** udesale has quit IRC
11:24 *** sapd1 has joined #openstack-containers
11:48 *** janki has quit IRC
11:48 *** janki has joined #openstack-containers
11:51 *** mkuf has joined #openstack-containers
11:55 *** janki has quit IRC
11:55 *** janki has joined #openstack-containers
12:03 *** sapd1 has quit IRC
12:23 *** sapd1 has joined #openstack-containers
12:30 *** _fragatina has joined #openstack-containers
12:41 *** janki has quit IRC
12:45 *** udesale has joined #openstack-containers
13:05 *** suanand has quit IRC
13:08 *** sapd1 has quit IRC
13:36 *** dave-mccowan has joined #openstack-containers
13:54 *** jmlowe has quit IRC
14:15 <imdigitaljim> horrorcat
14:16 <imdigitaljim> that is a taint that gets removed by the cloud-controller-manager when it comes online
14:16 <imdigitaljim> maybe check that this came up okay?
14:18 *** ykarel is now known as ykarel|away
14:23 *** ykarel|away has quit IRC
14:25 *** jmlowe has joined #openstack-containers
14:40 *** ykarel|away has joined #openstack-containers
14:43 *** mrodriguez has joined #openstack-containers
14:45 *** ykarel|away is now known as ykarel
14:46 <Horrorcat> imdigitaljim, I’m not at work anymore, but thanks for your reply. So I was able to figure out that it works with CoreOS instead of Fedora 27 Atomic. How would I check if the cloud-controller-manager came up?
14:47 <imdigitaljim> you should be able to see it with kubectl get all --all-namespaces=true
14:47 <imdigitaljim> but it would likely show up in -n kube-system
14:48 <imdigitaljim> with the `all` it might show up as a Deployment or a DaemonSet (DS)
14:48 <imdigitaljim> i don't use either fedora or coreos, so i don't know the specifics, sorry =|
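
As a sketch of the check imdigitaljim suggests: list the kube-system workloads and follow the controller's logs. The label below is an assumption; the external OpenStack provider is commonly deployed as openstack-cloud-controller-manager, but the name varies by driver:

    kubectl -n kube-system get deployments,daemonsets,pods
    # follow the logs of whichever pod backs the controller (label is an assumption)
    kubectl -n kube-system logs -l k8s-app=openstack-cloud-controller-manager --tail=50
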
14:51 <Horrorcat> okay, yeah, that turns up empty, as I mentioned
14:51 <Horrorcat> ah, you said `get all`
14:51 <Horrorcat> that has some output, which I don’t have with me
15:11 <strigazi> imdigitaljim: can you provide a reproducer for this comment: "the ro file-system doesn't protect you if you're root, you can just mount -o remount,rw /anything"? people from fedora think it does protect you.
15:12 *** ign0tus has quit IRC
15:12 <strigazi> it protects you from the exploit, not if you are root on the host in general.
15:13 <strigazi> imdigitaljim: you would run mount -o remount,rw /(root)?
15:13 <strigazi> imdigitaljim: this doesn't work: "mount -o remount,rw /"
15:15 <imdigitaljim> you can do
15:15 <imdigitaljim> mount -o remount,rw /etc
15:15 <imdigitaljim> mount -o remount,rw /usr
15:15 <imdigitaljim> etc.
15:15 <imdigitaljim> maybe not on /
15:15 <imdigitaljim> but i did in fact do it to get around some on-host writing
15:16 <imdigitaljim> i know fedora had like 2 rw paths
15:16 <imdigitaljim> maybe /var and /etc?
15:16 <imdigitaljim> i don't remember
15:16 <imdigitaljim> but you can remount most directories
15:18 <imdigitaljim> i'm not sure in terms of this exploit, fwiw, but if you do acquire root the ro won't save you =]
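
For context, a minimal sketch of the remount being discussed (run as root on the host; per the thread, Fedora Atomic mounts /usr read-only while /var and /etc stay writable):

    # list mounts carrying the ro option
    findmnt -o TARGET,OPTIONS | grep -w ro
    # flip /usr to read-write, then back
    mount -o remount,rw /usr
    mount -o remount,ro /usr
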
15:22 *** udesale has quit IRC
15:30 *** itlinux has joined #openstack-containers
15:35 <strigazi> this CVE doesn't give you root access. It allows you to touch the runc binary
15:35 <strigazi> imdigitaljim: ^^
15:38 *** hongbin has joined #openstack-containers
15:40 <imdigitaljim> if you can overwrite a binary you can get a root shell =) my comments were just in the event that runc for FA27/28 was able to be hit despite the ro mount
15:48 <strigazi> imdigitaljim: can it be hit from inside an unprivileged container?
15:49 <imdigitaljim> i believe so
15:49 <strigazi> imdigitaljim: with selinux disabled. With selinux enforcing you cannot
15:49 <strigazi> imdigitaljim: reproducer?
15:50 <strigazi> imdigitaljim: fyi, https://github.com/kubernetes/autoscaler/pull/1690
15:50 <strigazi> imdigitaljim: the fun has started with people who want to sell their solution
15:51 <imdigitaljim> is https://gitlab.cern.ch/cloud-infrastructure/magnum/blob/master/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml#L590
15:51 <imdigitaljim> that accurate?
15:51 <imdigitaljim> do you run in permissive anyway?
15:51 <imdigitaljim> sell whose solution?
15:51 <imdigitaljim> for the autoscaler? or the exploit?
15:53 <strigazi> for the CVE: we run with selinux in permissive. Do you have a reproducer for fedora atomic with selinux in permissive?
15:53 <imdigitaljim> no, i haven't tried for it
15:53 <imdigitaljim> i believe they will release theirs in a few days
15:53 <imdigitaljim> on the 20th
15:53 <imdigitaljim> iirc
15:54 <imdigitaljim> you can try it then
15:54 <imdigitaljim> maybe?
15:54 <imdigitaljim> we don't use fedora at all now, so i'll probably not be able to test it out
15:54 <strigazi> for the CVE: per Red Hat, with selinux enforcing you are safe.
15:54 <imdigitaljim> but i can try to pass along the publicly available exploit
15:54 <imdigitaljim> if i see it
15:54 <imdigitaljim> do you run enforcing now?
15:54 <imdigitaljim> your ^ master shows it's disabled
15:54 <strigazi> 16:53 <strigazi> for the CVE: we run with selinux in permissive.
15:55 <imdigitaljim> oh ok, yeah, so it's not enabled
15:55 <imdigitaljim> well, 'not enforcing'
15:55 <strigazi> that is why i asked: do you have a reproducer for fedora atomic with selinux in permissive?
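
For reference, checking and switching the SELinux mode uses standard tooling (nothing Magnum-specific):

    getenforce              # prints Enforcing, Permissive, or Disabled
    sudo setenforce 1       # Enforcing until the next boot (only works if not Disabled)
    # persist across reboots by setting SELINUX=enforcing in /etc/selinux/config
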
15:55 *** itlinux has quit IRC
15:56 <imdigitaljim> oh yeah, i don't
15:56 <imdigitaljim> not much need to mess with atomic
15:56 *** itlinux has joined #openstack-containers
15:56 <imdigitaljim> just offering extra info
15:59 <imdigitaljim> and by the way, thanks for the CAS pull link, we'll get some comments on there
15:59 <strigazi> I'm trying to understand if the exploit is possible. From a container, you can NOT do: mount -o remount,rw /usr; you are in a different mount namespace.
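
The namespace point is easy to verify: a container reports a different mnt-namespace inode than the host (a sketch assuming Docker is available; the alpine image is just an example):

    readlink /proc/self/ns/mnt                           # host namespace id
    docker run --rm alpine readlink /proc/self/ns/mnt    # different id inside a container
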
15:59 <imdigitaljim> to clarify: the mount -o step can only happen after the exploit happens (if it can)
16:00 <imdigitaljim> the container breakout has to happen first
16:00 <imdigitaljim> idk the steps for that, and they said they provided 7 days of notice before releasing the exploit
16:00 <imdigitaljim> which expires in a few days
16:01 <strigazi> ok, makes sense. I think the ro fs protects from the exploit. I need an expert to confirm
16:01 <imdigitaljim> so maybe after that you can see if you're affected
16:01 <imdigitaljim> yeah, it might
16:01 <imdigitaljim> i just wanted to throw in some extra info to help the analysis
16:01 <strigazi> you can run: gitlab-registry.cern.ch/strigazi/containers/cve-2019-5736-poc
16:02 <imdigitaljim> do you get ro failures?
16:02 <strigazi> in ubuntu with an unpatched docker, I was able to touch runc
16:02 <strigazi> on atomic, I couldn't
16:02 <imdigitaljim> then it might be safe from this
16:03 <strigazi> code here: https://github.com/q3k/cve-2019-5736-poc
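
A hedged sketch of the test strigazi describes: checksum runc, run the PoC image, and check whether the binary changed afterwards. The /usr/bin/runc path is an assumption; it varies by distro:

    sha256sum /usr/bin/runc
    docker run --rm gitlab-registry.cern.ch/strigazi/containers/cve-2019-5736-poc
    sha256sum /usr/bin/runc   # a changed checksum means the host was hit
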
16:03 <imdigitaljim> i'm sure FA would backport/publish a runc update too
16:03 <imdigitaljim> ?
16:04 <strigazi> I think they have
16:05 <strigazi> in fa29
16:06 <strigazi> dims: any comments? shall we push for upstreaming this? https://github.com/kubernetes/autoscaler/pull/1690
16:07 *** itlinux has quit IRC
16:07 *** ramishra has quit IRC
16:09 *** hongbin has quit IRC
16:13 *** ArchiFleKs has quit IRC
16:14 *** ricolin_ has joined #openstack-containers
16:15 <dims> strigazi: looking
16:15 <dims> strigazi: awesome! yes please
16:16 *** ricolin has quit IRC
16:17 <strigazi> dims: Thanks, we would like to have an implementation for magnum. We don't want to block other implementations, of course.
16:17 <strigazi> dims: it would be nice if we could have it in the upstream repo
16:17 *** ArchiFleKs has joined #openstack-containers
16:18 <dims> strigazi: right, agreed
16:19 *** itlinux has joined #openstack-containers
16:24 *** hongbin has joined #openstack-containers
16:36 *** itlinux has quit IRC
16:38 *** itlinux has joined #openstack-containers
16:46 *** trident has quit IRC
17:03 *** _fragatina has quit IRC
17:06 *** ianychoi has joined #openstack-containers
17:06 *** _fragatina has joined #openstack-containers
17:29 *** hongbin has quit IRC
17:34 *** Florian has joined #openstack-containers
17:35 *** itlinux has quit IRC
17:54 *** ykarel is now known as ykarel|away
17:57 *** jmlowe has quit IRC
18:00 *** ykarel|away has quit IRC
18:02 <ricolin_> dims, strigazi: just posted my WIP in https://github.com/kubernetes/autoscaler/pull/1691; I guess we can move the follow-up work to 1690
18:19 *** ricolin_ has quit IRC
18:20 *** ricolin_ has joined #openstack-containers
18:25 *** ricolin_ has quit IRC
18:44 *** henriqueof has joined #openstack-containers
18:47 *** dims has quit IRC
19:15 *** henriqueof has quit IRC
19:15 *** henriqueof has joined #openstack-containers
19:18 *** henriqueof has quit IRC
19:19 *** henriqueof has joined #openstack-containers
19:42 *** dave-mccowan has quit IRC
20:02 *** _fragatina has quit IRC
20:07 *** jmlowe has joined #openstack-containers
20:09 *** Florian has quit IRC
20:12 *** flwang1 has left #openstack-containers
20:35 *** dave-mccowan has joined #openstack-containers
20:38 *** spiette has quit IRC
20:38 *** spiette has joined #openstack-containers
20:39 *** spiette has quit IRC
20:48 *** henriqueof has quit IRC
20:51 <brtknr> that's peculiar... 2 PRs for similar things?
20:51 <brtknr> I'm pretty excited about autoscaling
20:52 <brtknr> also node groups
21:15 *** _fragatina has joined #openstack-containers
21:31 <brtknr> strigazi: okay, I can confirm that the stable/queens branch is erroring when creating nodes... heat_container_agent is failing to notify heat because of a missing region_name, somehow...
21:31 -openstackstatus- NOTICE: Jobs are failing due to ssh host key mismatches caused by duplicate IPs in a test cloud region. We are disabling the region and will let you know when jobs can be rechecked.
21:38 <brtknr> 6.1.0 works fine
21:38 <brtknr> this is because iv.get('deploy_region_name') resolves to null
21:49 *** dave-mccowan has quit IRC
21:57 <brtknr> or maybe for another reason
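
One way to dig further on an affected minion (a sketch; the unit name matches the heat-container-agent service the Fedora Atomic driver runs, and the cache path is the usual os-collect-config location, both assumptions here):

    # watch the agent's attempts to notify heat
    sudo journalctl -u heat-container-agent --no-pager | grep -i region
    # check whether deploy_region_name made it into the deployment metadata
    sudo grep -ri deploy_region_name /var/lib/os-collect-config/ 2>/dev/null
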
22:13 -openstackstatus- NOTICE: The test cloud region using duplicate IPs has been removed from nodepool. Jobs can be rechecked now.
22:20 *** hongbin has joined #openstack-containers
22:22 *** dave-mccowan has joined #openstack-containers
23:35 *** hongbin has quit IRC
