Wednesday, 2020-09-23

*** zigo has quit IRC00:53
*** jrosser_ has joined #openstack-containers02:19
*** zigo has joined #openstack-containers02:26
*** sapd1 has joined #openstack-containers02:27
*** jrosser has quit IRC02:27
*** jrosser_ is now known as jrosser02:27
*** ramishra has joined #openstack-containers02:38
*** ykarel|away has joined #openstack-containers04:26
*** ykarel|away is now known as ykarel04:27
-openstackstatus- NOTICE: A failing log storage endpoint has been removed, you can recheck any recent jobs with POST_FAILURE where logs have failed to upload04:44
*** vishalmanchanda has joined #openstack-containers05:19
*** ianychoi_ is now known as ianychoi06:16
*** born2bake has joined #openstack-containers07:32
*** yolanda has joined #openstack-containers07:42
*** rcernin has quit IRC07:45
*** kevko has joined #openstack-containers07:51
brtknrborn2bake: hello08:00
*** flwang1 has quit IRC08:20
openstackgerritMerged openstack/magnum master: Update default values for docker nofile and vm.max_map_count  https://review.opendev.org/74916908:29
*** flwang1 has joined #openstack-containers08:41
flwang1brtknr: hello08:41
brtknrflwang1: hi08:42
*** k_mouza has joined #openstack-containers08:50
openstackgerritMerged openstack/magnum master: Remove cloud-config from k8s worker node  https://review.opendev.org/74910108:51
flwang1brtknr:09:03
flwang1#startmeeting magnum09:04
openstackMeeting started Wed Sep 23 09:04:04 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.09:04
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.09:04
*** openstack changes topic to " (Meeting topic: magnum)"09:04
openstackThe meeting name has been set to 'magnum'09:04
flwang1brtknr: meeting?09:04
brtknrsure09:04
brtknro/09:04
flwang1o/09:04
flwang1Spyros said he will try, but it seems he is not online09:04
flwang1as for the hyperkube vs binary, i'm thinking if we can keep support both09:05
jakeyipo/09:05
flwang1hi jakeyip09:05
brtknro/ jakeyip09:05
brtknri am happy to support both09:05
*** strigazi has joined #openstack-containers09:06
strigaziflwang1: o/09:06
flwang1strigazi: hey09:06
flwang1#topic hyperkube vs binary09:06
*** openstack changes topic to "hyperkube vs binary (Meeting topic: magnum)"09:06
flwang1strigazi: i'm thinking if we can support both, do you think it's a bad idea?09:07
strigaziflwang1: no, i think we have to09:07
brtknro/ strigazi09:07
strigaziflwang1: well it is bad, but we have to :)09:08
strigazibrtknr: hello09:08
flwang1when i say support both, i mean something like we did before for atomic system container and the hyperkube09:08
flwang1however, that will make the upgrade script a bit messy09:09
brtknrdo  we allow the possibility of going from hyperkube -> binary?09:09
strigazibrtknr: for running cluster we can't do it09:09
flwang1strigazi: why can't we?09:09
brtknrbecause the upgrade scripts are baked into the heat template?09:10
strigazibrtknr: yes09:10
flwang1strigazi: i see the point09:11
strigaziflwang1: brtknr: what we can do is deploy a software_deployment from another template but we will lose track so much09:11
strigazithe good part is that with NGs at least, new NGs in running clusters will get the binary09:11
flwang1if that's the case, we will have to support hyperkube09:12
brtknri noticed it yesterday when i tried to bootstrap some new nodes with the virtio-scsi fix: https://review.opendev.org/#/c/753312/09:12
flwang1and from an operation pov, the cloud provider can decide when to fully switch to binary09:12
brtknrit didnt work with an existing cluster, only with a new cluster09:12
strigaziflwang1: sounds reasonable, at least for new clusters we should avoid hyperkube09:13
flwang1ok, so can we make an agreement that we will have to support both?09:13
flwang1like atomic system container and hyperkube09:14
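A rough sketch of how an operator could run both side by side once this lands: existing clusters stay on their hyperkube-based template, while new clusters use a template built around the kubelet binary and upstream images. The template, image and label values below are illustrative only, not the defaults being discussed here.

    # new template for binary-based clusters (names/values are examples)
    openstack coe cluster template create k8s-binary-example \
        --coe kubernetes \
        --image fedora-coreos-example \
        --external-network public \
        --labels kube_tag=v1.18.9
    openstack coe cluster create demo \
        --cluster-template k8s-binary-example \
        --node-count 2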
brtknrcan we allow the possibility of choosing hyperkube source? i dont mind defaulting to k8s.gcr.io09:14
flwang1that's a bit funny for me09:14
flwang1brtknr: i don't mind supporting that09:15
brtknrit doesnt affect the function, just gives operators more flexibility09:15
strigazibrtknr: flwang1: we can add new label for rancher09:15
flwang1brtknr: yep, i understand09:15
strigazifor people with their own registry it is pointless09:16
flwang1strigazi: yep, that's basically what brtknr proposed09:16
flwang1strigazi: true09:16
strigaziit is just that it doesn't work with the private registry override we have. like who wins?09:16
strigazibrtknr: you will honestly rely on rancher?09:17
strigazibrtknr: maybe do it like helm? repository and tag?09:18
brtknrstrigazi: do you see problems with using rancher?09:19
strigazibrtknr: it is a competitor project and product, not sure about the license and how community driven it is. From our (magnum) side, we can make the option generic and not default to it09:20
strigaziif someone opts in for it, they are on their own09:21
brtknrstrigazi: i understand, i will revert the change to default to it09:21
brtknrwhat do you mean by > maybe do it like helm? repository and tag?09:21
strigazibrtknr: instead of prefix, have the full thing as a label09:21
strigazirepository: index.docker.io/rancher/foo09:22
strigazitag: vX.Y.Z-bar09:22
brtknrand not limit to calling it hyperkube?09:22
strigazithe tag we already have as kube_tag09:22
flwang1i think we can reuse our current kube_tag, just need a new label for the repo, isn't it?09:23
openstackgerritMerged openstack/magnum-ui stable/victoria: Update .gitreview for stable/victoria  https://review.opendev.org/75317909:23
openstackgerritMerged openstack/magnum-ui stable/victoria: Update TOX_CONSTRAINTS_FILE for stable/victoria  https://review.opendev.org/75318009:23
strigazibrtknr: yeah, since it's a new label, why limit it? the community has a high rate of changing things09:23
strigaziflwang1: yes09:23
brtknrokay okay okay sdasdasd09:24
strigazismth like hyperkube_image? hyperkube_repo or _repository09:24
brtknrsdasdasds09:24
brtknrsdsds09:24
brtknrqweqwewqeqe09:24
brtknroops sorry09:24
strigazicat?09:24
jakeyiplol09:24
openstackgerritMerged openstack/magnum-ui master: Update master for stable/victoria  https://review.opendev.org/75318109:24
openstackgerritMerged openstack/magnum-ui master: Add Python3 wallaby unit tests  https://review.opendev.org/75318209:24
brtknrmy keyboard was unresponsive09:24
strigaziI wanted to be cat :)09:25
brtknrlol09:25
brtknrokay i'll change it to hyperkube repo09:25
strigaziSo we agree on it right?09:25
flwang1strigazi: +1 are you going to polish the patch?09:26
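A minimal sketch of what the agreed split could look like from the operator side, reusing kube_tag and adding a repository-style label. The label name hyperkube_prefix is only a placeholder here, since the final name (hyperkube_image / hyperkube_repo / ...) is still being settled in the patch.

    # example only: point the hyperkube image at another registry/namespace
    openstack coe cluster template create k8s-rancher-example \
        --coe kubernetes \
        --image fedora-coreos-example \
        --external-network public \
        --labels hyperkube_prefix=docker.io/rancher/,kube_tag=v1.18.9-rancher1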
strigaziSorry team, I have time for one more quick subject, I need to leave in 10'. A family thing, everything is ok.09:26
strigaziflwang1: yes09:26
flwang1strigazi: cool, i will take a look at the upgrade part09:27
flwang1strigazi: i have an interesting topic09:27
flwang1the health monitor of heat-container-agent09:27
flwang1i have seen several cases of dead heat-container-agent on our production09:27
flwang1which caused upgrades to fail09:27
strigaziflwang1: I can change to brtknr's suggestion to use the server tarball.09:27
flwang1who will build the tarball?09:28
flwang1and where to host it?09:28
strigaziflwang1: will the monitor help?09:28
strigaziflwang1: it's already there, built by the community09:28
flwang1podman does support health-check command, but i have no idea what's the good command we should use09:28
brtknrand use kube-* binaries from the server tarball?09:28
flwang1strigazi: if there is a tarball from community, then i'm happy to use it09:29
brtknrwhat about kube-proxy? you mentioned you had issues getting kube-proxy working?09:29
strigazibrtknr: only the kubelet binary, for the rest the images. They are included in the tarball. I will push, we can take it in gerrit09:29
flwang1the tarball will include kubelet and the container images of the kube components, is that right?09:29
strigazibrtknr: works now09:29
strigaziflwang1: yes09:29
flwang1then i will be very happy09:30
brtknrcool!09:30
flwang1because it can resolve the digest issue09:30
flwang1properly09:30
strigaziflwang1: exactly09:30
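For reference, the community-built server tarball mentioned here ships both the kubelet binary and the control-plane images in one archive; a quick way to see that (the version is illustrative):

    curl -LO https://dl.k8s.io/v1.18.9/kubernetes-server-linux-amd64.tar.gz
    tar -tzf kubernetes-server-linux-amd64.tar.gz | grep -E 'bin/kubelet$|kube-apiserver.tar$'
    # kubernetes/server/bin/kubelet
    # kubernetes/server/bin/kube-apiserver.tar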
flwang1very good09:30
flwang1move on?09:30
strigaziyes09:30
flwang1#topic health check of heat-container-agent09:30
*** openstack changes topic to "health check of heat-container-agent (Meeting topic: magnum)"09:30
strigaziflwang1: you want smth to trigger the unit restart?09:31
flwang1strigazi: exactly09:31
strigazisounds good to me09:31
brtknrseems non-controversial09:31
brtknrnext topic? :P09:31
flwang1because based on my experience, a restart just fixes it09:31
strigazi"did you turn it off and on?"09:31
flwang1turn what off and on?09:32
strigaziyeap, the fix for everything09:32
brtknrvm09:32
flwang1no, i don't want to restart the vm09:32
flwang1just heat-container-agent09:32
flwang1because the node is working ok09:32
brtknrplease propose a patch, i dont know how this health check would work09:33
strigaziflwang1: was the process dead?09:33
strigazithe container was still running?09:33
brtknrwhich heat container agent tag are you using?09:33
flwang1i only have a rough idea, podman does support health-check command, but i don't have any idea how to check the health of heat-container-agent09:34
flwang1the process is not dead09:34
strigaziin theory we shouldn't09:34
flwang1i can't remember the details because it's a customer ticket; the only thing i could check was 'journalctl -u heat-container-agent', and there was nothing09:35
strigazii think improvements in the container or systemd are just welcome, right?09:35
flwang1train-stable-109:35
flwang1strigazi: i think so09:35
brtknr+109:35
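A minimal sketch of the kind of container/systemd improvement being discussed, assuming heat-container-agent runs as a podman container managed by a systemd unit (as in the Fedora CoreOS driver); the check command is a placeholder and assumes pgrep exists in the image:

    # systemd side: restart the unit if the container process dies
    sudo systemctl edit heat-container-agent
    #   [Service]
    #   Restart=on-failure
    #   RestartSec=30

    # podman side: a health check only marks the container unhealthy,
    # something else still has to act on that status
    podman run ... --health-cmd 'pgrep -f os-collect-config' --health-interval 60s ...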
flwang1as i said above, i just want to open my mind by getting some input from you guys09:36
brtknranything in the HCA logs before it dies?09:36
flwang1brtknr: how do i see the logs from before it got stuck?09:37
flwang1i don't think i can09:37
flwang1it's rotated i think09:37
brtknrinside /var/log/heat-config/09:37
flwang1that's normal i would say09:37
strigaziflwang1: just check if os-collect-config is running09:37
flwang1for some reason, it just dies after being idle for a long time09:38
strigazips aux | grep os-collect-config09:38
flwang1i can't do it now09:38
brtknrit is not dying during script execution?09:38
flwang1because the ticket has been resolved, it's a customer env09:38
flwang1brtknr: it's not09:38
*** yolanda has quit IRC09:38
strigaziflwang1: in a running cluster try to kill -9 the os-collect-config process09:38
flwang1it's an old cluster, something like 4-5 months09:38
strigazisee if the container/systemd unit will log smth09:39
*** yolanda has joined #openstack-containers09:39
strigaziwe can start there09:39
flwang1strigazi: good point, i will try09:39
flwang1i think i got something to start09:39
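Spelled out, the experiment strigazi suggests looks roughly like this, run on a master node of a running cluster:

    ps aux | grep [o]s-collect-config            # is the agent's worker process alive?
    sudo kill -9 <PID>                           # simulate the silent death
    sudo podman ps --filter name=heat-container-agent
    sudo journalctl -u heat-container-agent --since "10 min ago"

If the unit or container logs nothing after the kill, that is the gap a health check or Restart= policy would need to cover.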
flwang1brtknr: strigazi: anything else you want to discuss?09:40
brtknryes09:40
brtknrcan you guys take a look at this? https://review.opendev.org/#/c/743945/09:40
strigaziflwang1: i'm good, I need to go guys, see you brtknr I will have a look09:41
brtknri am surprised you havent hit this, deleting a cluster leaves dead trustee users behind09:41
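For anyone wanting to check their own deployment, a hand-rolled way to spot the leftovers, assuming the default trustee domain name 'magnum' ([trust]/trustee_domain_name); how trustee user names map to clusters is deployment specific:

    openstack user list --domain magnum
    openstack coe cluster list -c uuid -c name
    # compare the two lists and remove users belonging to deleted clusters
    openstack user delete --domain magnum <orphaned-user-id>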
brtknrcya strigazi09:41
flwang1strigazi: cya09:41
brtknrstrigazi: btw i will be off from next month until next year09:41
brtknron parental leave09:41
brtknrso will be quiet for a while09:41
flwang1brtknr: sorry, i will test it tomorrow09:42
brtknrthanks flwang109:42
flwang1brtknr: so you won't do any upstream work in the next 4 months?09:42
strigazibrtknr: oh, nice, family time :)09:42
brtknrflwang1: i will try but will be looking after 2 babies full time :P09:42
strigaziflwang1: brtknr: signing off, brtknr I will catch you before your leave :)09:43
flwang1i see. no problem09:43
flwang1it's a good idea to have more time with family during the covid19 time09:44
flwang1anything else, team?09:46
jakeyipI have a question09:46
jakeyipdoes anybody run out of ulimits using coreos?09:47
jakeyiphttps://github.com/coreos/fedora-coreos-tracker/issues/269#issuecomment-61520226709:47
flwang1jakeyip:  https://review.opendev.org/#/c/749169/09:48
flwang1is this you're looking for?09:48
jakeyiplol09:48
flwang1it's just merged today :)09:48
jakeyipyeah I know, what a coincidence09:49
flwang1jakeyip: put some time to do some code review will save you more time :D09:49
jakeyipgood idea ;)09:49
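For clusters built before that change, a sketch of checking and bumping the limits by hand on a node; the values shown are illustrative, not the new Magnum defaults:

    sysctl vm.max_map_count
    grep -i 'open files' /proc/$(pgrep -xo dockerd)/limits
    sudo sysctl -w vm.max_map_count=262144       # example value only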
flwang1if there is no new topic, i'm going to close the meeting for today09:50
flwang1thank you for joining, my friends09:50
jakeyipdo you plan on backporting? I can do it in our repo anyway so no big deal09:50
flwang1jakeyip: we will do backport to Victoria09:51
jakeyipok09:51
openstackgerritFeilong Wang proposed openstack/magnum stable/victoria: Update default values for docker nofile and vm.max_map_count  https://review.opendev.org/75355909:51
openstackgerritFeilong Wang proposed openstack/magnum stable/victoria: Remove cloud-config from k8s worker node  https://review.opendev.org/75356009:51
flwang1#endmeeting09:52
*** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"09:52
openstackMeeting ended Wed Sep 23 09:52:08 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)09:52
openstackMinutes:        http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-23-09.04.html09:52
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-23-09.04.txt09:52
openstackLog:            http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-23-09.04.log.html09:52
flwang1thank you for joining09:52
flwang1take care and enjoy your day/night09:52
jakeyipthanks for the work on magnum, and seeya09:53
*** sapd1 has quit IRC10:57
*** kevko has quit IRC11:37
*** kevko has joined #openstack-containers11:56
*** sapd1 has joined #openstack-containers13:10
*** kevko has quit IRC13:37
*** kevko has joined #openstack-containers14:04
*** sapd1 has quit IRC14:12
*** kevko has quit IRC14:42
*** sapd1 has joined #openstack-containers14:59
*** k_mouza has quit IRC15:04
*** k_mouza has joined #openstack-containers15:10
*** k_mouza has quit IRC15:38
*** sapd1 has quit IRC15:55
*** ykarel is now known as ykarel|away16:09
*** dioguerra has quit IRC16:26
*** ykarel|away has quit IRC16:37
*** born2bake has quit IRC17:40
*** ramishra has quit IRC18:33
*** yolanda has quit IRC20:37
*** vishalmanchanda has quit IRC20:38
*** jrosser has quit IRC20:50
*** gmann has quit IRC20:50
*** mnasiadka has quit IRC20:50
*** Open10K8S has quit IRC20:50
*** NobodyCam has quit IRC20:51
*** PrinzElvis has quit IRC20:51
*** kklimonda has quit IRC20:52
*** NobodyCam has joined #openstack-containers20:52
*** mnasiadka has joined #openstack-containers20:52
*** kklimonda has joined #openstack-containers20:52
*** jrosser has joined #openstack-containers20:52
*** PrinzElvis has joined #openstack-containers20:52
*** Open10K8S has joined #openstack-containers20:53
*** gmann has joined #openstack-containers20:53
*** k_mouza has joined #openstack-containers22:20
*** k_mouza has quit IRC22:24
*** k_mouza has joined #openstack-containers22:39
openstackgerritFeilong Wang proposed openstack/magnum stable/ussuri: Remove cloud-config from k8s worker node  https://review.opendev.org/75386022:43
*** k_mouza has quit IRC22:44
*** rcernin has joined #openstack-containers22:46
*** PrinzElvis has quit IRC22:50
*** andrein has quit IRC22:50
*** mnasiadka has quit IRC22:50
*** mnaser has quit IRC22:50
*** NobodyCam has quit IRC22:51
*** PrinzElvis has joined #openstack-containers22:51
*** kklimonda has quit IRC22:51
*** andrein has joined #openstack-containers22:52
*** kklimonda has joined #openstack-containers22:52
*** mnaser has joined #openstack-containers22:53
*** jrosser has quit IRC22:54
*** mnasiadka has joined #openstack-containers22:54
*** jrosser has joined #openstack-containers22:56
*** andrein has quit IRC22:58
*** NobodyCam has joined #openstack-containers22:59
*** andrein has joined #openstack-containers23:00
*** k_mouza has joined #openstack-containers23:13
*** k_mouza has quit IRC23:18
*** rcernin has quit IRC23:34
*** rcernin has joined #openstack-containers23:35
