Wednesday, 2019-10-02

00:07 *** sapd1_x has joined #openstack-containers
00:30 *** hongbin has joined #openstack-containers
00:50 *** hongbin has quit IRC
00:59 *** ricolin has joined #openstack-containers
01:34 *** ricolin has quit IRC
01:35 *** ricolin has joined #openstack-containers
01:37 *** strigazi has quit IRC
01:39 *** strigazi has joined #openstack-containers
01:47 *** spsurya has joined #openstack-containers
02:09 *** irclogbot_1 has quit IRC
02:12 *** irclogbot_1 has joined #openstack-containers
02:19 *** sapd1_x has quit IRC
03:38 *** Jeffrey4l has quit IRC
03:40 *** Jeffrey4l has joined #openstack-containers
04:20 *** sapd1_x has joined #openstack-containers
04:33 *** henriqueof has quit IRC
04:33 *** henriqueof1 has joined #openstack-containers
04:36 *** ricolin_ has joined #openstack-containers
04:40 *** ricolin has quit IRC
04:40 *** dasp has quit IRC
04:52 *** pcaruana has joined #openstack-containers
05:09 *** ykarel|away has joined #openstack-containers
05:34 *** sapd1_x has quit IRC
05:59 *** rcernin_ has joined #openstack-containers
06:01 *** rcernin has quit IRC
06:39 *** ttsiouts has joined #openstack-containers
06:40 *** dasp has joined #openstack-containers
06:45 *** rcernin_ has quit IRC
06:45 *** rcernin has joined #openstack-containers
06:54 <openstackgerrit> Theodoros Tsioutsias proposed openstack/magnum master: ng-10: Fix cluster template conditions  https://review.opendev.org/685620
06:54 <openstackgerrit> Theodoros Tsioutsias proposed openstack/magnum master: ng-11: API microversion 1.9  https://review.opendev.org/686089
06:56 <ttsiouts> brtknr: goodmorning. ^ ng-11 just bumps the API microversion. I forgot to do it in the previous API patch.
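
[Context: Magnum selects API microversions per request through the OpenStack-API-Version header. A minimal sketch of checking what the server supports and pinning a request to 1.9; the endpoint URL and token are placeholders, not taken from the log:]

    # discover supported versions (the version document needs no auth)
    curl -s https://magnum.example.com:9511/ | python -m json.tool

    # pin an API request to container-infra microversion 1.9
    curl -s -H "X-Auth-Token: $TOKEN" \
         -H "OpenStack-API-Version: container-infra 1.9" \
         https://magnum.example.com:9511/v1/clusters
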
07:04 *** rcernin has quit IRC
07:26 *** ttsiouts has quit IRC
07:26 *** ttsiouts has joined #openstack-containers
07:31 *** ttsiouts has quit IRC
07:56 *** ivve has joined #openstack-containers
07:56 *** ttsiouts has joined #openstack-containers
08:12 *** ttsiouts has quit IRC
08:12 *** ttsiouts has joined #openstack-containers
08:13 *** flwang1 has joined #openstack-containers
08:17 *** ttsiouts has quit IRC
08:18 *** ttsiouts has joined #openstack-containers
08:18 <strigazi> brtknr: flwang flwang1 meeting in 40 mins?
08:18 <flwang1> strigazi: i'm around
08:20 <openstackgerrit> Theodoros Tsioutsias proposed openstack/magnum master: ng-11: API microversion 1.9  https://review.opendev.org/686089
08:28 *** ykarel|away has quit IRC
08:29 <strigazi> brtknr: flwang flwang1 https://etherpad.openstack.org/p/magnum-weekly-meeting
08:30 *** ykarel has joined #openstack-containers
08:30 *** ykarel is now known as ykarel|away
08:30 *** lpetrut has joined #openstack-containers
08:33 <flwang1> strigazi: yep, i'm going to add items on that link
08:34 <strigazi> kube-2smpqggxkrje-master-0   Ready    master   2m53s   v1.16.0   10.0.0.101    172.24.4.140   Fedora 30.20190905.0 (CoreOS preview)   5.2.11-200.fc30.x86_64   docker://Unknown
08:40 <flwang1> strigazi: i'm a bit confused about https://review.opendev.org/#/c/686033/
08:41 <flwang1> why are you trying to change the fedora atomic driver?
08:41 *** ricolin_ is now known as ricolin
08:43 <strigazi> flwang1: less code, essentially only the agent config changes
08:45 <strigazi> flwang1: we can do a new one, easy to cp -r k8s_fedora_atomic_v1 k8s_fedora_coreos_v1
08:47 <flwang1> does the fedora atomic support ignition?
08:47 <strigazi> no
08:47 <openstackgerrit> Feilong Wang proposed openstack/magnum master: [fedora-atomic][k8s] Support operating system upgrade  https://review.opendev.org/669593
08:48 <strigazi> but http://paste.openstack.org/show/780655/
08:50 <flwang1> strigazi: hmm... but we still need a driver.py and template_definition.py for fedora_coreos
08:51 <flwang1> for the heat templates which can be shared by fedora_atomic and fedora_coreos, could we put them under common/templates?
08:51 <strigazi> flwang1: we can work around that too
08:51 <strigazi> the distro thing
08:52 <strigazi> flwang1: I don't mind a new driver.
08:52 <strigazi> flwang1: F29AH is EOL this year. We only want this driver to keep current clusters running
08:52 <brtknr> morning, i'm here too
08:53 <flwang1> strigazi: personally, i prefer a clean new driver, given we will "deprecate" the fedora atomic one 'soon'
08:53 <strigazi> flwang1: ok
08:53 <flwang1> as you said above
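
[Context: a rough sketch of what starting the new driver by copying the atomic one could look like; the paths below mirror the existing driver layout and are illustrative, not the contents of the eventual patch:]

    # copy the existing driver as a starting point
    cp -r magnum/drivers/k8s_fedora_atomic_v1 magnum/drivers/k8s_fedora_coreos_v1

    # at minimum the copy needs its own entry point and template definition, e.g.
    #   magnum/drivers/k8s_fedora_coreos_v1/driver.py         # declares the (os, coe) it provides
    #   magnum/drivers/k8s_fedora_coreos_v1/template_def.py   # points at its heat templates
    # heat templates shared with the atomic driver can stay under
    #   magnum/drivers/common/templates/
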
09:00 <flwang1> #startmeeting magnum
09:00 <openstack> Meeting started Wed Oct  2 09:00:14 2019 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00 *** openstack changes topic to " (Meeting topic: magnum)"
09:00 <openstack> The meeting name has been set to 'magnum'
09:00 <flwang1> #topic roll call
09:00 *** openstack changes topic to "roll call (Meeting topic: magnum)"
09:01 <brtknr> o/
09:01 <ttsiouts> o/
09:01 <strigazi> o/
09:01 <jakeyip> o/
09:01 <flwang1> wow
09:01 <flwang1> hello folks, thanks for joining
09:02 <flwang1> #topic agenda
09:02 *** openstack changes topic to "agenda (Meeting topic: magnum)"
09:02 <flwang1> https://etherpad.openstack.org/p/magnum-weekly-meeting
09:02 <flwang1> let's go through the topic list one by one
09:02 <flwang1> anyone you guys want to talk first?
09:02 <strigazi> I start?
09:03 <flwang1> strigazi: go for it
09:03 <strigazi> I completed the work for moving to 1.16 here: https://review.opendev.org/#/q/status:open+project:openstack/magnum+branch:master+topic:v1.16
09:03 <strigazi> minus the FCOS patch
09:03 <strigazi> we don't care about that for 1.16
09:04 <strigazi> the main reason for moving to podman is that I couldn't make the kubelet start with atomic install https://review.opendev.org/#/c/685749/
09:05 <strigazi> I can shrink the patch above ^^ and start only kubelet with podman
09:05 <flwang1> strigazi: yep, that's my question. should we merge them into one patch?
09:05 <flwang1> to get the review easier if we have to have all of them together for v1.16
09:06 <strigazi> the benefit is that we don't have to build atomic containers, just maintain the systemd units
09:06 <strigazi> for me the review is easier in many but I won't review so I can squash
09:07 <brtknr> I'd prefer they are separate as its easier to follow what each patch is doing
09:07 <flwang1> the podman one is so big
09:07 <flwang1> ok, then let's keep it as it is
09:08 <flwang1> i will start to review the podman patch first
09:08 <strigazi> the podman one is 7 systemd units basically
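
[Context: the patch replaces the `atomic install --system` system containers with plain systemd units that run each component through podman. A minimal sketch of the kind of command such a kubelet unit wraps; the image tag, mounts and flags are illustrative, not the exact unit from the review:]

    # roughly what a podman-based kubelet unit executes
    podman run --name kubelet \
        --privileged --network host --pid host \
        --volume /etc/kubernetes:/etc/kubernetes:ro \
        --volume /var/lib/kubelet:/var/lib/kubelet:rshared \
        k8s.gcr.io/hyperkube:v1.16.0 \
        kubelet --config=/etc/kubernetes/kubelet-config.yaml
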
09:08 <brtknr> the podman patch is also necessity for moving to coreos so good that we have it :)
09:08 <strigazi> I can help you review it
09:09 <brtknr> thanks for the work, that was a very quick turnaround! i was expecting it would take much longer
09:09 <flwang1> i'm happy to get the small patches merged quickly
09:09 <flwang1> actually, i have reviewed them without leaving comments
09:10 <strigazi> I can break it the big one per component if it helps
09:10 <flwang1> strigazi: no, that's OK
09:11 <flwang1> i just want to be focus, all good
09:11 <flwang1> thanks for the great work
09:11 <strigazi> flwang1: brtknr jakeyip do yo need any clarfication on any of the patches?
09:11 <flwang1> strigazi: i do have some small questions
09:11 <strigazi> ttsiouts: can passes by my office when he has questions and vice versa :)
09:12 <flwang1> for https://review.opendev.org/#/c/685746/3   i saw you change from version 0.3 to 0.2 why?
09:12 <strigazi> s/can//
09:12 <brtknr> strigazi: I think its clear to me, I agree with flwang1 about merging small patches quickly
09:12 <brtknr> perhaps the rp filter one can go before the podman one
09:12 <jakeyip> hmm I have some question, e.g. for etcd service why is kill and rm in ExecStartPre and not ExecStop ?
09:12 <strigazi> flwang1 because upstream flannel said so https://github.com/coreos/flannel/pull/1174/files
09:13 <flwang1> strigazi: thanks
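
[Context: the cniVersion being discussed is set in flannel's CNI config file; a rough sketch of what such a config looks like with the version pinned to 0.2 as mentioned above -- the file path and surrounding fields follow the usual flannel layout and are shown only for illustration:]

    # roughly how the flannel CNI config pins cniVersion (path/fields illustrative)
    cat > /etc/cni/net.d/10-flannel.conf <<'EOF'
    {
      "name": "cbr0",
      "cniVersion": "0.2.0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
    EOF
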
09:13 <strigazi> brtknr +1 for rp filter
09:14 <strigazi> jakeyip: I can make it the same as the other ones
09:14 <strigazi> jakeyip: maybe rm is useful, eg when you want to change image
09:14 <strigazi> I'll consolidate with the same practice everywhere
09:15 <brtknr> strigazi: so labelling via kubelet arg is no longer valid? --node-labels=node-role.kubernetes.io/master=\"\"
09:15 <strigazi> nope
09:15 <strigazi> the kubelet does not accept node-role
09:16 <brtknr> since 1.16?
09:16 <flwang1> strigazi: as for https://review.opendev.org/#/c/685748/3   the reference issue is still open, https://github.com/kubernetes/kubernetes/issues/75457 then why do we have to drop the current way?
09:16 <flwang1> brtknr: we're asking same question, good
09:17 <strigazi> flwang1: yes since 1.16 the change is in the kubelet
09:17 <flwang1> strigazi: good, thanks for clarification
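
[Context: since the kubelet itself refuses to set node-role.kubernetes.io labels via --node-labels from 1.16 on, the master label has to be applied from outside with kubectl; the node name below is a placeholder:]

    # label a master node after it registers with the API server
    kubectl label node kube-abc123-master-0 node-role.kubernetes.io/master=""
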
09:17 <jakeyip> strigazi: yes understand. Why did you choose to put it in ExecStartPre instead of ExecStop? seems more reasonable to cleanup in stop stage. is this a standard from somewhere?
09:18 <strigazi> jakeyip I copied it from some redhat docs
09:18 <flwang1> for the podman patch, i prefer to leave comments on that one since it's bigger
09:20 <brtknr> jakeyip: i suppose it ensures that the cleanup always happens before the start
09:20 <strigazi> jakeyip: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/
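
[Context: the ExecStartPre pattern under discussion looks roughly like the sketch below; unit and image names are illustrative and the [Unit]/[Install] sections are omitted. The leading `-` lets the kill/rm steps fail harmlessly when there is no leftover container, so every start begins from a clean state and a restart after an image change picks up the new image:]

    cat > /etc/systemd/system/etcd.service <<'EOF'
    [Service]
    # best-effort cleanup of any previous container before starting
    ExecStartPre=-/bin/podman kill etcd
    ExecStartPre=-/bin/podman rm etcd
    ExecStart=/bin/podman run --name etcd --net host quay.io/coreos/etcd:latest
    ExecStop=/bin/podman stop etcd
    EOF
    systemctl daemon-reload && systemctl restart etcd
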
09:21 <strigazi> move to fcos?
09:21 <brtknr> yes
09:21 <jakeyip> thanks, yes
09:22 <strigazi> I made the agent work after flwang1's excellent work in heat
09:22 <brtknr> \o/ nice work both!
09:22 <flwang1> strigazi: thanks, i will try to push heat team to allow us backport it to train
09:22 <strigazi> the only diff with the atomic driver is, how the agent starts and docker-storage-setup that is dropped.
09:23 <flwang1> strigazi: are we trying to get fcos driver in train?
09:23 <strigazi> flwang1: it will be crazy if they say no
09:23 <flwang1> :)
09:23 <strigazi> flwang1: I think we must do it
09:23 <flwang1> yep, i think so
09:23 <flwang1> just wanna make sure we all on the same page
09:24 <strigazi> I can finish it as a new driver today
09:25 <flwang1> strigazi: it would be great, if you can start it as a new driver, i will test it tomorrow
09:25 <brtknr> +1
09:25 <flwang1> strigazi: you're welcome to propose patchset on my one or start with a new patch
09:26 <strigazi> if the heat team doesn't merge ignition support, what we do?
09:26 <brtknr> cry?
09:26 <strigazi> patch ignition?
09:26 <flwang1> strigazi: i will blame them in heat channel everyday for the whole U release
09:27 <strigazi> brtknr: well at CERN we will cherry-pick to rocky even, but for other people may not be an option
09:27 <flwang1> strigazi: we will cherrypick as well
09:27 <flwang1> it's a such simple patch
09:27 <flwang1> i can't see any reason they can't accept it for train
09:27 <brtknr> cherry-pick in heat or magnum?
09:27 <brtknr> or both?
09:28 <strigazi> heat
09:28 <flwang1> brtknr: heat
09:28 <brtknr> we can try the upstream route in parallel too
09:29 <strigazi> what does this mean ^^
09:29 <brtknr> but may be time for upgrade before a release is cut :)
09:29 <brtknr> cherry-pick upstream back to rocky
09:30 <flwang1> brtknr: i'm a bit confused
09:30 <flwang1> we're talking about the ignition patch in heat
09:30 <brtknr> yes, i'm proposing cherry-picking back to rocky upstream
09:30 <brtknr> is that clear?
09:30 <flwang1> hmm... it doesn't really matter for us, TBH
09:31 <brtknr> although its not too relevent for us as we are planning to upgrade to stein soon
09:31 <strigazi> Q,R,S could have it
09:31 <flwang1> we care about cherrypick to train, because, otherwise, our fcos driver won't work in Train
09:31 <strigazi> train sound enough for me
09:31 <flwang1> i will follow up that, don't worry, guys
09:32 <flwang1> strigazi: so action on you: a new patch(set) for fcos driver
09:32 <strigazi> +1
09:32 <flwang1> thanks
09:34 <flwang1> next topic?
09:34 <strigazi> σθρε
09:34 <strigazi> sure
09:36 <flwang1> brtknr: do you want to talk about yours?
09:36 <brtknr> sure
09:36 <brtknr> about the compatibility matrix
09:36 <brtknr> I'd like feedback on the accuracy of information
09:37 <brtknr> i am confident about stein and train but not so sure about compatibility of kube tags before then
09:37 <strigazi> let's see what we will manage to merge :)
09:37 <brtknr> i tried to sift through the commit logs and derive answer but only got so far
09:37 <strigazi> for stein ++
09:37 <brtknr> when is the dealine?
09:37 <brtknr> deadline
09:38 <brtknr> before the autorelease is cut?
09:38 <strigazi> two weeks? but we can do kind if late releases
09:38 <strigazi> two weeks? but we can do kind of late releases
09:38 <flwang1> next week
09:38 <flwang1> https://releases.openstack.org/train/schedule.html
09:38 <brtknr> will it be rc2 or final?
09:39 <flwang1> both work
09:40 <jakeyip> I don't think we need pike and queens in compat matrix, pike is unmaintained and queens is till 2019-10-25 which is soon https://releases.openstack.org/
09:40 <flwang1> jakeyip: +1
09:40 <flwang1> just stein and rocky are good enough
09:40 <brtknr> jakeyip: fine, i'll remove those
09:42 <jakeyip> I can help test R, see how it goes with 1.15.x
09:43 <openstackgerrit> Bharat Kunwar proposed openstack/magnum master: Add compatibility matrix for kube_tag  https://review.opendev.org/685675
09:43 <brtknr> please check this is accurate^
09:45 <flwang1> this does need testing to confirm
09:45 <flwang1> let's move to next topic?
09:46 <strigazi> +1
09:47 <flwang1> i would like to discus the rolling upgrade
09:47 <flwang1> so far the k8s version rolling upgrade runs very good on our cloud, we haven't seen issue with that
09:48 <flwang1> but magnum does need to support the os upgrade as well
09:48 <flwang1> with current limitation, i'm proposing this solution https://review.opendev.org/#/c/686033/
09:49 <flwang1> using the ostree command to do upgrade
09:49 <flwang1> before we fix the master resizing issue, this seems the only working solution
09:49 <flwang1> strigazi: brtknr: jakeyip: thoughts?
09:49 <strigazi> +1
09:50 <brtknr> flwang1: is that the right link?
09:50 <flwang1> strigazi: i have fixed the issue your found
09:51 <flwang1> sorry https://review.opendev.org/#/c/669593/
09:51 <strigazi> will test today
09:51 <flwang1> strigazi: and it could work for the fcos driver with minion changes in the future
09:52 <flwang1> since fcos does use ostree as well
09:52 <strigazi> maybe yes
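
[Context: the proposed in-place OS upgrade rides on ostree; roughly the kind of commands involved on a Fedora Atomic host -- the version string is only an example:]

    atomic host status                  # show the booted deployment (or: rpm-ostree status)
    atomic host upgrade                 # move to the newest commit on the current branch
    atomic host deploy 29.20190801.0    # or pin a specific version
    systemctl reboot                    # reboot into the new deployment
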
09:52 <brtknr> we will need to find alternative way to do upgrades if we are dropping ostree
09:53 <strigazi> with nodegroups now we can add a new ng and drop the old one
09:53 <flwang1> strigazi: but we still need to fix the master
09:53 <flwang1> node is easy
09:53 <strigazi> this will replace the VMs and you get 100% gurantee that the nodes will work
09:53 <jakeyip> I dunno if I like the idea of in-place upgrade.
09:53 <flwang1> and the most important thing is, how to get 0 downtime
09:54 <strigazi> changing a kernel is always a risk
09:54 <flwang1> jakeyip: we don't have good choice at this moment
09:54 <strigazi> 0 downtime of what?
09:54 <flwang1> your k8s service
09:54 <brtknr> i think he means the cluster?
09:54 <flwang1> sorry, your k8s cluster
09:55 <strigazi> a node down doesn't mean clutser down
09:55 <brtknr> is that assuming multimaster?
09:55 <flwang1> strigazi: ok, i mean the service running on the cluster
09:55 <flwang1> my bad brain
09:55 <brtknr> is there a constraint of "only allow upgrades when its a multimaster"
09:55 <jakeyip> I rather we blow away old VMs and build new ones This is why we are doing k8s instead of docker ya?
09:55 <strigazi> +1 ^^
09:55 <jakeyip> is in-place upgrade only for master
09:56 <flwang1> guys, we need to be on the same page about the 0 downtime
09:56 <strigazi> that is what I meant with using nodegroups
09:56 <flwang1> let me find some docs for you guys
09:56 <flwang1> trust me, i spent quite a lot time on this
09:57 <flwang1> https://blog.gruntwork.io/zero-downtime-server-updates-for-your-kubernetes-cluster-902009df5b33
09:57 <flwang1> there are 4 blogs
09:57 <strigazi> there is nothing in this world with 0 downtime, there are services with good SLA
09:57 <flwang1> strigazi: fine, you can definitely argue that
09:58 <brtknr> I am a little confused as fedora atomic is EOL so there won't be anything to upgrade to
09:58 <jakeyip> ^ +1 :P
09:58 <flwang1> brtknr: that's not true
09:59 <flwang1> for example, our customer are using fa29 -20190801
09:59 <flwang1> and then there is a big security issue happened today 20191002
09:59 <flwang1> how can you upgrade that?
10:00 <flwang1> the upgrade is not just for fa28-fa29, it does support fa29-0001 to fa29-0002
10:00 <jakeyip> flwang1: when you mention node is easy, it's master we need to care about - are we just solving this problem for clusters without multi master?
10:01 <strigazi> works for single master too with some api downtime, not service
10:01 <flwang1> jakeyip: no, when i say master is not easy, i mean currently magnum doesn't support resize master, which means we can't just add new master and remove an old one
10:01 <flwang1> may be 0 downtime is not a good word
10:02 <flwang1> but the target is get a minimum downtime for services running on the cluster
10:02 <strigazi> to get somewhere
10:02 <flwang1> when we say minimum, that means, at least, we should be able to do drain before doing upgrade
10:02 <strigazi> for train we will have inplace uprgade for OS and k9s
10:02 <strigazi> for train we will have inplace uprgade for OS and k8s
10:03 <flwang1> strigazi: yes
10:03 <flwang1> i don't like the in place either
10:03 <strigazi> for U will have node-replacement
10:03 <flwang1> for node-replacement, we still need a way to drain the node before delete it
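
[Context: the usual drain-before-remove sequence is sketched below; the node name is a placeholder and the drain flags depend on the workload:]

    # stop scheduling onto the node, then evict its pods respecting PodDisruptionBudgets
    kubectl cordon kube-abc123-node-3
    kubectl drain kube-abc123-node-3 --ignore-daemonsets --delete-local-data --timeout=300s

    # once drained, remove the VM, e.g. by shrinking the cluster
    openstack coe cluster resize mycluster 2 --nodes-to-remove <node-uuid>
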
10:03 <openstackgerrit> Merged openstack/magnum master: Set cniVersion for flannel  https://review.opendev.org/685746
10:03 <strigazi> for T users will be able to do node-replacement by adding a new NG and migrate the workload
10:04 <strigazi> isn't this good enough for train ^^
10:04 <strigazi> users will have maximum control
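
[Context: a rough sketch of that nodegroup-based replacement with the Train client; cluster, nodegroup and image names and the exact flags are illustrative, not taken from the patches:]

    # add a replacement nodegroup built from a newer image/flavor
    openstack coe nodegroup create mycluster new-workers --node-count 3 --image fedora-coreos-30

    # migrate the workload (cordon/drain the old nodes), then drop the old nodegroup
    openstack coe nodegroup delete mycluster old-workers
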
10:04 <flwang1> that's ok for train
10:04 <flwang1> i mean it's good for train
10:05 <strigazi> +1
10:05 <strigazi> for U we have two options
10:05 <flwang1> for U, i will still insist on having a drain before removing node
10:05 *** sapd1_x has joined #openstack-containers
10:06 <strigazi> we can do it with the agent with the DELETE SD
10:06 <brtknr> +1 to drain before delete
10:06 <strigazi> or we can have controller that does this in the cluster
10:07 <strigazi> or the magnum-controller
10:07 <strigazi> so three options
10:07 <openstackgerrit> Merged openstack/magnum master: Add hostname-override to kube-proxy  https://review.opendev.org/685747
10:07 <flwang1> strigazi: we already discussed the 'controller' in cluster in catalyst
10:08 <flwang1> strigazi: we just need to extend the magnum-auto-healer
10:08 <flwang1> to magnum-controller
10:08 <flwang1> that's my goal for U :)
10:09 <strigazi> I think the magnum-auto-healer needs to come closer to the us. the cloud-provider repo is a horrible place to host it
10:09 <flwang1> strigazi: i agree
10:09 <strigazi> -2 to have it in that repo
10:09 <flwang1> we need a better place
10:09 <flwang1> :D
10:09 <flwang1> we don't have a good golang based code repo in openstack
10:10 <flwang1> we can try to push the CPO team to split the repo
10:10 <flwang1> anything else team?
10:10 <flwang1> i need to go
10:10 <brtknr> thats all
10:11 <flwang1> thank you very much
10:11 <strigazi> see you
10:11 <jakeyip> good discussion
10:11 <flwang1> strigazi: thank you
10:11 <flwang1> strigazi: i look forward your fcos driver
10:11 <brtknr> same
10:11 <strigazi> sure
10:11 <strigazi> this after noon
10:12 <flwang1> brtknr: can i get your bless on https://review.opendev.org/#/c/675511/ ?
10:12 <flwang1> brtknr: i'd like to get it in train
10:13 <brtknr> flwang1: np, I'll quickly test it first
10:13 <flwang1> brtknr: thank you very much
10:13 <flwang1> see you, team
10:14 <strigazi> bye
10:14 <openstackgerrit> Merged openstack/magnum master: k8s_fedora: Label master nodes with kubectl  https://review.opendev.org/685748
10:14 *** sapd1_x has quit IRC
10:15 <strigazi> flwang1: brtknr https://review.opendev.org/#/c/686028/1
10:16 <brtknr> strigazi: what about it?
10:16 <strigazi> merge? xD
10:16 <brtknr> i left a comment
10:16 <strigazi> also end meeting?
10:16 <brtknr> it needs to come before podman no?
10:16 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Move rp_filter=1 for calico up  https://review.opendev.org/686028
10:17 <strigazi> brtknr: it is before now
10:17 <brtknr> i've +2ed
10:18 <strigazi> flwang1: ?
10:18 <brtknr> to you flwang1 if you're still tere
10:18 <brtknr> over to you flwang1 if you're still there
10:18 <strigazi> he also needs to end the meeting
10:18 <strigazi> I'll try
10:18 <strigazi> #endmeetinh
10:18 <strigazi> #endmeeting
10:18 *** openstack changes topic to "OpenStack Containers Team"
10:18 <openstack> Meeting ended Wed Oct  2 10:18:38 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
10:18 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/magnum/2019/magnum.2019-10-02-09.00.html
10:18 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/magnum/2019/magnum.2019-10-02-09.00.txt
10:18 <openstack> Log:            http://eavesdrop.openstack.org/meetings/magnum/2019/magnum.2019-10-02-09.00.log.html
10:18 * strigazi goes for lunch
10:25 *** rcernin has joined #openstack-containers
10:31 *** ykarel|away has quit IRC
10:32 *** ttsiouts has quit IRC
10:32 *** ivve has quit IRC
10:33 *** ttsiouts has joined #openstack-containers
10:37 *** ttsiouts has quit IRC
11:03 *** sapd1_x has joined #openstack-containers
11:07 *** damiandabrowski[ has quit IRC
11:09 *** openstackstatus has quit IRC
11:14 *** ttsiouts has joined #openstack-containers
11:44 *** sapd1_x has quit IRC
11:56 *** ivve has joined #openstack-containers
12:03 *** goldyfruit_ has quit IRC
12:11 *** goldyfruit_ has joined #openstack-containers
12:17 *** goldyfruit_ has quit IRC
12:21 *** ianychoi has quit IRC
12:25 *** ianychoi has joined #openstack-containers
12:26 <openstackgerrit> Merged openstack/magnum master: k8s_fedora: Move rp_filter=1 for calico up  https://review.opendev.org/686028
12:29 <brtknr> strigazi: are we aiming to support fcos in train?
12:39 <openstackgerrit> Bharat Kunwar proposed openstack/magnum master: k8s_atomic: Run all syscontainer with podman  https://review.opendev.org/685749
12:39 <strigazi> brtknr: yes
12:39 <openstackgerrit> Bharat Kunwar proposed openstack/magnum master: WIP FCOS as drop-in of replacement atomic  https://review.opendev.org/686033
12:39 <strigazi> brtknr: I'll finish today
12:40 <brtknr> strigazi: ok i'll hold off testing for now then
12:40 <brtknr> im happy with ng-10 and ng-11
12:41 <strigazi> brtknr: you can test the podman patches for atomic
12:42 <brtknr> strigazi: sure thing, without the WIP patch presumably
12:43 <strigazi> without the WIP FCOS patch
13:11 <brtknr> strigazi: its flawless
13:12 *** ivve has quit IRC
13:43 <brtknr> strigazi: I think as a channel operator, you should be able to change the topic by doing "/topic OpenStack Containers Team | Weekly Meetings: Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"
13:45 *** ykarel has joined #openstack-containers
13:51 *** henriqueof1 has quit IRC
14:09 *** goldyfruit has joined #openstack-containers
14:10 <strigazi> I am not
14:10 <strigazi> brtknr: ttx is
14:10 <openstackgerrit> Merged openstack/magnum master: Improve log of k8s health status check  https://review.opendev.org/675511
14:15 *** goldyfruit_ has joined #openstack-containers
14:15 <brtknr> strigazi: okay I'll ask him
14:17 *** goldyfruit has quit IRC
14:21 <openstackgerrit> Spyros Trigazis proposed openstack/python-magnumclient master: Allow cluster config for any cluster state  https://review.opendev.org/686171
14:22 <strigazi> brtknr: this drives us mad ^^
14:22 <strigazi> can you have a look? it is a silly client side restriction
14:22 <brtknr> strigazi: us too!
14:27 *** openstackstatus has joined #openstack-containers
14:27 *** ChanServ sets mode: +v openstackstatus
14:30 *** rcernin has quit IRC
14:33 *** goldyfruit___ has joined #openstack-containers
14:35 *** goldyfruit_ has quit IRC
14:39 <brtknr> strigazi: hmm
14:40 <brtknr> it needs server information available for it to be any useful though
14:40 <brtknr> it seems to create kubeconfig without this at the beginning
14:40 <strigazi> ther api_address?
14:40 <strigazi> the api_address?
14:41 <brtknr> yeag
14:41 <strigazi> well, the important part is the failed
14:41 <strigazi> but even like this
14:41 <strigazi> it is useful, the user can fill it after
14:42 <brtknr> it yields `server: None`
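
[Context: the restriction being relaxed is in `openstack coe cluster config`; a rough sketch of the workflow -- the cluster name is a placeholder and the behaviour on an unfinished cluster is as described in the log, with the server endpoint left for the user to fill in:]

    # write the certs and a kubeconfig for the cluster into the current directory
    openstack coe cluster config mycluster --dir . --force
    export KUBECONFIG=$PWD/config

    # on a cluster that has not finished creating (or has failed), the generated
    # kubeconfig may contain "server: None" until api_address is available
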
14:43 <brtknr> strigazi: we have kube_masters address available in heat stack output quite early... why do we wait till stack creation is complete?
14:44 <brtknr> I think there should at least be a warning
14:44 <strigazi> do we have api_address too?
14:45 <strigazi> I'm checking
14:45 <strigazi> but, yes we can print a warning for this
14:46 <brtknr> yeah in the stack output, i can see master address.. but in magnum, i believe stack output is only retrieved after cluster creation is complete
14:47 <strigazi> we can do that improvement too
14:48 <strigazi> I see kube_masters but not api_address in the stack
14:48 <strigazi> anyway, I'll do the warning
14:48 <strigazi> then we see for the server side
14:48 <strigazi> thanks
14:51 <brtknr> strigazi: np
14:55 *** henriqueof has joined #openstack-containers
15:21 <strigazi> brtknr: is it me or with podman is faster?
15:26 <openstackgerrit> Spyros Trigazis proposed openstack/python-magnumclient master: Allow cluster config for any cluster state  https://review.opendev.org/686171
15:30 *** goldyfruit_ has joined #openstack-containers
15:33 *** goldyfruit___ has quit IRC
15:37 <brtknr> strigazi: can we measure that by looking at heat container agent logs?
15:39 *** ttsiouts has quit IRC
15:40 *** ttsiouts has joined #openstack-containers
15:43 <brtknr> strigazi: approved: https://review.opendev.org/#/c/685875/
15:44 *** ttsiouts has quit IRC
15:51 *** ianychoi has quit IRC
15:52 *** lpetrut has quit IRC
15:54 *** ianychoi has joined #openstack-containers
16:01 *** ykarel is now known as ykarel|away
16:10 *** pcaruana has quit IRC
16:12 *** pcaruana has joined #openstack-containers
17:23 *** ykarel|away has quit IRC
17:32 *** spiette has quit IRC
17:34 *** spiette has joined #openstack-containers
17:45 *** pcaruana has quit IRC
17:50 *** jmlowe has quit IRC
17:57 *** ykarel|away has joined #openstack-containers
18:03 <brtknr> strigazi: im surprised its faster because 47+23+187+111+76+30=474 and hyperkube image is 613...
18:03 <brtknr> maybe gcr.io is faster?
18:07 <brtknr> strigazi: btw if you do "podman images", there is already an etcd container in there:
18:08 <brtknr> k8s.gcr.io/etcd                                  3.2.26      1ab4081ed64a   8 months ago    220 MB
18:08 <brtknr> quay.io/coreos/etcd                              latest      61ad63875109   15 months ago   39.7 MB
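
[Context: a quick way to check what has already been pulled and how big it is; a minimal sketch, nothing cluster-specific assumed:]

    # list pulled images with their sizes
    podman images
    # or only the repository/tag/size columns
    podman images --format "{{.Repository}}:{{.Tag}} {{.Size}}"
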
18:10 *** jmlowe has joined #openstack-containers
18:11 <brtknr> flwang: i asked the people in #openstack-infra to change the topic to include meeting time and agenda and they made you a channel operator instead :)
18:16 *** ykarel|away has quit IRC
18:31 *** flwang1 has quit IRC
18:33 *** jmlowe has quit IRC
19:02 *** dave-mccowan has joined #openstack-containers
19:09 *** spsurya has quit IRC
19:14 *** ykarel has joined #openstack-containers
19:30 *** ykarel is now known as ykarel|away
19:40 *** jmlowe has joined #openstack-containers
19:46 *** ykarel|away has quit IRC
20:04 *** dave-mccowan has quit IRC
20:47 <flwang> brtknr: well done
20:47 <flwang> re the heat ignition patch for train
20:51 <brtknr> flwang: you too mostly!
20:52 <flwang> brtknr: i'm trying to change the channel topic
20:53 <brtknr> flwang: any luck?
20:54 <flwang> (09:54:18) You're not a channel operator
20:54 <flwang> i can't issue the /topic command
20:56 <brtknr> hmm
20:56 <brtknr> your name definitely comes up when i type /msg chanserv access #openstack-containers list
20:58 <brtknr> you might need to ask #openstack-infra people why you cant do it
21:06 *** lpetrut has joined #openstack-containers
21:15 *** lpetrut has quit IRC
21:21 <brtknr> flwang: sounds like you need to do /op before you set the topic
21:21 <brtknr> then /deop after you're done
21:24 <brtknr> flwang: if that doesnt work, you can try /msg ChanServ OP #openstack-containers flwang
22:33 *** goldyfruit_ has quit IRC
22:48 *** rcernin has joined #openstack-containers
22:55 *** ChanServ sets mode: +o flwang
22:55 *** flwang changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"
22:58 <flwang> brtknr: it works. thank you very much
23:16 *** flwang has quit IRC
23:44 *** flwang has joined #openstack-containers
