Tuesday, 2018-12-04

00:37 <kevko_> flwang: thank you for +2 :)
00:40 <flwang> kevko_: sure
00:41 <kevko_> flwang: i remember that i already fixed a block of code with the x509 keypair ..but i am not sure if it's included in this review ...will check tomorrow and if not ..i will post another one with py3 compatibility ...we in debian are already using py3 since ocata i think (maybe queens)
00:41 <kevko_> flwang: this was tested with barbican and worked ..
00:42 <flwang> cool
00:42 <kevko_> flwang: hh, don't be happy ..i was still trying with the rawhide image of heat-container
00:42 <kevko_> flwang: magnum 7.0.2 , heat-container-agent:rawhide, enable_cert_api true in the template have worked ...
00:43 <flwang> again
00:43 <flwang> you can't use rawhide if you have multi regions
00:44 <kevko_> flwang: i don't have ..
00:44 <kevko_> flwang: you advised me to try x509keypair and rocky-stable ..now i'm running it
00:45 <kevko_> flwang: i know it :) ..i am trying to have all functional ... all choices :)
00:45 <kevko_> flwang: we want to put it into production ..so want to have tested all combinations ..
00:45 <openstackgerrit> Mohammed Naser proposed openstack/magnum master: wip: k8s: add boot volume support  https://review.openstack.org/621734
00:46 <flwang> so does the rocky-stable work for you now?
00:48 <flwang> mnaser: seems the minion yaml is missing something ?
00:49 <mnaser> flwang: that was just a wip i threw up without much testing just to see how the gates look, haven't tested locally yet, let me see
00:49 <mnaser> ah you're right
00:49 <mnaser> didn't hit ctrl+s
00:50 <mnaser> conditions will help clean up our code if we start using them, get rid of all that magnum::optional stuff that's loaded by env
00:52 <flwang> mnaser: that's a nice feature, i discussed it with strigazi at Berlin
00:53 <mnaser> flwang: it's a bug in our case :) we do bfv only and we are just now discovering how it breaks :<
00:53 <kevko_> flwang: now rocky stable with x509 keypair master passed all 19 scripts ..waiting for minion :)
00:53 <mnaser> so let's call it a bug so we can backport... :p
00:53 <mnaser> :P
00:53 <kevko_> flwang: but i have to test with barbican also
00:54 <flwang> kevko_: pls open a bug if you find it works and the barbican case doesn't
00:54 <flwang> mnaser: i don't think i understand the bug
00:54 <mnaser> flwang: if your cloud uses boot from volume only, then magnum boots a vm in a flavor with root_gb=0
00:55 <mnaser> in that case, nova actually boots a vm with a local disk the size of the image.. so 5GB with atomic
00:55 <mnaser> that's a security bug and in rocky it was patched to give a warning, but in stein it is no longer possible to boot a vm with root_gb=0 and not provide a volume
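[Editor's note: for context on the boot-volume work being discussed, requesting boot-from-volume in a Heat template looks roughly like the sketch below. The resource name and sizes are illustrative, not the content of mnaser's patch.]

```yaml
# Illustrative only: boot the node from a Cinder volume instead of a
# local disk, so a root_gb=0 flavor never gets an image-sized local disk.
kube_node:
  type: OS::Nova::Server
  properties:
    flavor: {get_param: flavor}
    block_device_mapping_v2:
      - image: {get_param: server_image}   # Glance image copied into the volume
        volume_size: 20                    # GB; illustrative value
        boot_index: 0
        delete_on_termination: true
```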
00:58 <flwang> mnaser: got it, thanks. it would be nice if you can file a bug/story to track it
00:59 <mnaser> flwang: ack
01:00 <openstackgerrit> Feilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s  https://review.openstack.org/561783
01:03 <kevko_> flwang: hmm, do you really have a working config with x509keypair ? my master node is completed , notified heat , but i see that the resource enable_cert_manager_api is still CREATE_IN_PROGRESS... but i have cert_manager_api = false in the template ...so why is this resource being created in the deployment ?
01:04 <flwang> kevko_: we're running magnum on our production
01:05 <flwang> welcome to register on Catalyst Cloud if you want to try
01:05 <flwang> kevko_: what's your heat version?
01:06 <openstackgerrit> Mohammed Naser proposed openstack/magnum master: wip: k8s: add boot volume support  https://review.openstack.org/621734
01:11 <kevko_> flwang: http://paste.openstack.org/show/736604/ check
01:12 <kevko_> flwang: i have also a password there ..but doesn't matter ... will destroy the test env ..and create a new one ...maybe i'm missing something in the configuration ?
01:13 <kevko_> flwang: interesting is the end of the paste .... publicURL in region null
01:16 <flwang> did you check the log of heat-container-agent?
01:16 <flwang> all good?
01:16 <kevko_> flwang: no error , everything goog
01:16 <kevko_> good
01:17 <kevko_> flwang: same symptoms always ..publicURL in region null .. if i switch to rawhide ..it just works
01:18 <flwang> what do you mean publicURL in region null ?
01:19 <kevko_> flwang: oh, paste.openstack.org cut the end ..
01:19 <kevko_> flwang: wait
01:21 <kevko_> flwang: http://paste.openstack.org/show/736605/
01:21 <kevko_> flwang: this
01:22 <kevko_> flwang: all other things look good ...so, i'm wondering if i am not missing some config in magnum ..
01:25 <flwang> i can see this 'cluster_user_trust = True'
01:26 <flwang> kevko_: my conf http://paste.openstack.org/show/736606/
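[Editor's note: the paste contents are not reproduced here; the options under discussion plausibly take this shape. An illustrative magnum.conf excerpt, not the actual paste — x509keypair as the certificate backend instead of barbican, plus the cluster_user_trust setting flwang quotes:]

```ini
[certificates]
cert_manager_type = x509keypair

[trust]
cluster_user_trust = True
```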
01:32 <kevko_> flwang: hmm, ok ...i will try and let you know ..
01:33 <kevko_> flwang: probably tomorrow , now i'm going to sleep ..here is 2:33 am :)
01:34 <openstackgerrit> Merged openstack/magnum master: Add Octavia python client for Magnum  https://review.openstack.org/615591
01:35 <openstackgerrit> Lingxian Kong proposed openstack/magnum master: Support hook mechanism for cluster deletion  https://review.openstack.org/497144
01:35 <openstackgerrit> Lingxian Kong proposed openstack/magnum master: Add load balancer hook for cluster pre-deletion  https://review.openstack.org/620761
01:52 <kevko_> bye bye guys
02:14 <openstackgerrit> Jason SUN proposed openstack/magnum master: Change openstack-dev to openstack-discuss  https://review.openstack.org/621836
02:55 <openstackgerrit> Jason SUN proposed openstack/python-magnumclient master: Change openstack-dev to openstack-discuss  https://review.openstack.org/621887
03:17 <openstackgerrit> Guo Jingyu proposed openstack/magnum master: Change openstack-dev to openstack-discuss  https://review.openstack.org/621922
05:24 <openstackgerrit> Huang.Xiangdong proposed openstack/python-magnumclient master: Add "--labels-override" boolean option when creating a cluster  https://review.openstack.org/621994
08:23 <tobias-urdin> strigazi: will new clusters pull in the v1.11.5-1 image without any intervention, so i only have to worry about those currently running?
08:50 <openstackgerrit> ShangXiao proposed openstack/magnum master: Add release notes link to README  https://review.openstack.org/560822
08:54 <openstackgerrit> ShangXiao proposed openstack/magnum master: Change bugs link for README  https://review.openstack.org/560822
09:04 <strigazi> tobias-urdin: yes
09:20 <tobias-urdin> thank you
14:47 <openstackgerrit> Michal Arbet proposed openstack/magnum master: Fix python3 compatibility  https://review.openstack.org/618756
18:32 <openstackgerrit> melissaml proposed openstack/magnum-tempest-plugin master: Change openstack-dev to openstack-discuss  https://review.openstack.org/622539
19:15 <openstackgerrit> Vieri proposed openstack/magnum-ui master: Change openstack-dev to openstack-discuss  https://review.openstack.org/622568
19:17 <openstackgerrit> Vieri proposed openstack/magnum-specs master: Change openstack-dev to openstack-discuss  https://review.openstack.org/622569
20:56 <strigazi> Anyone here for meeting?
21:00 <colin-> o/
21:01 <strigazi> #startmeeting containers
21:01 <openstack> Meeting started Tue Dec  4 21:01:06 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01 *** openstack changes topic to " (Meeting topic: containers)"
21:01 <openstack> The meeting name has been set to 'containers'
21:01 <strigazi> #topic Roll Call
21:01 *** openstack changes topic to "Roll Call (Meeting topic: containers)"
21:01 <strigazi> o/
21:02 <cbrumm_> o/
21:02 <colin-> hello
21:02 <strigazi> hello guys
21:03 <strigazi> #topic Announcements
21:03 *** openstack changes topic to "Announcements (Meeting topic: containers)"
21:03 <strigazi> Following CVE-2018-1002105 https://github.com/kubernetes/kubernetes/issues/71411
21:03 <strigazi> I've pushed images for 1.10.11 and 1.11.5
21:04 <strigazi> #link http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000501.html
21:04 <cbrumm_> thanks
21:04 <strigazi> And some quick instructions for upgrading.
21:05 <strigazi> After looking into the CVE, it seems that magnum clusters suffer only from the anonymous-auth=true on the API server issue
21:05 <strigazi> The default config with magnum is not using the kube aggregator API, and in kubelet this option is set to false.
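[Editor's note: the mitigation strigazi goes on to describe amounts to flipping one API server flag. A hedged sketch, assuming the fedora-atomic driver's /etc/kubernetes/apiserver env-file layout; the exact file and variable name may differ per release:]

```ini
# /etc/kubernetes/apiserver on the master (illustrative excerpt):
# reject unauthenticated requests to the API server.
KUBE_API_ARGS="--anonymous-auth=false"
```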
21:06 <colin-> understood
21:06 <cbrumm_> that's good, we've enabled the aggregator though :(
21:07 <flwang> o/
21:07 <strigazi> We need to fix this anyway, but so far we haven't.
21:07 <flwang> strigazi: does the v1.11.5 include the cloud provider support?
21:08 <strigazi> flwang: v1.11.5-1, yes it does
21:08 <flwang> strigazi: cool, thanks
21:10 <strigazi> FYI, today at cern I counted 141 kubernetes clusters, we plan to set anonymous-auth=false in the API and then advise users to upgrade manually or migrate to new clusters with v1.12.3
21:11 <cbrumm_> 141, nice!
21:11 <strigazi> All clusters are inside our private network, so only critical services are advised to take action.
21:12 <strigazi> to be on the safe side
21:12 <strigazi> final comment about the CVE that monopolised my day and last night,
21:13 <strigazi> multi-tenant clusters are also more vulnerable since non-owners might run custom code in the cluster.
21:14 <strigazi> #topic Stories/Tasks
21:14 *** openstack changes topic to "Stories/Tasks (Meeting topic: containers)"
21:14 <cbrumm_> luckily we don't have any multi-tenant clusters
21:15 <strigazi> last week was too busy for me, I don't have any updates, I think I missed an issue with the heat-agent in queens or rocky, flwang ?
21:16 <strigazi> cbrumm_: we all might have some with keystone-auth? it is easier to give access
21:16 <flwang> strigazi: for me, i worked on the keystone auth feature
21:16 <flwang> and it's ready for testing
21:16 <cbrumm_> I haven't looked into that, great question
21:16 <flwang> it works for me
21:17 <flwang> and now i'm working on the client side to see if we can automatically generate the config
21:17 <strigazi> flwang: shall we add one more label for authz?
21:18 <flwang> strigazi: can you remind me of the use case for splitting authN and authZ?
21:19 <strigazi> flwang: the user might want to manage RBAC only with k8s, with keystone authz you need to add the rules twice, once in the keystone policy and once in k8s
21:20 <flwang> but my point is whether we need two labels here, because if users just want to manage RBAC with k8s, they can simply not update the configmap and leave it as it is
21:20 <flwang> keep the default one
21:21 <flwang> i'm just hesitant to introduce more labels here
21:21 <strigazi> I'll check again if the policy is too restrictive, in general lgtm, thanks
21:23 <flwang> strigazi: i'm trying to set a very general policy here, but i'm open to any comments
21:24 <flwang> strigazi: lxkong and i are working on https://review.openstack.org/#/c/497144/
21:24 <strigazi> flwang: I'll leave a comment in gerrit if needed. Looks good as a first iteration, we can change smth later if we need to. I'll just need to test it again since last time.
21:24 <flwang> strigazi: cool, thanks
21:25 <flwang> the delete resource feature is also an important one
21:25 <strigazi> flwang: I'll review it
21:25 <flwang> now we're getting many tickets saying can't delete cluster
21:27 <strigazi> there is no hook that does smth yet, correct?
21:27 <flwang> lxkong will submit a patch for LB soon
21:27 <flwang> the current patch is just the framework
21:28 <strigazi> I'm happy to include it in this patch or merge the two together
21:28 <lxkong> flwang: strigazi  the patch is already there, working on fixing the CI https://review.openstack.org/#/c/620761/
21:28 <lxkong> i've already tested it in the devstack environment
21:28 <lxkong> but need to figure out the ut
21:28 <lxkong> failure
21:28 <strigazi> lxkong: what is the issue in the CI?
21:28 <lxkong> strigazi: just a unit test
21:29 <strigazi> ok
21:29 <lxkong> in the real functionality test, the lbs created by the services can be properly removed before the cluster deletion
21:30 <strigazi> ok
21:30 <strigazi> I'll test in devstack, we don't have octavia in our cloud so all my input will come from devstack.
21:30 <lxkong> considering the different k8s versions and octavia/neutron-lbaas setups other people are using, that hook mechanism is totally optional, it's up to the deployer to config it or not
21:31 <lxkong> strigazi: yeah, that patch is for octavia
21:31 <strigazi> got it
21:31 <lxkong> strigazi: you also need to patch k8s with https://github.com/kubernetes/cloud-provider-openstack/pull/223
21:32 <lxkong> which will add the cluster uuid into the lb's description
21:32 <strigazi> lxkong: does this work with the out-of-tree cloud-provider?
21:32 <lxkong> we will include that PR in our magnum images
21:32 <lxkong> strigazi: yeah, sure
21:32 <lxkong> latest CCM already has that fix
21:33 <strigazi> cool
21:35 <strigazi> lxkong: kind of relevant question, when using the CCM do you need the cloud config on the worker nodes too?
21:35 <flwang> strigazi: btw, besides the keystone auth, i'm working on the ccm integration
21:35 <lxkong> strigazi: no
21:35 <lxkong> kubelet should have --cloud-provider=external
21:35 <strigazi> only?
21:36 <lxkong> yeah
21:36 <strigazi> cool
21:36 <flwang> lxkong: where does the pod read the cloud config from?
21:36 <lxkong> it doesn't talk to cloud stuff any more
21:36 <lxkong> by com
21:36 <flwang> talk to apiserver?
21:36 <lxkong> cm
21:36 <lxkong> configmap
21:37 <lxkong> you need to create a cm with all the cloud config content
21:37 <lxkong> and pass that cm to CCM
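[Editor's note: a hedged sketch of the ConfigMap lxkong describes; the name, namespace, and cloud.conf values are illustrative, not from any Magnum template:]

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: kube-system
data:
  cloud.conf: |
    [Global]
    auth-url=https://keystone.example.com:5000/v3
    region=RegionOne
```

The same ConfigMap can be created from an existing file with `kubectl create configmap cloud-config --from-file=cloud.conf -n kube-system`.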
21:37 <strigazi> it only makes sense.
21:37 <flwang> ok
21:38 <cbrumm_> just make sure that if your cm has cloud credentials you lock down the policies around accessing it.
21:39 <strigazi> flwang: lxkong cbrumm_ for better security, if the ccm runs as a DS on the master nodes, it can mount the config from the node
21:39 <flwang> lxkong: cbrumm_: does the cloud config cm need to be created manually? or will it be created by something on our behalf?
21:39 <strigazi> this way the creds are not accessible via any api
21:40 <flwang> strigazi: i'm going to make the ds run only on master
21:40 <strigazi> flwang: you can mount the config from the host then
21:40 <strigazi> *cloud config
21:41 <flwang> yes, but i'm not sure if ccm can still read the cloud config file or if it's only happy with a configmap now
21:41 <cbrumm_> strigazi: we do that too. Just saying that if creds are in cms they must also be protected.
21:41 <strigazi> cbrumm_: +1
21:41 <strigazi> flwang:  it is the same from the ccm's point of view
21:41 <strigazi> flwang:  the pods will see file
21:42 <strigazi> flwang:  the pods will see a file that may come from the host or the config map
21:42 <flwang> strigazi: cool, will double check
21:43 <strigazi> flwang: it is better to not put passwords in config maps or even secrets without a KMS
21:43 <flwang> strigazi: ack
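[Editor's note: strigazi's suggestion (CCM as a master-only DaemonSet mounting the cloud config from the host, so the credentials never pass through the API) could look roughly like this. Image tag, label names, and paths are assumptions, not Magnum's actual manifest:]

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: openstack-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      # Pin to masters and tolerate the master taint.
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: ccm
          image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:latest
          args:
            - --cloud-provider=openstack
            - --cloud-config=/etc/kubernetes/cloud-config
          volumeMounts:
            - name: cloud-config
              mountPath: /etc/kubernetes/cloud-config
              readOnly: true
      volumes:
        # Credentials stay on the master's filesystem, not in a cm/secret.
        - name: cloud-config
          hostPath:
            path: /etc/kubernetes/cloud-config
```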
21:47 <strigazi> fyi, 1.13.0, which I tested without the cloud provider, works without issues with rocky.
21:49 <flwang> strigazi: why didn't you test the cloud provider? ;)
21:50 <strigazi> I have one last question about the cloud provider, is the external one any better in terms of number of API calls?
21:50 <flwang> strigazi: technically yes
21:50 <flwang> because instead of sending api calls from each kubelet, there is only one caller, the ccm
21:50 <flwang> lxkong: correct me if i'm wrong
21:50 <strigazi> flwang: I tested in our production cloud, we don't have a use case for it there.
21:50 <flwang> strigazi: fair enough
21:51 * lxkong is reading back the log
21:51 <strigazi> I'd like to have one, but we don't :( no lbaas, no cinder, only manila, cvmfs and our internal dns lbaas.
21:52 <flwang> strigazi: right, makes sense
21:53 <lxkong> flwang: you are right
21:53 <lxkong> this picture will help you understand better https://paste.pics/fe51956a0c2605edeaf2d42617fe108e
21:55 <strigazi> Anything else for the meeting?
21:55 <cbrumm_> not today
21:56 <flwang> lxkong: thanks for sharing, good diagram
21:56 <flwang> strigazi: all good for me
21:56 <flwang> strigazi: are you going to skip the next meeting?
21:56 <flwang> until the new year?
21:57 <strigazi> no, I can do the next two
21:57 <strigazi> 11 and 18 of Dec
21:57 <cbrumm_> me and my team will miss the 25th and 1st meetings
21:57 <strigazi> me too
21:58 <strigazi> I'll put it in the wiki
21:59 <flwang> we won't have the 25th and 1st meetings anyway :D
21:59 <flwang> cool, i will work next week too
22:01 <flwang> strigazi: thank you
22:02 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting
22:03 <strigazi> thanks for joining the meeting everyone
22:03 <strigazi> #endmeeting
22:03 *** openstack changes topic to "OpenStack Containers Team"
22:03 <openstack> Meeting ended Tue Dec  4 22:03:52 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
22:03 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-04-21.01.html
22:03 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-04-21.01.txt
22:03 <openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-04-21.01.log.html
22:04 <jakeyip> hi all, late to the party, didn't want to crash your meeting, just want to intro myself. I'm Jake from Nectar Research Cloud (Australia)
22:05 <strigazi> hello jakeyip
22:05 <cbrumm_> Hi Jake
22:06 <flwang> jakeyip: wave from NZ
22:06 <jakeyip> We've deployed Magnum recently in prod. works well. Many thanks for the hard work you've put into it.
22:07 <cbrumm_> You're already in prod with users? Nice
22:07 <strigazi> jakeyip: excellent, which release?
22:07 <jakeyip> we are at queens.
22:08 <flwang> jakeyip: you guys have only 1 region, right?
22:09 <jakeyip> yes
22:10 <brtknr> o/ hey all
22:11 <brtknr> i've been following the autoscaling work under cernops... looking good
22:11 <strigazi> brtknr :)
22:12 <jakeyip> we have very limited 'pilot' users. need to work through some operational issues quickly before more people start jumping on
22:13 <brtknr> looks like the meeting time has moved again for me... i thought it was starting at 22:00
22:13 <strigazi> brtknr: winter time
22:14 <strigazi> it is 2100 UTC
22:14 <jakeyip> I was reading past logs, flwang: we are having the cluster DELETE_FAILED issue too. I think this is due to the LBaaS created by the k8s external LB not being deleted before deleting the cluster. looks like there's a patch for now
22:15 <strigazi> brtknr: check the link in the wiki https://wiki.openstack.org/wiki/Meetings/Containers for the conversion
22:15 <flwang> jakeyip: there are 2 cases
22:15 <brtknr> strigazi: thanks :)
22:15 <flwang> we can discuss offline if you want
22:15 <flwang> jakeyip: lb of master nodes and lb running on top of k8s
22:17 <strigazi> jakeyip: if this is your current issue you are in a pretty good state.
22:18 <jakeyip> flwang: pretty good state is what I like to hear. :P
22:19 <jakeyip> another thing I've found is that magnum-conductor periodically polls the k8s API? I find that it generates lots of ERROR level logs if someone mistakenly deletes the instance. Filled up our logs one time.
22:20 <jakeyip> is this a known issue, or should I file a bug?
22:20 <strigazi> jakeyip: [drivers] send_cluster_metrics = False
22:20 <strigazi> let's say it is a false alarm.
22:21 <flwang> jakeyip: just turn off the config option mentioned by strigazi
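[Editor's note: the option strigazi quotes lives in magnum.conf on the conductor:]

```ini
[drivers]
# Stop the periodic k8s health polling that floods the conductor logs
# when cluster VMs are deleted out of band.
send_cluster_metrics = False
```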
22:22 <jakeyip> thanks strigazi, flwang!
22:22 <flwang> jakeyip: for lb deletion of master nodes, you also need to update your heat
22:22 <flwang> jakeyip: you need this one https://review.openstack.org/#/c/619941/
22:22 <jakeyip> ok. I haven't got the master nodes lb config working yet though
22:22 <strigazi> flwang: there is an issue with the LBs of the api too?
22:22 <flwang> jakeyip: then for the lb on k8s, just monitor the progress we're making
22:23 <flwang> strigazi: sometimes, it's very hard to delete
22:23 <flwang> have to delete the lb manually and then retry to delete the cluster
22:23 <flwang> it's a heat problem, not magnum
22:23 <jakeyip> flwang: for lb on k8s is this the right bug? https://review.openstack.org/#/c/620761/
22:23 <flwang> jakeyip: correct
22:24 <flwang> jakeyip: and this https://review.openstack.org/#/c/620761/
22:24 <strigazi> flwang: jakeyip If I were you, in our internal docs I would put in red letters: delete your LBs first with kubectl. I think it is a reasonable workaround, of course not ideal.
22:24 <flwang> strigazi: we did, but customers won't read your docs
22:25 <flwang> and when they see issues, they just raise a ticket and complain about your service
22:25 <jakeyip> strigazi: yeah plan to do that. but depending on users to read docs... :P
22:25 <strigazi> flwang: when the ticket arrives you can point them to the docs, at least you will have a point.
22:25 <jakeyip> I've done some operator tooling to help me hunt down the offending resources to delete
22:26 <strigazi> flwang: jakeyip of course they won't read the docs, at least not the first time.
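[Editor's note: the operator tooling jakeyip mentions could start from something like this sketch: given the JSON from `kubectl get svc --all-namespaces -o json`, list the LoadBalancer services that must be removed before deleting the cluster. The function and sample data are illustrative, not jakeyip's actual tooling.]

```python
import json

def loadbalancer_services(kubectl_json):
    """Return (namespace, name) for every Service of type LoadBalancer.

    Expects the JSON emitted by `kubectl get svc --all-namespaces -o json`.
    """
    doc = json.loads(kubectl_json)
    return [
        (item["metadata"]["namespace"], item["metadata"]["name"])
        for item in doc.get("items", [])
        if item.get("spec", {}).get("type") == "LoadBalancer"
    ]

# Minimal hand-written sample standing in for real kubectl output:
sample = json.dumps({
    "items": [
        {"metadata": {"namespace": "default", "name": "web"},
         "spec": {"type": "LoadBalancer"}},
        {"metadata": {"namespace": "kube-system", "name": "kube-dns"},
         "spec": {"type": "ClusterIP"}},
    ]
})
print(loadbalancer_services(sample))  # [('default', 'web')]
```

Each entry could then be removed with `kubectl -n <namespace> delete svc <name>`, which makes the cloud provider tear down the corresponding Octavia/neutron LB before the Heat stack delete runs.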
22:27 <jakeyip> flwang: why do you say it's a heat issue? afaik the resources created by the external lb in k8s don't show up in heat?
22:30 <strigazi> jakeyip: I'm going to bed, flwang knows everything :), for real! If you need anything, feel free to ping me as well. Also, you are more than welcome to join the meeting next week.
22:30 <strigazi> flwang: jakeyip have a nice day
22:30 <jakeyip> sure, have a good night strigazi. thanks!
22:31 <colin-> welcome jakeyip
22:32 <jakeyip> flwang: also, change https://review.openstack.org/#/c/620761/ fixes it in magnum, so a bit confused on why you say it's a heat problem not magnum.
22:32 <jakeyip> hi colin- :)
22:32 <flwang> strigazi: night
22:32 <openstackgerrit> Lingxian Kong proposed openstack/magnum master: Add load balancer hook for cluster pre-deletion  https://review.openstack.org/620761
22:33 <flwang> jakeyip: sometimes, heat may have a race condition issue, or something like that
22:33 <flwang> which causes heat to try to delete the network/subnet before the lb is fully deleted
22:33 <flwang> and then it will fail because there is a port on that network/subnet
22:34 <jakeyip> but heat doesn't know that there's an lb on that network/subnet because it's not a resource created by heat?
22:35 <flwang> no, i'm talking about the case of the lb of the master nodes
22:35 <flwang> not the lb on k8s
22:35 <jakeyip> ah ok. I'm referring to the lb on k8s
22:35 <jakeyip> flwang: sorry sorry
22:36 <jakeyip> flwang: hmm so I'm looking at this patch - https://review.openstack.org/620761 - for lb on k8s.
22:37 <jakeyip> flwang: it seems like you are trying to find the octavia lb to delete it before deleting the cluster.
22:37 <flwang> jakeyip: for lb on k8s, yes
22:38 <jakeyip> flwang: I wonder if you have tried getting k8s to delete everything instead?
22:38 <flwang> jakeyip: that's another option when your k8s api still works
22:38 <flwang> for some cases, if the k8s api is down, you have trouble
22:39 <jakeyip> you are right
22:40 <jakeyip> flwang: this way sounds like a force-delete option. whereas if we can get the k8s api to delete what it created it's much cleaner
22:41 <flwang> agree
22:42 <jakeyip> flwang: could be both are necessary
22:43 <jakeyip> flwang: I am also wondering if this will mean more code in magnum; does magnum have to do this for each cloud provider resource?
22:44 <jakeyip> we can bring this discussion offline if you want
22:44 <flwang> that's alright
22:45 <flwang> so far, i think only for the LB resource
22:45 <flwang> i don't think we need to do the same thing for PV
22:45 <flwang> but the hook can give you flexibility
22:48 <jakeyip> flwang: I agree. can do this as a first pass to fix your problem. I can explore the k8s api route in my free time. :)
22:48 <jakeyip> flwang: it doesn't work for me cos it's octavia :(
22:48 <flwang> jakeyip: oops ;)
22:51 <jakeyip> flwang: all good. we have to migrate anyway; it's just a long and hard road
22:52 <flwang> jakeyip: good luck
22:52 <jakeyip> thanks

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!