Wednesday, 2020-05-06

*** ttsiouts has joined #openstack-containers  00:00
*** ttsiouts has quit IRC  00:33
<openstackgerrit> jacky06 proposed openstack/magnum-tempest-plugin master: Remove six  https://review.opendev.org/725653  01:20
*** xinliang has joined #openstack-containers  01:39
*** sapd1_x has quit IRC  01:41
*** xinliang has quit IRC  02:03
*** sapd1_x has joined #openstack-containers  02:19
*** sapd1_x has quit IRC  02:29
*** ttsiouts has joined #openstack-containers  02:30
*** ttsiouts has quit IRC  02:39
*** ttsiouts has joined #openstack-containers  02:39
*** sapd1_x has joined #openstack-containers  02:43
<openstackgerrit> Merged openstack/magnum master: [ci] Remove unnecessary container build tasks  https://review.opendev.org/719774  02:57
*** vishalmanchanda has joined #openstack-containers  03:03
*** ttsiouts has quit IRC  03:04
*** ramishra has joined #openstack-containers  03:37
*** ykarel|away is now known as ykarel  03:54
*** openstack has joined #openstack-containers  04:22
*** ChanServ sets mode: +o openstack  04:22
*** udesale has joined #openstack-containers  04:40
*** ttsiouts has joined #openstack-containers  05:01
*** ttsiouts has quit IRC  05:33
*** belmoreira has joined #openstack-containers  06:53
*** ttsiouts has joined #openstack-containers  07:04
*** yolanda has joined #openstack-containers  07:14
*** openstackstatus has quit IRC  07:46
*** openstack has joined #openstack-containers  07:49
*** ChanServ sets mode: +o openstack  07:49
*** ttsiouts has quit IRC  08:07
*** ttsiouts has joined #openstack-containers  08:07
<LarsErikP> so.. this was removed from stein? https://opendev.org/openstack/magnum/commit/d95ba4d1fff69df506928339bb9eb3472bb4f3d1  08:09
<LarsErikP> in addition to what I mentioned yesterday, when I try to run a k8s cluster, flannel is never installed it seems  08:10
<openstackgerrit> Merged openstack/magnum-ui master: Imported Translations from Zanata  https://review.opendev.org/725491  08:27
*** ykarel is now known as ykarel|lunch  08:45
*** flwang1 has joined #openstack-containers  08:58
<flwang1> anybody around?  09:00
<flwang1> shall we cancel this meeting?  09:00
*** strigazi has joined #openstack-containers  09:00
<flwang1> strigazi: ping  09:01
<strigazi> flwang1: o/  09:01
<flwang1> strigazi: Bharat won't join today  09:01
<strigazi> I saw the etherpad  09:01
<flwang1> so I assume it's just you and me, we can have an open discussion or the meeting, up to you  09:01
<strigazi> let's do the open discussion logged as a meeting :)  09:02
<flwang1> #startmeeting  09:02
<openstack> flwang1: Error: A meeting name is required, e.g., '#startmeeting Marketing Committee'  09:02
<flwang1> #startmeeting magnum  09:02
<openstack> Meeting started Wed May  6 09:02:37 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.  09:02
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  09:02
*** openstack changes topic to " (Meeting topic: magnum)"  09:02
<openstack> The meeting name has been set to 'magnum'  09:02
<flwang1> strigazi: i just approved the labels spec  09:02
<flwang1> ttsiouts can start the work now  09:03
<strigazi> #topic labels override  09:03
<ttsiouts> o/  09:03
<flwang1> strigazi: i kind of agree with you that we shouldn't use the config option to 'break' the api  09:03
<strigazi> I left a comment about the name, I did some deep research on wording  09:04
<flwang1> so i'm ok to leave it as False  09:04
<flwang1> strigazi: yep, i saw that. and I appreciate your work  09:04
<strigazi> It turns out merge is more to the point... I think I initially proposed override :(  09:05
<openstackgerrit> Merged openstack/magnum-specs master: Magnum Labels Override  https://review.opendev.org/716571  09:05
<strigazi> Based on method overriding  09:05
<ttsiouts> so should we go with 'merge' instead?  09:05
<ttsiouts> if so, I need to change the spec also  09:06
<strigazi> I think since we are just on the name discussion and we have agreement on the "real" issues, we can start polishing the implementation  09:06
<flwang1> strigazi: are you saying that you're voting 'merge'?  09:06
<strigazi> the name will be just a sed  09:06
<flwang1> strigazi: +1  09:06
<flwang1> ttsiouts: pls focus on the code part  09:07
<ttsiouts> flwang1: sure  09:07
<flwang1> we can continue the name discussion there and update the spec later  09:07
<ttsiouts> since we are discussing this now, could we decide on the name now also?  09:07
<ttsiouts> I think we all need this to get merged asap  09:08
<strigazi> naming is "hard" but it's a detail. Without brtknr we will go back to the drawing board when he joins  09:08
<flwang1> ttsiouts: i have approved the spec already  09:08
<strigazi> So it is merged too \o/  09:09
<ttsiouts> \o/  09:09
<flwang1> yep, i think strigazi and I agree to use 'merge', but we'd like to get a 'yes' from brtknr anyway :)  09:09
<ttsiouts> great!  09:09
<strigazi> so for you it's a go :)  09:10
<ttsiouts> flwang1, strigazi: great! thanks a lot for your effort  09:10
<flwang1> ttsiouts: thank you!  09:11
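[Editor's note: the "merge" semantics agreed above -- labels supplied at cluster-creation time layered on top of the cluster template's labels instead of replacing them -- can be illustrated with a small Python sketch. The function name and the merge flag below are illustrative only, not Magnum's actual implementation.]

    # Illustrative only: user-supplied labels are merged over the template's
    # labels; user values win on conflicts, template-only keys are kept.
    def merge_labels(template_labels, cluster_labels, merge=True):
        if not merge:
            # Old behaviour: any user-supplied labels replace the template's set.
            return dict(cluster_labels or template_labels)
        merged = dict(template_labels)   # start from the cluster template
        merged.update(cluster_labels)    # overrides and additions from the user
        return merged

    template = {"kube_tag": "v1.18.2", "auto_scaling_enabled": "true"}
    user = {"kube_tag": "v1.18.3"}
    assert merge_labels(template, user) == {"kube_tag": "v1.18.3",
                                            "auto_scaling_enabled": "true"}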
<flwang1> strigazi: move on?  09:11
<strigazi> yes  09:11
<strigazi> For the nodes list  09:12
<flwang1> yes  09:12
<strigazi> flwang1: we go with ng node list  09:12
<strigazi> and then we see?  09:12
<flwang1> i have got new comments from Thomas and i'm happy with it  09:13
<flwang1> did you see it?  09:13
<flwang1> strigazi: IIUC, currently we just need the nodes list based on a given NG  09:14
<strigazi> flwang1: exactly what I was typing  09:14
<flwang1> and i think we have no choice, we need to build it in memory by querying heat and nova  09:14
<strigazi> flwang1: for a nodegroup, you can get the nodes with one heat call  09:15
<flwang1> the tricky part may be the pagination, but i prefer to leave the pagination for later, are you ok with that?  09:15
<flwang1> strigazi: yep, i know, i should have said 'we may need to call nova (or not)'  09:15
<flwang1> because Thomas mentioned he needs the status and reason of the instance, so i'm not 100% sure if heat can provide that info or not  09:16
<strigazi> flwang1: we don't need calls to nova at this point. 1. heat does many calls to nova already. 2. We cut through the stack  09:16
<flwang1> those are details  09:16
<flwang1> i will figure it out  09:16
*** dioguerra has joined #openstack-containers  09:17
<flwang1> are you ok with doing the pagination later?  09:17
<strigazi> flwang1: We can get status from heat too.  09:17
<strigazi> yes  09:17
<flwang1> wonderful  09:17
<strigazi> assuming we support up to 1000 nodes?  09:17
<strigazi> or is there no max?  09:18
<strigazi> flwang1: ^^ we just return all the info we have  09:19
<flwang1> what's the max cluster size at cern?  09:19
<flwang1> k8s can support 5k i think  09:19
<flwang1> but i don't think it's a common case in prod  09:19
<flwang1> especially given it's one NG, not the whole cluster  09:20
<strigazi> I don't think we go over 500 atm  09:20
<strigazi> only for testing. Anyway, pagination later  09:20
<strigazi> I'll leave a comment on gerrit  09:21
<flwang1> cool  09:21
<flwang1> thanks, man  09:21
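[Editor's note: a rough sketch of the approach discussed above -- listing the servers of one nodegroup with a single Heat call and taking status/reason from Heat rather than Nova. It assumes an already-authenticated python-heatclient client and a nodegroup whose Heat stack id is known; it is not the actual Magnum implementation, and pagination is deliberately left out, as agreed.]

    # Sketch: collect the Nova server resources of one nodegroup stack from Heat.
    def nodegroup_nodes(heat, stack_id):
        nodes = []
        for res in heat.resources.list(stack_id, nested_depth=2):
            if res.resource_type != 'OS::Nova::Server':
                continue
            nodes.append({
                'name': res.resource_name,
                'instance_id': res.physical_resource_id,
                'status': res.resource_status,              # e.g. CREATE_COMPLETE
                'status_reason': res.resource_status_reason,
            })
        return nodes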
<strigazi> One more from me: Support Helm v3 https://review.opendev.org/#/c/720234/  09:22
<strigazi> Do you agree, since we refactor to do a metachart?  09:22
<strigazi> I have all the info about it in gerrit  09:22
<flwang1> i haven't gone through all the comments there, but as long as we don't break v2, i'm ok with that  09:23
<flwang1> do you have any concerns?  09:25
<strigazi> it is fully compatible with v2  09:25
<strigazi> the charts we use are simple. Users/Operators could pick v3 or v2  09:26
<flwang1> then i'm happy  09:26
*** ykarel|lunch is now known as ykarel  09:27
<strigazi> Next, this can be merged  09:28
<strigazi> https://review.opendev.org/#/c/725391/  09:28
*** k_mouza has joined #openstack-containers  09:29
<strigazi> some eventlet nonsense but needed  09:29
<flwang1> done  09:29
<strigazi> I think that's it  09:29
<strigazi> I'll finish the storyboard cleanup now, and we are fully on with reviews and tasks.  09:30
<flwang1> strigazi: if you have time, it would be nice if you could take a look at https://review.opendev.org/#/c/714347/  09:32
<strigazi> This is a mapping?  09:34
<flwang1> what do you mean by 'mapping'?  09:34
<strigazi> [az1, az2, foo]: master-0 -> az1, master-1 -> az2, master-2 -> foo  09:34
<flwang1> yes  09:34
<flwang1> master-0 -> az1, master-1 -> az2, master-2 -> foo  09:35
<strigazi> There is no other way? a server group spread across AZs?  09:37
<flwang1> can a server group spread across AZs automatically?  09:37
<flwang1> i don't know, i need to do some research  09:37
<flwang1> i have another question about multi-master  09:38
<strigazi> I checked, no  09:38
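[Editor's note: a minimal sketch of the availability-zone mapping described above for https://review.opendev.org/#/c/714347/: with AZs [az1, az2, foo], master-0 lands in az1, master-1 in az2 and master-2 in foo. The wrap-around for indexes beyond the list length is an assumption for illustration, not necessarily the patch's behaviour.]

    # Round-robin mapping of a node index onto the requested availability zones.
    def az_for_node(index, availability_zones):
        return availability_zones[index % len(availability_zones)]

    azs = ["az1", "az2", "foo"]
    assert [az_for_node(i, azs) for i in range(3)] == ["az1", "az2", "foo"]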
<flwang1> IIRC, you mentioned that there is no LB (e.g. octavia) at CERN  09:38
<strigazi> This changed two weeks ago  09:39
<flwang1> so how do you use multiple masters?  09:39
<strigazi> Today we are opening multi-master to all users.  09:39
<flwang1> so users can use ip0, ip1 and ip2 to access the k8s api?  09:40
<strigazi> one virtual ip  09:40
<flwang1> e.g. with 3 master nodes  09:40
<flwang1> can magnum support the virtual IP without a lb?  09:40
<flwang1> i didn't even know that  09:41
<strigazi> "there is no LB (e.g. octavia) at CERN" -> this changed two weeks ago  09:41
<flwang1> ok  09:41
<flwang1> i mean before that  09:41
<strigazi> or 4 weeks? ttsiouts, when did we open up LB?  09:41
<strigazi> before that, single master only  09:42
<flwang1> aaaaaaaaahhh  09:42
<flwang1> fair enough  09:42
<flwang1> i'd like to introduce a new field for cluster creation  09:42
<strigazi> master-lb-enabled?  09:42
<flwang1> clever boy  09:43
<strigazi> excellent  09:43
<strigazi> +2  09:43
<flwang1> do you want to understand more of the background?  09:43
<flwang1> in CC, we're providing 2 templates for each version  09:43
<strigazi> not different CTs for single vs multi-master  09:43
<flwang1> dev and prod  09:44
<flwang1> one of the main differences is the lb  09:44
<strigazi> yeap, got it  09:44
<flwang1> we'd like to merge them into one  09:44
<strigazi> same here  09:44
<flwang1> and along with the labels merging, it would make our life much easier  09:45
<strigazi> sounds good to me  09:45
<flwang1> fantastic  09:45
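[Editor's note: a hypothetical sketch of what the new field buys: one cluster template can serve both the "dev" (no LB) and "prod" (LB) cases, with the choice made per cluster instead of per template. The attribute names below are illustrative, not Magnum driver code; the eventual implementation is the WIP patch linked near the end of this log.]

    # Illustration only: pick the API endpoint per cluster based on master_lb_enabled.
    def api_address(cluster, master_addresses):
        if getattr(cluster, "master_lb_enabled", False):
            # One virtual IP on a load balancer (e.g. Octavia) in front of all masters.
            return cluster.api_lb_address      # hypothetical attribute
        # No LB: clients talk to the first (or only) master directly.
        return master_addresses[0]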
<strigazi> I think ttsiouts implemented the two features with the biggest impact on removing user pain  09:46
<strigazi> labels are coming fast  09:46
<flwang1> which 2?  09:46
<flwang1> labels and ?  09:46
<strigazi> nodegroups and labels  09:46
<flwang1> right  09:46
<flwang1> yes  09:46
<flwang1> nodegroup is a very good one  09:47
<strigazi> different flavors with ngs, no hacks for labels  09:47
<flwang1> i really appreciate the contribution from CERN  09:47
<strigazi> ttsiouts++  09:47
<flwang1> kudos to ttsiouts  09:47
<ttsiouts> strigazi, flwang1: thank you guys for all the reviews and ideas!  09:48
<flwang1> strigazi: can i book you for another 12 mins?  09:48
<strigazi> ttsiouts: :) flwang1: Anything else? I mean to discuss, not for ttsiouts  09:48
<strigazi> tell me  09:49
<flwang1> i proposed a patch about the security group hardening  09:49
<strigazi> right, you restored it  09:49
<flwang1> but my original one was too strict, i'd like to improve it to set the security group rules based on the FIP and LB settings  09:49
<flwang1> or maybe a new label from the user to set the security rules dynamically  09:50
<flwang1> in short, by default, it will be the same as now  09:51
<flwang1> but it can be hardened if the user prefers to  09:51
<flwang1> i know it may not sound very charming for CERN or StackHPC, but i believe mnaser will like it  09:52
<flwang1> strigazi: ^  09:52
<flwang1> and CityNetworks  09:52
<strigazi> +1, I think we can have some reasonably secure default sec-groups. What do you want to pass on cluster creation?  09:52
<strigazi> extra rules?  09:54
<flwang1> for example, if lb is enabled, then master nodes shouldn't open ports to 0.0.0.0/0, but only to the fixed network range of the cluster  09:54
<strigazi> or in the CT  09:54
<flwang1> i'm still in the design phase  09:54
<flwang1> in CC, we just hardcode the rules now  09:54
<flwang1> but it doesn't fit upstream  09:54
<flwang1> but before i put more effort into digging, i'd like to get a general approval from you guys  09:55
<flwang1> to make sure the community likes the improvement direction  09:55
<strigazi> does this help? https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports  09:55
<flwang1> yes, and actually we know all the ports clearly so far. and we verified with sonobuoy  09:56
<flwang1> i just need a new design to update the security group rules on the fly  09:56
<flwang1> and it would be a bit hard, i can imagine  09:57
<strigazi> If we follow what this link says, plus a way to pass a security group per nodegroup, you don't need anything else, no?  09:57
<flwang1> let me give you an example: if a user is using flannel, should we allow the calico ports?  09:58
<strigazi> no  09:58
<flwang1> or is it ok if it's only allowed within the network scope?  09:58
*** ttsiouts has quit IRC  09:58
<strigazi> as mentioned here, no: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#worker-node-s  09:59
<strigazi> the bgp ports are closed.  09:59
*** ttsiouts has joined #openstack-containers  09:59
<strigazi> So they need to be opened within the cluster network only if calico is used  10:00
<flwang1> right, that's easy  10:00
<flwang1> ok  10:00
<flwang1> i will refactor the patch and invite you guys to review it  10:00
<flwang1> thank you for all your valuable input  10:00
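[Editor's note: a sketch of the hardening idea flwang1 describes, under stated assumptions: the default stays as today, but an opt-in restricts the Kubernetes API to the cluster's fixed network when an LB fronts the masters, and only opens the ports of the CNI actually in use, limited to the cluster network. Port numbers follow the kubeadm "required ports" page linked above; the function and its arguments are illustrative, not the proposed patch.]

    # Illustrative rule builder for the master nodes' security group.
    def master_sg_rules(cluster_cidr, master_lb_enabled, network_driver,
                        hardened=False):
        api_source = cluster_cidr if (hardened and master_lb_enabled) else "0.0.0.0/0"
        rules = [
            {"protocol": "tcp", "port": 6443,  "remote_ip_prefix": api_source},    # kube-apiserver
            {"protocol": "tcp", "port": 2379,  "remote_ip_prefix": cluster_cidr},  # etcd client
            {"protocol": "tcp", "port": 2380,  "remote_ip_prefix": cluster_cidr},  # etcd peer
            {"protocol": "tcp", "port": 10250, "remote_ip_prefix": cluster_cidr},  # kubelet API
        ]
        if network_driver == "calico":
            rules.append({"protocol": "tcp", "port": 179,
                          "remote_ip_prefix": cluster_cidr})   # BGP, cluster network only
        elif network_driver == "flannel":
            rules.append({"protocol": "udp", "port": 8472,
                          "remote_ip_prefix": cluster_cidr})   # VXLAN, cluster network only
        return rules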
<strigazi> sure thing  10:01
<flwang1> have a nice day, ttyl  10:01
<strigazi> bye  10:01
<flwang1> o/  10:01
<strigazi> don't forget to end the meeting  10:01
<strigazi> flwang1: ^^  10:01
<strigazi> #endmeeting  10:02
<strigazi> I don't think I can do it  10:02
*** flwang1 has quit IRC  10:05
*** born2bake has quit IRC  10:10
*** born2bake has joined #openstack-containers  10:10
*** xinliang has quit IRC  10:24
*** ttsiouts has quit IRC  10:27
*** ttsiouts has joined #openstack-containers  10:30
*** frickler has joined #openstack-containers  10:31
<openstackgerrit> Merged openstack/magnum master: Monkey patch original current_thread _active  https://review.opendev.org/725391  10:31
<frickler> #endmeeting  10:31
*** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"  10:31
<openstack> Meeting ended Wed May  6 10:31:15 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  10:31
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-05-06-09.02.html  10:31
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-05-06-09.02.txt  10:31
<openstack> Log:            http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-05-06-09.02.log.html  10:31
*** pcaruana has quit IRC  10:31
*** ttsiouts has quit IRC  10:48
*** pcaruana has joined #openstack-containers  10:50
*** pcaruana has quit IRC  10:57
*** livelace has joined #openstack-containers  11:01
*** pcaruana has joined #openstack-containers  11:02
*** jmlowe has quit IRC  11:03
*** jmlowe has joined #openstack-containers  11:05
*** ttsiouts has joined #openstack-containers  11:25
*** ttsiouts has quit IRC  11:30
*** ttsiouts has joined #openstack-containers  11:32
*** xinliang has joined #openstack-containers  11:46
*** sapd1_x has quit IRC  11:59
*** dioguerra has quit IRC  12:03
*** livelace has quit IRC  12:13
*** xinliang has quit IRC  12:20
*** udesale_ has joined #openstack-containers  12:23
*** udesale has quit IRC  12:26
*** ykarel is now known as ykarel|afk  12:38
*** livelace has joined #openstack-containers  13:34
*** ttsiouts has quit IRC  13:39
*** ykarel|afk is now known as ykarel  13:54
*** livelace has quit IRC  13:59
*** livelace has joined #openstack-containers  14:01
*** ttsiouts has joined #openstack-containers  14:08
*** ttsiouts has quit IRC  14:13
*** ttsiouts has joined #openstack-containers  14:22
*** jhesketh has quit IRC  14:22
*** belmoreira has quit IRC  14:25
*** udesale_ has quit IRC  14:38
*** ttsiouts has quit IRC  14:38
*** belmoreira has joined #openstack-containers  14:44
*** jhesketh has joined #openstack-containers  14:51
*** belmoreira has quit IRC  14:54
*** belmoreira has joined #openstack-containers  14:56
*** jmlowe has quit IRC  15:08
*** jmlowe has joined #openstack-containers  15:09
*** belmoreira has quit IRC  15:11
*** ttsiouts has joined #openstack-containers  15:13
*** ttsiouts has quit IRC  15:18
*** belmoreira has joined #openstack-containers  15:23
*** ykarel is now known as ykarel|away  15:23
*** belmoreira has quit IRC  15:36
*** belmoreira has joined #openstack-containers  15:40
*** ttsiouts has joined #openstack-containers  16:05
*** ttsiouts has quit IRC  16:23
*** ttsiouts has joined #openstack-containers  16:41
*** born2bake has quit IRC  16:45
*** born2bake has joined #openstack-containers  16:45
*** ttsiouts has quit IRC  16:51
*** born2bake has quit IRC  16:51
*** k_mouza has quit IRC  17:10
*** irclogbot_2 has quit IRC  17:20
*** irclogbot_1 has joined #openstack-containers  17:23
*** livelace has quit IRC  17:44
*** ttsiouts has joined #openstack-containers  17:45
*** ttsiouts has quit IRC  17:49
*** born2bake has joined #openstack-containers  17:49
*** livelace has joined #openstack-containers  18:03
*** k_mouza has joined #openstack-containers  19:25
*** yolanda has quit IRC  19:33
*** k_mouza has quit IRC  19:57
*** belmoreira has quit IRC  19:58
*** belmoreira has joined #openstack-containers  20:10
*** redcavalier has joined #openstack-containers  20:10
*** flwang1 has joined #openstack-containers  20:13
<flwang1> #endmeeting  20:13
<flwang1> brtknr: ping  20:14
<brtknr> flwang1: hi  20:14
<brtknr> how's it going?  20:14
<flwang1> brtknr: yesterday we discussed if we should use 'merge' instead of 'override'; strigazi and I are happy to go for 'merge', is it ok for you (as a native speaker)?  20:15
<flwang1> did you see the last comments from strigazi on the spec?  20:15
<brtknr> I was in favour of merge early on :)  20:15
<brtknr> But somehow the consensus moved towards override, so glad we're back at merge again  20:16
<brtknr> I am not 100% a native speaker  20:16
<flwang1> brtknr: wonderful  20:16
<brtknr> My first language is Nepalese :)  20:16
<flwang1> that's OK, as a Chinese, I'm sure your English is better than mine :D  20:17
<brtknr> How long have you lived in NZ?  20:17
<flwang1> 6 years  20:17
<flwang1> but before that, I worked for IBM for almost 5 years and used English daily  20:18
<brtknr> Cool  20:20
<brtknr> Sorry I gotta run, didn't get much sleep last night because of little babies crying all night, I'm exhausted today  20:22
<brtknr> Hope you and your family are keeping safe, good day!  20:22
<redcavalier> Hi, I realize this is a development channel, but I have a quick question regarding openstack-magnum which may sound dumb, but there doesn't seem to be any obvious answer in the documentation. May I ask here?  20:22
*** belmoreira has quit IRC  20:25
<flwang1> redcavalier: go ahead  20:31
<flwang1> i'm listening  20:31
<redcavalier> flwang1: alright, on my openstack setup my provider external network is completely isolated from the openstack API network. When provisioning a kubernetes cluster, it appears that the instances need to connect to the magnum api endpoints as well as other components, and magnum needs to contact the kubernetes cluster through https. Is there a way around this, or is it simply how magnum was designed?  20:33
<flwang1> redcavalier: do you mean your provider network can't talk to the openstack control plane?  20:34
<redcavalier> exactly  20:34
<redcavalier> I mean I could set up a proxy between each network, but I want to know if magnum itself has a solution for this or not.  20:36
<flwang1> i don't think there is a workaround that i'm aware of  20:37
<redcavalier> Alright  20:37
<flwang1> redcavalier: is it a private cloud deployment btw?  20:37
<redcavalier> hybrid cloud. We have customers, but they only have access to the instances, not the openstack control plane. We're a hosting company.  20:38
<flwang1> right, that makes sense  20:38
<flwang1> i think a proxy is a good way  20:39
<flwang1> and you can set the proxy in the template  20:39
<flwang1> which has been well tested  20:39
<redcavalier> Oh? I'll have to read up on this because I didn't think it would fix my issue when I saw the option. However, thank you for the hint, that gives me something to tell my boss.  20:41
<flwang1> redcavalier: by default, the proxy is used for image pulling  21:22
<redcavalier> flwang1: I see, that's probably why I didn't see it as an option.  21:25
<flwang1> but i think it can also be used for the purpose of talking to the openstack control plane  21:26
<flwang1> if it doesn't work, feel free to propose a patch to fix it :)  21:26
<redcavalier> Sure, if I can suggest or contribute somehow, I will.  21:29
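[Editor's note: a sketch of "set the proxy in the template" using python-magnumclient, under assumptions: the proxy address, image/flavor/keypair names and the keystone credentials are placeholders, and whether the nodes' proxy settings cover every control-plane call on a given release needs testing, as the discussion above notes. The http_proxy/https_proxy/no_proxy fields are the same ones the --http-proxy/--https-proxy/--no-proxy options of "openstack coe cluster template create" set.]

    from keystoneauth1 import loading, session
    from magnumclient.client import Client

    # Placeholder credentials; replace with real values (or load from clouds.yaml).
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='https://keystone.example.net:5000/v3',
                                    username='demo', password='secret',
                                    project_name='demo',
                                    user_domain_name='Default',
                                    project_domain_name='Default')
    sess = session.Session(auth=auth)

    magnum = Client(version='1', session=sess)
    magnum.cluster_templates.create(
        name='k8s-isolated-net',
        coe='kubernetes',
        image_id='fedora-coreos-31',                  # placeholder image
        flavor_id='m1.medium',
        keypair_id='mykey',
        external_network_id='provider-net',
        http_proxy='http://proxy.example.net:3128',   # forward proxy reachable from the VMs
        https_proxy='http://proxy.example.net:3128',
        no_proxy='localhost,127.0.0.1',
    )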
<flwang1> redcavalier: may i know which company you're working for?  21:33
<redcavalier> We're called PlanetHoster. We're a fairly small hosting company based in Montreal, Canada. We've been using openstack for several years now and we just started looking into containers on top of openstack.  21:34
<jrosser> the instances only need to be able to contact the openstack public api endpoint. I fixed a bug in heat specifically to make this work with an isolated control plane  21:55
<jrosser> redcavalier: ^ that’s info for you  21:58
<redcavalier> @jrosser Ok, but doesn't magnum need to connect to the kubernetes cluster? I was under that impression because right now I've set it up so that communication only works one way, and my kubernetes setup, while provisioned, is in a tainted state. I thought magnum needed to do some configuration itself? That said, I'm running this on rocky.  22:01
<jrosser> there are two things: the heat agent in the created vm needs to talk to the openstack public api endpoint, and that can be made to work with a config setting in heat  22:02
<redcavalier> yea, that works  22:02
<jrosser> and if the magnum service needs to talk https from the control plane to the vm, then you need an outbound proxy from the control plane (or nat)  22:03
<jrosser> two different problems imho  22:03
<jrosser> I can’t remember if that control plane to vm communication is needed, it’s been a while  22:04
<jrosser> but I have had this working with control plane and provider networks not routable to each other, just an outbound proxy from the control plane  22:05
<redcavalier> I see. Right now on my staging environment I have a proxy set up (I put haproxy on my network node), so everything besides https communication works. What I understand is that there should be a way to provision my kubernetes cluster without the nodes showing up as tainted when cluster provisioning is finished.  22:08
<jrosser> I meant a forward proxy like squid, not haproxy  22:10
<jrosser> if your control plane can NAT out to the VM then that’s not an issue  22:10
<jrosser> it depends how isolated your isolated networks really are  22:10
<redcavalier> yea, I understand, that's something I'll have to bring up with the person in charge of that.  22:13
<jrosser> fwiw I did this with squid and made it work, it’s non-trivial though  22:13
<redcavalier> Truth be told, the only thing I really needed to know was whether it was possible to keep the full isolation. As soon as I was told I needed a proxy, I had my answer. I'm happy to know that it's possible to keep a certain level of isolation though, thank you.  22:16
*** threestrands has joined #openstack-containers  22:20
*** redcavalier has quit IRC  22:35
<openstackgerrit> Feilong Wang proposed openstack/magnum master: [WIP] Add master_lb_enabled to cluster  https://review.opendev.org/726017  22:52
*** livelace has quit IRC  23:07
*** born2bake has quit IRC  23:18
