Wednesday, 2020-09-16

00:44 *** ianychoi__ is now known as ianychoi
02:13 *** wuchunyang has joined #openstack-containers
02:25 *** k_mouza has joined #openstack-containers
02:30 *** k_mouza has quit IRC
02:46 *** rcernin has quit IRC
02:57 *** rcernin has joined #openstack-containers
03:24 *** k_mouza has joined #openstack-containers
03:28 *** k_mouza has quit IRC
03:32 *** k_mouza has joined #openstack-containers
03:36 *** k_mouza has quit IRC
03:48 *** dave-mccowan has joined #openstack-containers
04:09 *** wuchunyang has quit IRC
04:18 *** wuchunyang has joined #openstack-containers
04:23 *** wuchunyang has quit IRC
04:43 *** ykarel|away has joined #openstack-containers
04:46 *** ykarel|away is now known as ykarel
05:00 *** dave-mccowan has quit IRC
05:18 *** k_mouza has joined #openstack-containers
05:19 *** rcernin has quit IRC
05:23 *** k_mouza has quit IRC
05:28 *** rcernin has joined #openstack-containers
06:40 *** vishalmanchanda has joined #openstack-containers
07:40 *** k_mouza has joined #openstack-containers
07:44 *** k_mouza has quit IRC
07:46 *** kevko has joined #openstack-containers
07:47 *** kevko has quit IRC
07:47 *** kevko has joined #openstack-containers
08:06 *** rcernin has quit IRC
08:22 *** yolanda has quit IRC
08:22 *** k_mouza has joined #openstack-containers
08:22 *** yolanda has joined #openstack-containers
08:26 *** k_mouza has quit IRC
08:31 *** k_mouza has joined #openstack-containers
08:35 *** k_mouza has quit IRC
08:43 *** ykarel_ has joined #openstack-containers
08:44 *** k_mouza has joined #openstack-containers
08:46 *** ykarel has quit IRC
08:47 *** ykarel_ is now known as ykarel
09:01 *** flwang1 has joined #openstack-containers
09:02 <flwang1> brtknr: meeting?
09:02 <brtknr> flwang1: hello!
09:02 <brtknr> yes
09:03 <flwang1> #startmeeting magnum
09:04 <openstack> Meeting started Wed Sep 16 09:03:59 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:04 *** openstack changes topic to " (Meeting topic: magnum)"
09:04 <openstack> The meeting name has been set to 'magnum'
09:04 <flwang1> #topic roll call
09:04 *** openstack changes topic to "roll call (Meeting topic: magnum)"
09:04 <flwang1> i think only brtknr and me?
09:04 <flwang1> jakeyip: around?
09:05 <jakeyip> flwang1: hi o/
09:05 <flwang1> jakeyip: hello
09:05 <brtknr> o/
09:05 <flwang1> as for your storageclass question, the easiest way is using the post_install_manifest config
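
A minimal sketch of the post_install_manifest approach flwang1 mentions, assuming the magnum.conf option is named post_install_manifest_url in the [kubernetes] group and that the clusters run the Cinder CSI provisioner (both are assumptions to verify against the deployed release):

    # Sketch only: publish a default StorageClass manifest and point magnum at it.
    # Option name, URL, and provisioner below are illustrative assumptions.
    cat > /var/www/html/default-storageclass.yaml <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cinder-default
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: cinder.csi.openstack.org
    EOF

    cat >> /etc/magnum/magnum.conf <<'EOF'
    [kubernetes]
    post_install_manifest_url = http://controller.example.com/default-storageclass.yaml
    EOF

The manifest is applied once, deployment-wide, after cluster creation, which is why the per-template and per-AZ limitations come up again later in this meeting.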
09:06 <flwang1> #topic hyperkube
09:06 *** openstack changes topic to "hyperkube (Meeting topic: magnum)"
09:06 <flwang1> brtknr: i'd like to discuss the hyperkube first
09:06 <brtknr> sure
09:06 <flwang1> i contacted the rancher team who maintain their hyperkube image; i was told it's a long term solution for their RKE
09:07 <brtknr> thats good to hear
09:08 <flwang1> personally, i don't mind moving to binary; what i'm thinking is we need to make the decision as a team
09:08 <brtknr> I tested hyperkube with 1.19 and it works very well
09:08 <flwang1> and i prefer to do it in the next cycle
09:08 <brtknr> with rancher container
09:08 <flwang1> brtknr: that's good to know
09:09 <flwang1> maybe we can build it in our pipeline with heat-container-agent
09:09 <brtknr> I think a good short term solution is to introduce a new label, e.g. kube_source:
09:09 <brtknr> and we can override hyperkube source with whatever we use
09:09 <brtknr> or kube_prefix
09:10 <flwang1> i'm wondering if it's necessary
09:10 <openstackgerrit> Merged openstack/magnum master: [goal] Prepare pep8 testing for Ubuntu Focal  https://review.opendev.org/750591
09:10 <flwang1> because for prod usage, they always have their own container registry
09:10 <flwang1> and they will keep the hyperkube image there
09:10 <flwang1> instead of downloading it from the original source every time
09:11 <flwang1> the only case where we need a new label is probably devstack
09:11 <brtknr> who is they?
09:11 <flwang1> most of the companies/orgs using magnum
09:12 <flwang1> what's the case for stackHPC?
09:12 <flwang1> do you set the "CONTAINER_INFRA_PREFIX"?
09:13 <brtknr> we have some customers who use a container registry, others dont
09:13 <flwang1> don't they have concerns if any image changes?
09:14 <flwang1> anyway, i think we need to get input from Spyros as well
09:14 <brtknr> sure
09:15 <jakeyip> are the 'current' versions of hyperkube compatible? e.g. 1.15 - 1.17
09:15 <flwang1> but at least, we have a solution
09:15 <flwang1> jakeyip: until v1.18.x
09:15 <flwang1> there is no hyperkube since v1.19.x
09:16 <brtknr> there is no official hyperkube
09:16 <jakeyip> sorry I meant, is rancher hyperkube 1.15 - 1.17 compatible with upstream k8s's?
09:16 <brtknr> only third party
09:16 <jakeyip> was thinking of changing the default registry at the next release? this change can be a backport
09:16 <jakeyip> to train or whatever, to use rancher's hyperkube
09:18 <flwang1> jakeyip: maybe not, they're using a suffix for the image name
09:21 <brtknr> flwang1: rackspace also build hyperkube: https://quay.io/repository/rackspace/hyperkube?tab=tags
09:23 <flwang1> brtknr: but i can't see a v1.19.x image
09:23 <flwang1> from your above link
09:25 <brtknr> well https://github.com/rancher/hyperkube/releases
09:25 <brtknr> we can use these releases anyway
09:26 <jakeyip> hmm what about taking the rancher one with the suffix and putting it into docker.io/openstackmagnum/
09:27 <flwang1> brtknr: yes, we can. we just need to figure out how, for those who are not using CONTAINER_INFRA_PREFIX
09:28 <flwang1> jakeyip: we can do that. we just need to decide how we can keep supporting v1.19.x while keeping backward compatibility
09:28 <jakeyip> hmm, will using CONTAINER_INFRA_PREFIX work? because of the -rancher suffix in tags?
09:29 <flwang1> the only way to support that is probably, like brtknr proposed, adding a new label to allow passing in the full image URL
09:29 <flwang1> including name and tag
09:29 <flwang1> jakeyip: if you download and retag, then upload to your own registry, it should work without any issue
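
Roughly what flwang1 describes, as a hedged sketch; the rancher tag and the private registry URL are placeholders, not values from this discussion:

    # Pull the third-party hyperkube, retag it without the vendor suffix,
    # and push it to a registry you control (names below are illustrative).
    SRC=docker.io/rancher/hyperkube:v1.19.2-rancher1
    DST=registry.example.com/magnum/hyperkube:v1.19.2
    docker pull "$SRC"
    docker tag "$SRC" "$DST"
    docker push "$DST"

    # Then point clusters at the mirror via the existing labels, e.g.:
    # openstack coe cluster template create ... \
    #   --labels container_infra_prefix=registry.example.com/magnum/,kube_tag=v1.19.2

Because the retagged image drops the -rancher suffix, the existing CONTAINER_INFRA_PREFIX mechanism keeps working, which is the point flwang1 makes here.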
09:30 <jakeyip> but what will the default be?
09:31 <flwang1> what do you mean by the default?
09:32 <jakeyip> the one in the templates
09:32 <jakeyip> e.g. `${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}hyperkube:\${KUBE_TAG}`
09:32 <flwang1> we need some workaround there
09:33 <flwang1> e.g. if the label hyperkube_source is passed in, it will replace the above image location
09:33 <flwang1> brtknr: is that your idea?
09:33 <brtknr> yep
09:33 <brtknr> or just kube_prefix
09:34 <brtknr> similar to kube_tag
09:34 <jakeyip> if not?
09:34 <flwang1> jakeyip: something like that
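
A sketch of how the override being discussed could slot into the fragment jakeyip quoted, using the kube_prefix label brtknr proposes later in the log (https://review.opendev.org/752254). This is a hypothetical illustration of the idea, not the merged behaviour:

    # Hypothetical: prefer an explicit kube_prefix/hyperkube_source label,
    # then fall back to CONTAINER_INFRA_PREFIX, then to the old k8s.gcr.io default.
    HYPERKUBE_PREFIX="${KUBE_PREFIX:-${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}}"
    HYPERKUBE_IMAGE="${HYPERKUBE_PREFIX}hyperkube:${KUBE_TAG}"
    echo "would pull ${HYPERKUBE_IMAGE}"

With no labels set, the behaviour stays exactly as today, which is the backward-compatibility point jakeyip is probing with "if not?".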
09:34 <flwang1> i will send an email to spyros to get his comments
09:34 <flwang1> let's move on?
09:35 <jakeyip> I was thinking if it is possible to change this to docker.io/openstackmagnum/? for <=1.18 we mirror k8s.gcr.io
09:35 <jakeyip> for >1.18 we mirror rancher and don't put the suffix. so it won't break for users by default?
09:39 <flwang1> which means we have to copy every patch version to docker.io/openstackmagnum?
09:40 <jakeyip> only those that fall between the min/max versions in the wiki?
09:40 <flwang1> we probably cannot
09:41 <flwang1> we only have 20 mins, let's move on
09:41 <flwang1> i will send an email and copy you guys
09:41 <jakeyip> sure
09:41 <flwang1> to spyros, to discuss this in a mail thread
09:42 <brtknr> ok sounds good
09:42 <flwang1> #topic Victoria release
09:42 *** openstack changes topic to "Victoria release (Meeting topic: magnum)"
09:42 <flwang1> brtknr: we need to start wrapping up this release
09:42 <flwang1> that is, tagging the patches we want to include in this release
09:42 <flwang1> like we did before
09:45 <flwang1> the final release will be around mid-October
09:45 <flwang1> so we have about 1 month
09:47 <flwang1> that's all from my side
09:47 <flwang1> brtknr: jakeyip: anything else you want to discuss?
09:47 <jakeyip> ~
09:48 <flwang1> brtknr: ?
09:48 <flwang1> jakeyip: any feedback from your users about k8s?
09:48 <flwang1> i mean about magnum
09:48 <jakeyip> i have a question on storageclass / post_install_manifest - we have multiple AZs so I don't think it'll work for us?
09:49 <jakeyip> ideally it should be tied to a template. I think the helm thing CERN is doing will help?
09:49 <flwang1> why?
09:50 <brtknr> i was hoping one day we would have a post_install_manifest that is tied to a cluster template
09:50 <brtknr> should be quite easy to implement
09:50 <jakeyip> for Nectar, we cannot cross-attach nova and cinder AZs. so e.g. we have a magnum template for AZ A, which spins up instances in AZ A. the storageclass needs to point to AZ A also.
09:50 <flwang1> jakeyip: there is an az parameter in the storageclass
09:51 <flwang1> i see
09:52 <flwang1> because the two AZs are sharing the same control plane, is that it?
09:52 <flwang1> maybe you can put the az as a prefix or suffix for the storage class
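
A minimal sketch of the per-AZ StorageClass flwang1 is suggesting: one class per AZ, with the zone baked into the name. The availability parameter assumes the Cinder CSI provisioner, and the names and AZ value are illustrative:

    # Hypothetical per-AZ StorageClass, applied via post_install_manifest or kubectl.
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cinder-az-a          # AZ encoded in the class name
    provisioner: cinder.csi.openstack.org
    parameters:
      availability: az-a         # keep volumes in the same AZ as the nodes
    EOF

Since post_install_manifest is a single deployment-wide setting, this only helps if each AZ gets its own class and the right one is selected per template, which is why jakeyip is asking about template-scoped manifests.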
09:52 <jakeyip> the two AZs are in different institutions, so different network and everything
09:53 <flwang1> do they have different magnum api/conductor?
09:53 <jakeyip> same
09:53 <flwang1> ok, right
09:53 <flwang1> jakeyip: if so, we probably need a label for that
09:54 <flwang1> jakeyip: you can propose a patch for that
09:54 <flwang1> brtknr will be happy to review it
09:54 <jakeyip> I was wondering if https://review.opendev.org/#/c/731790/ will help?
09:55 <flwang1> don't know, sorry
09:56 <flwang1> anything else?
09:56 <jakeyip> a bit of good news - we are finally on train
09:56 <flwang1> jakeyip: that's cool
09:56 <flwang1> we're planning to upgrade to Victoria
09:56 *** SecOpsNinja has joined #openstack-containers
09:56 <flwang1> now we're on Train
09:56 <jakeyip> cool
09:57 <brtknr> I dont think Nice!
09:57 <jakeyip> does anyone have plans of setting up their own registry due to the new dockerhub limits?
09:57 <brtknr> Nice!
09:57 <brtknr> jakeyip: yes we are exploring it
09:57 <flwang1> so do we
09:57 <brtknr> i have written a script to pull, retag and push images to a private registry
09:57 <jakeyip> I was thinking of harbor?
09:58 <brtknr> although insecure registry is broken in magnum
09:58 <brtknr> so i have proposed this patch: https://review.opendev.org/#/c/749989/
09:58 <flwang1> jakeyip: do you mean this https://docs.docker.com/docker-hub/download-rate-limit/ ?
09:58 <jakeyip> flwang1: yes
09:58 <brtknr> flwang1: thats right
09:59 <brtknr> 100 pulls per IP address per 6 hours
09:59 <jakeyip> we basically cache all images magnum needs to spin up in https://hub.docker.com/u/nectarmagnum (for our supported CT)
09:59 <brtknr> as anonymous user
09:59 <jakeyip> copy, not cache
10:00 <jakeyip> so brtknr I have a pull/push script too :P
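
For reference, a bare-bones version of the kind of pull/retag/push mirror script brtknr and jakeyip mention; the image list and registry are placeholders, and a real script would enumerate the images a given cluster template actually pulls:

    #!/usr/bin/env bash
    # Mirror a list of upstream images into a private registry (illustrative only).
    set -euo pipefail
    MIRROR=registry.example.com/magnum
    IMAGES=(
      docker.io/rancher/hyperkube:v1.19.2-rancher1
      docker.io/coredns/coredns:1.6.6
    )
    for src in "${IMAGES[@]}"; do
      dst="${MIRROR}/${src##*/}"   # keep only name:tag, drop the upstream registry path
      docker pull "${src}"
      docker tag "${src}" "${dst}"
      docker push "${dst}"
    done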
10:00 <flwang1> Anonymous users: 100 pulls
10:00 <brtknr> jakeyip: is yours async?
10:01 <jakeyip> no... :(
10:01 <brtknr> :)
10:01 <flwang1> jakeyip: can't nectar just upgrade to the "Team" plan?
10:02 <brtknr> is there an option for docker auth in magnum?
10:03 <jakeyip> hmm, does the 'Team' plan give you unlimited anonymous transfers? the chart is unclear
10:03 <flwang1> i will ask the Docker company if we can get a nonprofit discount
10:03 <flwang1> let me close the meeting first
10:04 <flwang1> #endmeeting
10:04 *** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"
10:04 <openstack> Meeting ended Wed Sep 16 10:04:03 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
10:04 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-16-09.03.html
10:04 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-16-09.03.txt
10:04 <openstack> Log:            http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-16-09.03.log.html
10:05 <jakeyip> I don't know how this affects magnum. I can see many containers in the templates pointing to docker.io
10:06 <brtknr> so anonymous pulls are limited to 100 containers/6hrs/IP address
10:08 <flwang1> it will slow my devstack testing :)
10:08 <brtknr> in total, we have 16 images that rely on docker hub
10:08 <flwang1> i have to leave, thank you for joining
10:08 <jakeyip> brtknr: yeah. so a user playing with it might quickly exhaust the limits. and break magnum for the next user using that IP.
10:08 <jakeyip> also, what's going to happen to the gate etc
10:08 <flwang1> jakeyip: true
10:09 <brtknr> https://brtknr.kgz.sh/7mnu/
10:09 <brtknr> 9 images on master nodes: https://brtknr.kgz.sh/aomh/
10:09 <flwang1> o/
10:10 <jakeyip> bye flwang1
10:10 <brtknr> https://brtknr.kgz.sh/b9xk
10:10 <flwang1> take care, my friends
10:10 <brtknr> 7 images on worker
10:10 <brtknr> okay bye then
10:12 <jakeyip> brtknr: 100 / 16 = 6 clusters in 6 hours? :P
10:13 <brtknr> well 100/7 = 14 worker nodes
10:14 <brtknr> 100/9 = 11 master nodes
10:14 <brtknr> per 6 hrs
10:14 <brtknr> if your cluster has the same public facing ip
10:15 <jakeyip> yeah we have 1 public ip per cluster
10:17 <jakeyip> I guess the cheap/easy option is to get a plan at quay.io
10:19 <jakeyip> brtknr: I am looking at https://review.opendev.org/#/c/743945/
10:25 <jakeyip> do you have a way to clean up any dead trustees?
10:29 *** wuchunyang has joined #openstack-containers
10:32 *** wuchunyang has quit IRC
10:48 <brtknr> jakeyip: openstack user delete `openstack user list | grep -v "$(openstack coe cluster list -c uuid -f value)" | grep <project-id> | cut -f4 -d" "`
10:48 <brtknr> something like that
10:49 <brtknr> as admin user
10:50 <jakeyip> ah yeah thanks. just noticed the name of the user has the cluster id in it
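
An expanded, dry-run-first rendering of brtknr's one-liner, for reference. It keeps his pipeline as-is, including the assumption that trustee usernames embed the cluster id and that the user listing can be filtered on the project id, and only adds a review step before deleting anything:

    # Sketch only, run with admin credentials.
    PROJECT_ID="<project-id>"                    # placeholder, as in the one-liner
    live_clusters=$(openstack coe cluster list -c uuid -f value)
    candidates=$(openstack user list | grep "${PROJECT_ID}" \
        | grep -v "${live_clusters}" | cut -f4 -d" ")
    echo "would delete: ${candidates}"           # review this list first
    # openstack user delete ${candidates}        # uncomment once reviewed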
11:59 *** ramishra has quit IRC
12:21 *** ramishra has joined #openstack-containers
12:41 *** dave-mccowan has joined #openstack-containers
13:04 *** vishalmanchanda has quit IRC
13:19 *** sapd1_x has joined #openstack-containers
13:20 *** ricolin has quit IRC
13:35 *** ykarel_ has joined #openstack-containers
13:37 *** ykarel has quit IRC
13:45 *** ricolin has joined #openstack-containers
13:49 *** k_mouza has quit IRC
13:56 *** k_mouza has joined #openstack-containers
14:04 *** ykarel_ is now known as ykarel|
14:04 *** ykarel| is now known as ykarel
14:49 <openstackgerrit> Bharat Kunwar proposed openstack/magnum master: Support kube_prefix label to override hyperkube source  https://review.opendev.org/752254
14:53 <openstackgerrit> Bharat Kunwar proposed openstack/magnum master: Support kube_prefix label to override hyperkube source  https://review.opendev.org/752254
14:55 *** k_mouza has quit IRC
14:58 *** vishalmanchanda has joined #openstack-containers
15:01 *** kevko has quit IRC
15:01 *** k_mouza has joined #openstack-containers
15:12 *** ykarel is now known as ykarel|away
15:37 *** ykarel|away has quit IRC
15:39 *** k_mouza has quit IRC
15:40 *** k_mouza has joined #openstack-containers
15:51 *** mgariepy has quit IRC
15:51 *** mgariepy has joined #openstack-containers
16:26 <SecOpsNinja> When a heat stack fails, is there any way to force a retry of the stack? or do i need to delete the k8s cluster and force recreation with magnum?
16:43 *** ykarel|away has joined #openstack-containers
16:48 *** k_mouza has quit IRC
16:50 *** ykarel|away has quit IRC
16:51 *** tobberydberg has quit IRC
16:51 *** tobberydberg_ has joined #openstack-containers
16:52 *** sapd1_x has quit IRC
16:59 *** johanssone_ has quit IRC
17:03 *** johanssone has joined #openstack-containers
18:07 *** SecOpsNinja has left #openstack-containers
18:28 *** vishalmanchanda has quit IRC
22:10 *** rcernin has joined #openstack-containers
22:18 *** rcernin has quit IRC
22:33 *** rcernin has joined #openstack-containers
22:33 *** rcernin has quit IRC
22:33 *** rcernin has joined #openstack-containers
