Wednesday, 2020-04-08

*** ttsiouts has joined #openstack-containers00:52
*** ttsiouts has quit IRC01:25
*** ricolin has joined #openstack-containers01:58
openstackgerritFeilong Wang proposed openstack/magnum master: [k8s][fcos] Fix docker storage  https://review.opendev.org/71829602:29
*** k_mouza has joined #openstack-containers02:31
*** k_mouza has quit IRC02:35
*** ttsiouts has joined #openstack-containers03:22
*** vishalmanchanda has joined #openstack-containers03:53
*** ttsiouts has quit IRC03:56
*** ykarel|away is now known as ykarel04:24
*** udesale has joined #openstack-containers05:22
*** udesale has quit IRC05:23
*** udesale has joined #openstack-containers05:23
*** ttsiouts has joined #openstack-containers05:53
*** ttsiouts has quit IRC06:27
cosmicsoundgood day06:40
flwang1cosmicsound: 18:40 here ;)06:40
cosmicsoundit seems there is an issue with adding prometheus on v1.18.0 coreos06:40
cosmicsoundgood evening flwang106:40
cosmicsoundgot these labels  https://mdb.uhlhost.net/uploads/961096b6a643538c/image.png06:41
flwang1cosmicsound: are you using monitoring_enabled? or prometheus_monitoring?06:41
cosmicsoundif i add prometheus_monitoring=true will fail06:41
cosmicsoundyes it is enabled06:41
flwang1no06:41
flwang1can you see the tiller job?06:42
*** ttsiouts has joined #openstack-containers06:45
brtknrflwang1: hii06:47
flwang1brtknr: hello06:47
flwang1brtknr: do you still have concern with https://review.opendev.org/710384 ?06:48
brtknrNo I thought I +2'd it06:49
cosmicsoundflwang1 , the tiller job on k8s?06:50
brtknrflwang1: looks like I didn’t, sorry06:50
flwang1cosmicsound: yes06:50
cosmicsoundlet me check06:51
flwang1brtknr: and could you please review https://review.opendev.org/#/c/718296/ ?06:51
flwang1brtknr: would you like to join today's meeting?06:52
cosmicsoundNAMESPACE       NAME                                        COMPLETIONS   DURATION   AGE06:55
cosmicsoundmagnum-tiller   job.batch/install-metrics-server-job        0/1           9h         9h06:55
cosmicsoundmagnum-tiller   job.batch/install-prometheus-adapter-job    0/1           9h         9h06:55
cosmicsoundmagnum-tiller   job.batch/install-prometheus-operator-job   0/1           9h         9h06:55
cosmicsoundmagnum-tiller   replicaset.apps/tiller-deploy-cdf847b8c              1         1         1       9h06:55
cosmicsoundmagnum-tiller   pod/install-metrics-server-job-wkg5j           1/1     Running   0          9h06:56
cosmicsoundmagnum-tiller   pod/install-prometheus-adapter-job-ccpwr       1/1     Running   0          9h06:56
cosmicsoundmagnum-tiller   pod/install-prometheus-operator-job-l8rqc      1/1     Running   0          9h06:56
cosmicsoundmagnum-tiller   pod/tiller-deploy-cdf847b8c-7lvlx              1/1     Running   0          9h06:56
cosmicsoundsry for the long paste06:56
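[editor's note: in the paste above the jobs show COMPLETIONS 0/1 after 9h while their pods are 1/1 Running, which means the install jobs are still retrying rather than finished. A minimal way to inspect them, assuming kubectl access to the cluster (names taken from the paste):

    # show status and events for one of the stuck install jobs
    kubectl -n magnum-tiller describe job install-prometheus-operator-job
    # pods created by a Job carry a job-name label, so their logs usually
    # reveal why the install keeps retrying instead of completing
    kubectl -n magnum-tiller logs -l job-name=install-prometheus-operator-job
]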
*** xinliang has joined #openstack-containers06:59
cosmicsoundflwang1 , should it be that it is either monitoring_enabled or prometheus_monitoring?07:00
cosmicsoundnot both together?07:00
flwang1monitoring_enabled will need tiller to install the prometheus by helm07:01
cosmicsoundright07:02
cosmicsoundso in this case i am good07:02
cosmicsoundsince the prometheus operator and adapter are there07:03
brtknrflwang1: yes I’ll be there07:14
*** born2bake has joined #openstack-containers07:37
cosmicsoundflwang1 , seems tiller did not install prometheus on coreos07:54
cosmicsoundjust made some pods to install it07:55
cosmicsoundor at least i can't seem to find the ip of prometheus among the services07:55
brtknrflwang1: anything to add to the agenda?07:56
cosmicsoundchecked https://docs.openstack.org/magnum/latest/user/ , kube_dashboard_version is not documented there, any idea why?08:03
brtknrcosmicsound: you're right, flwang1 you should document it in your patch: https://review.opendev.org/#/c/714021/2/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml08:06
*** dioguerra has joined #openstack-containers08:10
cosmicsoundkube_dashboard_version=v1.8.3 is the only one working, seems i cannot overwrite it with the latest08:13
cosmicsoundtried v2.0.0-rc708:14
cosmicsoundstill sees old one08:14
brtknrcosmicsound: 2 is not supported yet, the patch hasn't merged yet08:14
brtknrcosmicsound: btw how did you get magnum working in the end?08:15
brtknrare you using master branch?08:15
cosmicsoundseems magnum -> magnum-base-source/magnum-9.1.0.dev212 did the job08:16
cosmicsoundand also had to change from scsi to virtio08:16
cosmicsoundsince disks were not read properly08:16
brtknrwhat is magnum-base-source/magnum-9.1.0.dev212?08:17
brtknrkolla image?08:17
cosmicsoundthats the deployment via master in kolla08:17
cosmicsoundwhen i used train tag i had 9.2.008:17
cosmicsoundwith master i get this08:17
brtknrcosmicsound: hmm interesting08:19
*** rcernin has quit IRC08:27
*** xinliang has quit IRC08:32
cosmicsoundbrtknr , if i updated the docker image of magnum with the patched version for the latest kubernetes dashboard08:44
cosmicsoundshould i restart the container for it to take effect, or?08:45
cosmicsoundbrtknr , does magnum need a separate cert manager like https://cert-manager.io/docs/ or does it handle it all in barbican?08:48
cosmicsoundi see a cert_manager_api yet not sure on it08:49
brtknrcosmicsound: yes restart should do it08:52
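[editor's note: a sketch of applying a patched magnum image in a kolla deployment, assuming the default kolla container names (magnum_api, magnum_conductor). Note that docker restart reuses the existing container, so if the image tag changed the containers need to be recreated, e.g. by rerunning kolla-ansible:

    # quick path: restart the services so patched code is picked up
    docker restart magnum_api magnum_conductor
    # if a new image was pulled, recreate the containers instead
    kolla-ansible -i <inventory> deploy --tags magnum
]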
*** ykarel is now known as ykarel|lunch08:53
cosmicsoundI will document now all labels that work08:55
cosmicsoundand will share them, maybe we can build a better compatibility matrix08:55
cosmicsoundwith the main most used labels out there and their counterpart services08:55
cosmicsoundi found it a big mess in the past to match the right versions; maybe for me it was also related to the scsi issues i had, yet anyhow it could be improved08:56
flwang1strigazi: are you around?08:59
flwang1brtknr: seems we don't have strigazi today09:00
flwang1brtknr: are you around?09:00
brtknrflwang1: yes im here09:01
flwang1let's cancel the meeting and we can have a casual discussion09:01
brtknrno problem09:02
flwang1brtknr: anything you want to discuss?09:03
brtknrflwang1: yesterday we were discussing the specs: http://eavesdrop.openstack.org/irclogs/%23openstack-containers/%23openstack-containers.2020-04-07.log.html#t2020-04-07T12:07:5009:04
brtknrthe label one09:04
flwang1ok, what's your idea?09:05
flwang1did i miss a comment from your side on that spec?09:05
brtknrflwang1: just wondering if you are okay with a flag?09:05
flwang1what's the flag?09:06
brtknri will pass on your thoughts to ttsiouts when he comes back later09:06
brtknrinstead of having a secondary labels field09:07
brtknre.g. --override-labels flag if you want to merge labels provided on cluster scope with labels provided in cluster template scope09:08
brtknrit would default to false to emulate the current behaviour09:08
brtknrbut if --override-labels flag is provided, the server updates the cluster template labels with cluster labels and cluster labels with nodegroup labels09:09
brtknrflwang1: make sense?09:09
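[editor's note: hypothetical CLI usage of the flag being discussed; the flag name and semantics are taken from this conversation only and were not merged at the time of the log:

    # current behaviour: these labels replace the template's labels wholesale
    openstack coe cluster create mycluster \
        --cluster-template mytemplate \
        --labels auto_healing_enabled=true
    # proposed: with --override-labels, the given labels would instead be
    # merged on top of the template's labels (and nodegroup labels on top
    # of cluster labels)
    openstack coe cluster create mycluster \
        --cluster-template mytemplate \
        --labels auto_healing_enabled=true \
        --override-labels
]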
flwang1did you see my comments in https://review.opendev.org/#/c/621611/ ?09:10
brtknrwhich comments? there are many :)09:11
flwang1see my first one09:11
cosmicsoundwhat tags do i need to update cluster nodes?09:11
*** k_mouza has joined #openstack-containers09:11
cosmicsoundwhen i try to update from 1 node to 3 it says it cannot via horizon09:11
cosmicsoundshould i use cli?09:11
cosmicsoundor do i need some labels for this to work09:11
flwang1cosmicsound: you can use resize command09:12
flwang1or the latest magnum ui can do that as well09:12
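[editor's note: the resize command mentioned above is part of the openstack CLI with the magnum plugin; a usage sketch:

    # grow (or shrink) the cluster to 3 worker nodes
    openstack coe cluster resize <cluster-name-or-uuid> 3
    # when shrinking, specific nodes can be targeted for removal
    openstack coe cluster resize --nodes-to-remove <server-uuid> <cluster-name-or-uuid> 1
]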
brtknrflwang1: with the label_merging_policy?09:12
flwang1brtknr: yep, it's basically samething as the --override-labels you mentioned above09:12
brtknryes, it is a similar concept except its a first class parameter rather than a label and handled on the server side09:13
flwang1but the policy can vary to support different scenarios09:13
flwang1no it's a label09:13
flwang1but that's not the key point09:13
flwang1i mean I'm ok with that direction09:13
brtknrok cool09:13
flwang1brtknr: pls review https://review.opendev.org/#/c/718296/ when you have time, thanks09:14
flwang1anything else?09:15
brtknrflwang1: are you okay with an opt-in --override-labels to support the new behaviour09:15
flwang1i will have to go offline earlier today because we just did a big production release today09:15
brtknre.g. providing --override-labels without supplying --labels at the same time would throw an API error09:16
brtknrflwang1: i will review https://review.opendev.org/#/c/718296/ this friday, I have to prepare for a presentation tomorrow09:16
brtknrflwang1: whats the production release?09:17
brtknri tried to create an account on catalyst cloud but didn't get accepted hehe09:17
flwang1brtknr: we're upgrading heat to the latest stable version, plus a horizon release09:17
flwang1brtknr: heh, really? just try again ;)09:17
flwang1why do we need a 400 when both labels are provided?09:18
brtknrproviding --override-labels when there is nothing to override with... that's why09:19
brtknrif --override-labels is not provided, it evaluates to false in DB09:19
brtknrthe migration also defaults this field to false in the DB09:20
flwang1hold on09:20
brtknrand the field is only evaluated when --override-labels is provided09:21
flwang1i'm a bit confused09:21
brtknrthe difference with the idea in the spec is that --override-labels is a boolean rather than a dict09:21
flwang1what's the flag you're talking about09:21
flwang1i know override_labels is the new field we need to introduce09:21
flwang1what's the flag then?09:21
brtknrin the CLI, this would be --override-labels09:22
dioguerrathe default behaviour should occur when --override-labels is True then. the flag name makes it seem like it's enabling what the current --labels already does09:22
brtknrdioguerra: override==merge==combine==update09:23
brtknrso default behaviour is override=False09:23
brtknrso default/current behaviour is override=False09:24
flwang1and the new --override-labels is a label or just a client side flag?09:24
brtknrits a client side flag which tells the server what to do with the provided labels09:25
brtknrthe labels do not get combined on the client side,09:25
brtknrthis is different to the spec ttsiouts has currently up but he is planning to update it09:26
flwang1then, when it's a merge, how can we know which labels the user passed, or do we not care?09:26
brtknrbut not if you disagree with the idea :)09:26
brtknrthe user passed labels are stored under the current labels field09:27
flwang1i would like to see it proposed as the alternative solution in the spec09:27
brtknrbut the override_labels boolean tells the server what to do with the labels09:27
openstackgerritTobias Urdin proposed openstack/python-magnumclient master: Check response type in _extract_error_json  https://review.opendev.org/71835609:27
flwang1no, i mean the original labels the user passed in09:27
brtknrwhether to merge them or to replace them09:27
flwang1i understand this idea now09:28
brtknr:)09:28
brtknrgreat!09:28
flwang1i'd like to see it proposed in the spec and we can discuss it from there09:28
brtknrflwang1: sweet!09:28
flwang1i know this one is simpler ;)09:28
brtknrbut are you okay with this direction compared to adding a separate dict field?09:28
flwang1i can't make the decision now, because i'd like to take this opportunity to get this issue done beautifully09:29
flwang1i know we need to get this one done quickly for several reasons09:29
*** ricolin_ has quit IRC09:29
brtknrok09:30
flwang1i think we can make the decision by this week or early next week09:30
*** ricolin_ has joined #openstack-containers09:30
flwang1sorry i can't give you the clear answer now09:30
*** ricolin has quit IRC09:30
brtknrok thats fine09:30
flwang1cheers, i have to go09:32
brtknrflwang1:09:32
brtknr1 last thing09:32
brtknrcan we cut another release with the zincati patch?09:32
*** k_mouza has quit IRC09:32
brtknr9.3.0 is basically unusable without it09:32
brtknrfor fedora coreos09:33
flwang1brtknr: we have to09:33
brtknr9.3.109:33
flwang1i'm waiting for the gate fixed09:33
brtknr?09:33
brtknror 9.4.0?09:33
brtknrwhat gate fix?09:33
flwang1at least 9.3.109:33
flwang1py209:33
flwang1https://review.opendev.org/#/c/717428/09:34
flwang1hopefully it can be fixed by https://review.opendev.org/#/c/718130/09:34
flwang1will see09:34
*** k_mouza has joined #openstack-containers09:35
brtknrflwang1: okay09:35
brtknrcool thanks for the update09:35
brtknrspeak soon!09:36
flwang1brtknr: :) cheers, take care my friend09:36
brtknrflwang1: you too :) goodnight09:36
*** flwang1 has quit IRC09:41
openstackgerritTobias Urdin proposed openstack/python-magnumclient master: Check response type in _extract_error_json  https://review.opendev.org/71835609:47
*** ricolin_ has quit IRC09:57
*** ricolin_ has joined #openstack-containers09:57
*** ricolin has joined #openstack-containers10:00
*** ricolin_ has quit IRC10:02
*** ykarel|lunch is now known as ykarel10:12
*** dioguerra has quit IRC10:26
openstackgerritOpenStack Proposal Bot proposed openstack/magnum-ui master: Imported Translations from Zanata  https://review.opendev.org/71838010:30
*** ricolin has quit IRC10:39
*** k_mouza has quit IRC10:47
*** k_mouza has joined #openstack-containers10:55
born2bakecosmicsound did you deploy your cluster with lb?11:15
cosmicsoundborn2bake , not yet11:16
cosmicsoundnow working on building lb11:16
cosmicsounddo you have some hints/good guide for this11:16
born2bakenot really... I need to figure out if octavia is deployed correctly first11:16
cosmicsoundok so you have not made it work in k8s yet11:21
*** ttsiouts has quit IRC12:53
*** ykarel is now known as ykarel|afk12:56
*** ttsiouts has joined #openstack-containers12:57
*** ricolin has joined #openstack-containers12:57
cosmicsoundanyone managed to log in with the kubelet config? "Not enough data to create auth info structure." is what i get13:17
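[editor's note: "Not enough data to create auth info structure." is the error kubectl gives when the user entry in the kubeconfig carries no usable credential (no client cert/key or token). With magnum, regenerating the config usually resolves it; a sketch:

    # fetch a fresh kubeconfig with client certificates for the cluster
    openstack coe cluster config <cluster-name-or-uuid> --dir .
    export KUBECONFIG=$PWD/config
    kubectl get nodes
]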
*** udesale_ has joined #openstack-containers13:18
*** udesale has quit IRC13:20
*** ykarel|afk is now known as ykarel13:37
*** ttsiouts has quit IRC14:06
*** ttsiouts has joined #openstack-containers14:06
*** udesale_ has quit IRC14:46
*** ykarel is now known as ykarel|away15:04
tobias-urdinanybody else seen an issue where /etc/os-collect-config.conf gets15:13
tobias-urdin[heat]15:13
tobias-urdinauth_url = http://public:5000/v3/v315:13
tobias-urdinfaulty endpoint; something is appending an extra /v3 to my http://public:5000/v3 endpoint15:13
born2bakebrtknr hello, I've tried the calico version without lb and just 1 master / 1 node, it's still failing :( https://seashells.io/v/27wRXmgg master https://seashells.io/v/qczaxkKa worker15:16
tobias-urdinhm heat stack says correct, auth_url: http://public:5000/v315:17
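[editor's note: comparing the two messages above, the agent config ends up with the keystone path doubled; whatever writes /etc/os-collect-config.conf appears to append /v3 to an auth_url that already ends in /v3. A quick check on the affected node:

    # observed:  auth_url = http://public:5000/v3/v3  (broken)
    # expected:  auth_url = http://public:5000/v3
    grep auth_url /etc/os-collect-config.conf
]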
*** Pimpek has joined #openstack-containers15:35
born2bakebrtknr that's weird that magnum sends a failed message :/ and I just logged into the master... k8s-calico-coreos-7fdoejglxqvo-master-0   Ready    master   36m   v1.17.4 k8s-calico-coreos-7fdoejglxqvo-node-0     Ready    <none>   30m   v1.17.415:46
brtknrborn2bake: sorry I cannot help today, got too much to do, will be back on Friday15:47
born2bakeok no worries :P15:47
brtknryou can add a list of issues to the etherpad15:47
brtknrbut /var/log/heat-config is your best friend15:48
brtknryou will see the reason in there15:48
brtknrif the cluster is deployed, it is probably a configuration issue15:48
brtknre.g. unsupported label15:48
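[editor's note: a sketch of the debugging path suggested above, assuming the usual fedora coreos image layout (the subdirectory name is an assumption and may differ between images):

    ssh core@<master-ip>
    # each software deployment writes its own log file here
    sudo ls /var/log/heat-config/heat-config-script/
    # the failing step is usually the last log with a non-zero exit near its tail
    sudo tail -n 50 /var/log/heat-config/heat-config-script/*.log
]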
tobias-urdinbrtknr: do you know where os-collect-config gets content in /var/lib/os-collect-config/heat_local.json from?15:59
*** ttsiouts has quit IRC16:00
cosmicsoundbrtknr , about helm and tiller, is tiller running helm inside itself or using helm from the vm where it is hosted? asking since helm is not shipped in coreos16:22
cosmicsoundmagnum-tiller   job.batch/install-metrics-server-job        0/1           164m       164m16:22
cosmicsoundmagnum-tiller   job.batch/install-prometheus-adapter-job    0/1           164m       164m16:22
cosmicsoundmagnum-tiller   job.batch/install-prometheus-operator-job   0/1           164m       164m16:22
cosmicsoundi am not an expert, i would just say these jobs never completed16:23
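[editor's note: on the helm/tiller question above: tiller is helm v2's in-cluster server component, and the install-*-job pods run the helm client in a container, so the helm binary should not need to be shipped in the coreos image. Since the jobs keep retrying, the tiller side is also worth checking; a sketch using the names from the paste:

    # tiller's own logs often show why chart installs fail
    kubectl -n magnum-tiller logs deploy/tiller-deploy
]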
*** k_mouza has quit IRC18:46
*** k_mouza has joined #openstack-containers18:47
*** k_mouza has quit IRC18:59
*** k_mouza has joined #openstack-containers19:27
*** vishalmanchanda has quit IRC19:32
*** k_mouza has quit IRC19:36
*** k_mouza has joined #openstack-containers19:37
*** k_mouza has quit IRC19:41
Pimpekhi everyone. I'm stuck creating a new kubernetes cluster20:01
Pimpeklong story short, I have one controller and one compute node20:01
Pimpeklog checks show that the reason for cluster creation failure is that neutron can't create the required port20:02
Pimpekbut it's creating the kube-master on the controller node instead of the compute node20:03
Pimpekwhere the networking is different20:03
Pimpekspinning up regular kvm VMs works fine20:03
Pimpekam I missing something? how do I get it to create the kube-master on the compute node instead of the master?20:04
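[editor's note: for the placement question above: which hypervisor the kube-master VM lands on is decided by nova-scheduler, not magnum. If the controller node should not host instances, a sketch of the usual checks, assuming admin credentials:

    # see which hosts run nova-compute and whether they are up/enabled
    openstack compute service list --service nova-compute
    # stop scheduling new instances onto the controller
    openstack compute service set --disable <controller-hostname> nova-compute
]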
*** k_mouza has joined #openstack-containers21:05
*** k_mouza has quit IRC21:06
*** k_mouza has joined #openstack-containers21:06
*** k_mouza has quit IRC21:19
*** k_mouza has joined #openstack-containers22:30
*** k_mouza has quit IRC22:31
*** rcernin has joined #openstack-containers22:32
*** born2bake has quit IRC22:51
*** threestrands has joined #openstack-containers23:10
openstackgerritMerged openstack/magnum-ui master: Imported Translations from Zanata  https://review.opendev.org/71838023:52
