Tuesday, 2019-04-09

00:38 *** ttsiouts has joined #openstack-containers
00:47 *** ttsiouts has quit IRC
00:48 *** ttsiouts has joined #openstack-containers
00:52 *** ttsiouts has quit IRC
01:02 *** ricolin has joined #openstack-containers
01:33 *** hongbin has joined #openstack-containers
01:50 *** ricolin_ has joined #openstack-containers
01:50 *** ricolin has quit IRC
02:18 <openstackgerrit> Feilong Wang proposed openstack/magnum master: [fedora_atomic] Support auto healing for k8s  https://review.openstack.org/631378
02:35 *** hongbin has quit IRC
02:38 *** hongbin has joined #openstack-containers
02:49 *** hongbin has quit IRC
02:51 *** hongbin has joined #openstack-containers
02:51 *** ykarel|away has joined #openstack-containers
03:01 <flwang> jakeyip: ping
03:01 <flwang> jakeyip: could you please help review https://review.openstack.org/#/c/651027/ ? thanks
03:59 *** ramishra has joined #openstack-containers
04:04 *** udesale has joined #openstack-containers
04:05 *** hongbin has quit IRC
04:07 *** ykarel|away is now known as ykarel
04:42 *** ykarel has quit IRC
04:56 *** ykarel has joined #openstack-containers
05:08 *** sidx64 has joined #openstack-containers
05:52 *** udesale has quit IRC
05:58 *** ykarel is now known as ykarel|afk
05:58 *** ricolin_ has quit IRC
06:02 *** ykarel|afk has quit IRC
06:13 *** ykarel|afk has joined #openstack-containers
06:22 *** ricolin has joined #openstack-containers
06:25 *** anyrude10 has joined #openstack-containers
06:26 <anyrude10> Hi Team, is there any workaround for bug 1809254? https://bugs.launchpad.net/ubuntu/+source/magnum/+bug/1809254
06:26 <openstack> Launchpad bug 1809254 in magnum (Ubuntu) "Cannot create kubernetes cluster with tls_disabled" [Undecided,Confirmed]
06:29 <anyrude10> Thanks
06:30 *** pcaruana has joined #openstack-containers
06:33 *** udesale has joined #openstack-containers
06:35 *** ykarel|afk is now known as ykarel
07:06 *** ivve has joined #openstack-containers
07:10 *** mgoddard has joined #openstack-containers
07:12 *** sidx64_ has joined #openstack-containers
07:13 *** sidx64 has quit IRC
07:15 *** yankcrime has quit IRC
07:16 *** sidx64 has joined #openstack-containers
07:17 *** sidx64_ has quit IRC
07:25 *** alisanhaji has joined #openstack-containers
07:29 *** gsimondon has joined #openstack-containers
07:31 *** ykarel is now known as ykarel|lunch
07:43 *** sidx64 has quit IRC
08:01 *** rcernin has quit IRC
08:05 *** ttsiouts has joined #openstack-containers
08:05 *** jaewook_oh has joined #openstack-containers
08:18 *** johanssone has quit IRC
08:19 *** flwang1 has joined #openstack-containers
08:24 *** johanssone has joined #openstack-containers
08:28 *** sidx64 has joined #openstack-containers
08:33 *** yankcrime has joined #openstack-containers
08:37 *** ttsiouts has quit IRC
08:37 *** ttsiouts has joined #openstack-containers
08:40 *** ttsiouts has quit IRC
08:40 *** ttsiouts has joined #openstack-containers
08:48 *** ykarel|lunch is now known as ykarel
08:57 *** jaewook_oh has quit IRC
09:33 *** flwang1 has quit IRC
09:33 *** livelace has joined #openstack-containers
09:34 *** sidx64 has quit IRC
09:34 *** flwang1 has joined #openstack-containers
09:34 <flwang1> dioguerra: hello, is Thomas around?
09:45 *** sidx64 has joined #openstack-containers
09:47 *** yolanda_ has quit IRC
09:51 *** sidx64 has quit IRC
09:53 <flwang1> strigazi: could you please ask Thomas to log in to this channel?
09:55 *** sidx64 has joined #openstack-containers
09:57 <dioguerra> flwang1: yes he is, we're going for lunch now. Can we talk in 1h?
09:57 *** udesale has quit IRC
09:58 *** udesale has joined #openstack-containers
09:58 <dioguerra> flwang: The problem with the master node is also happening to me.
10:01 <flwang1> dioguerra: sure
10:02 <flwang1> i will wait until you guys are back
10:02 <flwang1> thanks
10:15 <anyrude10> Hi Team, I am facing an etcd issue in swarm-docker. Can someone please help https://ask.openstack.org/en/question/120717/swarm-magnum/
10:25 *** ttsiouts has quit IRC
10:26 *** ttsiouts has joined #openstack-containers
10:30 *** ttsiouts has quit IRC
10:41 *** sidx64 has quit IRC
10:41 *** ykarel is now known as ykarel|afk
10:46 *** ttsiouts has joined #openstack-containers
10:52 <flwang1> dioguerra: when you tested the auto scaling/healing patch, was it based on the latest code with node group support?
10:53 <brtknr> flwang1: have you tried another container runtime with magnum?
10:53 <brtknr> e.g. cri-o
10:53 <flwang1> brtknr: no
10:54 <flwang1> my current focus is the auto scaling/healing feature and the rolling upgrade feature
10:54 <flwang1> after that, i will do a big refactor of magnum ui
10:55 <flwang1> and then probably Fedora CoreOS 30 and containerizing K8S master nodes
10:55 <flwang1> basically the plan i mentioned in my PTL nomination letter
10:56 *** livelace has quit IRC
10:56 *** udesale has quit IRC
11:05 *** sidx64 has joined #openstack-containers
11:12 *** ykarel|afk is now known as ykarel
11:15 *** sidx64 has quit IRC
11:16 <dioguerra> flwang: thomas is joining soon
11:17 *** thartland has joined #openstack-containers
11:17 <dioguerra> I think the scaling up/down I tested was around commit a6c8c399e926ea42d52fc5ebf7715b74999a404c
11:17 <thartland> Hi
11:18 <flwang1> thartland: hi
11:18 <flwang1> i'm pretty sure it worked before
11:18 <dioguerra> Yes it did
11:19 <flwang1> but now, after rebasing on the node group patches
11:19 <flwang1> it's getting weird, which we need to fix
11:19 <dioguerra> We think it's rebuilding the master node because of the security rules
11:20 <flwang1> what are the security rules you're talking about?
11:20 <flwang1> adding the security rule from master to minion?
11:21 *** sidx64 has joined #openstack-containers
11:21 <flwang1> do you mean this one https://review.openstack.org/#/c/647942/ ?
11:22 <dioguerra> I think it was this one? 1f5dc1aa91f145b8554deda3fed7265d33b3cb22
11:22 <dioguerra> Yes that one
11:22 <flwang1> thartland: btw, i also got an error when doing scale down http://paste.openstack.org/show/749051/
11:23 <flwang1> dioguerra: did you try to remove the change?
11:23 <dioguerra> not yet
11:24 <flwang1> dioguerra: did you try scale down?
11:24 <flwang1> now i got http://paste.openstack.org/show/749051/ when doing scale down
11:25 <dioguerra> No, it breaks on scale up and the cluster is lost
11:26 <dioguerra> is your coreDNS up?
11:26 *** sidx64 has quit IRC
11:26 <dioguerra> or is that the minion name on openstack?
11:26 <flwang1> dioguerra: in my testing, the cluster comes back eventually
11:27 <thartland> flwang1: what are the values in the refs_map output_value of the minions stack? It should match one of the values printed in that error. openstack stack show cluster-kfr4yt3rm3mx-kube_minions-juk4gy42g2wo
11:27 <flwang1> because the master is being rebuilt, so it takes some time
11:27 *** sidx64 has joined #openstack-containers
11:28 <flwang1> thartland: http://paste.openstack.org/show/749052/
11:29 <thartland> flwang1: Ah, it's using the MachineID, and the dashes are missing from the ID in the version kubernetes has
11:30 <flwang1> thartland: yes
11:31 <flwang1> thartland: MachineID:17578f36b3be4512926eced1a1c69eb3 ProviderID:openstack:///17578f36-b3be-4512-926e-ced1a1c69eb3
11:31 <flwang1> it's very interesting
11:31 <flwang1> but the providerID is using the 'correct' format
11:32 <flwang1> how is the MachineID generated?
11:32 <thartland> Not sure, there is also systemUUID which has the dashes but is capitalised, e.g. systemUUID: DE82824B-3E35-4214-970D-7C85B5F3C97A
11:35 <flwang1> thartland: ok, i think at least we need to use the format with dashes, because that's the format Nova is using
11:36 <flwang1> dioguerra: as for your testing, after the master was rebuilt, did you see any other issue?
11:38 <dioguerra> master rebuilds but i cannot contact the cluster anymore.
11:38 <flwang1> ok, that's not my case anyway
11:39 <thartland> flwang1: the autoscaler already imports a uuid package so I can use that to parse the uuid and return it in the correct format
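
The mismatch flwang1 pasted above is purely an encoding difference: kubernetes reports MachineID as 32 undashed hex characters, while the Heat refs_map carries the dashed Nova server UUID. A minimal Go sketch of the normalization thartland describes, assuming the google/uuid package (the log does not say which uuid package the autoscaler already imports):

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// normalizeMachineID parses a UUID in either dashed or undashed form and
// returns the canonical dashed encoding that Nova (and Heat's refs_map) uses.
func normalizeMachineID(machineID string) (string, error) {
	id, err := uuid.Parse(machineID) // also accepts 32-hex input without dashes
	if err != nil {
		return "", err
	}
	return id.String(), nil
}

func main() {
	// The MachineID from the paste above, as reported by the kubelet.
	normalized, err := normalizeMachineID("17578f36b3be4512926eced1a1c69eb3")
	if err != nil {
		panic(err)
	}
	fmt.Println(normalized) // 17578f36-b3be-4512-926e-ced1a1c69eb3
}
```
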
11:40 <flwang1> dioguerra: could you pls test it again without the security group rules?
11:40 <flwang1> thartland: yep, please do
11:42 <dioguerra> already on it
11:45 <flwang1> dioguerra: awesome, really appreciate it
11:45 <flwang1> thartland: should this function be revisited https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/magnum/magnum_manager_heat.go#L184 ?
11:45 <flwang1> the patch has been merged as you may know
11:49 <thartland> flwang1: Possibly, I'm not sure if the autoscaler should get ahead of the released version of magnum but I can take a look at how it would eventually work
11:51 <flwang1> thartland: that would be great
11:51 <flwang1> thartland: i was told the current patch will be in v1.14.1
11:51 <flwang1> but for now we're hosting the autoscaler image on the openstackmagnum dockerhub repo
11:51 <flwang1> so it's not such a rush
11:51 <flwang1> i mean we don't have to be in v1.14.1
11:52 <thartland> flwang1: Once nodegroups and cluster resize are released I will start on a new magnum_manager interface to use those APIs and all the new changes, and backport what makes sense to the old manager
11:52 <flwang1> we can release it on the openstackmagnum repo first, temporarily
11:52 <flwang1> thartland: the cluster resize api has been released
11:52 <flwang1> and i have done all the work in gophercloud
11:52 <flwang1> you can start to integrate it now
11:53 <flwang1> and i even added apiversions support for magnum in gophercloud, so that you can check the microversion to support both resize and the old way (magnum+heat)
11:53 <flwang1> thartland: ^
11:54 <thartland> flwang1: I saw the changes in gophercloud, thanks for those
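
For reference, a resize call through gophercloud along the lines flwang1 describes might look like the sketch below. This is only a sketch: the microversion constant and the fallback condition are assumptions, to be verified against the apiversions support mentioned above rather than trusted as-is.

```go
package magnumautoscaler

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/containerinfra/v1/clusters"
)

// resizeCluster scales a cluster through Magnum's resize API instead of
// driving the Heat stack directly. nodesToRemove lets the autoscaler pick
// the exact minions to drop on scale down.
func resizeCluster(client *gophercloud.ServiceClient, clusterUUID string,
	nodeCount int, nodesToRemove []string) error {
	// Assumption: resize landed in Magnum API microversion 1.7; check the
	// apiversions endpoint before hardcoding this, and fall back to the
	// old magnum+heat path if the deployment is older.
	client.Microversion = "1.7"
	res := clusters.Resize(client, clusterUUID, clusters.ResizeOpts{
		NodeCount:     &nodeCount,
		NodesToRemove: nodesToRemove,
	})
	return res.Err
}
```
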
11:55 <flwang1> thartland: thank you!
11:55 <thartland> I have to leave for a few hours now, I'll be back later
11:55 <flwang1> thartland: ttyl
11:56 <flwang1> dioguerra: i have to go offline, it's 11:56pm here
11:56 <flwang1> dioguerra: will you join today's weekly meeting?
11:59 *** sidx64 has quit IRC
12:00 <flwang1> dioguerra: i have to go, please leave a message with your test result, thank you very much
12:00 *** ttsiouts has quit IRC
12:01 *** jmlowe has quit IRC
12:01 *** ttsiouts has joined #openstack-containers
12:02 *** jmlowe has joined #openstack-containers
12:12 <dioguerra> flwangl: if i don't forget again
12:17 *** sidx64 has joined #openstack-containers
12:20 *** livelace has joined #openstack-containers
12:25 <dioguerra> flwang: reverting 31c82625d6cae5b9cc8fae6f09c9107818dee9b7 does not work
12:32 *** livelace has quit IRC
12:42 *** gsimondo1 has joined #openstack-containers
12:44 *** openstackgerrit has quit IRC
12:44 *** gsimondon has quit IRC
13:02 *** ykarel is now known as ykarel|afk
13:05 *** sidx64 has quit IRC
13:09 *** ykarel|afk has quit IRC
13:32 *** ykarel has joined #openstack-containers
13:36 *** udesale has joined #openstack-containers
13:42 *** lpetrut has joined #openstack-containers
14:04 *** lpetrut has quit IRC
14:06 *** ttsiouts has quit IRC
14:07 *** ttsiouts has joined #openstack-containers
14:09 *** ttsiouts has quit IRC
14:09 *** ttsiouts has joined #openstack-containers
14:27 <brtknr> flwang1: What does auto-healing do which is distinct from auto-scaling?
14:42 *** livelace has joined #openstack-containers
14:48 *** lpetrut has joined #openstack-containers
15:01 *** ttsiouts has quit IRC
15:02 *** ttsiouts has joined #openstack-containers
15:05 *** ttsiouts has quit IRC
15:06 *** ttsiouts has joined #openstack-containers
15:22 *** sapd1_x has joined #openstack-containers
15:27 *** sapd1_x has quit IRC
15:29 *** lpetrut has quit IRC
15:32 *** ivve has quit IRC
15:37 *** udesale has quit IRC
15:42 *** gsimondo1 has quit IRC
15:56 *** ttsiouts has quit IRC
15:57 *** ttsiouts has joined #openstack-containers
16:00 <dioguerra> brtknr: NPD checks logs for errors and tags nodes with a specific condition. DRAINO will consider the node not healthy if the defined conditions are met, then drains the node and later evicts its pods. The CA creates new nodes if there are pods in a pending state of creation.
16:01 *** ttsiouts has quit IRC
16:06 *** livelace has quit IRC
17:12 *** ykarel is now known as ykarel|away
17:17 *** ykarel|away has quit IRC
17:19 *** ramishra has quit IRC
17:32 *** livelace has joined #openstack-containers
17:40 *** gmann is now known as gmann_afk
18:00 *** sidx64 has joined #openstack-containers
18:04 *** ricolin has quit IRC
18:06 *** sidx64_ has joined #openstack-containers
18:06 *** sidx64 has quit IRC
18:16 *** alisanhaji has quit IRC
18:21 *** gmann_afk is now known as gmann
18:25 *** lpetrut has joined #openstack-containers
18:26 <flwang1> dioguerra: thanks for the feedback
18:29 *** lpetrut has quit IRC
18:31 *** flwang1 has quit IRC
19:27 *** sidx64_ has quit IRC
19:35 *** gsimondon has joined #openstack-containers
19:52 *** openstackgerrit has joined #openstack-containers
19:52 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: WIP: k8s_fedora_atomic minion upgrade  https://review.openstack.org/514960
20:31 *** pcaruana has quit IRC
20:33 *** pcaruana has joined #openstack-containers
20:36 *** pcaruana has quit IRC
20:39 *** pcaruana has joined #openstack-containers
20:47 *** pcaruana has quit IRC
20:54 <brtknr> Meeting tonight?
20:54 <strigazi> +1
20:55 *** ttsiouts has joined #openstack-containers
21:02 <strigazi> dioguerra flwang and others, can you summarize the disappearance of the master with the CA in storyboard, and how to reproduce it? I could not reproduce it
21:03 <ttsiouts> strigazi: meeting now?
21:03 <strigazi> flwang: meeting?
21:03 <brtknr> ttsiouts: strigazi: yep
21:04 <strigazi> I'll start it
21:04 <strigazi> #startmeeting containers
21:04 <openstack> Meeting started Tue Apr  9 21:04:18 2019 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:04 *** openstack changes topic to " (Meeting topic: containers)"
21:04 <openstack> The meeting name has been set to 'containers'
21:04 <strigazi> #topic Roll Call
21:04 *** openstack changes topic to "Roll Call (Meeting topic: containers)"
21:04 <strigazi> o/
21:04 <colin-> hello
21:04 <ttsiouts> o/
21:06 *** imdigitaljim has joined #openstack-containers
21:06 <strigazi> #topic Stories/Tasks
21:06 <imdigitaljim> o/
21:06 *** openstack changes topic to "Stories/Tasks (Meeting topic: containers)"
21:06 <brtknr> o/
21:06 <strigazi> Last week I attempted to upgrade the default version of k8s to 1.14.0 but calico v2 wasn't passing
21:07 <strigazi> wasn't passing the conformance test
21:07 <strigazi> I have the patch and results here:
21:07 <strigazi> https://review.openstack.org/#/c/649609/
21:08 <strigazi> flwang suggested that the latest calico may work. I'll give it a go
21:08 <imdigitaljim> we use the latest calico
21:08 <colby_> Hey Guys. What's the latest version of kubernetes I can use on the queens version of magnum (6.3.0)? I tried with kube_tag=1.11.1-5 and 1.12 and both failed to build. The default 1.9.3 builds fine.
21:08 <strigazi> imdigitaljim: i know, that is why I'm not asking :)
21:08 <imdigitaljim> ah :D
21:09 <colby_> I mean kube_tag=v1.11.1-5
21:09 <imdigitaljim> conformance was passing as well
21:09 <imdigitaljim> so you might be right
21:10 *** schaney has joined #openstack-containers
21:10 <strigazi> For upgrades, I did some modifications for the worker nodes; with the heat API it works pretty well for workers and it validates the passed nodegroup.
21:10 <strigazi> Some more clean up and it will work with the API.
21:10 <imdigitaljim> strigazi: https://kubernetes.io/docs/setup/version-skew-policy/
21:11 <imdigitaljim> have you seen that for upgrades?
21:11 <imdigitaljim> specifically https://kubernetes.io/docs/setup/version-skew-policy/#supported-component-upgrade-order
21:11 <flwang> o/
21:11 <strigazi> The only missing part is the container registry on clusters
21:12 <strigazi> imdigitaljim: yes, but it doesn't enforce it
21:12 <flwang> sorry i'm late, NZ just had a daylight saving change
21:13 <strigazi> this madness with daylight saving will end soon, at least in the EU
21:13 <flwang> strigazi: yep
21:13 <flwang> strigazi: so are you still going to do the master upgrade in your existing patch?
21:13 <strigazi> yes
21:14 <flwang> or will you propose another one?
21:14 <strigazi> this one
21:14 <strigazi> flwang: do you want to take the 1.14.0 patch? it is calico related
21:14 <strigazi> also 1.14.1 is out
21:16 <flwang> hehe, sure i can
21:16 <flwang> but i'm busy on the auto scaling regression issue and the upgrade testing/review, is v1.14.0 urgent for you?
21:17 <strigazi> not really really urgent
21:17 <flwang> strigazi: ok, then i can take it, no problem
21:17 <strigazi> i said not :)
21:18 <strigazi> regarding the *possible* regression with the autoscaler: I wasn't able to reproduce it. Can you describe it in storyboard?
21:18 <flwang> strigazi: sure, are you using devstack or stable/rocky?
21:18 <strigazi> devstack
21:19 <flwang> and are you using the image from openstackmagnum?
21:19 <strigazi> but in a good machine :)
21:19 <strigazi> yes
21:19 <flwang> are you using my patch or a home-made autoscaler yaml?
21:19 <strigazi> from the CA repo, not your patch
21:20 <strigazi> I don't think this is the issue https://github.com/kubernetes/autoscaler/issues/1870
21:20 <flwang> my code is also from the ca repo, but i'd like to understand the difference; i think it is a corner case, but we need to figure it out
21:21 <flwang> strigazi: not sure, and I also got a scale down issue where the autoscaler and magnum/heat are using different formats of UUID
21:21 <strigazi> ok, with your patch is it 100% reproducible?
21:21 <flwang> i think it's reproducible, but i don't think it's 100%, better give it a try yourself
21:22 <flwang> and that would be really appreciated
21:22 <strigazi> ok, where do you test? dsvm?
21:22 <strigazi> master branch?
21:22 <flwang> master branch
21:22 <strigazi> ok
21:22 <flwang> with all the latest code, including the patch NG-5
21:23 <strigazi> ok
21:23 <flwang> i will dig into it today as well
21:23 <flwang> back to your upgrade patch, did you see all my comments?
21:23 <strigazi> cool, I'll check gerrit tmr
21:24 <flwang> now i can confirm the minion upgrade works with those changes i mentioned in the patch, but in my testing, the master node will be rebuilt even though i didn't change the image
21:25 <strigazi> I am lost in the comments, they are too many. what changes?
21:26 <strigazi> for the additional mounts it is fixed.
21:26 <flwang> i suggest you review all my comments, because that took me a lot of testing time
21:26 <flwang> the additional mounts are for the minion side
21:26 <flwang> i'm talking about the master
21:26 <strigazi> sure, I'll address them
21:27 <flwang> so do you mean i shouldn't care about the master behaviour now, since you haven't done it?
21:27 <strigazi> master is expected to fail atm.
21:27 <flwang> strigazi: it's not "fail", it's being rebuilt
21:28 <flwang> after the rebuild, master is using the new version of k8s
21:28 <strigazi> that is kind of a failure :)
21:28 <strigazi> I'll fix it
21:28 <flwang> let me explain a bit
21:29 <strigazi> I know the issue, it is because of user data
21:29 <flwang> after the rebuild, all components except kubelet will be back soon, and i have to restart kubelet to get it back
21:29 <flwang> it's really like the issue we're seeing with the autoscaler's master rebuild
21:30 <strigazi> yeap, it is the same issue we had with cluster_update some months ago and we fixed it
21:30 <flwang> i just wanna highlight that to see if you have any idea
21:30 <flwang> which patch fixed it?
21:31 <flwang> with the autoscaler testing, i'm using master
21:31 <flwang> and i also rebased the upgrade patch locally for testing
21:31 <flwang> so i'm wondering which patch you're talking about
21:31 <strigazi> no, I mean the cause is the same as in cluster_update in the past.
21:32 <flwang> strigazi: so you mean you fixed it in your existing patch?
21:32 <strigazi> https://github.com/openstack/magnum/commit/3f773f1fd045a507c3962ae509fcd57352cdc9ae
21:32 <strigazi> no
21:32 <strigazi> flwang: let's take a step back.
21:32 <strigazi> The current patch for upgrades is expected to "fail" for master.
21:33 <strigazi> The reason is "change of user_data of the vm"
21:33 <flwang> i get that
21:33 <strigazi> This reason used to break cluster_update and it was fixed.
21:33 <strigazi> I don't know what breaks autoscale, I'll check.
21:34 <flwang> The reason is "change of user_data of the vm" --- so we have to do the same thing for master as we did for minion?
21:34 <flwang> to put those scripts "into" the heat-container-agent?
21:34 <flwang> ok, i understand
21:34 <strigazi> in the current patch in gerrit I'll push a fix for master upgrade.
21:34 <strigazi> yes
21:34 <flwang> strigazi: cool
21:36 <flwang> based on my understanding of heat update, the master nodes being rebuilt is still caused by something changed for the master
21:36 <strigazi> correct
21:36 <flwang> we just need to figure out what has been changed that slipped past our eyes
21:36 <flwang> cool, good to know we're on the same page
21:36 <strigazi> for upgrades yes; for the autoscaler, it is to be checked.
21:37 <flwang> dioguerra suspected the new security group rules from master to nodes
21:37 <flwang> but it still failed after reverting that one
21:38 <flwang> dioguerra: can you give us more details?
21:38 <strigazi> it comes from Ricardo too and I also mentioned it, but I didn't have enough courage to insist
21:38 <strigazi> in the patches in gerrit I mentioned that it breaks the pattern we use for ingress
21:39 <strigazi> this is the removal of ports 80/443
21:39 <strigazi> the other port is ssh, which changed the default behaviour.
21:40 <strigazi> I mentioned this in gerrit as well.
21:40 <flwang> if we confirm the issue is caused by the security rules, i think we can revisit this part
21:41 <strigazi> as I mentioned before, in clouds that don't have octavia (like ours), or even if they do but users don't want to use it,
21:41 <strigazi> ingress works with traefik or nginx or appscode/voyager
21:42 <flwang> strigazi: i can see your point, maybe we can introduce a config to let the cloud provider configure those rules?
21:42 <strigazi> using ports 80/443 on the workers
21:43 <strigazi> For this, we can open them when traefik or nginx is used, or with another label
21:43 <strigazi> same for ssh
21:43 <flwang> strigazi: yep, that would be better and easier
21:44 <strigazi> can be cloud/magnum-deployment wide or with labels
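
To make the idea concrete: the rules being debated are plain Neutron security group rules, and the conditional could key off a cluster label. A purely illustrative Go sketch using gophercloud's networking API (in Magnum itself the equivalent conditional would live in the Heat templates; the function and label semantics here are hypothetical):

```go
package secgroups

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/rules"
)

// openIngressPorts opens 80/443 on the worker security group only when the
// cluster asks for an in-cluster ingress controller (hypothetical
// ingressController label value). By default the ports stay closed.
func openIngressPorts(client *gophercloud.ServiceClient, workerSecGroupID string,
	ingressController string) error {
	if ingressController != "traefik" && ingressController != "nginx" {
		return nil // no ingress controller requested: keep ports closed
	}
	for _, port := range []int{80, 443} {
		_, err := rules.Create(client, rules.CreateOpts{
			Direction:    rules.DirIngress,
			EtherType:    rules.EtherType4,
			Protocol:     rules.ProtocolTCP,
			PortRangeMin: port,
			PortRangeMax: port,
			SecGroupID:   workerSecGroupID,
		}).Extract()
		if err != nil {
			return err
		}
	}
	return nil
}
```
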
21:45 <flwang> yep, we can discuss this later in more detail
21:46 <strigazi> we can put additional details in storyboard
21:46 <flwang> sure
21:47 <brtknr> what is the update on CRUD for nodegroups ttsiouts?
21:49 <flwang> strigazi: ttsiouts: is ng-6 the last one we need for NG? on the server side
21:49 <strigazi> is there an NG-6 in gerrit?
21:50 <flwang> https://review.openstack.org/#/c/647792/
21:50 <strigazi> before the new driver, i think this is the last one
21:51 <flwang> ok, good
21:51 <strigazi> and client
21:51 <flwang> i'm reviewing the client now
21:51 <ttsiouts> brtknr: I am refactoring the scripts for the deployment of the cluster
21:52 <strigazi> I guess we need a microversion
21:52 <flwang> strigazi: do you think we can start to merge the upgrade api now?
21:52 <ttsiouts> brtknr: on the heat side
21:52 <strigazi> flwang: we just need a check for master VS worker and we are good
21:53 <flwang> strigazi: ok, cool, i have done the api, ref, and client, and it generally works with your functional patch
21:54 <strigazi> yeap
21:54 <flwang> so as soon as your master upgrade work is submitted, we can start to do integration testing and get things done
21:54 <flwang> strigazi: thank you for working on this, i know it's a hard one
21:54 <brtknr> ttsiouts: sounds good! let me know when it's ready for testing :)
21:55 <strigazi> :)
21:55 <strigazi> just before closing, I want to make a shameless plug
21:56 <strigazi> if you use barbican, you may like this one:
21:56 <strigazi> https://techblog.web.cern.ch/techblog/post/helm-barbican-plugin/
21:56 <flwang> install barbican on k8s?
21:56 <strigazi> Ricardo wrote an excellent plugin
21:57 <strigazi> it can be easily added as a kubectl plugin
21:57 <flwang> strigazi: ah, that's nice, so you still need to have barbican deployed already, right?
21:57 <strigazi> yes, you need the barbican API
21:57 <flwang> and then just use barbican as the secret backend for k8s?
21:58 <brtknr> strigazi: I like that Kustomize is mentioned :)
21:58 <flwang> that's cool, actually, we already have customers asking for that
21:58 <strigazi> this plugin is for client side usage.
21:58 <strigazi> For KMS there is an implementation in the CPO repo
21:59 <flwang> strigazi: cool
22:01 <strigazi> let's end the meeting?
22:02 <strigazi> Said once
22:02 <strigazi> said twice
22:02 <flwang> i'm good
22:03 <strigazi> thanks for joining everyone
22:03 <flwang> thanks strigazi
22:03 <strigazi> flwang: cheers
22:03 <strigazi> #endmeeting
22:03 *** openstack changes topic to "OpenStack Containers Team"
22:03 <openstack> Meeting ended Tue Apr  9 22:03:11 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
22:03 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-04-09-21.04.html
22:03 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-04-09-21.04.txt
22:03 <openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-04-09-21.04.log.html
22:03 <flwang> strigazi: any ETA on the master upgrade work?
22:03 <strigazi> tomorrow, if nothing bad happens
22:04 <flwang> that's fantastic
22:04 <flwang> I love your efficiency when you're free ;)
22:05 <strigazi> :)
22:05 <flwang> thank you and have a good night
22:05 <strigazi> thanks, have a nice day!
22:06 <openstackgerrit> Ricardo Rocha proposed openstack/magnum master: [k8s] Add nginx based ingress controller  https://review.openstack.org/648655
22:12 <brtknr> strigazi: are you still there?
22:12 <brtknr> would you mind replying to my questions on the multi-nic thread on gerrit when you get a chance?
22:13 *** hongbin has joined #openstack-containers
22:13 <brtknr> i am not sure what you mean by you can't access logs when using floating ip...
22:16 *** ttsiouts has quit IRC
22:17 *** ttsiouts has joined #openstack-containers
22:21 *** ttsiouts has quit IRC
22:28 *** rcernin has joined #openstack-containers
23:00 <colby_> Does anyone know the latest kubernetes version that works with queens magnum (6.3.0)? The default 1.9.3 builds fine, but I tried 1.11.1-5 and 1.12 and both fail to build, with kubernetes never starting on the master (fedora-atomic-27)
23:14 *** livelace has quit IRC
23:26 <flwang> colby_: better let us know the error
23:26 <flwang> colby_: i think we tested 1.9.3 for queens, and for rocky, it has been upgraded to 1.11.x
23:27 <colby_> ah I think I see. there is no 1.11.1-5 or 1.12 image. There are 1.11.5-1 and 1.12.5
23:27 <colby_> you would think that atomic install would give an error if the image does not exist, instead of just no response
23:30 <flwang> colby_: you should be able to see the log in /var/log/cloud-init-output.log for sure
23:31 <colby_> yea it was just empty after the atomic install. I tried running the command from the command line and it just silently exits
23:31 <flwang> check the images we have on the openstackmagnum dockerhub
23:32 <colby_> yea I did that... that's how I noticed they did not exist :)
23:32 <colby_> I switched the 1 and 5 in 1.11.5-1 by accident
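
colby_'s failure mode, a kube_tag pointing at a tag that was never published, can be ruled out up front by listing the tags on the openstackmagnum Docker Hub repos. A small Go sketch, assuming Docker Hub's public v2 tag-listing endpoint and the openstackmagnum/kubernetes-kubelet repository name:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagsResponse models the subset of Docker Hub's tag-listing JSON we need.
type tagsResponse struct {
	Results []struct {
		Name string `json:"name"`
	} `json:"results"`
}

func main() {
	// Assumption: the repository name; adjust for apiserver, proxy, etc.
	const repo = "openstackmagnum/kubernetes-kubelet"
	resp, err := http.Get("https://hub.docker.com/v2/repositories/" + repo + "/tags/?page_size=100")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		panic(err)
	}
	// Print every published tag; a valid kube_tag (e.g. v1.11.5-1) must appear here.
	for _, t := range tags.Results {
		fmt.Println(t.Name)
	}
}
```
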
23:48 *** hongbin has quit IRC
