Tuesday, 2018-08-14

00:02 *** rcernin_ has joined #openstack-containers
00:03 *** rcernin has quit IRC
00:29 *** rcernin has joined #openstack-containers
00:29 *** rcernin has quit IRC
00:30 *** rcernin has joined #openstack-containers
00:32 *** rcernin_ has quit IRC
00:57 *** flwang1 has quit IRC
00:57 *** v1k0d3n has quit IRC
00:57 *** kgz has quit IRC
00:57 *** yankcrime has quit IRC
01:04 *** flwang1 has joined #openstack-containers
01:04 *** v1k0d3n has joined #openstack-containers
01:04 *** kgz has joined #openstack-containers
01:04 *** yankcrime has joined #openstack-containers
01:06 *** openstackgerrit has quit IRC
01:13 *** Nel1x has joined #openstack-containers
02:01 *** hongbin has joined #openstack-containers
02:19 *** markguz_ has joined #openstack-containers
02:25 *** jaewook_oh has joined #openstack-containers
02:38 *** ricolin has joined #openstack-containers
03:02 *** ykarel|away has joined #openstack-containers
03:04 *** Bhujay has joined #openstack-containers
03:05 *** markguz_ has quit IRC
03:12 *** Nel1x has quit IRC
03:31 *** Nisha_Agarwal has joined #openstack-containers
03:34 *** ykarel|away has quit IRC
03:39 *** hongbin has quit IRC
03:40 *** Bhujay has quit IRC
03:53 *** udesale has joined #openstack-containers
03:59 *** Nisha_ has joined #openstack-containers
04:01 *** Nisha_Agarwal has quit IRC
04:12 *** dave-mccowan has quit IRC
04:21 *** Bhujay has joined #openstack-containers
04:28 *** janki has joined #openstack-containers
04:53 *** flwang1 has quit IRC
05:05 *** ykarel|away has joined #openstack-containers
05:11 *** ykarel|away is now known as ykarel
05:28 *** Nisha_ has quit IRC
05:29 *** Nisha_away has joined #openstack-containers
06:43 *** ricolin has quit IRC
06:44 *** pcaruana has joined #openstack-containers
06:44 *** adrianc has joined #openstack-containers
07:01 *** mattgo has joined #openstack-containers
07:02 *** rcernin has quit IRC
07:28 *** jaewook_oh has quit IRC
07:29 *** salmankhan has joined #openstack-containers
07:33 *** salmankhan has quit IRC
07:35 *** jaewook_oh has joined #openstack-containers
07:37 *** openstackgerrit has joined #openstack-containers
07:37 <openstackgerrit> OpenStack Proposal Bot proposed openstack/magnum master: Imported Translations from Zanata  https://review.openstack.org/590127
07:52 *** janki is now known as janki|lunch
07:55 *** jaewook_oh has quit IRC
07:56 *** ykarel is now known as ykarel|lunch
08:10 *** Bhujay has quit IRC
08:12 *** openstackstatus has quit IRC
08:13 *** ktibi has joined #openstack-containers
08:17 *** serlex has joined #openstack-containers
08:34 *** Bhujay has joined #openstack-containers
08:36 *** ykarel|lunch is now known as ykarel
08:42 *** janki|lunch is now known as janki
08:50 *** jaewook_oh has joined #openstack-containers
08:51 *** jaewook_oh has joined #openstack-containers
09:09 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: [k8s] Set order in kubemaster software deployments  https://review.openstack.org/591592
09:10 *** salmankhan has joined #openstack-containers
09:15 *** salmankhan1 has joined #openstack-containers
09:16 *** salmankhan has quit IRC
09:16 *** salmankhan1 is now known as salmankhan
09:40 *** openstackstatus has joined #openstack-containers
09:40 *** ChanServ sets mode: +v openstackstatus
09:41 *** mvpnitesh has joined #openstack-containers
09:44 *** Nisha_away has quit IRC
09:58 *** ykarel is now known as ykarel|afk
10:22 <mvpnitesh> strigazi: Hi Strigazi. I'm trying to create a cluster. When cloud-init executes "atomic install --storage ostree --system --system-package=no --name=kube-apiserver docker.io/openstackmagnum/kubernetes-apiserver:v1.9.3", it pulls the image and then throws this error: "g-io-error-quark: fallocate: No space left on device (12)". Yesterday, as you suggested, I added --docker-storage-driver overlay2 to the template
10:22 <mvpnitesh> and tried with it. No help, it still fails at the same step
10:25 *** flwang1 has joined #openstack-containers
10:25 <flwang1> strigazi: around?
10:25 *** adrianc has quit IRC
10:26 *** adrianc has joined #openstack-containers
10:28 <mvpnitesh> flwang1: Hi, I'm creating a cluster with Fedora-26. During cluster creation, "atomic install --storage ostree --system --system-package=no --name=kube-apiserver docker.io/openstackmagnum/kubernetes-apiserver:v1.9.3" pulls the image and then throws this error: "g-io-error-quark: fallocate: No space left on device (12)"
10:29 <flwang1> mvpnitesh: which version of magnum are you using?
10:29 <mvpnitesh> master
10:30 <mvpnitesh> flwang1: I've tried to create a cluster with Fedora-27, the default; it is not getting created
10:30 <flwang1> mvpnitesh: with master, just use fedora 27
10:30 <flwang1> are you using devstack?
10:31 <mvpnitesh> flwang1: yes, I'm using devstack
10:31 <flwang1> then just use the built-in 27 image
10:31 <flwang1> it should work
10:42 <strigazi> flwang1: hi
10:43 <flwang1> strigazi: today i had a chat with hogepodge about the functional testing and certified k8s
10:43 <flwang1> hogepodge will help find resources for the functional tests, and i will work on the certified k8s based on magnum
10:43 <flwang1> just FYI
10:44 <strigazi> flwang1: thanks, I hope it will move forward
10:44 <flwang1> strigazi: let's see ;)
10:44 <flwang1> anything else you want to discuss?
10:45 <strigazi> 591592
10:45 <strigazi> 572897 this is with the native client?
10:46 <flwang1> yes
10:47 <strigazi> the one we don't want, right?
10:47 <strigazi> do you get what 591592 does?
10:48 <flwang1> not sure i understand that
10:49 <flwang1> we need 572897 to update the health status
10:49 <strigazi> we do?
10:49 <strigazi> why?
10:49 <strigazi> I mean, we want to use requests, right?
10:50 <flwang1> yep, i haven't finished the patch
10:51 <flwang1> personally, i'd like to reuse the k8s python client as much as possible; if it doesn't work, i will just use requests
10:51 <strigazi> it doesn't
10:51 <flwang1> that part should be ok, i need your comments on the overall design
10:52 <strigazi> in 572897?
10:52 <openstackgerrit> Feilong Wang proposed openstack/magnum master: [k8s] Update cluster health status by native API  https://review.openstack.org/572897
10:53 <flwang1> yes, it still needs testing, but as i said above, either using requests or the k8s python client should be OK, we can manage
10:53 <flwang1> i just uploaded a new patch set
10:53 <flwang1> need your comments on the overall design
10:54 <flwang1> currently, i'm reusing the existing k8s monitor to pull node info from the k8s api, and if any node is not ready, we consider the cluster unhealthy
10:55 <flwang1> and i'm returning all the conditions of all nodes, so that the auto healing algorithm can make a reasonable decision about how to heal
10:55 <flwang1> does that make sense to you?
10:57 <strigazi> kind of
10:57 <strigazi> give me 5' to go through it again
10:57 <flwang1> sure
10:59 <flwang1> we will improve the health calculation algorithm later; for now, i think we can make it a little bit strict: if any node is not ready, the cluster is not healthy and needs to be repaired
10:59 <strigazi> ^^ makes sense
10:59 *** udesale has quit IRC
11:00 <strigazi> I think we need one more field
11:00 <strigazi> the status of the API
11:00 <flwang1> it's on my list
11:00 <strigazi> first check if the API is up
11:00 <strigazi> then node status
11:00 <strigazi> makes sense?
11:00 <flwang1> but for the api, we can't only reuse the /healthz result
11:01 <flwang1> because there is no way to get the status of each master node
11:01 <strigazi> are you asking me or telling me? :)
11:01 <strigazi> well, that is true and makes sense
11:01 <flwang1> *but for the api, we can only reuse the /healthz result (sorry, fat finger)
11:02 <flwang1> in the future, we may need a better way to monitor the master status
11:02 <strigazi> ok is enough
11:02 <flwang1> because for a cluster with 3 master nodes, with 1 master dead the cluster is still OK
11:03 <flwang1> but if another master node dies, the cluster is dead
11:03 <strigazi> if 1 master is down etcd will not return ok
11:03 <flwang1> hmm... that's a good point
11:03 <strigazi> plus in the case of calico
11:03 <strigazi> where we already have the kubelet on the master
11:04 <strigazi> you will have the status of the node there too
11:04 <flwang1> i like that point, thanks for enlightening me
11:04 <strigazi> we are working with imdigitaljim to add kubelet everywhere
11:04 <flwang1> i will add the master status/conditions to the health_status_reason dict
11:04 <strigazi> I think:
11:05 <flwang1> it would make the code cleaner, re kubelet everywhere
11:05 <strigazi> {api_status: {}, nodes: {}}
11:05 <flwang1> i'm OK with that
11:06 <flwang1> api_status will be the result from /healthz?
11:06 <strigazi> yes
11:06 <flwang1> deal
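To make the agreed shape concrete: a minimal sketch of a health poll producing {api_status, nodes} with requests, following the strict "any NotReady node means unhealthy" rule above. This is illustrative only, not magnum's actual implementation (the real work is in review 572897); the function names, cert plumbing and error handling are assumptions.

    import requests

    def get_health_data(api_endpoint, cert, ca):
        """Return {'api_status': ..., 'nodes': [...]} for one cluster."""
        # /healthz answers with the plain string "ok" when the API is up.
        try:
            resp = requests.get(api_endpoint + '/healthz',
                                cert=cert, verify=ca, timeout=10)
            api_status = resp.text if resp.ok else 'unhealthy'
        except requests.RequestException:
            return {'api_status': 'unreachable', 'nodes': []}

        # Per-node conditions, so an auto-healer can decide what to repair.
        nodes = []
        resp = requests.get(api_endpoint + '/api/v1/nodes',
                            cert=cert, verify=ca, timeout=10)
        for node in resp.json().get('items', []):
            ready = any(c['type'] == 'Ready' and c['status'] == 'True'
                        for c in node['status']['conditions'])
            nodes.append({node['metadata']['name']: {'Ready': ready}})
        return {'api_status': api_status, 'nodes': nodes}

    def is_healthy(health_data):
        # Strict rule from the discussion: a down API or any NotReady
        # node marks the whole cluster unhealthy.
        return (health_data['api_status'] == 'ok' and
                all(all(conds.values())
                    for entry in health_data['nodes']
                    for conds in entry.values()))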
11:15 *** mvpnitesh has quit IRC
11:15 <strigazi> sorry, distractions, working in a building with 200 people
11:18 <flwang1> haha
11:18 <flwang1> generally, there are only 10 people around me
11:19 <openstackgerrit> Feilong Wang proposed openstack/magnum master: [k8s] Update cluster health status by native API  https://review.openstack.org/572897
11:19 <strigazi> flwang1: so, with python requests this part will be the same? https://review.openstack.org/#/c/572897/3/magnum/service/periodic.py@90
11:19 <flwang1> yes
11:20 <flwang1> only this part https://review.openstack.org/#/c/572897/3/magnum/conductor/k8s_api.py could be changed if we have to use 'requests'
11:21 <flwang1> hence why i'd like to only focus on the overall design instead of the api access
11:21 <strigazi> flwang1 +1
11:22 <strigazi> flwang1: maybe s/pull_data/get_health_data/
11:22 <flwang1> strigazi: i thought that too
11:22 <flwang1> in Stein, i will totally drop the 'pod' stuff in the monitor
11:23 <flwang1> as I discussed with you before, it doesn't make any sense
11:23 <flwang1> i'd like to make the k8s monitor only focus on general status updates and health
11:24 *** serlex has quit IRC
11:25 <strigazi> so what do we do? I think we can do s/pull_data/get_health_data/
11:25 <strigazi> or add get_health
11:25 <strigazi> or just health
11:27 <flwang1> the name doesn't matter, i agree to create a new function to get the health data
11:30 <flwang1> i'm going offline, anything else you want to discuss?
11:30 <strigazi> about the patch I pushed?
11:31 <strigazi> the order of software deployments
11:31 <flwang1> strigazi: what happened without the patch?
11:32 <strigazi> since we added the check for the api
11:33 <strigazi> the api must be up for the deployments to move on
11:33 <flwang1> ok
11:33 <strigazi> if the ca.key is not in place the api doesn't start: deadlock
11:33 <flwang1> got it
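For readers following along, a rough illustration of the kind of ordering fix review 591592 describes, using Heat's depends_on; the resource names below are hypothetical, not the ones in magnum's actual kubemaster template.

    # Hypothetical Heat template excerpt: make the deployment that waits
    # on the kube API depend on the cert deployment, so ca.key is in
    # place before the apiserver is expected to come up.
    master_config_deployment:
      type: OS::Heat::SoftwareDeployment
      depends_on: make_certs_deployment   # ca.key must land first
      properties:
        config: {get_resource: master_config}
        server: {get_resource: kube_master}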
11:36 <strigazi> one last thing
11:36 <strigazi> about moving kubelet, we are very close, I think we can add it back
11:36 <flwang1> strigazi: in rocky?
11:37 <strigazi> yes
11:37 <flwang1> hmm... ok
11:37 <strigazi> it is in for calico anyway
11:37 <flwang1> yes
11:37 <strigazi> we can also move flannel, that doesn't affect you at all
11:37 <flwang1> but i assume it will introduce a lot of change if we don't just simply add it back
11:38 <flwang1> what do you mean 'move flannel'?
11:38 <strigazi> to be hosted on k8s
11:39 <flwang1> ah
11:39 <flwang1> i don't really care about flannel TBH, so i don't mind holding it until stein
11:40 <strigazi> https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
11:41 <flwang1> we're going to use Rocky, so i'm conservative about introducing big changes ;)
11:41 <flwang1> and btw, where did we get the prometheus config?
11:41 <strigazi> I guess you will have a test env for some time, right?
11:42 <flwang1> strigazi: yes, full test and sonobuoy
11:42 <strigazi> we usually need a couple of days to test for prod
11:42 <flwang1> i mean this one https://review.openstack.org/#/c/508172/
11:42 <flwang1> it doesn't work for me
11:43 <strigazi> sonobuoy doesn't test openstack and CERN filesystems, sonobuoy conformance is a given
11:43 <flwang1> since magnum now supports lb without floating ip, it would be nice if we could drop the node port for prometheus
11:43 <flwang1> which is the only magnum addon still using node port
11:44 <strigazi> what doesn't work?
11:44 *** mvpnitesh has joined #openstack-containers
11:45 <flwang1> the data source doesn't work
11:45 <flwang1> my env has been destroyed
11:45 <strigazi> what?
11:45 <flwang1> i will set it up again and give you more details
11:45 <strigazi> by prometheus?
11:45 <flwang1> yes
11:46 <strigazi> how can this even happen?
11:46 <flwang1> hence why i asked where we got the original config
11:46 <flwang1> i'd like to do some comparison
11:47 <flwang1> pls revisit this corner if you have time soon
11:47 *** ykarel|afk is now known as ykarel
11:49 <flwang1> strigazi: i have to go, sorry
11:49 <flwang1> it's late here
11:51 <strigazi> flwang1: thanks Feilong, good night
11:52 *** flwang1 has quit IRC
11:53 *** slagle has joined #openstack-containers
12:04 *** serlex has joined #openstack-containers
12:10 <mvpnitesh> strigazi: I'm trying to create a cluster with a Devstack master setup. The master node is getting created but none of the cloud-init scripts are running
12:33 *** udesale has joined #openstack-containers
12:43 *** dave-mccowan has joined #openstack-containers
12:45 *** mvpnitesh has quit IRC
13:07 *** Bhujay has quit IRC
13:07 *** Bhujay has joined #openstack-containers
13:11 *** jaewook_oh has quit IRC
13:14 *** savvas has joined #openstack-containers
13:15 <savvas> anyone have any idea why cluster creation could time out? http://paste.openstack.org/show/728012/ . The master node gets created and initial connectivity to it can be established, but at some point during the tasks it executes I lose connectivity and eventually the stack fails
13:27 <canori01> savvas: Looks like it's not getting to the part where it notifies heat. You would need to check the logs on the master to see where it is erroring out
13:38 <savvas> ok yeah, I can no longer access it because it loses connectivity at some point during the install. The password on those atomic images is random so I can't access it via console
13:40 *** Bhujay has quit IRC
13:54 *** pbourke has quit IRC
13:56 *** pbourke has joined #openstack-containers
14:05 *** brtknr has quit IRC
14:15 *** ricolin has joined #openstack-containers
14:21 *** hongbin has joined #openstack-containers
14:22 *** brtknr has joined #openstack-containers
14:24 *** brtknr has quit IRC
14:52 *** Bhujay has joined #openstack-containers
14:57 *** ktibi has quit IRC
15:29 *** adrianc has quit IRC
15:29 *** adrianc has joined #openstack-containers
15:36 *** janki has quit IRC
15:36 *** savvas has quit IRC
15:41 *** ykarel is now known as ykarel|away
15:54 *** itlinux has joined #openstack-containers
15:59 *** ykarel|away has quit IRC
16:02 *** pcaruana has quit IRC
16:09 *** mattgo has quit IRC
16:47 <openstackgerrit> Jim Bach proposed openstack/magnum master: cleanup config-k8s-masters.sh, added roles to nodes on startup  https://review.openstack.org/589214
16:54 *** udesale has quit IRC
16:56 *** serlex has quit IRC
17:05 *** ricolin has quit IRC
17:06 *** adrianc has quit IRC
17:09 *** mattgo has joined #openstack-containers
17:22 *** Bhujay has quit IRC
17:35 *** hongbin has quit IRC
17:35 *** hongbin has joined #openstack-containers
17:35 *** hongbin has quit IRC
17:36 *** hongbin has joined #openstack-containers
17:45 *** janki has joined #openstack-containers
18:07 *** salmankhan has quit IRC
20:20 *** salmankhan has joined #openstack-containers
20:54 <imdigitaljim> meeting today?
20:55 <strigazi> imdigitaljim: yes
20:55 <imdigitaljim> great
20:58 <strigazi> flwang: ping
20:59 <strigazi> #startmeeting containers
20:59 <openstack> Meeting started Tue Aug 14 20:59:04 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:59 <openstack> The meeting name has been set to 'containers'
20:59 <strigazi> #topic Roll Call
20:59 <imdigitaljim> o/
20:59 <strigazi> o/
20:59 <colin-> \o
21:00 <strigazi> I guess flwang prefers working at midnight, not midday :)
21:00 <strigazi> #topic Announcements
21:00 <imdigitaljim> haha
21:01 <strigazi> we have stable/rocky and magnum 7.0.0
21:01 <imdigitaljim> \o/
21:01 <strigazi> we still have patches to add, so 7.0.1 will be the release
21:02 <strigazi> \o/ indeed :)
21:02 <strigazi> #topic Blueprints/Bugs/Ideas
21:02 <strigazi> I promised canori01 to test coreos and I did
21:03 <strigazi> and apparently it is me that kind of broke it
21:03 <strigazi> I mean the coreos driver
21:03 <strigazi> details here:
21:03 <strigazi> #link https://review.openstack.org/#/c/579026
21:03 <imdigitaljim> didn't you go to the fedora convention?
21:04 <imdigitaljim> what are the atomic/coreos plans that you might know of?
21:04 <strigazi> The issue is passing certs with heat and using string replacement
21:04 <imdigitaljim> #link https://review.openstack.org/#/c/590443/
21:04 <imdigitaljim> #link https://review.openstack.org/#/c/590346/
21:04 <imdigitaljim> #link https://review.openstack.org/#/c/589214/
21:05 <strigazi> cloud-config in coreos != cloud-init
21:05 <imdigitaljim> #link https://review.openstack.org/#/c/577570/
21:05 <imdigitaljim> all are ready for review
21:05 <strigazi> thanks
21:05 <colin-> this reminds me, i was wondering, strigazi or others, if anyone has used hashicorp's Vault as a sidecar certificate authority in the cluster?
21:06 <colin-> to ease some of these certificate management/distribution tasks?
21:06 <strigazi> colin-: no, at least at CERN we have barbican
21:07 <strigazi> there is work to integrate barbican and k8s
21:07 <colin-> ok
21:07 <strigazi> eg the barbican sdk was added to gophercloud
21:07 *** canori02 has joined #openstack-containers
21:07 <colin-> that's great; the interest is less in Vault and more in a graceful way to manage all the TLS files
21:07 <imdigitaljim> we're working on org approval to contribute several PRs to the kubernetes org repo for the standalone openstack controller
21:08 <imdigitaljim> which will also translate to some magnum PRs
21:08 <strigazi> cool :)
21:08 <canori02> o/
21:09 <strigazi> canori02: o/
21:09 *** mattgo has quit IRC
21:09 <strigazi> imdigitaljim: have you pushed a patch to add kube-proxy already?
21:10 <imdigitaljim> i'd need the two files merged to apply the rest
21:10 <imdigitaljim> make-cert and configure-master
21:10 <imdigitaljim> because it uses elements of those to complete it
21:10 <strigazi> imdigitaljim: ok
21:11 <imdigitaljim> however the work has already been done
21:11 <imdigitaljim> just a matter of sequence
21:11 <strigazi> you can push with dependencies in gerrit, I guess you know this
21:12 *** canori02 has quit IRC
21:12 <imdigitaljim> i did not actually know, i'm not super knowledgeable with gerrit
21:12 <imdigitaljim> i'll check it out
21:12 <imdigitaljim> and push it
21:12 <strigazi> https://docs.openstack.org/infra/manual/developers.html#adding-a-dependency
21:12 <imdigitaljim> oh awesome
21:12 <imdigitaljim> thanks
21:13 <imdigitaljim> that will make it easy
21:13 <strigazi> three more items from me:
21:13 *** canori02 has joined #openstack-containers
21:13 <strigazi> 1. i'm working on the unit tests for the upgrade API; of course I love writing unit tests
21:14 <strigazi> 2. with flwang we made some progress in:
21:14 <strigazi> #link https://review.openstack.org/#/c/572897/
21:14 <strigazi> polling the k8s cluster for health status
21:15 <strigazi> and returning something like {api_health: {}, nodes: []}
21:16 <strigazi> 3. k8s 1.11.x works without any issue, all patches for the certificates are merged and the container images are updated
21:16 <imdigitaljim> yep!
21:16 <imdigitaljim> we're also on 1.11.2
21:17 <strigazi> really easy now :)
21:17 <strigazi> the patch is 6 characters, it would be 1 if I used a variable :)
21:18 <imdigitaljim> we recently updated calico to 3.1.3
21:18 <imdigitaljim> we're like 17 releases behind in magnum
21:18 <strigazi> args in dockerfiles require a newer docker version
21:18 <imdigitaljim> probably more relevant to flwang:
21:18 <strigazi> imdigitaljim: what do we have in magnum atm?
21:18 <imdigitaljim> 2.6.7
21:19 <strigazi> I wonder why flwang pushed for 2.6.x
21:19 <imdigitaljim> there were some 3.x issues at the time
21:19 <strigazi> I think 3.x.y was out at that time
21:19 <imdigitaljim> but to us they appear to be resolved
21:19 <strigazi> oh, ok
21:19 <strigazi> makes sense
21:19 <imdigitaljim> yeah, he was probably doing what was necessary
21:19 <strigazi> Finally, a comment on Fedora CoreOS
21:20 <strigazi> we need at least 6 months for fedora to have something solid to use
21:20 <strigazi> All the builds they have now are experimental and none of them are public
21:20 <strigazi> Fedora Atomic 30 will be the last one
21:21 <strigazi> Fedora CoreOS will be based on rpms
21:21 <strigazi> and will use rpm-ostree
21:21 <imdigitaljim> will we still be able to use the same files that you know of? "atomic install etc"
21:21 <strigazi> atomic cli no, but something similar
21:22 <imdigitaljim> ok, so we'll need to make some changes
21:22 <imdigitaljim> maybe we can also get a good way to prebuild the new images quickly
21:22 <strigazi> they said since there are users like us they will take it into account
21:22 <imdigitaljim> so we can inject extra stuff onto the base fedora coreos image
21:23 <strigazi> regarding that, we will need to work with ignition
21:23 *** slagle has quit IRC
21:23 <strigazi> we can start investigating this with coreos and be ready
21:23 <strigazi> we need a way to compile the ignition json
21:23 *** salmankhan has quit IRC
21:24 <strigazi> we can join the fedora coreos meeting on the 21st
21:24 *** salmankhan has joined #openstack-containers
21:24 <imdigitaljim> https://coreos.com/ignition/docs/latest/
21:24 <imdigitaljim> got it
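For reference, a minimal hand-written example of the kind of Ignition JSON being discussed; the spec version, path and contents here are illustrative only (see the docs link above for the authoritative format).

    {
      "ignition": { "version": "2.2.0" },
      "storage": {
        "files": [{
          "filesystem": "root",
          "path": "/etc/kubernetes/kubelet.env",
          "mode": 420,
          "contents": { "source": "data:,KUBELET_ARGS%3D..." }
        }]
      }
    }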
21:24 <strigazi> imdigitaljim: are you interested?
21:24 <imdigitaljim> that would be good actually
21:25 <strigazi> #link https://apps.fedoraproject.org/calendar/workstation/2018/8/20/#m9315
21:25 <strigazi> 7:30 for me, great...
21:25 <strigazi> imdigitaljim: for you?
21:26 <strigazi> I can ask for the other one, if it is better; I think they will alternate
21:26 <imdigitaljim> 10:30PM
21:27 <imdigitaljim> i should be able to make it if i remember :p
21:27 <strigazi> imdigitaljim: I can ping you :)
21:27 <strigazi> speaking of atomic, I tested F28AH, works fine with magnum 7.0.0
21:28 <strigazi> and queens actually, I'll push a patch
21:29 <strigazi> I think that was all from me. imdigitaljim, colin-, canori02: do you want to add anything?
21:30 <canori02> How far away is fedora coreos?
21:30 <strigazi> at least 6 months
21:31 <canori02> I ask since the work I've been doing makes the coreos driver work through cloud-init and not ignition
21:31 <canori02> Should we just deprecate that?
21:32 <strigazi> It would help
21:32 <strigazi> I think we can add your patch in rocky and then work on ignition
21:32 <strigazi> canori02: makes sense?
21:33 <canori02> Yeah, makes sense
21:33 <imdigitaljim> nothing else here
21:33 <canori02> Also, I had another question. Sorry if you covered this already since I was late
21:34 <canori02> What were your thoughts on the discovery.etcd.io issues they were having?
21:34 <strigazi> discovery.etcd.io is back, but they plan to deprecate it, without a clear timeline
21:35 <imdigitaljim> canori02: if you want to put in the effort, this can be run internally/locally
21:35 <strigazi> magnum has the option to use a local discovery so we can use that
21:35 <flwang> strigazi: re calico
21:36 <strigazi> I posted a link in the channel last week on how to do it
21:36 <flwang> when I worked on calico, GKE was also using 2.6.7
21:36 <canori02> Yeah, I did deploy one internally. Wasn't too bad
21:36 <flwang> i trust google, so i assume 2.6.7 is stable at least
21:36 <strigazi> sounds good ^^
21:37 <flwang> and given we have a calico node tag, users can easily upgrade if they want
21:37 <strigazi> flwang: we could change the default?
21:37 <flwang> new versions are cool, but i'm always old-man style, so....
21:37 <flwang> strigazi: we can
21:37 <flwang> i can test with 3.x
21:38 <flwang> and propose the version upgrade if it passes the sonobuoy testing
21:39 <colin-> do you remember what they were version locking on 2.6.x to support, flwang? i seem to recall it being related to IPVS use for kube-proxy but could be wrong
21:39 <strigazi> canori02: we can open an issue in the etcd repo to ask what they recommend
21:39 <strigazi> I imagine they will say: run your own discovery
21:40 <strigazi> bootstrapping etcd without knowing the ips beforehand is tedious
21:42 <canori02> I saw they were asking for feedback on how we use the service. So we can give them that
21:44 <flwang> colin-: i can't remember, sorry
21:44 <strigazi> yeap, you can reply, I can follow it up too
21:44 <flwang1> for the etcd discovery issue, at least we can add a retry for the function
21:45 <imdigitaljim> also the calico_tag won't work entirely
21:45 <imdigitaljim> the yaml format changed
21:45 <imdigitaljim> specifically plugins
21:45 <imdigitaljim> minor changes though
21:45 <imdigitaljim> flwang: ^
21:46 <imdigitaljim> flwang: we've also gotten sonobuoy set up as well
21:48 <strigazi> imdigitaljim: flwang: it always takes one hour to run?
21:48 <imdigitaljim> yeah
21:48 <imdigitaljim> it's like 67 minutes for us
21:49 <imdigitaljim> although sonobuoy makes some more assumptions on a few things, but overall it's pretty good
21:49 <strigazi> what assumptions? example?
21:51 <imdigitaljim> 1 sec
21:54 *** itlinux has quit IRC
21:55 <flwang> sorry, i was in a meeting and am going into another one
21:55 <flwang> strigazi: yes, 1 hour
21:55 <strigazi> flwang: enjoy :)
21:55 <flwang> but i'm going to dig in to see if we can have a smoke test set
21:56 <imdigitaljim> i was gonna try to link the code line but i can't find it right this second
21:56 <strigazi> imdigitaljim: are you still there?
21:56 <flwang> imdigitaljim: yep, that's a good point, i will check if we should upgrade to 3.x
21:57 <imdigitaljim> but basically it's assuming a master node is labeled in a specific way
21:57 <imdigitaljim> and it's not even a good way
21:57 <imdigitaljim> i think sonobuoy pulls from the k8s e2e anyway, so it might actually just be a bad k8s test
21:57 <imdigitaljim> so it skips like 100s of tests
21:57 <imdigitaljim> based solely on that
21:58 <strigazi> imdigitaljim: do you have the name of the test? I didn't see anything in the logs about the label
21:58 <imdigitaljim> yeah, i'll dig it up again
21:58 <strigazi> maybe I missed it
21:58 <flwang> imdigitaljim: i'm interested too
21:59 <strigazi> thanks, when I run it again I'll look closer
21:59 <imdigitaljim> i could have totally missed something too, so it would be good to find out
22:00 <strigazi> let's end the meeting then
22:00 <strigazi> see you next week everyone
22:00 <strigazi> #endmeeting
22:00 <openstack> Meeting ended Tue Aug 14 22:00:47 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
22:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-08-14-20.59.html
22:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-08-14-20.59.txt
22:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-08-14-20.59.log.html
22:01 *** janki has quit IRC
22:01 <imdigitaljim> https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/metrics/metrics_grabber.go#L79
22:01 <imdigitaljim> strigazi: flwang: here, this was part of it
22:02 <imdigitaljim> https://github.com/kubernetes/kubernetes/blob/master/pkg/util/system/system_utils.go#L29
22:02 <imdigitaljim> that's the assumption
22:04 <imdigitaljim> hwev5mpebvv4-master-0 is the end of the master node name
22:04 <imdigitaljim> not ending in master
22:04 *** salmankhan has quit IRC
22:04 <imdigitaljim> https://golang.org/pkg/strings/#HasSuffix
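The assumption in one line: the linked framework code treats a node as a master only if its name has the right suffix, which magnum's "<stack>-master-0" style names fail. The upstream check is Go; this Python rendering is illustrative, not the actual test code.

    def is_master_node(node_name: str) -> bool:
        # Illustrative equivalent of the HasSuffix check linked above.
        return node_name.endswith("master")

    print(is_master_node("hwev5mpebvv4-master-0"))  # False -> tests skipped
    print(is_master_node("mycluster-master"))       # True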
22:28 *** canori02 has quit IRC
22:51 <flwang> imdigitaljim: thanks, that's helpful
22:53 *** hongbin has quit IRC
