Tuesday, 2018-07-10

*** jmlowe has joined #openstack-containers00:13
*** markguz has joined #openstack-containers00:15
*** yamamoto has joined #openstack-containers00:34
*** spiette has quit IRC00:36
*** markguz_ has joined #openstack-containers00:36
*** spiette has joined #openstack-containers00:36
*** yamamoto has quit IRC00:39
*** markguz has quit IRC00:40
*** markguz_ has quit IRC00:40
*** chhagarw has quit IRC00:56
*** ricolin has joined #openstack-containers01:00
*** hongbin has joined #openstack-containers01:08
*** masuberu has quit IRC01:09
*** masber has joined #openstack-containers01:29
*** yolanda__ has joined #openstack-containers01:33
*** yolanda_ has quit IRC01:36
*** chuck_ is now known as zul01:54
*** masuberu has joined #openstack-containers02:12
*** jmlowe_ has joined #openstack-containers02:13
*** spotz_ has joined #openstack-containers02:14
*** ricolin_ has joined #openstack-containers02:15
*** jento_ has joined #openstack-containers02:17
*** masber has quit IRC02:19
*** sgordon has quit IRC02:19
*** brtknr has quit IRC02:19
*** ricolin has quit IRC02:19
*** spotz has quit IRC02:19
*** jmlowe has quit IRC02:19
*** jento has quit IRC02:19
*** sgordon` has joined #openstack-containers02:19
*** sgordon` is now known as sgordon02:19
*** brtknr has joined #openstack-containers02:21
*** yolanda_ has joined #openstack-containers02:28
*** yolanda__ has quit IRC02:31
*** ramishra has joined #openstack-containers02:33
*** vijaykc4 has joined #openstack-containers03:10
*** ricolin_ has quit IRC03:17
*** ricolin has joined #openstack-containers03:17
*** hongbin has quit IRC03:39
*** masuberu has quit IRC03:43
*** vijaykc4 has quit IRC03:47
*** udesale has joined #openstack-containers03:48
*** lpetrut has joined #openstack-containers03:48
*** Bhujay has joined #openstack-containers03:49
*** yamamoto has joined #openstack-containers03:52
*** vijaykc4 has joined #openstack-containers03:53
*** ianychoi_ has quit IRC03:54
*** ianychoi has joined #openstack-containers04:08
*** lpetrut has quit IRC04:08
*** lpetrut has joined #openstack-containers04:09
*** mdnadeem has joined #openstack-containers04:11
*** hongbin has joined #openstack-containers04:12
*** hongbin has quit IRC04:20
*** ykarel|away has joined #openstack-containers04:24
*** jappleii__ has quit IRC04:25
*** threestrands has joined #openstack-containers04:25
*** janki has joined #openstack-containers04:35
*** lpetrut has quit IRC04:35
*** vijaykc4 has quit IRC04:41
*** yamamoto_ has joined #openstack-containers04:42
*** yamamoto has quit IRC04:45
*** yolanda__ has joined #openstack-containers04:46
*** yolanda_ has quit IRC04:49
*** pcichy has joined #openstack-containers05:15
*** yamamoto_ has quit IRC05:16
*** lpetrut has joined #openstack-containers05:18
*** pcichy has quit IRC05:20
*** ykarel|away is now known as ykarel05:21
*** lpetrut has quit IRC05:33
*** yamamoto has joined #openstack-containers05:35
*** armaan has joined #openstack-containers05:45
*** jento_ has quit IRC05:46
*** jento has joined #openstack-containers05:47
*** vijaykc4 has joined #openstack-containers05:58
*** mjura has joined #openstack-containers05:59
*** vijaykc4 has quit IRC06:01
*** sidx64 has joined #openstack-containers06:08
*** ispp has joined #openstack-containers06:29
*** armaan has quit IRC06:30
*** threestrands has quit IRC06:30
*** armaan has joined #openstack-containers06:31
*** armaan has quit IRC06:35
*** mdnadeem has quit IRC06:35
*** yolanda_ has joined #openstack-containers06:36
*** yolanda__ has quit IRC06:39
*** mdnadeem has joined #openstack-containers06:48
*** lpetrut has joined #openstack-containers06:53
*** mago_ has joined #openstack-containers06:55
*** armaan has joined #openstack-containers07:00
*** mgoddard has joined #openstack-containers07:07
*** sidx64 has quit IRC07:09
*** sidx64 has joined #openstack-containers07:11
<openstackgerrit> Andrei Ozerov proposed openstack/magnum master: Provide a region to the K8S Fedora Atomic config  https://review.openstack.org/578356  07:19
*** peereb has joined #openstack-containers07:19
*** vijaykc4 has joined #openstack-containers07:31
*** ykarel is now known as ykarel|lunch07:36
*** mgoddard has quit IRC07:41
*** vijaykc4 has quit IRC07:42
*** serlex has joined #openstack-containers07:43
*** ispp has quit IRC07:43
*** rcernin has quit IRC07:47
*** ktibi has joined #openstack-containers07:58
*** yolanda__ has joined #openstack-containers07:59
<openstackgerrit> Tuan Do Anh proposed openstack/magnum master: Add release notes link in README  https://review.openstack.org/581242  07:59
*** yolanda__ is now known as yolanda07:59
*** sidx64 has quit IRC08:02
*** yolanda_ has quit IRC08:02
*** mgoddard has joined #openstack-containers08:06
*** ykarel|lunch is now known as ykarel08:17
*** sidx64 has joined #openstack-containers08:26
*** vijaykc4 has joined #openstack-containers08:37
*** sfilatov has joined #openstack-containers08:47
<sfilatov> Hi! I've enabled the cert_manager_api label for my cluster and got "NoneType has no attribute 'is_admin'".  08:51
<sfilatov> This is where the error occurs: https://github.com/openstack/magnum/blob/master/magnum/db/sqlalchemy/api.py#L126  08:52
<sfilatov> and this code does not pass any context: https://github.com/openstack/magnum/blob/master/magnum/drivers/heat/k8s_fedora_template_def.py#L120  08:52
<sfilatov> while the get_cluster_ca_certificate function accepts a context  08:53
<sfilatov> So is it a bug, or am I not getting something?  08:53
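For context: the fix being described amounts to threading the request context from the Heat template definition down to the cert manager call, so the DB layer is not handed a None context. A minimal sketch of that pattern, assuming get_cluster_ca_certificate accepts an optional context argument as noted above; the helper name is illustrative only and not actual Magnum code:

    from magnum.conductor.handlers.common import cert_manager

    def _get_ca_cert_with_context(context, cluster):
        # Hypothetical helper: pass the request context through instead of
        # letting it default to None; with no context the sqlalchemy layer
        # dereferences context.is_admin on NoneType, which is the error above.
        return cert_manager.get_cluster_ca_certificate(cluster, context=context)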
*** sidx64 has quit IRC08:55
*** mjura_ has joined #openstack-containers08:55
*** sidx64 has joined #openstack-containers08:56
*** mjura has quit IRC08:58
*** mvpnitesh has joined #openstack-containers09:02
*** mago_ has quit IRC09:06
*** flwang1 has joined #openstack-containers09:07
<flwang1> strigazi: meeting today?  09:07
<strigazi> flwang1: yes  09:07
<flwang1> strigazi: ok, cool  09:07
*** sidx64 has quit IRC09:13
*** vijaykc4 has quit IRC09:13
*** sidx64 has joined #openstack-containers09:19
<openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Support disabling floating IPs in swarm mode  https://review.openstack.org/571200  09:27
<mvpnitesh> strigazi: Want to know the status of https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor and https://blueprints.launchpad.net/magnum/+spec/nodegroups  09:27
<mvpnitesh> strigazi: Hi, want to know whether someone is working on them or when they will be implemented  09:28
*** sidx64 has quit IRC09:28
*** itlinux has joined #openstack-containers09:30
*** mdnadeem has quit IRC09:31
*** mdnadeem has joined #openstack-containers09:37
*** rcernin has joined #openstack-containers09:44
*** rcernin has quit IRC09:47
<flwang1> mvpnitesh: i think you can take it  09:49
<flwang1> mvpnitesh: i don't think there is anybody working on it  09:49
<strigazi> I added it to the agenda: https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-07-10_1700_UTC  09:50
<strigazi> mvpnitesh: we have the meeting in a few minutes. These subjects are best discussed in the meeting.  09:51
<mvpnitesh> strigazi: Thanks, will attend the meeting  09:52
*** sidx64 has joined #openstack-containers09:54
*** itlinux has quit IRC09:56
*** mago_ has joined #openstack-containers09:58
<openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Clear resources created by k8s before delete cluster  https://review.openstack.org/497144  09:58
*** sidx64 has quit IRC09:59
*** chhagarw has joined #openstack-containers09:59
<strigazi> #startmeeting containers  10:00
<openstack> Meeting started Tue Jul 10 10:00:15 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.  10:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  10:00
*** openstack changes topic to " (Meeting topic: containers)"10:00
<openstack> The meeting name has been set to 'containers'  10:00
<strigazi> #topic Roll Call  10:00
*** openstack changes topic to "Roll Call (Meeting topic: containers)"10:00
<flwang1> o/  10:00
<strigazi> o/  10:00
<strigazi> agenda:  10:01
<strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-07-10_1700_UTC  10:01
<strigazi> #topic Blueprints/Bugs/Ideas  10:01
*** openstack changes topic to "Blueprints/Bugs/Ideas (Meeting topic: containers)"10:01
<strigazi> nodegroups, different flavors and AZs  10:02
<flwang1> strigazi: after you  10:02
<strigazi> At the moment we only support selecting a flavor for the master nodes and a flavor for the worker nodes  10:03
<strigazi> For availability zones, we can select the AZ for all nodes.  10:03
*** itlinux has joined #openstack-containers10:04
<strigazi> There is a need to specify different AZs and flavors  10:04
<strigazi> AZs for availability, flavors for special VMs, for example flavors with GPUs  10:04
<mvpnitesh> strigazi: If we want multiple flavors for cluster creation, can we go ahead and implement this? https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor  10:04
<mvpnitesh> i guess the spec for this BP is not approved, can i modify the same one or raise a new one?  10:05
<brtknr> o/  10:05
<strigazi> Nodegroups were proposed in the past; IMO it is an over-engineered solution and we don't have the manpower for it.  10:05
<strigazi> mvpnitesh: we don't use blueprints any more.  10:05
<mvpnitesh> strigazi: ok  10:06
<strigazi> mvpnitesh: we migrated to storyboard. These BPs were moved there with the same name.  10:06
<strigazi> brtknr: hi  10:07
<strigazi> we can consolidate these three blueprints into one that allows clusters with different types of nodes  10:07
<brtknr> strigazi: hi, thanks for reviewing the floating ip patch, I'm confused about the time on the agenda... it says 1700.  10:08
<strigazi> each node can be different from the others in flavor and AZ  10:08
<strigazi> brtknr: is it?  10:08
<strigazi> brtknr: we alternate and I copied the wrong line  10:08
*** sidx64 has joined #openstack-containers10:08
<brtknr> ah, so one week at 1000 and another week at 1700? i like that solution  10:09
<strigazi> brtknr: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131678.html  10:10
<strigazi> So, for AZs and flavors: no one is working on it.  10:11
<strigazi> We need to design it and target it for S  10:11
<strigazi> mvpnitesh: ^^  10:12
<mvpnitesh> strigazi: We want to have multiple flavors for a single cluster. I'll look into the storyboard, come up with the design, and target it for S  10:12
<strigazi> We can discuss it next week again and come prepared.  10:13
<strigazi> brtknr: flwang1: are you interested in this ^^  10:14
<flwang1> strigazi: not really ;)  10:14
<brtknr> multiple flavors and multiple OSes too, if possible  10:14
<strigazi> multiple OSes?  10:14
<strigazi> for the special images you have for GPUs?  10:15
*** yamamoto has quit IRC10:15
*** Bhujay has quit IRC10:15
<brtknr> strigazi: yes, i looked into fedora support for GPUs, it looks a bit hacky  10:15
<brtknr> considering nvidia does not officially support it  10:15
*** Bhujay has joined #openstack-containers10:15
<brtknr> whereas centos is supported  10:16
<brtknr> nvidia does not officially support fedora gpu drivers  10:16
<strigazi> Let's see, I think we can even use centos-atomic without any changes  10:16
<strigazi> Next subject,  10:17
*** Bhujay has quit IRC10:17
*** Bhujay has joined #openstack-containers10:18
<strigazi> I'm working on changing this method to sync the cluster status: https://github.com/openstack/magnum/blob/master/magnum/drivers/heat/driver.py#L191  10:18
*** Bhujay has quit IRC10:19
<flwang1> strigazi: what's the background?  10:19
<strigazi> Instead of the stack list I mentioned, we can do a get without resolving the outputs of the stack  10:19
<flwang1> ah, i see  10:19
<flwang1> got it  10:19
*** Bhujay has joined #openstack-containers10:19
*** chhagarw has quit IRC10:19
*** yamamoto has joined #openstack-containers10:19
<strigazi> flwang1: with big stacks magnum tries to kill heat.  10:19
<strigazi> flwang1: with the health check you are working on  10:20
<strigazi> we can avoid fetching the worker nodes, like, ever  10:20
<flwang1> i can remember the issue now  10:20
<strigazi> but this is another discussion  10:22
<flwang1> ok  10:23
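For context, the change described here is to stop resolving stack outputs when polling (which forces Heat to walk every nested resource on a large cluster) and instead fetch the single stack with output resolution disabled. A minimal sketch of that call using python-heatclient, assuming an already-constructed client and a heatclient version that exposes resolve_outputs; the function name is illustrative:

    def poll_stack_status(heat_client, stack_id):
        # Fetch only this cluster's stack and skip resolving stack outputs;
        # resolving outputs on a big cluster is what was overloading heat.
        stack = heat_client.stacks.get(stack_id, resolve_outputs=False)
        return stack.stack_status, stack.stack_status_reason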
<strigazi> This week I expect to push this patch, the cloud-provider enable/disable patch, k8s upgrades, and the keypair patch.  10:23
<brtknr> is there a story for this?  10:23
<strigazi> this one: https://storyboard.openstack.org/#!/story/2002648  10:23
<strigazi> I saw that flwang1 and imdigitaljim had questions about rebuilding and fixing clusters  10:24
<strigazi> The above change is also required for supporting users.  10:24
<brtknr> Ah, I had the same issue with clusters created using just the heat template definitions, which we are using to create clusters with complex configurations  10:25
<strigazi> At the moment, when a user has a cluster with a few nodes (not one node)  10:25
<strigazi> and a node is in bad shape, we delete this node with heat  10:26
<strigazi> Actually we tell them how to do it:  10:26
<strigazi> openstack stack update <stack_id> --existing -P minions_to_remove=<comma separated list with resource ids or private ips> -P number_of_minions=<integer>  10:27
<strigazi> the resource id can be found either from the name of the VMs or by doing: openstack stack resource list -n 2 <stack_id>  10:28
<strigazi> using the resource id is more helpful since you can even delete nodes that didn't get an IP, and the command we cook for users is smaller :)  10:29
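As a concrete illustration of the same operation from Python, here is a sketch using python-heatclient's PATCH-style ("existing") update; the session, the resource name 'kube-minion-3' and the counts are made-up example values, not taken from the discussion:

    from heatclient import client as heat_client

    def remove_bad_minion(session, stack_id):
        # session: an authenticated keystoneauth1 session (assumed to exist)
        heat = heat_client.Client('1', session=session)
        heat.stacks.update(
            stack_id,
            existing=True,  # PATCH update: keep the current template and any unspecified parameters
            parameters={
                'minions_to_remove': 'kube-minion-3',  # resource id(s) or private ip(s); example value
                'number_of_minions': 2,                # new desired worker count; example value
            })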
<flwang1> strigazi: btw, is Ricardo still working on auto healing?  10:29
<strigazi> flwang1: he said he will  10:29
<strigazi> flwang1: so auto healing will do what I described automatically  10:30
<flwang1> strigazi: ok, cool  10:30
<strigazi> even so, the change with the keypair is useful so that admins can do this operation  10:30
<strigazi> or other users in the project  10:31
<strigazi> makes sense?  10:31
<flwang1> strigazi: yes for me  10:32
<strigazi> And for supporting users and the health status  10:32
<strigazi> At the moment  10:32
<strigazi> users cannot retrieve certs if the cluster is in a failed state  10:32
<strigazi> We must change this so that we also know what k8s or swarm thinks about the cluster  10:33
<flwang1> yep, we need the health status to help magnum understand the cluster status  10:33
<strigazi> Instead of checking the status of the cluster, magnum can check if the CA is created  10:34
<strigazi> If the CA is created, users should be able to retrieve the certs  10:34
<strigazi> makes sense?  10:34
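The rule being floated, roughly: instead of refusing cert retrieval whenever the cluster status is a *_FAILED state, check whether the cluster's CA was actually created. A minimal sketch of that check; ca_cert_ref is a field on Magnum's Cluster object, but the helper itself is hypothetical and not the project's actual implementation:

    def can_retrieve_certs(cluster):
        # Allow cert retrieval whenever the CA exists, even if the stack ended
        # up in CREATE_FAILED because one worker out of many never came up.
        return cluster.ca_cert_ref is not None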
<flwang1> we may need some discussion about this  10:35
<strigazi> What are your doubts?  10:35
<strigazi> The solution we need is:  10:35
<strigazi> A user created a cluster with 50 nodes  10:36
<strigazi> the CA is created, the master is up and so are 49 workers  10:36
<strigazi> 1 VM failed to boot, to get connectivity, or to report to heat.  10:36
<flwang1> strigazi: my concern is why the user has to care about a cluster failure  10:37
<strigazi> cluster status goes to CREATE_FAILED  10:37
<flwang1> can't they just create a new one?  10:37
<strigazi> what if it is 100 nodes?  10:37
<flwang1> no matter how many nodes, the time of creation should be the 'same'  10:38
<strigazi> does it sound reasonable for the rest of the openstack services to take the load again?  10:38
<flwang1> so the effort to create a new one is the 'same'  10:38
<strigazi> flwang1: in practice it will not be the same  10:38
<flwang1> i can see your point  10:38
<flwang1> i'm happy to have it fixed, but personally i'd like to see a 'fix' API  10:39
<strigazi> i was dealing with a 500 node cluster last week and I still feel the pain  10:39
<flwang1> instead of introducing too much manual effort for the user  10:39
<flwang1> we can continue this offline  10:40
<strigazi> when automation fails you need to have manual access  10:40
<strigazi> not if, when  10:40
<strigazi> ok  10:40
<strigazi> that's it from me  10:40
<flwang1> you finished?  10:41
<strigazi> yes, go ahead  10:41
<flwang1> i have a long list  10:41
<flwang1> 1. i'm still working on the multi-region issue  10:41
<flwang1> and the root cause is in heat  10:42
<flwang1> i have proposed several patches in heat, heat-agents and os-collect-config  10:42
<strigazi> I saw only one for heat  10:42
<strigazi> pointers?  10:42
<flwang1> https://review.openstack.org/580470  10:42
<flwang1> https://review.openstack.org/580229  10:43
<flwang1> these two are for heat  10:43
<flwang1> one has been merged into master  10:43
<flwang1> i'm cherry-picking it to queens  10:43
<flwang1> for occ: https://review.openstack.org/580554  10:43
<strigazi> these three?  10:44
<flwang1> for heat-agents, https://review.openstack.org/580984 is being backported to queens  10:44
<flwang1> we don't have to care about the last one  10:44
<strigazi> yeap  10:44
<flwang1> but we do need the fixes for heat and occ  10:44
*** yamamoto has quit IRC10:45
<flwang1> 2. the heat-container-agent image can't be built  10:45
<flwang1> it fails to find python-docker-py, not sure if there is anything i missed  10:45
*** vijaykc4 has joined #openstack-containers10:46
<flwang1> 3. etcd race condition issue: https://review.openstack.org/579484  10:46
<strigazi> flwang1: I'll have a look at 2  10:46
<strigazi> 3 is ok now  10:47
<flwang1> strigazi: thanks  10:47
<flwang1> strigazi: yep, can you bless #3? ;)  10:47
<openstackgerrit> Merged openstack/magnum master: Pass in `region_name` to get correct heat endpoint  https://review.openstack.org/579043  10:47
<openstackgerrit> Merged openstack/magnum master: Add release notes link in README  https://review.openstack.org/581242  10:47
<strigazi> 3: Patch in Merge Conflict  10:47
<flwang1> ah, yep.  10:48
<strigazi> flwang1: you can trim down the commit message when you rebase  10:48
<flwang1> as the rename scripts patch has been reverted, can we get it in again? i have fixed it: https://review.openstack.org/581099  10:48
<strigazi> flwang1: yes  10:48
<flwang1> strigazi: thanks  10:48
<strigazi> flwang1: I was checking the cover job, not sure why it sometimes fails  10:49
<flwang1> 4. Clear resources created by k8s before delete cluster: https://review.openstack.org/497144  10:49
<flwang1> the method used in this patch is not good IMHO  10:49
*** livelace has joined #openstack-containers10:49
<flwang1> technically, there could be many clusters in the same subnet  10:50
<strigazi> flwang1: what do you propose?  10:50
<flwang1> we're working on a fix in CPO to add the cluster id into the LB's description  10:50
<flwang1> that's the only safe way IMHO  10:51
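To make the point concrete, a sketch of the kind of cleanup this would enable, filtering Octavia load balancers by a cluster id embedded in their description via openstacksdk; the function, the cloud name, and the description convention are assumptions for illustration, not the actual CPO or Magnum code:

    import openstack

    def delete_cluster_loadbalancers(cloud_name, cluster_uuid):
        conn = openstack.connect(cloud=cloud_name)
        for lb in conn.load_balancer.load_balancers():
            # Only touch LBs whose description carries this cluster's uuid,
            # so other clusters sharing the subnet are left alone.
            if cluster_uuid in (lb.description or ''):
                # Listeners/pools may need to be removed first, or a cascade
                # delete used where the deployment supports it.
                conn.load_balancer.delete_load_balancer(lb)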
<sfilatov> I might be missing something, but why don't we clear load balancers in a separate software deployment?  10:51
<sfilatov> we won't need to connect to the k8s API then  10:51
<flwang1> sfilatov: what do you mean?  10:51
<flwang1> the LB is created by k8s  10:51
<flwang1> we're not talking about the LB of the master  10:52
<sfilatov> yes  10:52
<sfilatov> i'm talking about k8s too  10:52
<sfilatov> we could have done kubectl delete on all of them  10:52
<sfilatov> inside the cluster  10:52
<flwang1> sfilatov: can you just propose a patch? i'm happy to review and test  10:53
<sfilatov> yes, I'm working on it  10:53
<sfilatov> but i kind of have a patch similar to the one proposed  10:53
<strigazi> so to delete a cluster we will do a stack update first?  10:53
<sfilatov> and we have a lot of issues with it  10:53
<sfilatov> no  10:54
<sfilatov> we can have a SoftwareDeployment for the DELETE action  10:54
<strigazi> +1  10:54
<sfilatov> we are working on this issue, I can provide a patch by the end of this week or next week, I guess  10:55
<flwang1> sfilatov: nice  10:55
<flwang1> strigazi: that's me  10:55
<sfilatov> can you link a bug or a blueprint for this issue?  10:55
<flwang1> i'm still keen to understand the auto upgrade and auto healing status  10:56
<flwang1> sfilatov: wait a sec  10:56
<strigazi> brtknr: sfilatov: something to add?  10:56
*** udesale has quit IRC10:56
<flwang1> sfilatov: story/1712062  10:56
<sfilatov> thx  10:57
<flwang1> np  10:57
<DimGR> hi  10:57
*** sfilatov has quit IRC10:58
*** sfilatov has joined #openstack-containers10:58
*** sidx64 has quit IRC10:58
<strigazi> Anything else for the meeting?  10:58
*** sfilatov has quit IRC10:59
<flwang1> strigazi: nope, i'm good  10:59
<strigazi> ok then, see you this Thursday or in 1 week minus 1 hour  10:59
<strigazi> #endmeeting  10:59
*** openstack changes topic to "OpenStack Containers Team"11:00
<openstack> Meeting ended Tue Jul 10 10:59:59 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  11:00
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-07-10-10.00.html  11:00
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-07-10-10.00.txt  11:00
<openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-07-10-10.00.log.html  11:00
*** sfilatov has joined #openstack-containers11:00
*** sidx64 has joined #openstack-containers11:01
*** sfilatov has quit IRC11:04
*** sidx64 has quit IRC11:06
*** sidx64 has joined #openstack-containers11:07
*** livelace has quit IRC11:13
*** sfilatov has joined #openstack-containers11:16
<brtknr> sorry had to leave halfway through the meeting  11:18
*** sfilatov_ has joined #openstack-containers11:19
*** itlinux has quit IRC11:21
*** sidx64 has quit IRC11:21
*** sfilatov has quit IRC11:22
*** sidx64 has joined #openstack-containers11:31
*** yamamoto has joined #openstack-containers11:33
*** sfilatov has joined #openstack-containers11:36
*** sidx64 has quit IRC11:36
*** mvpnitesh has quit IRC11:39
*** mvpnitesh has joined #openstack-containers11:39
*** sfilatov_ has quit IRC11:39
*** mvpnitesh has quit IRC11:41
*** ispp has joined #openstack-containers11:48
*** ricolin has quit IRC11:50
*** mgoddard has quit IRC11:54
*** armaan has quit IRC12:00
*** armaan has joined #openstack-containers12:00
*** sidx64 has joined #openstack-containers12:14
*** yolanda has quit IRC12:21
*** janki has quit IRC12:24
*** sidx64 has quit IRC12:24
*** yamamoto has quit IRC12:26
*** sidx64 has joined #openstack-containers12:26
*** yamamoto has joined #openstack-containers12:26
*** mvpnitesh has joined #openstack-containers12:33
*** sfilatov has quit IRC12:35
*** chhagarw has joined #openstack-containers12:36
*** yolanda has joined #openstack-containers12:42
*** sidx64 has quit IRC12:45
*** mdnadeem has quit IRC12:47
*** sidx64 has joined #openstack-containers12:54
*** yolanda_ has joined #openstack-containers13:00
*** mvpnitesh has quit IRC13:00
*** yolanda has quit IRC13:02
*** ykarel is now known as ykarel|afk13:03
*** mdnadeem has joined #openstack-containers13:04
*** mgoddard has joined #openstack-containers13:05
*** yolanda_ has quit IRC13:05
*** sidx64 has quit IRC13:06
*** sidx64 has joined #openstack-containers13:11
*** ispp has quit IRC13:11
*** yolanda_ has joined #openstack-containers13:18
*** janki has joined #openstack-containers13:18
*** ispp has joined #openstack-containers13:23
*** vijaykc4 has quit IRC13:33
openstackgerritMerged openstack/magnum master: Support disabling floating IPs in swarm mode  https://review.openstack.org/57120013:40
*** hongbin has joined #openstack-containers13:40
*** itlinux has joined #openstack-containers13:44
*** lpetrut_ has joined #openstack-containers13:47
openstackgerritMerged openstack/magnum master: Rename scripts  https://review.openstack.org/58109913:48
*** lpetrut has quit IRC13:50
*** ykarel|afk is now known as ykarel13:51
*** ricolin has joined #openstack-containers14:02
*** sidx64 has quit IRC14:02
*** yolanda_ has quit IRC14:03
*** mjura_ has quit IRC14:03
*** itlinux has quit IRC14:04
*** sidx64 has joined #openstack-containers14:06
*** sidx64 has quit IRC14:07
*** sidx64 has joined #openstack-containers14:09
*** sidx64 has quit IRC14:10
*** ispp has quit IRC14:11
*** sidx64 has joined #openstack-containers14:13
*** mdnadeem has quit IRC14:15
*** ispp has joined #openstack-containers14:17
*** yolanda_ has joined #openstack-containers14:18
*** markguz has joined #openstack-containers14:20
*** Bhujay has quit IRC14:26
*** spotz_ is now known as spotz14:31
*** ykarel is now known as ykarel|away14:44
*** peereb has quit IRC14:45
*** ispp has quit IRC14:45
*** ispp has joined #openstack-containers14:54
*** udesale has joined #openstack-containers15:01
*** lpetrut_ has quit IRC15:05
*** ykarel|away has quit IRC15:07
*** armaan has quit IRC15:10
*** armaan has joined #openstack-containers15:10
*** armaan has quit IRC15:11
*** armaan has joined #openstack-containers15:11
*** armaan has quit IRC15:11
*** armaan has joined #openstack-containers15:12
*** armaan has quit IRC15:16
*** yamamoto has quit IRC15:29
*** Bhujay has joined #openstack-containers15:30
*** yamamoto has joined #openstack-containers15:31
*** yamamoto has quit IRC15:34
*** lpetrut_ has joined #openstack-containers15:36
*** mago_ has quit IRC15:36
*** armaan has joined #openstack-containers15:46
*** jmlowe_ has quit IRC15:50
*** ktibi has quit IRC16:01
*** sidx64 has quit IRC16:06
*** jmlowe has joined #openstack-containers16:08
*** jmlowe has quit IRC16:09
*** janki has quit IRC16:11
*** ispp has quit IRC16:13
*** udesale has quit IRC16:15
*** lpetrut_ has quit IRC16:19
*** yamamoto has joined #openstack-containers16:34
*** Bhujay has quit IRC16:38
*** yamamoto has quit IRC16:40
*** armaan has quit IRC16:42
*** armaan has joined #openstack-containers16:43
*** ramishra has quit IRC16:49
*** mgoddard has quit IRC17:06
*** jmlowe has joined #openstack-containers17:09
*** ykarel|away has joined #openstack-containers17:11
<imdigitaljim> all: hey, just wanted to check in. I hadn't spoken much in the last few days; we had a US holiday last week, so it was a slow week, and I've been working on an internal CI for kubernetes components using any version rather than just what's available in the atomic-system-containers rawhide. We're set up to use the latest (1.11). I'll be returning to upstream stuff soon :) still watching things though  17:28
*** livelace has joined #openstack-containers17:32
*** ykarel|away has quit IRC17:35
*** yamamoto has joined #openstack-containers17:37
<imdigitaljim> mvpnitesh: we'll be working on node pools as well soon; strigazi: updating the heat template versions (up to pike) will help a lot in solving this problem without over-engineering  17:38
<imdigitaljim> strigazi: we'll also be trying to use centos atomic at some point too, and i believe it should be interchangeable  17:39
*** yamamoto has quit IRC17:41
*** zul has quit IRC18:03
*** armaan has quit IRC18:03
*** zul has joined #openstack-containers18:03
*** armaan has joined #openstack-containers18:03
*** armaan has quit IRC18:03
*** armaan has joined #openstack-containers18:04
*** mgoddard has joined #openstack-containers18:10
*** markguz_ has joined #openstack-containers18:31
*** markguz has quit IRC18:34
*** yamamoto has joined #openstack-containers18:39
*** yamamoto has quit IRC18:44
*** itlinux has joined #openstack-containers18:47
*** markguz_ has quit IRC18:55
*** markguz has joined #openstack-containers18:56
*** mgoddard has quit IRC18:56
*** itlinux has quit IRC19:05
*** livelace has quit IRC19:07
*** chhagarw has quit IRC19:07
*** lpetrut_ has joined #openstack-containers19:16
*** lpetrut_ has quit IRC19:20
*** jmlowe has quit IRC19:27
*** serlex has quit IRC19:27
*** yamamoto has joined #openstack-containers19:32
*** flwang1 has quit IRC19:36
*** yamamoto has quit IRC19:37
*** jmlowe has joined #openstack-containers19:51
*** itlinux has joined #openstack-containers19:53
*** serlex has joined #openstack-containers20:00
*** ricolin has quit IRC20:19
*** zul has quit IRC20:20
*** yamamoto has joined #openstack-containers20:33
*** yamamoto has quit IRC20:39
*** zul has joined #openstack-containers20:43
*** serlex has quit IRC20:49
*** itlinux has quit IRC20:53
*** flwang1 has joined #openstack-containers21:06
*** jmlowe has quit IRC21:10
*** yolanda__ has joined #openstack-containers21:18
*** yolanda_ has quit IRC21:21
*** mago_ has joined #openstack-containers21:34
*** jmlowe has joined #openstack-containers21:36
*** mago_ has quit IRC21:39
*** rcernin has joined #openstack-containers22:22
*** yolanda_ has joined #openstack-containers22:29
*** yolanda__ has quit IRC22:32
*** yamamoto has joined #openstack-containers22:36
*** yamamoto has quit IRC22:42
*** hongbin has quit IRC23:06
*** yamamoto has joined #openstack-containers23:38
*** markguz has quit IRC23:39
*** markguz has joined #openstack-containers23:40
*** yamamoto has quit IRC23:44
*** markguz has quit IRC23:45
