Tuesday, 2018-07-17

06:48 <openstackgerrit> Merged openstack/magnum-ui master: Add Apple OS X ".DS_Store" to ".gitignore" file  https://review.openstack.org/581652
07:21 <openstackgerrit> ziyu proposed openstack/magnum-ui master: Modify the 'tox.ini' file  https://review.openstack.org/583123
08:54 <flwang1> strigazi: do we have a meeting today?
09:05 <strigazi> flwang1: yes
09:13 <flwang1> strigazi: ok, cool
09:58 <openstackgerrit> Ricardo Rocha proposed openstack/python-magnumclient master: [k8s] Include cluster certificates in config  https://review.openstack.org/582955
10:00 <strigazi> #startmeeting containers
10:00 <openstack> Meeting started Tue Jul 17 10:00:40 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:00 *** openstack changes topic to " (Meeting topic: containers)"
10:00 <openstack> The meeting name has been set to 'containers'
10:00 <strigazi> #topic Roll Call
10:00 *** openstack changes topic to "Roll Call (Meeting topic: containers)"
10:00 <strigazi> o/
10:01 <flwang1> o/
10:02 <strigazi> It is the two of us then, let's do it quickly
10:02 <flwang1> ok
10:02 <sfilatov> o/
10:03 <strigazi> hey sfilatov
10:03 <sfilatov> I'm here to discuss deletion :)
10:03 <strigazi> #topic Blueprints/Bugs/Ideas
10:03 *** openstack changes topic to "Blueprints/Bugs/Ideas (Meeting topic: containers)"
10:03 <strigazi> Test v1.11.0 images
10:03 <strigazi> #link https://hub.docker.com/r/openstackmagnum/kubernetes-kubelet/tags/
10:03 <strigazi> conformance tests are passing for me
10:04 <flwang1> nice nice
10:04 <strigazi> Note: you must use cgroupfs as the cgroup_driver
10:04 <flwang1> so are we going to bump the k8s version for rocky to 1.11.0?
10:04 <strigazi> with systemd the node is ready, but it cannot schedule pods
10:04 <brtknr> o/
10:05 <strigazi> flwang1: I think yes, it is better
10:05 <flwang1> strigazi: then we need to document it somewhere
10:05 <strigazi> of course
10:05 <flwang1> strigazi: cool, i can do that
10:05 <strigazi> but we need to test it first :)
10:05 <strigazi> not only me :)
10:05 <flwang1> strigazi: sure, when i say i can do that, i mean i will test it
10:06 <flwang1> and if it works for me, i will propose a patch and add docs
10:06 <strigazi> btw I'm still evaluating the gcr.io hyperkube containers vs a fedora-based one
10:07 <flwang1> any benefit to using hyperkube?
10:07 <strigazi> we don't build kubernetes at all, we just package the hyperkube container
10:07 <strigazi> but
10:08 <strigazi> hyperkube is based on debian, so incompatibilities may occur
10:08 <strigazi> I'll push two patches for the two ways to build
10:09 <flwang1> hmm...
10:09 <flwang1> ok
10:09 <strigazi> building the rpms is trivial, like: git clone kube && bazel build ./build/rpms
10:10 <strigazi> like this: http://paste.openstack.org/show/726097/
10:11 <flwang1> nice
10:12 <strigazi> bazel is a black box for me, but it seems to work pretty well and pretty fast
10:12 <strigazi> we can take this offline
10:12 <strigazi> next:
10:13 <strigazi> this is a trivial change after all: https://review.openstack.org/#/c/582506/
10:13 <strigazi> Resolve stack outputs only on COMPLETE
10:13 <strigazi> but I expect it to help a lot when magnum polls heat
10:14 <strigazi> in devstack you can not see it, but in a big stack it will make a difference
10:14 <strigazi> are you looking? should I move on?
10:15 <sfilatov> + from me
10:15 <flwang1> that looks good to me
10:15 <strigazi> next is the keypair issue and scaling
10:15 <flwang1> though i may need to take a look at the resolve_outputs param
10:16 <strigazi> flwang1: there is a show_params api
10:16 <flwang1> cool
10:16 <strigazi> flwang1: resolve_outputs tells heat not to bother with the outputs of the stack
10:17 <flwang1> and outputs mean more api calls in heat i guess?
10:17 <flwang1> to other services
10:17 <strigazi> flwang1: even during stack creation heat will go through all the servers to get the ips
10:17 <strigazi> flwang1: it means more load on the engine
10:17 <flwang1> right, matching what i thought
10:17 <flwang1> all good
10:17 <strigazi> flwang1: and it means slow api responses
10:18 <strigazi> flwang1: normally I have a 250ms response time
10:18 <strigazi> flwang1: with a 50 node cluster in progress, any api call goes to 15 seconds
10:19 <flwang1> omg
10:19 <strigazi> flwang1: all magnum nodes eventually hit the heat api with the same request and the apis block
10:19 <strigazi> but
10:20 <strigazi> if you create the stack with the heat api and magnum is not hammering it, all good
10:20 <strigazi> without output resolution the stack get call is a simple lookup
10:20 <flwang1> strigazi: thanks for the clarification
10:21 <strigazi> I created a 500 node cluster 2 weeks ago and immediately stopped magnum from hitting heat, everything was smooth
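A rough sketch of the polling pattern discussed above: do the cheap lookup while the stack is in progress, and only ask Heat for outputs once it settles. The `resolve_outputs` kwarg of python-heatclient's `stacks.get` is the one real API assumed here; `poll_stack` and `should_resolve_outputs` are invented names for illustration.

```python
def should_resolve_outputs(stack_status):
    """Only resolve outputs once the stack has reached a *_COMPLETE state."""
    return stack_status.endswith('_COMPLETE')


def poll_stack(client, stack_id):
    # Cheap lookup first: resolve_outputs=False skips the per-server
    # output resolution that loads the heat engine during IN_PROGRESS.
    stack = client.stacks.get(stack_id, resolve_outputs=False)
    if should_resolve_outputs(stack.stack_status):
        # Full fetch only when the outputs are actually usable.
        stack = client.stacks.get(stack_id, resolve_outputs=True)
    return stack
```

With this shape, the frequent status polls stay simple database lookups, and the expensive call happens once per cluster, not once per poll.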
10:21 <strigazi> next, keypair
10:22 <strigazi> a keypair is an OS resource that cannot be shared within a project
10:22 <strigazi> and a cluster owns, let's say, a key
10:23 <strigazi> the admin, or other members, can't do a stack update
10:23 <strigazi> with this patch:
10:23 <strigazi> #link https://review.openstack.org/#/c/582880/
10:23 <strigazi> users can do a stack update freely
10:23 <strigazi> with these params in heat:
10:23 <strigazi> deferred_auth_method = trusts
10:23 <strigazi> reauthentication_auth_method = trusts
10:24 <strigazi> all good so far, but
10:24 <strigazi> if the user is deleted, their trust is deleted, so the stack can not be touched again
10:25 <strigazi> afaik the only solution: pass the key in user_data and not as a keypair in nova
10:25 <strigazi> thoughts?
10:25 <brtknr> How does it allow other users to make changes? it is not immediately obvious to me
10:25 <brtknr> looking at the patch
10:26 <strigazi> brtknr: the keypair is created on cluster creation
10:26 <flwang1> so instead of using the original user's public key, we just generate a common one for everybody?
10:27 <strigazi> brtknr: and authentication is done using the trust, so userB will authenticate behind the scenes with the trust of userA
10:27 <strigazi> flwang1: the thing is that there is no such thing as a common key
10:27 <strigazi> the key is only visible to the creator
10:28 <brtknr> how does userA enable trust for userB? I suppose this has to be set somewhere?
10:28 <flwang1> can't we just generate one and 'share' it with all users in the tenant?
10:28 <strigazi> flwang1: impossible in nova
10:28 <flwang1> strigazi: ok
10:28 <sfilatov> flwang1: can we share nova keys?
10:28 <strigazi> brtknr: heat does this
10:28 <strigazi> sfilatov: no we can't
10:29 <strigazi> sfilatov: https://docs.openstack.org/horizon/pike/user/configure-access-and-security-for-instances.html
10:30 <strigazi> "A key pair belongs to an individual user, not to a project. To share a key pair across multiple users, each user needs to import that key pair."
10:30 <strigazi> that is not expressed correctly
10:30 <strigazi> it means all users must import the same public_key
10:31 <sfilatov> strigazi: yep, so we can't do it natively
10:31 <strigazi> we can simulate the shared key with heat using the trust
10:31 <strigazi> but as I said, if the user is deleted the trust is gone
10:32 <brtknr> hmm
10:32 <brtknr> would be nice to set group privileges
10:33 <brtknr> e.g. any user in the admin group can modify the cluster
10:33 <strigazi> I don't think it is possible
10:33 <brtknr> but this is certainly the next best thing
10:34 <strigazi> brtknr: and it is not only desirable for admins
10:34 <strigazi> brtknr: in private clouds you have shared resources
10:34 <strigazi> in public too, but not as much as in private
10:34 <strigazi> and as I mentioned, passing the key in user_data will work in all cases.
10:35 <strigazi> does this sound bad to you ^^
10:36 <brtknr> how does it limit who is allowed to make changes to the cluster or not?
10:36 <brtknr> or is it not limited at all?
10:36 <strigazi> it does not
10:37 <brtknr> sounds a little worrying lol
10:38 <brtknr> how about a heat parameter that is a list of users that are allowed to make changes
10:38 <brtknr> or we assume that anyone in the project is allowed to make changes
10:39 <strigazi> you know about this right? https://github.com/strigazi.keys
10:39 <strigazi> let's take this offline, I need to explain the problem more I guess
10:39 <strigazi> it is a limitation of nova
10:39 <strigazi> not magnum or heat
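A minimal sketch of the user_data alternative mentioned above: instead of a Nova keypair, render the authorized keys into a cloud-init `#cloud-config` payload, which is not tied to one user's identity. `build_user_data` is a hypothetical helper for illustration, not Magnum code.

```python
def build_user_data(public_keys):
    """Render SSH public keys as cloud-init user_data.

    Unlike a Nova keypair, user_data is just instance data: it survives
    the deletion of the original user and is visible to anyone who can
    touch the stack, so admins and other project members can do updates.
    """
    lines = ['#cloud-config', 'ssh_authorized_keys:']
    lines += ['  - {}'.format(key) for key in public_keys]
    return '\n'.join(lines) + '\n'
```

The trade-off, as raised in the discussion, is that this does not limit who may modify the cluster; it only decouples access from a single user's keypair.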
10:40 <sfilatov> Let's talk about k8s loadbalancer deletions then
10:40 <strigazi> I have one more thing for upgrades, sorry
10:40 <sfilatov> sure
10:41 <strigazi> For the past few weeks, I've been trying to drain nodes before rebuilding them
10:41 <strigazi> The issue is that this api call must be executed before every node rebuild
10:42 <strigazi> so it must be in the heat workflow
10:42 <strigazi> otherwise heat is not managing the status
10:42 <strigazi> of the infrastructure anymore
10:43 <strigazi> I'm trying this pattern so far: http://paste.openstack.org/show/726098/
10:43 <strigazi> with no success so far
10:44 <strigazi> I'm thinking of putting the workflow in the master or in magnum
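Per node, the drain-before-rebuild step being discussed amounts to two CLI calls; the sketch below just assembles them. `drain_then_rebuild_commands` is an invented helper (not the actual Heat/Mistral workflow), and the `kubectl drain` flags are the 1.11-era ones.

```python
def drain_then_rebuild_commands(node_name, server_id, image):
    """Return the per-node CLI steps: evict pods first, then rebuild.

    The drain must run before every single rebuild, which is why it has
    to live inside whatever drives the rollout (heat, the master, or
    magnum itself), not be fired once up front.
    """
    return [
        # Evict pods, tolerating DaemonSet-managed ones and emptyDir data.
        ['kubectl', 'drain', node_name,
         '--ignore-daemonsets', '--delete-local-data'],
        # Replace the node's OS image in place of an in-place upgrade.
        ['openstack', 'server', 'rebuild', '--image', image, server_id],
    ]
```

Rebuild (rather than in-place upgrade) is what makes this work the same way on VMs and on Ironic bare metal.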
10:45 <sfilatov> And btw, is draining the node the only right way to do this? Are there issues behind upgrading in-place?
10:45 <sfilatov> downtime?
10:45 <strigazi> in-place there is no such problem
10:46 <sfilatov> yes, but the workflow you're considering is about draining and rebuilding the nodes
10:46 <strigazi> but it means the OS must support in-place upgrades, and if you have upgraded a few times
10:46 <flwang1> can you remind me of the limitations of in-place upgrades?
10:46 <strigazi> things will go wrong
10:47 <strigazi> 1. GKE and cluster-api are not doing in-place
10:47 <strigazi> 2. upgrading an OS in-place is not an atomic operation
10:47 *** sfilatov has quit IRC
10:48 *** sfilatov has joined #openstack-containers
10:48 <strigazi> rebuild works even in ironic
10:49 <strigazi> the suggested way from the lifecycle sig is replace
10:49 <strigazi> only master nodes in-place
10:49 *** sfilatov_ has joined #openstack-containers
10:50 <sfilatov_> sry, I'm back
10:50 <strigazi> flwang1: sfilatov_: thoughts?
10:50 <sfilatov_> I don't see the history since I reconnected
10:51 <strigazi> < flwang1> can you remind me the limitations of in-place upgrades?
10:51 <sfilatov_> strigazi: could you copy and paste it pls
10:51 <flwang1> strigazi: fair enough
10:51 <strigazi> 1. GKE and cluster-api are not doing in-place
10:51 <strigazi> 2. upgrading an OS in-place is not an atomic operation
10:51 <strigazi> rebuild works even in ironic
10:51 <strigazi> the suggested way from the lifecycle sig is replace
10:51 <strigazi> only master nodes in-place
10:52 *** sfilatov has quit IRC
10:52 <strigazi> with multimaster you can even replace masters one by one with no downtime
10:52 <sfilatov_> strigazi: what do you mean by upgrading the OS
10:53 <strigazi> kernel versionN to kernel versionN+1
10:53 <sfilatov_> ok, got it
10:53 <strigazi> have you ever upgraded docker? it is so nice
10:53 <strigazi> but mostly the kernel
10:54 <sfilatov_> strigazi: that's true
10:54 <sfilatov_> strigazi: are you considering rebuilding masters as well?
10:54 <strigazi> yes, with a param
10:54 <sfilatov_> strigazi: looks like we have more or less the same issues with this
10:55 <sfilatov_> strigazi: I agree then. I looked through the API in the upgrade patch
10:55 <sfilatov_> strigazi: and it seems we need nodegroups implemented
10:55 <strigazi> let's move this to gerrit then
10:55 <sfilatov_> ok
10:55 <strigazi> sfilatov_: about delete?
10:56 <sfilatov_> yes
10:56 <strigazi> what is the issue?
10:56 <strigazi> I mean, I know the issue
10:56 <strigazi> what is the solution(s)?
10:56 <sfilatov_> I have almost prepared a patch with software deployments for deletions
10:56 <strigazi> with an on-delete SD?
10:56 <sfilatov_> yes
10:56 <strigazi> push
10:57 <sfilatov_> I'd like to discuss 2 issues
10:57 <strigazi> shoot :)
10:57 <sfilatov_> We still need to wait for the LB in neutron
10:57 <sfilatov_> since the cloud provider does not support waiting for LB deletion
10:57 <sfilatov_> we can't wait using kubectl
10:58 <strigazi> hmm, that is not nice
10:58 <strigazi> flwang1: maybe kong has some input on this?
10:58 <flwang1> how do you wait for the LB in neutron?
10:58 <strigazi> you ask the api I imagine
10:58 <sfilatov_> you get the LB by name
10:58 <sfilatov_> yes
10:58 <sfilatov_> since you know the LB name = 'a' + the k8s svc id
10:59 <sfilatov_> but it's not really nice
10:59 <flwang1> and poll the neutron api to see if it's still there?
10:59 <sfilatov_> yep
10:59 <flwang1> hmm...
10:59 <sfilatov_> we can fix this via the cloud provider
10:59 <strigazi> 1. must be solved in the cloud-provider
11:00 <strigazi> 2. polling as a workaround
11:00 <sfilatov_> got it
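The polling workaround (option 2) could look roughly like this. `wait_for_lb_deletion` is a hypothetical sketch: the `'a' + svc id` naming comes from the discussion above, while the `list_loadbalancers` call is assumed to be the python-neutronclient LBaaS v2 API.

```python
import time


def lb_name_for_service(service_uid):
    # As noted above, the cloud provider names the LB 'a' + the k8s
    # service UID, so the name can be derived without the k8s API.
    return 'a' + service_uid


def wait_for_lb_deletion(neutron, service_uid, timeout=300, interval=5):
    """Poll the networking API until the service's LB is gone, or time out."""
    name = lb_name_for_service(service_uid)
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = neutron.list_loadbalancers(name=name)
        if not result.get('loadbalancers'):
            return True  # LB no longer exists; safe to continue deletion
        time.sleep(interval)
    return False
```

This is explicitly a stop-gap; the clean fix is for the cloud provider itself to wait for LB deletion.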
11:00 <strigazi> lgty?
11:00 <strigazi> flwang1: ^^
11:00 <flwang1> i'm ok with that
11:00 <sfilatov_> the other issue
11:00 <sfilatov_> what if the user stopped the vms
11:00 <sfilatov_> basically - shutdown
11:00 <sfilatov_> I have faced the issue
11:00 <strigazi> why? I dont get it
11:01 <sfilatov_> and there is nothing I can do about it
11:01 <sfilatov_> vms are shut down
11:01 <sfilatov_> and the k8s api is not available
11:01 <sfilatov_> so when delete is triggered we can't delete the resources
11:02 <flwang1> hmm... i'm wondering if magnum should take care of such a corner case
11:02 <strigazi> there is no solution for this
11:02 <strigazi> only what flwang1 said
11:03 <strigazi> if we do it for the corner case though
11:03 <flwang1> for this case, the user needs to open a support ticket to ops
11:03 <flwang1> and get it removed :D
11:03 <strigazi> or they can remove it manually
11:03 <flwang1> to avoid boring the magnum ops too much
11:03 <sfilatov_> there's a way
11:03 <sfilatov_> to solve all the issues
11:04 <flwang1> i don't even think magnum should just bravely delete a cluster and everything on the cluster
11:04 <sfilatov_> if we add the cluster_id to the lb metadata
11:04 <flwang1> sfilatov_: Lingxian Kong is working on that
11:04 <strigazi> and then?
11:04 <sfilatov_> and delete lbs based on their metadata
11:05 <sfilatov_> in this case we don't need to access the k8s API
11:05 <flwang1> that's the current solution
11:05 <strigazi> is there anything stopping us from doing this now?
11:05 <sfilatov_> we need to patch the cloud provider
11:06 <strigazi> flwang1: Lingxian's patch is not in?
11:06 <flwang1> https://github.com/kubernetes/cloud-provider-openstack/pull/223
11:07 <flwang1> they're just putting the cluster name in the lb description
11:07 <flwang1> so we're ok to go with the current way i think
11:08 <sfilatov_> so there's no need for my patch?
11:08 <flwang1> i guess so?
11:08 <sfilatov_> with software deployment
11:08 <flwang1> if you're happy with this way
11:08 <strigazi> we still need a patch
11:08 <strigazi> in magnum
11:08 <flwang1> i can propose a new patch set on this https://review.openstack.org/#/c/497144/
11:09 <flwang1> to check the cluster name
11:09 <flwang1> then we should be ok
11:09 <strigazi> uuid is better I guess
11:10 <flwang1> strigazi: i think so, there is probably a limitation, i will check with the CPO team
11:10 <strigazi> folks, anything else? we are 10 mins over and I'm 10 mins late for another meeting
11:10 <strigazi> flwang1: it accepts a string
11:10 <flwang1> good for me
11:10 <strigazi> flwang1: it can be anything
11:10 <flwang1> i mean it may be hard for CPO to get the UUID of magnum's cluster
11:11 <strigazi> flwang1: it looks like a generic parameter to me, let's see
11:11 <flwang1> unless we pass it to somewhere so that CPO can easily get it, just my guess
11:11 <flwang1> need to check with the author
11:11 <strigazi> cool
11:11 <strigazi> sfilatov_: anything else?
11:12 <flwang1> strigazi: i think you're good to go ;)
11:12 <strigazi> let's wrap this up then
11:12 <strigazi> thanks flwang1, sfilatov_ and brtknr
11:12 <strigazi> #endmeeting
11:12 *** openstack changes topic to "OpenStack Containers Team"
11:12 <openstack> Meeting ended Tue Jul 17 11:12:55 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
11:12 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-07-17-10.00.html
11:13 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-07-17-10.00.txt
11:13 <openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-07-17-10.00.log.html
11:13 <flwang1> thank you
11:13 <brtknr> thanks guys
17:17 <openstackgerrit> Andrei Ozerov proposed openstack/magnum master: Trustee: provide region_name to auth_url searching  https://review.openstack.org/582947

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!