Thursday, 2018-04-19

*** fragatina has quit IRC00:05
*** fragatina has joined #openstack-containers00:09
*** fragatina has quit IRC00:13
*** Nisha_Agarwal has joined #openstack-containers00:20
*** Nisha_Agarwal has quit IRC00:24
*** jianhua has joined #openstack-containers00:44
*** tobberydberg has quit IRC00:52
*** oikiki has joined #openstack-containers01:14
*** kbyrne has quit IRC01:23
*** kbyrne has joined #openstack-containers01:23
*** fragatina has joined #openstack-containers01:28
*** fragatina has quit IRC01:35
*** dave-mccowan has joined #openstack-containers01:36
*** tobberydberg has joined #openstack-containers01:37
*** AlexeyAbashkin has joined #openstack-containers01:39
*** tobberydberg has quit IRC01:42
*** AlexeyAbashkin has quit IRC01:43
*** tobberydberg has joined #openstack-containers01:44
*** janki has joined #openstack-containers02:07
*** oikiki has quit IRC02:08
openstackgerritFeilong Wang proposed openstack/magnum master: Add calico-node on k8s master node  https://review.openstack.org/54813902:08
*** oikiki has joined #openstack-containers02:08
*** oikiki has quit IRC02:08
*** oikiki has joined #openstack-containers02:09
*** hongbin_ has joined #openstack-containers02:10
*** jchhatbar has joined #openstack-containers02:10
*** janki has quit IRC02:12
*** ramishra has joined #openstack-containers02:17
*** masuberu has joined #openstack-containers02:27
*** masber has quit IRC02:31
*** oikiki has joined #openstack-containers02:48
*** harlowja_ has quit IRC02:53
*** udesale has joined #openstack-containers03:08
openstackgerritFeilong Wang proposed openstack/magnum master: Rename scripts  https://review.openstack.org/56245403:10
openstackgerritFeilong Wang proposed openstack/magnum master: Rename scripts  https://review.openstack.org/56245403:11
*** oikiki has quit IRC03:21
*** harlowja has joined #openstack-containers03:22
*** oikiki has joined #openstack-containers03:22
*** oikiki has quit IRC03:22
*** masuberu has quit IRC03:23
*** masuberu has joined #openstack-containers03:33
*** fragatina has joined #openstack-containers03:35
*** masber has joined #openstack-containers03:35
*** masuberu has quit IRC03:38
*** oikiki has joined #openstack-containers03:42
*** mvpnitesh has joined #openstack-containers04:01
mvpniteshHi all, i'm running a stable devstack magnum setup of the pike release04:08
mvpniteshwe cherry-picked this patch https://review.openstack.org/#/c/475384/ and tried to deploy using the fedora-atomic driver.04:09
mvpniteshWith this, the ironic deploy went through and the nova instance is in the running state. But after that heat kept looping, waiting for something to complete, which never succeeded and it timed out.04:09
mvpniteshi've applied the above patch and am trying to create a baremetal cluster using magnum04:09
mvpniteshi'm getting the following error04:10
mvpnitesh2018-04-16 23:24:15.768 ERROR oslo_messaging.rpc.server [req-beeeef16-bfa7-4304-9b70-2288d3d4397e None None] Exception during message handling: CommandError: Could not fetch contents for file:///opt/stack/magnum/magnum/drivers/common/templates/kubernetes/fragments/enable-kube-controller-manager-scheduler.sh04:10
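The error above means Heat could not resolve a get_file reference to that fragment on the host running magnum-conductor. A quick sanity check, assuming the stock devstack install path taken from the error message:

    # Verify the fragment the Heat template references actually exists on the
    # magnum-conductor host; if it is missing, the template is pointing at a
    # fragment that was never installed with this branch.
    ls -l /opt/stack/magnum/magnum/drivers/common/templates/kubernetes/fragments/enable-kube-controller-manager-scheduler.sh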
*** hongbin_ has quit IRC04:11
*** jchhatba_ has joined #openstack-containers04:18
*** jchhatba_ has quit IRC04:19
*** jchhatba_ has joined #openstack-containers04:19
*** oikiki has quit IRC04:20
*** jchhatbar has quit IRC04:21
*** Nisha_Agarwal has joined #openstack-containers04:21
*** jianhua has quit IRC04:31
*** jianhua has joined #openstack-containers04:37
*** jchhatbar has joined #openstack-containers04:50
*** mvpnitesh has quit IRC04:51
*** harlowja has quit IRC04:52
*** jchhatba_ has quit IRC04:53
*** flwang1 has quit IRC04:55
*** pcichy has joined #openstack-containers05:12
*** mvpnitesh has joined #openstack-containers05:14
*** udesale_ has joined #openstack-containers05:19
*** udesale has quit IRC05:19
*** armaan has joined #openstack-containers05:20
*** Nisha_Agarwal has quit IRC05:23
*** pcichy has quit IRC05:32
*** oikiki has joined #openstack-containers05:34
*** sidx64 has joined #openstack-containers05:43
*** armaan has quit IRC05:51
*** yolanda has joined #openstack-containers05:52
*** mjura has joined #openstack-containers05:57
*** TobbeCN has joined #openstack-containers05:57
*** itlinux has joined #openstack-containers06:02
*** itlinux has quit IRC06:09
*** itlinux has joined #openstack-containers06:12
*** armaan has joined #openstack-containers06:16
*** pcaruana has joined #openstack-containers06:21
*** oikiki has quit IRC06:23
*** ricolin has joined #openstack-containers06:24
*** niteshmvp has joined #openstack-containers06:28
*** mvpnitesh has quit IRC06:28
*** dsariel has joined #openstack-containers06:31
*** itlinux has quit IRC06:32
*** guest__ has left #openstack-containers06:33
*** itlinux has joined #openstack-containers06:37
*** sidx64_ has joined #openstack-containers06:38
*** sidx64 has quit IRC06:40
*** sidx64 has joined #openstack-containers06:43
*** sidx64_ has quit IRC06:44
*** dsariel has quit IRC06:49
*** dims has quit IRC06:54
*** oikiki has joined #openstack-containers06:56
*** dims has joined #openstack-containers06:56
openstackgerritOpenStack Proposal Bot proposed openstack/magnum-ui master: Imported Translations from Zanata  https://review.openstack.org/56249506:57
openstackgerritOpenStack Proposal Bot proposed openstack/magnum master: Imported Translations from Zanata  https://review.openstack.org/54875307:00
*** dims has quit IRC07:01
*** itlinux has quit IRC07:02
*** dims has joined #openstack-containers07:02
*** gsimondon has joined #openstack-containers07:05
*** mgoddard has joined #openstack-containers07:07
*** dsariel has joined #openstack-containers07:11
*** sidx64 has quit IRC07:14
*** sidx64_ has joined #openstack-containers07:14
*** gsimondon has quit IRC07:19
*** gsimondon has joined #openstack-containers07:23
*** AlexeyAbashkin has joined #openstack-containers07:26
*** rcernin has quit IRC07:33
*** mago has joined #openstack-containers07:33
*** dsariel has quit IRC07:35
*** Nisha_Agarwal has joined #openstack-containers07:41
*** pcaruana has quit IRC07:45
*** pcaruana has joined #openstack-containers07:46
*** flwang1 has joined #openstack-containers07:47
*** flwang1 has quit IRC07:49
*** trinaths has joined #openstack-containers07:50
*** gsimondo1 has joined #openstack-containers07:53
*** gsimondon has quit IRC07:56
*** trinaths has quit IRC07:58
*** niteshmvp has quit IRC08:02
*** mvpnitesh has joined #openstack-containers08:26
*** itlinux has joined #openstack-containers08:28
*** sidx64_ has quit IRC08:31
*** sidx64 has joined #openstack-containers08:33
*** jianhua has quit IRC08:45
*** jianhua has joined #openstack-containers08:47
*** serlex has joined #openstack-containers08:51
*** oikiki has quit IRC08:52
*** trinaths has joined #openstack-containers09:00
*** itlinux has quit IRC09:09
*** ricolin has quit IRC09:25
*** Nisha_ has joined #openstack-containers09:32
*** Nisha_Agarwal has quit IRC09:32
*** trinaths has quit IRC09:42
*** armaan has quit IRC09:55
*** armaan has joined #openstack-containers09:55
*** ricolin has joined #openstack-containers09:56
*** ricolin has quit IRC09:59
*** ricolin has joined #openstack-containers10:17
*** flwang1 has joined #openstack-containers10:20
flwang1strigazi: around?10:25
*** jianhua_ has joined #openstack-containers10:29
*** jianhua has quit IRC10:32
*** ricolin has quit IRC10:34
*** ricolin has joined #openstack-containers10:48
*** zhubingbing has joined #openstack-containers10:54
*** jianhua_ has quit IRC10:55
strigaziflwang1: hi10:59
flwang1strigazi: great10:59
*** armaan has quit IRC10:59
*** armaan has joined #openstack-containers10:59
flwang1strigazi: 1. https://review.openstack.org/555223    dns autoscaling is ready to go11:00
flwang12. the calico node on master patch11:00
*** jianhua has joined #openstack-containers11:00
flwang1you're right, we may need the certs on master for kubelet11:00
flwang1could you please approve https://review.openstack.org/555223 ?11:00
strigaziI left a comment ^^11:02
flwang1strigazi: ah, i see. sorry, i misread it11:03
flwang1i will propose a patch soon11:03
flwang1as for the calico one, i tried to generate certs for kubelet on master, but the heapster still failed to get metrics from the master kubelet, any idea?11:04
strigaziwhich --address did you use for kubelet?11:05
*** yasemin has quit IRC11:08
flwang1the local ipv4 address11:08
strigaziwhat is the error in heapster?11:09
openstackgerritFeilong Wang proposed openstack/magnum master: Make DNS pod autoscale  https://review.openstack.org/55522311:10
flwang1http://paste.openstack.org/show/719548/11:10
strigaziinteresting, do we need kube-proxy too?11:12
strigazican you start a container in the default namespace and curl kubelet?11:13
flwang1you mean create a container on a node and try to talk to the master kubelet?11:14
strigaziyes11:14
strigazifrom the container11:14
flwang1ok, will do11:15
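A minimal sketch of the test strigazi suggests, assuming kubectl access and a throwaway curl-capable image; MASTER_IP stands in for the master's local IPv4 address:

    # Run a one-off pod in the default namespace and try to reach the master's
    # kubelet on port 10250. Any HTTP response (even a 401) proves connectivity;
    # a hang or timeout points at the network or security groups instead.
    kubectl run curl-test --rm -it --restart=Never --image=tutum/curl -- \
        curl -k -m 10 https://MASTER_IP:10250/healthz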
*** ricolin has quit IRC11:17
*** Nisha_away has joined #openstack-containers11:23
*** Nisha_ has quit IRC11:27
flwang1same, timeout11:29
flwang1strigazi: ^11:29
strigazibut from the node you obviously can, right?11:30
flwang1you mean access the URL on the master? before using certs, i got Unauthorized, not a timeout11:31
flwang1i haven't tried to use the certs i just created11:31
strigaziI'll try to check locally11:33
flwang1thanks, based on my understanding, i just need to use https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/make-cert-client.sh to create kubelet certs for the master kubelet, then add those certs to the kubelet start parameters and restart kubelet11:36
flwang1anything i missed?11:36
strigazino, make-cert-client is enough11:37
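A rough sketch of what wiring the make-cert-client output into the master's kubelet could look like; the cert paths and the /etc/kubernetes/kubelet env file are assumptions based on the fedora-atomic images, not the actual patch:

    # Assumed locations: make-cert-client.sh has written ca.crt, kubelet.crt and
    # kubelet.key under /etc/kubernetes/certs on the master.
    TLS_ARGS="--client-ca-file=/etc/kubernetes/certs/ca.crt --tls-cert-file=/etc/kubernetes/certs/kubelet.crt --tls-private-key-file=/etc/kubernetes/certs/kubelet.key"
    # Prepend the TLS flags to KUBELET_ARGS in the env file kubelet is started with.
    sed -i "s|^KUBELET_ARGS=\"|KUBELET_ARGS=\"${TLS_ARGS} |" /etc/kubernetes/kubelet
    systemctl restart kubelet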
*** yasemin has joined #openstack-containers11:37
*** armaan has quit IRC11:37
flwang1strigazi: btw, based on your suggestion, i moved the NM related code to configure-kubernetes-minion.sh and it just works11:42
strigazi+111:42
flwang1so now the heapster is the last one we need to deal with11:42
openstackgerritNguyen Hai proposed openstack/magnum master: Follow the new PTI for document build  https://review.openstack.org/55520911:46
openstackgerritNguyen Hai proposed openstack/magnum master: Fix incompatible requirement  https://review.openstack.org/56026911:46
flwang1strigazi: btw, do you think we should reuse the existing make-cert-client.sh or just copy-paste the part we need into configure-kubernetes-master.sh?11:54
strigaziif the existing fragment works, we can use it. If it is not hard we could merge the make-cert fragments into one11:55
*** mvpnitesh has quit IRC11:58
*** armaan has joined #openstack-containers12:00
*** sidx64 has quit IRC12:01
flwang1strigazi: gotcha12:03
*** sidx64 has joined #openstack-containers12:04
*** Nisha_away has quit IRC12:05
*** armaan has quit IRC12:11
*** mvpnitesh has joined #openstack-containers12:11
*** armaan has joined #openstack-containers12:17
flwang1strigazi: btw, i can ping the master node from the container i just created, but can't use curl to access the 10250 port12:17
*** yamamoto_ has quit IRC12:21
*** armaan has quit IRC12:22
*** yamamoto has joined #openstack-containers12:27
strigaziflwang1: it times out?12:29
flwang1strigazi: yes12:32
*** sidx64 has quit IRC12:32
strigaziand from the node?12:32
strigazinode -> master curl kubelet12:32
*** jianhua has quit IRC12:32
flwang1from the node, it works12:32
strigazicalico runs on the master?12:33
flwang1yes12:33
strigazinetwork-manager on the master?12:33
flwang1yes12:33
strigazi:(12:33
*** sidx64 has joined #openstack-containers12:34
flwang1but i don't think it's related to NetworkManager, why do you think it's related?12:34
*** AlexeyAbashkin has quit IRC12:34
strigaziI'm trying to guess what it could be12:34
flwang1right12:34
strigazithe only difference now is kube-proxy, but it doesn't make sense that this is the problem12:35
*** AlexeyAbashkin has joined #openstack-containers12:36
flwang1true12:36
flwang1does heapster depend on kube-proxy to talk to other nodes?12:37
strigaziI don't think so12:38
*** ricolin has joined #openstack-containers12:38
strigaziI have the same issue12:43
strigazilocally12:43
flwang1ok, at least it's repeatable12:44
flwang1i think it's a network issue12:46
flwang1i start a python simple httpserver on 800012:46
flwang1same12:47
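The ad-hoc test flwang1 describes, useful for separating a kubelet problem from a plain network problem (which python version is on the image is an assumption):

    # Serve something trivial on the master, then try to reach it from a node
    # or pod; if this also times out, the issue is network-level, not kubelet.
    python -m SimpleHTTPServer 8000    # python 2
    python3 -m http.server 8000        # python 3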
flwang1ah, the security rules?12:54
*** ricolin has quit IRC12:59
flwang1yeah!13:00
flwang1after adding a security group rule for 10250, it works now13:00
flwang1strigazi: ^13:01
strigaziyes, 10250 was missing13:02
flwang1;)13:02
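The one-off fix described above, as an openstack CLI call; the security group name is a placeholder, and the durable fix would be adding the rule to the heat templates' master security group:

    # Allow kubelet's port in the cluster's master security group (group name
    # is a placeholder); optionally restrict the source with --remote-ip or
    # --remote-group.
    openstack security group rule create --protocol tcp --dst-port 10250 \
        <k8s-master-secgroup>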
*** armaan has joined #openstack-containers13:02
flwang1i think flannel has the same issue, no?13:02
mvpniteshhi strigazi13:02
mvpniteshwe are trying to create a baremetal cluster using magnum on stable/pike.13:04
flwang1strigazi: btw, i also raised my hand for the openstack container whitepaper work, especially for magnum; please let me know if you need any help with that13:05
strigaziflwang1: thanks!13:05
strigaziflwang1: What about flannel? running kubelet in the master?13:06
flwang1(01:02:49) flwang1: i think flannel has the same issue, no?13:06
flwang1yep, i think flannel will have the same issue13:06
flwang1didn't you see it?13:06
strigaziwith the security group?13:06
flwang1yep13:06
flwang1i mean missing 1025013:06
strigaziflwang1: so far we didn't run it on the master, so I don't know, I guess yes13:07
strigazimvpnitesh: with fedora-atomic?13:07
flwang1strigazi: ok13:07
*** yasemin has quit IRC13:08
flwang1strigazi: that makes sense13:08
mvpniteshyeah13:08
flwang1i'm really happy it finally works now13:08
mvpniteshWe had some discussion with Yatin(ykarel), we cherry-picked his patch https://review.openstack.org/#/c/475384/ and tried to deploy using the fedora-atomic driver.13:08
mvpniteshWith this, the ironic deploy went through and the nova instance is in the running state. But after that heat kept looping, waiting for something to complete, which never succeeded and it timed out.13:08
mvpniteshwe are seeing the error below13:09
mvpnitesh2018-04-16 23:24:15.768 ERROR oslo_messaging.rpc.server [req-beeeef16-bfa7-4304-9b70-2288d3d4397e None None] Exception during message handling: CommandError: Could not fetch contents for file:///opt/stack/magnum/magnum/drivers/common/templates/kubernetes/fragments/enable-kube-controller-manager-scheduler.sh13:09
strigaziright, can you try to label the image with fedora-atomic, not fedora, to use the non-ironic driver?13:10
flwang1strigazi: btw, did you see my patch https://review.openstack.org/#/c/562454/ rename scripts ?13:10
mvpniteshstrigazi: sorry i didn't get it13:11
strigazimvpnitesh: what is the os_distro of your image?13:13
mvpniteshstrigazi: fedora13:14
strigaziflwang1 i'm looking now13:14
mvpniteshstrigazi: it is "fedora-atomic"13:14
strigazithe os_distro is fedora or fedora-atomic?13:15
mvpniteshfedora-atomic13:16
strigazithen it should use this file, enable-kube-controller-manager-scheduler.sh13:17
*** Nisha_Agarwal has joined #openstack-containers13:17
flwang1strigazi: as for make-cert.sh https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/make-cert.sh, do you mind me refactoring it like https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/make-cert-client.sh? I mean extracting a method "generate_certificates", so that it can be used for both the server and kubelet13:19
mvpniteshstrigazi: then how should we specify it?13:19
mvpniteshstrigazi: but we are not able to find it in "https://github.com/openstack/magnum/tree/stable/pike/magnum/drivers/common/templates/kubernetes/fragments"13:21
strigazimvpnitesh: if you have the os_distro=fedora-atomic it should not use the above file13:21
mvpniteshstrigazi: where can we get that "enable-kube-controller-manager-scheduler.sh" from?13:21
strigazimvpnitesh: if you use fedora-atomic you don't need it13:22
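How one might check and, if needed, set the image property strigazi is asking about; the image name is a placeholder:

    # Show the image properties and make sure os_distro is fedora-atomic, so
    # magnum picks the k8s_fedora_atomic driver instead of the ironic one.
    openstack image show -c properties <your-fedora-atomic-image>
    openstack image set --property os_distro=fedora-atomic <your-fedora-atomic-image>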
strigaziflwang1: no, it is better, but me careful with make-cert.sh13:23
*** AlexeyAbashkin has quit IRC13:23
flwang1you mean "more careful"?13:24
flwang1or "be careful"?13:24
strigazibe :)13:25
flwang1sure, i will13:25
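A very rough sketch of the kind of extraction flwang1 is proposing; the function and variable names are invented for illustration and the magnum CSR-signing call is omitted, so this is not the actual patch:

    # One shared helper that both make-cert.sh (server cert) and
    # make-cert-client.sh (kubelet client cert) could call.
    generate_certificates() {
        local cert_name=$1                    # e.g. "server" or "kubelet"
        local csr_conf=$2                     # openssl req config for this cert
        local cert_dir=/etc/kubernetes/certs

        openssl genrsa -out "${cert_dir}/${cert_name}.key" 4096
        openssl req -new -key "${cert_dir}/${cert_name}.key" \
            -out "${cert_dir}/${cert_name}.csr" -config "${csr_conf}"
        # The real fragments then send the CSR to magnum's certificate API for
        # signing; that part is left out of this sketch.
    }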
flwang1time to off13:25
flwang1thank you very much for all the help13:25
*** AlexeyAbashkin has joined #openstack-containers13:25
flwang1have a good day13:25
strigaziflwang1: good night13:26
*** yasemin has joined #openstack-containers13:31
Nisha_Agarwalstrigazi, ping13:35
*** TobbeCN has quit IRC13:35
*** mvpnitesh has quit IRC13:35
Nisha_Agarwalstrigazi, mvpnitesh discussed the issue above with you13:36
*** TobbeCN has joined #openstack-containers13:36
Nisha_Agarwalmvpnitesh and i are working on creating a k8s cluster on baremetal13:36
*** mvpnitesh has joined #openstack-containers13:36
Nisha_Agarwalstrigazi, u there?13:37
strigaziNisha_Agarwal: yes13:37
Nisha_Agarwalstrigazi, We are using stable/pike for this deployment13:38
Nisha_Agarwalstrigazi, 1. Is the fedora-ironic driver working on the pike release?13:38
strigaziNisha_Agarwal: yes, and you get the error for the missing file13:38
Nisha_Agarwalstrigazi, yes13:38
Nisha_Agarwalstrigazi, whats the solution for that error?13:38
Nisha_Agarwalstrigazi, is there anything we need to do before starting the deploy so that the file gets created?13:39
*** TobbeCN has quit IRC13:40
*** serlex has quit IRC13:44
Nisha_Agarwalstrigazi, 2. With fedora-atomic the baremetal deploy using nova boot goes through, but the heat engine keeps looping, waiting for something13:44
Nisha_Agarwalstrigazi, my guess is that it waits for some post-install steps to complete13:45
strigaziNisha_Agarwal: the machines must be able to access the heat-api13:46
*** hongbin_ has joined #openstack-containers13:46
strigaziNisha_Agarwal: and nova, keystone, magnum, cinder (for the cloud provider)13:46
Nisha_Agarwalstrigazi, any hints on how to proceed with creating a cluster on baremetal? Which driver should we use?13:47
Nisha_Agarwalwhat do you mean by machines?13:47
Nisha_Agarwalthe baremetal nodes?13:47
strigaziNisha_Agarwal: heat templates in k8s_fedora_ironic have not been working since pike, but13:47
Nisha_Agarwaldoes cinder come into the picture for magnum cluster creation on baremetal?13:47
strigaziNisha_Agarwal: both VMs and BMs must have access to the apis13:47
strigaziNisha_Agarwal: For baremetal, no, you don't need access to cinder13:48
Nisha_Agarwalstrigazi, ok. We downloaded the fedora-atomic image available for download... does it have any credentials through which we can check whether the heat apis are accessible?13:48
*** mvpnitesh has quit IRC13:48
strigaziNisha_Agarwal: to make it work on baremetal you need to use k8s_fedora_atomic with fedora 2613:48
strigazipike needs fedora 2613:49
strigaziNisha_Agarwal: yes, the nodes have creds inside13:49
strigazibut you can just try to curl the apis13:49
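A quick reachability check one could run from a cluster node; the URLs are placeholders and should be taken from `openstack endpoint list` on the controller (5000, 8004 and 9511 are only the default keystone, heat and magnum API ports):

    # Print an HTTP status (or 000 on timeout) for each control-plane API the
    # nodes need to reach.
    for url in http://CONTROLLER:5000/v3 http://CONTROLLER:8004/ http://CONTROLLER:9511/; do
        curl -s -o /dev/null -m 10 -w "%{http_code}  ${url}\n" "${url}"
    done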
Nisha_Agarwali guess we have fedora 26, i can cross-check. Devstack has uploaded the image into glance by itself13:49
Nisha_Agarwalon the devstack server the heat apis are accessible13:50
Nisha_Agarwalto curl from the baremetal node i should be able to log in to the image13:51
Nisha_Agarwalwe are using Fedora-Atomic-26-20170723.0.x86_64 for fedora-atomic13:51
Nisha_Agarwalstrigazi, ok i will recheck this driver and see why it times out in heat...13:52
Nisha_Agarwalstrigazi, whats the timezone u work in?13:52
*** AlexeyAbashkin has quit IRC13:52
Nisha_Agarwalfor the past 2-3 days i have been trying to ping you but couldn't find you online13:53
Nisha_Agarwalstrigazi, this is the template we are using for fedora-atomic for baremetal deploy using magnum http://paste.openstack.org/show/719562/13:55
Nisha_Agarwalstrigazi, let me know if there is something wrong with it13:55
*** AlexeyAbashkin has joined #openstack-containers13:57
magohi strigazi, on a side note, is there some sort of compatibility matrix around? You said "pike needs fedora 26". How can I know which Fedora / k8s version is supported on Pike / Magnum? Thanks13:58
openstackgerritNguyen Hai proposed openstack/magnum master: Follow the new PTI for document build  https://review.openstack.org/55520914:06
Nisha_Agarwalstrigazi, one more point i have14:11
Nisha_Agarwali could not see that missing file in the Ocata branch either14:12
strigazimago: No, there isn't one. I'll create one. But for a hint for the image you can look here:14:12
Nisha_Agarwalstrigazi, that's why i asked if i could get it from somewhere in the Ocata branch14:12
strigazihttps://github.com/openstack/magnum/blob/master/magnum/tests/contrib/gate_hook.sh#L88 in the different branches14:13
strigazihttps://github.com/openstack/magnum/blob/stable/ocata/magnum/tests/contrib/gate_hook.sh#L8314:13
strigazihttps://github.com/openstack/magnum/blob/stable/pike/magnum/tests/contrib/gate_hook.sh#L8814:14
strigaziNisha_Agarwal: I'm in central europe14:14
magostrigazi: thanks. What about k8s? I'm currently wondering what the latest version compatible with Pike is (using kube_tag)14:19
strigazimago: 1.7.x14:21
Nisha_Agarwalmago, i think that should be in the upper-constraints.txt file of the requirements project14:21
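kube_tag is a cluster template label, so pinning the version mago asks about would look roughly like this; the exact tag value and the other template arguments are assumptions, and on older clients the equivalent `magnum cluster-template-create` command applies:

    # Pin the kubernetes version via the kube_tag label (a v1.7.x tag per
    # strigazi for Pike; v1.7.4 here is just an example value).
    openstack coe cluster template create k8s-pike \
        --coe kubernetes --image fedora-atomic-26 --external-network public \
        --labels kube_tag=v1.7.4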
*** AlexeyAbashkin has quit IRC14:23
Nisha_Agarwalstrigazi, In Ocata too, the kubemaster.yaml file references "enable-kube-controller-manager-scheduler.sh" ... see line https://github.com/openstack/magnum/blob/30785acd3cd90594cb5d9913ae3830d6faeee0b6/magnum/drivers/k8s_fedora_ironic_v1/templates/kubemaster.yaml#L39714:25
*** AlexeyAbashkin has joined #openstack-containers14:25
Nisha_Agarwalstrigazi, but it's not present in https://github.com/openstack/magnum/tree/stable/ocata/magnum/drivers/common/templates/fragments14:25
Nisha_Agarwalstrigazi, but if the gate is working, this file has to be there, or can it be skipped during the deploy?14:28
*** armaan has quit IRC14:28
*** AlexeyAbashkin has quit IRC14:40
*** AlexeyAbashkin has joined #openstack-containers14:42
*** mjura has quit IRC14:47
strigaziit is here https://github.com/openstack/magnum/tree/stable/ocata/magnum/drivers/common/templates/kubernetes/fragments14:50
strigaziI guess that during deployment it wasn't installed14:51
strigaziBut, I would try to use k8s_fedora_atomic even with ironic14:51
*** sidx64 has quit IRC14:54
*** AlexeyAbashkin has quit IRC14:54
*** AlexeyAbashkin has joined #openstack-containers14:56
*** TobbeCN has joined #openstack-containers15:04
*** AlexeyAbashkin has quit IRC15:04
*** Nisha_Agarwal has quit IRC15:07
*** gsimondo1 has quit IRC15:07
*** AlexeyAbashkin has joined #openstack-containers15:08
*** TobbeCN has quit IRC15:08
*** zerick has joined #openstack-containers15:10
*** yamamoto has quit IRC15:13
*** yamamoto has joined #openstack-containers15:14
*** yamamoto has quit IRC15:19
*** ricolin has joined #openstack-containers15:25
*** ramishra_ has joined #openstack-containers15:33
*** yamamoto has joined #openstack-containers15:34
*** ramishra has quit IRC15:37
*** pcichy has joined #openstack-containers15:41
*** armaan has joined #openstack-containers15:42
*** pcichy has quit IRC15:46
*** armaan has quit IRC15:47
*** armaan has joined #openstack-containers15:48
*** Nisha_Agarwal has joined #openstack-containers15:54
*** AlexeyAbashkin has quit IRC15:55
*** AlexeyAbashkin has joined #openstack-containers15:56
*** armaan has quit IRC16:02
*** pcaruana has quit IRC16:03
*** fragatina has quit IRC16:05
*** AlexeyAbashkin has quit IRC16:11
*** AlexeyAbashkin has joined #openstack-containers16:14
*** itlinux has joined #openstack-containers16:19
*** gsimondon has joined #openstack-containers16:24
*** ramishra_ has quit IRC16:33
*** gsimondon has quit IRC16:34
*** jchhatbar has quit IRC16:35
*** fragatina has joined #openstack-containers16:45
*** AlexeyAbashkin has quit IRC16:57
*** Nisha_Agarwal has quit IRC16:58
*** itlinux has quit IRC17:02
*** mgoddard has quit IRC17:03
*** yamamoto has quit IRC17:20
*** armaan has joined #openstack-containers17:24
*** udesale_ has quit IRC17:24
*** pcichy has joined #openstack-containers17:29
*** fragatina has quit IRC17:53
*** fragatina has joined #openstack-containers17:53
*** armaan has quit IRC17:56
*** Nisha_Agarwal has joined #openstack-containers18:00
*** Nisha_Agarwal has quit IRC18:14
*** ricolin has quit IRC18:16
*** yamamoto has joined #openstack-containers18:20
*** oikiki has joined #openstack-containers18:29
*** pcichy has quit IRC18:30
*** yamamoto has quit IRC18:30
*** pcichy has joined #openstack-containers18:30
*** TobbeCN has joined #openstack-containers18:36
*** TobbeCN has quit IRC18:41
*** AlexeyAbashkin has joined #openstack-containers18:49
*** AlexeyAbashkin has quit IRC18:59
*** oikiki has quit IRC19:11
*** oikiki has joined #openstack-containers19:44
*** flwang1 has quit IRC19:58
*** harlowja has joined #openstack-containers20:03
*** sidx64 has joined #openstack-containers20:07
*** sidx64_ has joined #openstack-containers20:09
*** sidx64 has quit IRC20:12
*** oikiki has quit IRC20:13
*** oikiki has joined #openstack-containers20:16
*** oikiki has quit IRC20:24
*** oikiki has joined #openstack-containers20:27
*** itlinux has joined #openstack-containers20:36
*** itlinux has quit IRC20:41
*** pcichy has quit IRC21:08
*** armaan has joined #openstack-containers21:14
*** sidx64_ has quit IRC21:27
*** yamamoto has joined #openstack-containers21:49
*** flwang1 has joined #openstack-containers22:08
*** rcernin has joined #openstack-containers22:30
*** TobbeCN has joined #openstack-containers22:37
*** TobbeCN has quit IRC22:41
*** armaan has quit IRC22:50
*** armaan has joined #openstack-containers22:50
*** hongbin_ has quit IRC22:57
*** oikiki has quit IRC23:53
*** oikiki has joined #openstack-containers23:54
