Thursday, 2018-12-13

00:11 *** itlinux has joined #openstack-containers
00:11 *** hongbin has quit IRC
00:11 *** itlinux has quit IRC
00:18 <openstackgerrit> Lingxian Kong proposed openstack/magnum master: Delete Octavia loadbalancers for fedora atomic k8s driver  https://review.openstack.org/497144
00:55 <mnaser> hey lxkong
00:55 <mnaser> sorry, busy @ kubecon
00:56 <lxkong> mnaser: oh i didn't realize you are at kubecon, must be a lot of fun
00:56 <mnaser> lxkong: thanks for that super useful feedback, unfortunately, the heat-container-agent logs don't give us the info we need
00:56 <mnaser> how were you able to discover this? is there anywhere we can add more logging in our ci to catch this?
00:56 <lxkong> mnaser: i tested in my devstack
00:56 <mnaser> hmm
00:56 <mnaser> ok i see
00:57 <mnaser> should we run heat-container-agent with extra privs then?
00:57 <lxkong> yeah, probably
00:57 <lxkong> at least some volume mappings
00:58 <mnaser> lxkong: well it probably needs to be able to pull atomic images too
00:58 <lxkong> also, because it's running inside atomic, not sure if we need to tweak some atomic configuration
01:09 <openstackgerrit> Lingxian Kong proposed openstack/magnum master: [DO NOT MERGE] Test  https://review.openstack.org/623104
01:17 *** dave-mccowan has joined #openstack-containers
02:14 *** dave-mccowan has quit IRC
02:18 *** itlinux has joined #openstack-containers
02:45 *** hongbin has joined #openstack-containers
03:26 *** ykarel has joined #openstack-containers
03:43 *** PagliaccisCloud has joined #openstack-containers
03:58 <zufar> Hi all, I am creating a swarm cluster but it is stuck in CREATE_IN_PROGRESS status. when i check the heat-engine log, the last entry is http://paste.opensuse.org/46623027
03:58 <zufar> anyone know this problem?
04:03 *** PagliaccisCloud has quit IRC
04:11 *** itlinux has quit IRC
04:20 *** hongbin has quit IRC
04:43 *** itlinux has joined #openstack-containers
04:49 *** udesale has joined #openstack-containers
05:03 *** ykarel has quit IRC
05:11 *** itlinux has quit IRC
05:20 *** ykarel has joined #openstack-containers
07:09 *** rcernin has quit IRC
07:12 *** zufar has quit IRC
07:12 *** pcaruana has joined #openstack-containers
08:06 <strigazi> lxkong: mnaser: for https://review.openstack.org/#/c/623724 , things must be done like in https://review.openstack.org/#/c/561858/1/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-minion.sh. I'm working on that for upgrades.
08:34 *** ykarel is now known as ykarel|lunch
08:37 <lxkong> thanks strigazi, i have more comments on that patch, i'm also testing something related. We're very keen to get the cluster creation time decreased
08:38 <strigazi> lxkong: do you pull from docker.io?
08:38 <strigazi> lxkong: this is a very big overhead
08:38 <lxkong> yeah, using an internal docker registry is part of the plan
08:38 <lxkong> we also need to create masters and workers in parallel
08:39 <lxkong> which also matters
08:39 <strigazi> lxkong: I say it is first on the list, pull from a local registry. Parallel pull is next.
08:39 <lxkong> yeah, agree
08:39 <lxkong> and this https://storyboard.openstack.org/#!/story/2004564
08:40 <strigazi> lxkong: also, all the implementation for pulling from a local registry has been done for a while.
08:41 <lxkong> using a local docker registry is going to be slow for us internally for some reason ;-(
08:41 <lxkong> i mean the infra deployment to get a local docker registry up and running
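For reference, the mechanism strigazi alludes to ("the implementation for pulling from local is done") is magnum's `container_infra_prefix` cluster-template label: when set, container images are pulled from the given registry prefix instead of docker.io. A minimal sketch of the prefix rewrite, with an illustrative function name rather than magnum's actual helper:

```python
def prefixed_image(image, container_infra_prefix=None):
    """Rewrite an image reference to pull from a local registry.

    If a prefix such as 'registry.example.com/magnum/' is set via the
    cluster template's container_infra_prefix label, keep only the final
    image name (and tag) and prepend the prefix; otherwise return the
    reference unchanged.
    """
    if not container_infra_prefix:
        return image
    name = image.rsplit('/', 1)[-1]  # drop the registry/namespace part
    return container_infra_prefix + name


# e.g. pull heat-container-agent from an internal registry instead of docker.io
print(prefixed_image(
    'docker.io/openstackmagnum/heat-container-agent:rocky-stable',
    'registry.example.com/magnum/'))
```

The label is typically set on the template, e.g. `--labels container_infra_prefix=registry.example.com/magnum/` (the registry host here is a placeholder).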
08:43 <lxkong> strigazi: fyi, https://review.openstack.org/#/c/497144/ is ready for review
08:43 <strigazi> lxkong: i'm doing this now.
08:44 <lxkong> strigazi: i remember last time you said there is no cluster uuid in the lb description for some reason
08:44 <strigazi> lxkong: I'm using my patch for CPO as a dep
08:45 <lxkong> make sure you are using the latest version of CCM/magnum
08:45 <lxkong> my devstack environment works well
08:45 <strigazi> I'm always using master...
08:45 <lxkong> i'm using a controller-manager patched by that PR
08:46 <lxkong> docker.io/lingxiankong/kubernetes-controller-manager:v1.11.5-alpha
08:47 <strigazi> I'm not using and won't use a patched kubernetes. I'm using CPO v0.2.0 which has the patch for the LB.
08:47 <lxkong> i just changed the magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh file
08:48 <lxkong> strigazi: so you could see a new lb created for the service, but without the cluster uuid in the description?
08:49 <lxkong> in my CPO test before, that also worked well...
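Context for the pre-delete patch under review (497144): cloud-provider-openstack records the cluster name (the Magnum cluster uuid here) in the description of the Octavia load balancers it creates for Services, and the pre-delete hook selects LBs by that marker, which is why an unpatched controller-manager makes it a no-op. A minimal sketch of the selection step, with illustrative data shapes rather than the actual magnum driver code:

```python
def lbs_for_cluster(loadbalancers, cluster_uuid):
    """Select Octavia LBs whose description carries the cluster uuid.

    Without the uuid marker in the description there is nothing to match
    on, so nothing would be cleaned up on cluster delete.
    """
    return [lb for lb in loadbalancers
            if cluster_uuid in (lb.get('description') or '')]


lbs = [
    {'id': 'a1', 'description': 'Kubernetes service LB from cluster 5b0f-example'},
    {'id': 'b2', 'description': ''},
]
assert [lb['id'] for lb in lbs_for_cluster(lbs, '5b0f-example')] == ['a1']
```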
08:49 <strigazi> see my comments here: https://review.openstack.org/#/c/620761/
08:49 <lxkong> ah, ok
08:50 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack  https://review.openstack.org/577477
08:50 <lxkong> you can test this patch again, maybe that's because of the hook mechanism
08:50 <lxkong> e.g. in a devstack env, you need to 'pip install -e .' in the magnum repo folder
08:51 <lxkong> but with the new approach that won't apply
08:51 <strigazi> lxkong: I'm testing your patch, I'll leave a comment on whether it works or not
08:52 <lxkong> i will submit a separate patch for a release note and/or some docs
08:52 <lxkong> strigazi: cool
09:18 *** ttsiouts has joined #openstack-containers
09:22 *** belmoreira has quit IRC
09:22 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack  https://review.openstack.org/577477
09:23 *** suanand has joined #openstack-containers
09:26 *** belmoreira has joined #openstack-containers
09:28 *** ttsiouts has quit IRC
09:29 *** ttsiouts has joined #openstack-containers
09:31 *** sayalilunkad has quit IRC
09:33 *** ttsiouts has quit IRC
09:33 *** ykarel|lunch is now known as ykarel
09:40 *** ttsiouts has joined #openstack-containers
09:41 <strigazi> lxkong: left a comment
09:42 <strigazi> lxkong: I left a comment in 497144
09:51 <lxkong> strigazi: hi, saw your comment. currently, without cpo or patched k8s, this patch won't do anything for pre-delete, so personally i think it's fine to merge now (for us, we could backport immediately because we have a patched controller-manager)
09:52 <lxkong> strigazi: but if you're confident about the CPO patch, that's fine, we can wait
09:53 <lxkong> strigazi: have you fully tested CPO? service, pv, etc?
09:53 <lxkong> s/CPO/CPO in magnum
09:55 <strigazi> ok, I don't like taking a patch that is not doing anything, but we can take this.
09:56 <strigazi> why would CPO in magnum work in a different way?
09:56 *** jonaspaulo has joined #openstack-containers
09:56 <jonaspaulo> hi all
09:56 *** PagliaccisCloud has joined #openstack-containers
09:56 <lxkong> technically it shouldn't. i used to install CPO as a static pod, an atomic system container is another thing, so i'm not sure
09:58 <strigazi> lxkong: I'm using a DS for the CPO, what does the atomic system container have to do with anything?
10:03 <lxkong> i remember the atomic system container has special configuration for e.g. volume mapping. with a static pod, i can define volumes to map a local folder into the container. i'm not familiar with atomic system containers, it's just another thing to me
10:03 <lxkong> strigazi: personally i'm not a fan of fedora...
10:04 <strigazi> I think atomic is irrelevant to the discussion of the CPO
10:04 <strigazi> lxkong: you can fork and use ubuntu
10:05 <lxkong> i hope i could :-)
10:05 <strigazi> why not?
10:05 <strigazi> maybe kubespray works better for you
10:06 <jonaspaulo> anyone getting this error on magnum on rocky:
10:06 <jonaspaulo> runc[2432]: Source [heat] Unavailable. runc[2432]: /var/lib/os-collect-config/local-data not found. Skipping runc[2432]: publicURL endpoint for orchestration service in null region not found
10:08 <lxkong> 'maybe kubespray works better for you', i also hope i could
10:09 <strigazi> jonaspaulo: release?
10:09 <strigazi> jonaspaulo: release of magnum and heat?
10:10 <strigazi> lxkong: or gardener
10:10 <jonaspaulo> i am using kolla-ansible to deploy everything, installed through pip, with OS release rocky
10:10 <brtknr> jonaspaulo is using rocky kolla-ansible
10:10 <jonaspaulo> yep
10:11 <jonaspaulo> i already have the patches like https://bugs.launchpad.net/ubuntu/+source/magnum/+bug/1793813
10:11 <openstack> Launchpad bug 1793813 in magnum (Ubuntu) "magnum-api not working with www_authenticate_uri" [Undecided,Confirmed]
10:12 <jonaspaulo> this one is also present https://review.openstack.org/#/c/620006/
10:12 <jonaspaulo> the issue remains that inside the master vm
10:13 <jonaspaulo> the region_name is null despite the patches
10:13 <strigazi> can you do `atomic images list` in the master?
10:13 <jonaspaulo> in /etc/os-collect-config.conf , in /var/lib/os-collect-config/heat_local.json or in /run/os-collect-config/heat_local.json for example
10:13 <jonaspaulo> ok give me some minutes to redeploy
10:15 <strigazi> jonaspaulo: should be rocky-stable
10:15 <strigazi> jonaspaulo: should be docker.io/openstackmagnum/heat-container-agent:rocky-stable
10:16 <strigazi> jonaspaulo: https://github.com/openstack/magnum/blob/stable/rocky/magnum/drivers/common/templates/kubernetes/fragments/start-container-agent.sh
10:20 *** ppetit has joined #openstack-containers
10:27 *** salmankhan has joined #openstack-containers
10:27 <strigazi> lxkong: does cinder work with CPO v0.2.0?
10:28 *** salmankhan has quit IRC
10:32 <jonaspaulo> >  docker.io/openstackmagnum/heat-container-agent            rocky-stable   2723793fc200   2018-12-13 10:23   183.33 MB      ostree
10:35 *** sayalilunkad has joined #openstack-containers
10:35 *** jonaspaulo_ has joined #openstack-containers
10:35 <jonaspaulo_> sorry, disconnected
10:36 <jonaspaulo_> strigazi: it is rocky-stable
10:36 <jonaspaulo_> and the start-container-agent script is also 1:1 with the link you provided
10:37 *** jonaspaulo has quit IRC
10:41 *** ppetit has quit IRC
10:45 *** salmankhan has joined #openstack-containers
10:51 *** ttsiouts has quit IRC
10:52 *** ttsiouts has joined #openstack-containers
10:57 *** ttsiouts has quit IRC
10:57 *** salmankhan has quit IRC
11:06 *** suanand has quit IRC
11:06 *** salmankhan has joined #openstack-containers
11:11 *** salmankhan has quit IRC
11:11 <mkuf> Hi there, I'm trying to deploy a k8s cluster with magnum on fedora-atomic in queens. Cloud-init and wc-notify on the master node exit successfully and the k8s services are up, but no further nodes/minions get deployed. Anyone have an idea what might be the issue?
11:11 *** salmankhan has joined #openstack-containers
11:16 <ykarel> mkuf, have u checked /var/log/cloud-init-output.log on the master node
11:16 <ykarel> there u can find some hints
11:18 <jonaspaulo_> hi mkuf, i am also having errors on rocky, but mine says publicURL endpoint for orchestration service in null region not found
11:18 <jonaspaulo_> in the journalctl entries
11:18 <jonaspaulo_> which is due to region_name being null
11:18 <jonaspaulo_> but haven't figured out why yet
11:20 <mkuf> ykarel: yes, i already checked that. no errors get reported and, most importantly, wc-notify gets executed and receives a 200 OK from heat. so from my understanding this should be the point when the minions get deployed.
11:23 <ykarel> mkuf, ack, yes, only after that are the minions deployed
11:23 <mkuf> ykarel: at least, that's the point when the minions of a swarm cluster get deployed (which works fine, as opposed to k8s)
11:23 <mkuf> strange. :/
11:24 <ykarel> there must be some issue with wc_notify, can u paste the script content somewhere
11:24 <ykarel> i remember there were some issues
11:24 <jonaspaulo_> mkuf are you deploying with kolla-ansible?
11:25 <mkuf> ykarel: sure, i'll spin up a new cluster, give me a sec
11:25 <mkuf> jonaspaulo_: i'm using openstack-ansible for deployment
11:26 <jonaspaulo_> kk
11:26 <jonaspaulo_> so probably it is not an issue with magnum
11:26 <jonaspaulo_> but i dont understand, all the other parameters are ok, like the public uri etc
11:26 <jonaspaulo_> and the region_name is null
11:41 *** tobias-urdin is now known as tobias-urdin_afk
11:42 *** tobias-urdin_afk is now known as tobias-urdin
11:43 *** tobias-urdin is now known as tobias-urdin_afk
11:55 <mkuf> ykarel: here's the service state of wc-notify and a cat of the script that gets executed http://paste.openstack.org/show/737204/ also, a slightly redacted (removed domain name) cloud-init-output.log, if you want to have a look http://paste.openstack.org/show/737205/
12:06 <ykarel> at least the script looks wrong
12:06 <ykarel> ok = ok
12:06 <ykarel> the right side should be a call to healthz
12:07 <ykarel> anyway, this should not block minion creation
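For context, the bug ykarel is pointing at: the generated wc-notify script's loop condition compares a constant to itself ("ok" = "ok") instead of polling the kube-apiserver health endpoint, so it reports SUCCESS to the Heat wait condition immediately. The real fragment is shell in magnum's templates (roughly, curl /healthz until it answers "ok", then the wc_notify callback); a small Python rendering of that intended retry-then-notify logic, with illustrative names:

```python
import time

def wait_until_healthy(check, retries=10, delay=0.0):
    """Poll a health check until it returns 'ok', then report SUCCESS.

    In magnum's shell fragment the check is roughly
    `curl -sf http://127.0.0.1:8080/healthz` and the report is the
    wc_notify curl back to the Heat wait condition handle.
    """
    for _ in range(retries):
        if check() == 'ok':
            return {'status': 'SUCCESS'}
        time.sleep(delay)
    return {'status': 'FAILURE'}


# Simulate an apiserver that only becomes healthy on the third poll.
responses = iter(['', '', 'ok'])
assert wait_until_healthy(lambda: next(responses))['status'] == 'SUCCESS'
```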
12:13 <brtknr> strigazi: if you try to create a magnum cluster with subnet ip range 192.168.0.0/16, it fails to create the cluster... is this a known issue documented anywhere? looks like it interacts with the default value of calico_ipv4pool: 192.168.0.0/16
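The failure brtknr describes is a CIDR collision: Calico's default pod pool (the `calico_ipv4pool` label, 192.168.0.0/16) must not overlap the cluster's Neutron subnet, or pod and node routes clash. The overlap is easy to check with the stdlib:

```python
import ipaddress

CALICO_DEFAULT_POOL = ipaddress.ip_network('192.168.0.0/16')

def overlaps_pod_pool(cluster_subnet, pool=CALICO_DEFAULT_POOL):
    """True when the cluster subnet collides with the pod network pool."""
    return ipaddress.ip_network(cluster_subnet).overlaps(pool)


assert overlaps_pod_pool('192.168.0.0/16')    # the failing subnet above
assert overlaps_pod_pool('192.168.1.0/24')    # any slice of the pool collides too
assert not overlaps_pod_pool('10.0.0.0/24')   # the subnet that worked later
```

A workaround would be overriding the label, e.g. `--labels calico_ipv4pool=172.20.0.0/16` (the value is only an example; pick a range unused in your cloud), or choosing a non-overlapping cluster subnet.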
12:14 *** shrasool has joined #openstack-containers
12:15 <mkuf> ykarel: heat-engine.log shows a success message for the master creation, but no further logs appear afterwards http://paste.openstack.org/show/737208/
12:19 *** PagliaccisCloud has quit IRC
12:25 *** udesale has quit IRC
12:26 *** udesale has joined #openstack-containers
12:27 <ykarel> mkuf, hmm, strange
12:27 <ykarel> i think strigazi or flwang can help with it, i have not deployed it in a long time
12:45 *** mkuf_ has joined #openstack-containers
12:49 *** mkuf has quit IRC
12:53 *** tobias-urdin_afk is now known as tobias-urdin
13:11 *** robertomls has joined #openstack-containers
13:13 <brtknr> mkuf_: what do your /var/log/cloud-init.log and /var/log/cloud-init-output.log files contain?
13:24 <brtknr> strigazi: although my k8s deployment is using flannel, not calico, so i dont see why the different subnet would affect it
13:32 *** zul has quit IRC
13:41 *** zul has joined #openstack-containers
14:00 *** irclogbot_0 has quit IRC
14:08 *** irclogbot_0 has joined #openstack-containers
14:10 *** mkuf has joined #openstack-containers
14:13 *** mkuf_ has quit IRC
14:14 *** irclogbot_0 has quit IRC
14:21 <strigazi> brtknr: I think the cluster subnet should be different from the overlay subnet
14:22 <brtknr> strigazi: my overlay subnet is 10.100.0.0/16, the default
14:22 <strigazi> brtknr: can should be your cluster template?
14:23 <brtknr> strigazi: sorry?
14:23 <strigazi> brtknr: can you show me your cluster template?
14:23 <sayalilunkad> strigazi: hi! have you seen this error before? http://paste.openstack.org/show/737213/
14:24 *** irclogbot_0 has joined #openstack-containers
14:24 <brtknr> strigazi: http://paste.openstack.org/show/737220/
14:24 <brtknr> strigazi: this one works fine
14:24 <sayalilunkad> strigazi: happens when creating a template in rocky. Seems to happen after the www_authenticate_uri patches.
14:25 <brtknr> http://paste.openstack.org/show/737221/
14:25 <brtknr> the second one doesn't work
14:25 <brtknr> the only difference is the subnet
14:25 <strigazi> brtknr: sorry, I was talking with my office mate and I mixed what I was telling him with what I was typing
14:26 <strigazi> sayalilunkad: seems irrelevant to that patch, try to set this:
14:26 <strigazi> [drivers]
14:26 <strigazi> send_cluster_metrics = False
14:27 <sayalilunkad> ok, checking
14:28 <strigazi> brtknr: The subnets are different in the 2nd one, right?
14:28 *** ttsiouts has joined #openstack-containers
14:31 <brtknr> yeah
14:31 <strigazi> can you show me? I can try to repro
14:31 <brtknr> gateway
14:32 <brtknr> other 192.168.1.0/24
14:32 <brtknr> gateway 10.0.0.0/24
14:32 <brtknr> gateway is the network, other is the subnet that fails, gateway is the one that works
14:33 <sayalilunkad> strigazi: that is false by default..
14:33 <strigazi> sayalilunkad: but it tries to use the k8s_monitor
14:35 <mkuf> brtknr: as far as I can see, both report success, you can have a look here: cloud-init-output.log http://paste.openstack.org/show/737205/ and cloud-init.log https://pastebin.com/cyWM9f5P
14:39 <brtknr> strigazi: mkuf: context switch... hmm, what does your heat stack say?
14:40 <brtknr> openstack stack resource list k8s-stack-name -n 4
14:42 *** hongbin has joined #openstack-containers
15:05 *** ykarel is now known as ykarel|away
15:07 *** itlinux has joined #openstack-containers
15:10 <sayalilunkad> strigazi: is there any other config which needs to be disabled?
15:15 *** ykarel|away has quit IRC
15:17 <brtknr> strigazi: ignore my subnet issue. i cant pin down why it has suddenly started working.
15:44 *** zufar has joined #openstack-containers
15:56 *** ttsiouts has quit IRC
15:57 *** ttsiouts has joined #openstack-containers
15:57 *** ykarel|away has joined #openstack-containers
16:08 <zufar> Hi all, i am trying to run `openstack coe service list` but get an error.
16:08 <zufar> CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data
16:08 <zufar> I can access everything else except the magnum service.
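A hedged guess, given the www_authenticate_uri bug discussed earlier in the channel (Launchpad 1793813): keystonemiddleware renamed `auth_uri` to `www_authenticate_uri`, and on Rocky a magnum-api whose `[keystone_authtoken]` section still only sets the old name can fail token validation exactly like this. The relevant magnum.conf fragment would look roughly as follows (the URLs are placeholders for your deployment):

```ini
[keystone_authtoken]
# Rocky keystonemiddleware expects the new option name; configs that only
# set the deprecated auth_uri can break token validation for magnum-api.
www_authenticate_uri = http://controller:5000/v3
auth_url = http://controller:5000/v3
auth_type = password
# service project/user credentials unchanged from your existing deployment
```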
16:13 *** PagliaccisCloud has joined #openstack-containers
16:16 *** ttsiouts has quit IRC
16:41 *** udesale has quit IRC
16:41 <brtknr> strigazi: is there a way to only give a floating ip to the master atm?
16:42 <strigazi> brtknr: no :(
16:43 <brtknr> zufar: `openstack coe cluster list` doesn't seem to work for me either
16:43 <brtknr> i get `the JSON object must be str, not 'bytes'`
16:43 <brtknr> *openstack coe service list
16:44 <brtknr> strigazi: is that something worth having, or does the master lb fulfil the same function? even when there is 1 node
16:44 <brtknr> ?
16:45 *** shrasool has quit IRC
16:46 <brtknr> also strigazi: do you mind putting a link to scheduled meetings in the channel topic, similar to a lot of other team topics?
16:50 <strigazi> brtknr: I don't have the power to do it, I can ask infra. Would a link to docs.openstack.org/magnum/latest/index.html help?
16:50 <brtknr> I was thinking of this page: https://wiki.openstack.org/wiki/Meetings/Containers
16:51 <strigazi> brtknr: the master lb does the same, but in clouds without octavia or neutron lb, or with just one master, fips on only the master(s) makes sense.
16:51 <strigazi> brtknr: yes, point to https://wiki.openstack.org/wiki/Meetings/Containers from https://docs.openstack.org/magnum/latest/index.html
16:52 <brtknr> strigazi: ah gotcha! yes, better than not having it :)
16:59 *** robertomls has quit IRC
17:05 *** jonaspaulo_ has quit IRC
17:06 *** ramishra has quit IRC
17:11 <mnaser> strigazi: have you looked much into cluster api?
17:12 <strigazi> mnaser: I was just looking
17:12 *** TodayAndTomorrow has quit IRC
17:13 <strigazi> mnaser: some things are better than heat, like managing nodes with rolling updates
17:13 <mnaser> strigazi: i think it also allows us to avoid a lot of things that we dont have/want to deal with
17:13 <mnaser> scaling of nodes, updates, etc.. im thinking maybe this can be implemented as a magnum driver
17:14 <strigazi> mnaser: for others, like SoftwareDeployments and the creation of networks, LBs, and the order of actions, heat is better
17:15 <mnaser> strigazi: yeah, i think it uses kubeadm to deploy things?
17:15 <strigazi> mnaser: for the driver, yes, I was just discussing this with colleagues in my office :)
17:15 <mnaser> aha, cool
17:15 <strigazi> mnaser: we could use kubeadm with the clusterapi, we don't have to
17:16 <mnaser> strigazi: thinking out loud, i wonder why we dont use kubeadm to deploy things
17:16 <strigazi> mnaser: the only part I didn't like in the cluster api is the heavy dependency on user_data
17:17 <mnaser> yeah we're trying to move away from that.. that never ends well :p
17:17 <strigazi> mnaser: we don't use kubeadm already because we started first without it and stuck with what we know and have experience with
17:17 <mnaser> strigazi: gotcha, i think hacking on seeing what things can look like with kubeadm might be interesting
17:19 <strigazi> brtknr: mnaser: I have to commute home, I'll be back online later. mnaser: kubeadm 1.13.0 is very appealing
17:19 <strigazi> see you in a bit
17:19 <mnaser> strigazi: cool, take care :) have a good rest of your day
17:22 <brtknr> strigazi: cool, talk soon
17:35 *** zul has quit IRC
17:42 *** ianychoi has quit IRC
18:11 *** salmankhan has quit IRC
18:12 <openstackgerrit> Mark Goddard proposed openstack/magnum master: Query cluster's Heat stack ID in functional tests  https://review.openstack.org/485130
18:30 *** ykarel|away has quit IRC
18:30 <openstackgerrit> Feilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s  https://review.openstack.org/561783
18:47 *** jmlowe has quit IRC
19:12 *** jmlowe has joined #openstack-containers
19:47 *** pcaruana has quit IRC
19:52 *** PagliaccisCloud has quit IRC
19:54 *** shrasool has joined #openstack-containers
20:28 *** munimeha1 has joined #openstack-containers
20:38 <flwang> mkuf: what's your current issue?
20:42 <flwang> mnaser: i had a discussion with strigazi in Berlin, and we generally agreed to have a driver with kubeadm as v2 or something like that
20:45 <flwang> mnaser: and re cluster-api, i think it's good to take a look, but i also agree it's not really necessary to combine the usage of cluster-api and kubeadm
20:46 <mnaser> flwang: yeah I think kubeadm as a first step for a v2 driver is good
20:47 <mnaser> It'll reduce the workload of maintaining all the provisioning toolset
20:47 <flwang> mnaser: true, indeed
20:48 <flwang> mnaser: btw, thank you so much for those good patches to fix the functional tests
20:48 <flwang> really appreciate it
20:48 <mnaser> flwang: I only made small fixes, most of the work was already done :) I'm gonna add conformance tests to it soon
20:48 <flwang> another thing we can do in the future is enabling sonobuoy as an e2e test
20:48 <flwang> haha, same thought
20:48 <mnaser> Took the words out of me haha
20:49 <flwang> we're typing it at the same time, man
20:49 <mnaser> I'm a
20:49 <flwang> (09:48:38) mnaser:    (09:48:38) flwang:
20:49 <mnaser> Actually dedicating some time today to that
20:49 <flwang> you're reading my mind
20:50 <flwang> that's fantastic
20:50 <flwang> now i'm working on the auto healing stuff
20:50 <flwang> with NPD/Draino/Autoscaler
20:50 <mnaser> Yep. The jobs might be a bit longer but we'll be able to know it's a proper cluster
20:51 <mnaser> About end to end stuff
20:51 <flwang> mnaser: exactly, we can make it a separate job
20:51 <flwang> and start with non-voting
20:52 <mnaser> Yup. Also I was thinking we need to move the system container image build pipeline upstream
20:52 <flwang> mnaser: we do have a patch for that, wait a sec
20:52 <flwang> https://review.openstack.org/#/c/585420/
20:53 <mnaser> That way we always build those system containers easily (or if we move towards kubeadm we eliminate it entirely)
20:53 <flwang> i think that one is ready to go, i'm just waiting for strigazi to remove the WIP
20:53 *** lpetrut has joined #openstack-containers
20:53 <flwang> with kubeadm, i think we probably need to base it on a new OS, like Ubuntu?
20:54 <mnaser> Fedora Atomic apparently can deploy with kubeadm
20:54 <flwang> but it's going to reach end of life?
20:54 <mnaser> Honestly.. I wouldn't mind deploying on top of Ubuntu.. I feel people are gonna find it easier to maintain, honestly
20:54 <flwang> so personally, i think it's probably a good time to rethink the OS part
20:55 <flwang> i think i see your point
20:55 <mnaser> I agree. I really think moving to something like Ubuntu makes a lot of sense. Even tools that run on top of clusters often test on Ubuntu
20:55 <mnaser> Atomic is kinda a thing of its own
20:56 <mnaser> I feel the more we align ourselves with the ecosystem, the easier it'll be for us
20:57 <flwang> mnaser: same here
20:57 <mnaser> Now if only we just shared time zones :p
20:57 <flwang> what's your tz now?
20:58 <mnaser> I'm pacific right now but home is eastern
20:59 *** ianychoi has joined #openstack-containers
20:59 <flwang> cool
20:59 <flwang> as you may know, i'm based in NZ
20:59 <flwang> so now it's 10AM
21:00 <flwang> and the summer Xmas is coming
21:01 *** jmlowe has quit IRC
21:03 <mnaser> flwang: yeah, so catalyst is in nz, cern is in eu, and we're in na
21:03 <mnaser> :P
21:08 <flwang> na or ca?
21:08 <flwang> i remember you guys are in Montreal
21:10 <mnaser> flwang: canada :)
21:10 <mnaser> flwang: btw i think k8s is also great on ubuntu because we can also start relying on easily getting multiarch stuff going
21:11 <flwang> mnaser: yep, so let's start the idea with kubeadm on ubuntu
21:12 <mnaser> flwang: i think i'll try a heat stack using softwaredeploymentgroup/etc outside magnum just to see it work as a heat stack
21:12 <mnaser> and then look into integrating it
21:13 <flwang> mnaser: i think we can still use heat for orchestrating the general infra, and then use kubeadm to bootstrap
21:13 <flwang> to bootstrap k8s
21:14 <mnaser> flwang: yup, it's just easier for me to develop against our cloud with a stack instead of having the extra stuff magnum has :P
21:15 <flwang> mnaser: makes sense
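What mnaser sketches here, tried standalone, could look roughly like the following Heat fragment: a SoftwareConfig script that runs kubeadm on an Ubuntu server, wired up with a SoftwareDeployment. The resource names, the pod CIDR, and the assumption that kubeadm/kubelet are preinstalled on the image are all illustrative, and the `master_server` reference points at an OS::Nova::Server omitted here:

```yaml
heat_template_version: rocky

resources:
  master_bootstrap_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        set -eux
        # assumes kubeadm/kubelet are preinstalled on the Ubuntu image
        kubeadm init --pod-network-cidr=10.100.0.0/16

  master_bootstrap:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: master_bootstrap_config}
      server: {get_resource: master_server}  # OS::Nova::Server, not shown
```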
21:18 <lxkong> mnaser, flwang, sorry for chiming in, but mnaser, do you mind me changing something in this patch https://review.openstack.org/#/c/623724/?
21:19 <lxkong> mnaser: i have a working version after some testing
21:19 <mnaser> lxkong: no, please go for it :D
21:19 <mnaser> thats exciting
21:19 <lxkong> the cluster creation time decreased from 24min to 17min in my devstack env
21:20 <lxkong> maybe it's not the final version, but we could discuss based on that
21:22 *** jmlowe has joined #openstack-containers
21:26 *** jmlowe has quit IRC
21:32 *** tobias-urdin has quit IRC
21:50 *** jmlowe has joined #openstack-containers
22:00 *** lpetrut has quit IRC
22:08 *** salmankhan has joined #openstack-containers
22:18 *** shrasool has quit IRC
22:21 *** salmankhan has quit IRC
22:21 *** rcernin has joined #openstack-containers
22:31 *** PagliaccisCloud has joined #openstack-containers
22:36 *** itlinux has quit IRC
22:37 *** lbragstad has quit IRC
23:30 <mnaser> lxkong: it makes sense, most clusters are like a 10 minute provision so it will be half of that

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!