Wednesday, 2020-03-11

*** ianychoi has quit IRC00:07
*** ianychoi has joined #openstack-containers00:08
*** ondrejburian has quit IRC00:50
*** ondrejburian has joined #openstack-containers00:51
*** ianychoi has quit IRC01:30
*** ianychoi has joined #openstack-containers01:32
*** ianychoi has quit IRC02:10
*** ianychoi has joined #openstack-containers02:18
*** hongbin has joined #openstack-containers02:20
*** hongbin has quit IRC02:39
*** ricolin_ has joined #openstack-containers03:01
*** ianychoi has quit IRC03:35
*** ianychoi has joined #openstack-containers03:37
*** ricolin_ has quit IRC03:38
*** ykarel|away is now known as ykarel03:44
*** tkaprol has joined #openstack-containers03:53
*** tkaprol has quit IRC04:01
*** ricolin has quit IRC04:44
*** ianychoi has quit IRC05:00
*** ianychoi has joined #openstack-containers05:03
*** udesale has joined #openstack-containers05:07
*** ricolin has joined #openstack-containers05:51
*** rcernin has quit IRC05:58
*** ianychoi has quit IRC06:01
*** ianychoi has joined #openstack-containers06:03
*** ianychoi has quit IRC06:11
*** ianychoi has joined #openstack-containers06:13
*** ianychoi has quit IRC06:29
*** ianychoi has joined #openstack-containers06:31
*** ianychoi has quit IRC06:39
*** ianychoi has joined #openstack-containers06:41
*** ianychoi has quit IRC06:58
*** ianychoi has joined #openstack-containers07:05
*** pcaruana has joined #openstack-containers08:16
*** openstackgerrit has joined #openstack-containers08:16
openstackgerritMerged openstack/magnum master: Fix duplicated words issue like "meaning meaning that"  https://review.opendev.org/70116608:16
*** flwang1 has joined #openstack-containers08:34
*** sapd1_x has quit IRC08:36
*** ykarel is now known as ykarel|lunch08:38
*** gouthamr has quit IRC08:49
*** mgoddard has quit IRC08:49
*** gouthamr has joined #openstack-containers08:50
brtknrMorning folks08:54
*** vishalmanchanda has joined #openstack-containers08:56
*** ykarel has joined #openstack-containers08:56
*** ykarel|lunch has quit IRC08:57
flwang1hello08:58
strigazihello08:59
flwang1#startmeeting magnum09:00
openstackMeeting started Wed Mar 11 09:00:11 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.09:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.09:00
*** openstack changes topic to " (Meeting topic: magnum)"09:00
openstackThe meeting name has been set to 'magnum'09:00
brtknro/09:00
strigaziο/09:00
flwang1o/09:00
brtknrMy topics can go last, I’m still 5 mins away from work09:01
brtknrnot easy to type on the phone09:01
flwang1ok09:02
flwang1#topic Allow updating health_status, health_status_reason https://review.opendev.org/71038409:02
*** openstack changes topic to "Allow updating health_status, health_status_reason https://review.opendev.org/710384 (Meeting topic: magnum)"09:02
flwang1strigazi: brtknr: i'd like to propose above change to allow updating the health_status and health_status_reason via the update api09:02
strigaziflwang1: would it make sense to configure who can do this by policy?09:03
flwang1i'm still doing testing, but i'd like to get your guys comment09:03
flwang1strigazi: that's a good idea09:04
flwang1i can do that09:04
flwang1the context is, all the k8s clusters on our cloud are private, which means they are not accessible from the magnum control plane09:04
brtknrWould it make sense to make all health updates using this rather than magnum making api calls to k8s end point?09:04
flwang1so we would like to let the magnum-auto-healer to send api call to update the health status from the cluster inside09:05
brtknrI.e. do we need 2 health monitoring mechanisms side by side?09:05
strigaziI think we need two options, not two running together09:05
flwang1strigazi: +109:05
flwang1brtknr: the two options work for different scenarios09:06
flwang1if the cluster is a private cluster, then currently we don't have an option to update the health status09:06
flwang1but if it's a public cluster, then magnum can handle it correctly09:06
brtknrBut the api would work for both types of clusters09:08
flwang1brtknr: yes09:08
flwang1you can disable the magnum server side health monitoring if you want09:09
flwang1but the problem is, not all vendors will deploy magnum auto healer09:10
flwang1make sense?09:10
brtknrOk09:11
brtknryeah i am happy to have the option, its a nice workaround for private clusters09:12
flwang1strigazi: except the role control, any other comments?09:13
brtknrDo we have the different roles that magnum expects documented somewhere?09:14
flwang1for this case or general?09:14
brtknre.g. only a heat_stack_owner can deploy a cluster for example09:14
strigazino, as a general comment, out-of-tree things should be opt-in. Only kubernetes cannot be opt-in09:14
strigazimagnum has its own policy. We can tune policy-in-code or the policy file09:16
flwang1in general, magnum has the policy.json, and you can define any role and update the file based on your need09:18
strigazi+109:18
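
For context, the change discussed above (https://review.opendev.org/710384) would let an in-cluster agent such as magnum-auto-healer push health information through the existing cluster update API, gated by magnum's policy file. A minimal sketch of what that could look like from the client side; the attribute names follow the proposal under review, and the policy rule name and custom role below are hypothetical examples, not magnum's actual policy keys:

# Illustrative only: pushing health info from inside a private cluster.
openstack coe cluster update my-private-cluster replace \
    health_status=UNHEALTHY \
    health_status_reason='{"api":"ok","node-0.Ready":"False"}'

# A matching entry in magnum's policy file could then restrict who may
# patch these two fields, along the lines strigazi suggests:
#   "cluster:update_health_status": "rule:admin_or_owner or role:health_updater"
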
flwang1shall we move on?09:18
strigazibrtknr: have you arrived?09:19
brtknryep ive been at my desk for 10 mins :D09:19
brtknrseamless transition09:19
flwang1#topic Restore deploy_{stdout,stderr,status_code} https://review.opendev.org/#/c/710487/09:19
*** openstack changes topic to "Restore deploy_{stdout,stderr,status_code} https://review.opendev.org/#/c/710487/ (Meeting topic: magnum)"09:19
flwang1brtknr: ^09:20
brtknrOk basically it was bugging me that deploy_stderr was empty and Rico also pointed this out that this breaks backward compatibility09:20
brtknrthreading seems like the only way to handle two input streams simultaneously without weird buffering behaviour you get when you use "select"09:21
strigaziput the same thing as stderr and stdout \o/09:21
strigazibrtknr: what do you think is the best option?09:22
brtknri think using threading to write to the same file but capture the two streams separately is the winner for me09:22
brtknrsince there might be a genuine deploy_stderr which will always resolve to being empty09:23
strigaziI'm only a little scared that we make a critical component for us more complicated09:23
strigaziThe main reason for this is backwards compatibility?09:23
brtknrit works though... threading is not a complicated thing...09:24
strigaziok if it works09:24
flwang1i kind of share the same concern as strigazi09:25
flwang1and personally, i'd like to see we merge all the things back to the heat-agents repo09:25
*** mgoddard has joined #openstack-containers09:25
flwang1it would be nice if we can share the maintenance of the heat-container-agents09:26
brtknrThe parallel change Rico suggested doesn't work as it reads stderr first and then consumes stdout all at once at the end09:26
strigazi+1 ^^09:26
*** udesale_ has joined #openstack-containers09:27
brtknrI'm happy to share the maintenance burden with the heat team... looks like they have even incorporated some tests09:27
strigaziIMO The best options are: two files and different outputs (as proposed by brtknr initially) or one file and duplicated output in heat09:28
flwang1so my little tiny comment for this patch is, please collaborate with heat team to make sure we are not far away from the original heat-agents code09:28
strigaziwe can try threading if you want, it produces exactly what we need.09:29
brtknrThat is my concern, removing deploy_stderr feels like cutting off an arm from the original thing09:29
strigaziflwang1: I think heat follows us in this case09:29
*** udesale has quit IRC09:30
flwang1strigazi: good09:30
flwang1i don't really care who follows who, TBH, i just want to see we're synced09:30
strigaziflwang1: they follow == it was low priority for them09:31
flwang1ok, fair enough09:31
strigaziwhile for us it is high, we can sync of course.09:31
brtknrplease test the threading implementation, its available at brtknr/heat-container-agent:ussuri-dev09:32
brtknri have tested with both coreos and atomic and it works09:32
strigazii will, let's move this way then.09:33
strigazilast thing to add09:33
strigaziI was hesitant because the number of bugs users found instead of us was very high in train.09:34
strigazilet's move on09:34
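
For reference, the approach brtknr describes above — one thread per stream so deploy_stdout and deploy_stderr stay separate while both land in the same log — can be sketched roughly as below. This is an illustration of the technique, not the actual heat-container-agent code; the function name and log path are made up.

import subprocess
import threading


def _pump(stream, sink, combined, lock):
    # Read line by line until EOF; copy each line into the per-stream
    # buffer and append it to the shared combined log under a lock.
    for line in iter(stream.readline, ''):
        sink.append(line)
        with lock:
            combined.write(line)
            combined.flush()
    stream.close()


def run_and_capture(cmd, combined_log_path='/var/log/heat-config.log'):
    out_buf, err_buf = [], []
    lock = threading.Lock()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    with open(combined_log_path, 'a') as combined:
        threads = [
            threading.Thread(target=_pump, args=(proc.stdout, out_buf, combined, lock)),
            threading.Thread(target=_pump, args=(proc.stderr, err_buf, combined, lock)),
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        status = proc.wait()
    # deploy_stdout, deploy_stderr, deploy_status_code stay separate
    # even though the combined log interleaves both streams.
    return ''.join(out_buf), ''.join(err_buf), status


if __name__ == '__main__':
    stdout, stderr, rc = run_and_capture(['sh', '-c', 'echo ok; echo oops >&2'])
    print('deploy_stdout:', stdout, 'deploy_stderr:', stderr, 'deploy_status_code:', rc)
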
flwang1strigazi: should we try to enable the k8s functional testing again?09:34
strigazino09:34
flwang1or at least put it on our high priority?09:34
strigaziwe can't09:35
flwang1still blocked by the nested virt?09:35
strigazilack of infra/slow infra09:35
flwang1CatalystCloud hasn't enabled nested virt yet09:36
flwang1but we will in the near future, then we may be able to contribute the infra for testing09:36
flwang1CI i mean09:36
strigazisounds good09:36
flwang1move on?09:37
strigaziyeap09:37
flwang1#topic Release 9.3.009:38
*** openstack changes topic to "Release 9.3.0 (Meeting topic: magnum)"09:38
strigaziWe need some fixes for logging09:39
strigazidisable zincati updates09:39
strigaziand fix cluster resize (we found a corner case)09:39
flwang1if so, we can hold it a bit? brtknr09:40
brtknrflwang1: yeah no problem09:40
brtknrthats why I wanted to ask you guys if there was anything09:40
flwang1i appreciate that09:41
flwang1strigazi: anything else?09:41
strigazithe logging issue is too serious for us. Heavy services break nodes (fill up the disk)09:42
strigazithat's it09:42
brtknrhow did you choose a value of 50 million?09:42
strigaziit is strange you haven't encountered it09:42
strigazi50μ09:42
strigazi50m09:42
brtknrAh 50m09:43
strigazimega bytes09:43
strigaziI think it is reasonable. Can't explode nodes09:43
brtknrshould this be configurable?09:43
flwang1can you explain more? is it a very frequent issue?09:43
strigaziIt is not aggressive for reasonable services. I mean the logs will stay there for long09:44
strigaziif a service produces a lot of logs and they are not rotated, the disk fills up.09:45
strigazior creates pressure09:45
brtknrso do you think this option should be configurable?09:46
brtknrwith a default value of 50m?09:46
brtknror is that overkill?09:46
strigaziit feels like overkill09:46
*** pcaruana has quit IRC09:46
strigazithe nodes are not a proper place to hold a lot of logs09:46
strigazithis is not an opinion :)09:47
strigaziwhat do you think?09:47
flwang1the log is for the pod/container, right?09:48
strigaziand for k8s services in podman09:48
flwang1ok, max 100 pods per node, so 50m * 100 = 5G, that makes sense for me09:48
flwang1and i think it's large enough09:49
flwang1as the local log09:49
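
For context, docker's json-file log driver defaults to max-size=-1 (unlimited), which is why a chatty, unrotated service can fill a node's disk. A minimal stand-alone sketch of the kind of limit being proposed, assuming it is applied through /etc/docker/daemon.json; the actual patches touch the drivers' own config fragments rather than this file:

# Hypothetical equivalent of the 50m limit discussed above.
cat > /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "5" }
}
EOF
systemctl restart docker
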
*** xinliang has joined #openstack-containers09:50
strigazimove on?09:50
flwang1#topic https://review.opendev.org/#/c/712154/ Fedora CoreOS Configuration09:50
*** openstack changes topic to "https://review.opendev.org/#/c/712154/ Fedora CoreOS Configuration (Meeting topic: magnum)"09:50
flwang1strigazi: ^09:50
strigaziI pushed the ignition user_data in a human readable format09:51
strigazifrom that format the user_data.json can be generated09:52
flwang1cool, looks good, i will review it09:52
strigaziwhen do we take this?09:53
strigazibefore my patch of logging or after?09:53
brtknrin https://review.opendev.org/#/c/712154/3/magnum/drivers/k8s_fedora_coreos_v1/templates/fcct-config.yaml  line 167, does $CONTAINER_INFRA_PREFIX come from heat or /etc/environment?09:53
strigazis/of/for/ this is a new pattern for me  now09:53
flwang1is there any dependency between them?09:53
brtknri think we should take this first then the logging09:53
brtknrflwang1: yes, both update fcct-config.yaml09:54
strigazibrtknr: this means both will be backported09:54
brtknryes09:54
strigaziok09:54
flwang1i agree format first09:55
flwang1which can make your rebase easier, i guess09:55
brtknri can +2 quickly if you can address my comment09:55
strigaziI will09:56
brtknrShould we add a test to make sure that the user-data.json generated from fcct-config.yaml matches the one in the commit tree?09:57
brtknr:P09:57
strigazilet's test first though :) you never know. And give flwang1 a chance to review09:57
flwang1thanks :)09:57
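
For anyone following along, the test brtknr suggests boils down to regenerating the user_data from the readable config and diffing it against the committed file. A rough sketch, using the fcct container image from the Fedora CoreOS docs; the output path and the committed filename are assumptions:

podman run -i --rm quay.io/coreos/fcct:release --pretty --strict \
  < magnum/drivers/k8s_fedora_coreos_v1/templates/fcct-config.yaml \
  > /tmp/user_data.json
diff -u /tmp/user_data.json magnum/drivers/k8s_fedora_coreos_v1/templates/user_data.json
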
brtknrstrigazi: what is this cluster resize corner case?09:59
strigazi1. cluster create09:59
brtknris this after stein->train upgrade?09:59
strigazi1.1 with an old kube_version (not kube_tag) in kubecluster.yaml09:59
strigazi2. create a nodegroup10:00
strigazi3. resize default ng10:00
strigazicauses change of user_data10:00
brtknryikes10:01
brtknrdo you have a fix?10:01
brtknrdoes this rebuild the whole cluster?10:01
brtknror just the nodegroups?10:01
brtknrstrigazi:10:02
flwang1hi team, can i close the meeting?10:02
brtknryes10:02
flwang1we can discuss this resize issue offline10:03
strigaziwe have10:03
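
As an aside, the corner case strigazi lists above can be reproduced with plain CLI calls; the names below are placeholders, and the trigger is the old kube_version already baked into kubecluster.yaml rather than anything passed on the command line:

openstack coe cluster create demo --cluster-template k8s-template --node-count 1
openstack coe nodegroup create demo extra-workers --node-count 1
# resizing the default nodegroup is the step that unexpectedly changes user_data
openstack coe cluster resize demo 2
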
flwang1#endmeeting10:03
*** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"10:03
openstackMeeting ended Wed Mar 11 10:03:06 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)10:03
openstackMinutes:        http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-03-11-09.00.html10:03
flwang1thanks10:03
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-03-11-09.00.txt10:03
openstackLog:            http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-03-11-09.00.log.html10:03
brtknrthanks for hosting flwang110:03
flwang1thank you for joining10:03
flwang1i have to go offline, ttyl10:03
strigaziI'm going offline, I will update gerrit and storyboard10:04
brtknrok speak soon10:07
brtknrstrigazi: do you have a fix for a resize issue?10:07
strigaziI will post it when I finish with the logging.10:07
brtknrwe should also try to backport the fix for that10:09
brtknror is it a much bigger change?10:10
brtknrstrigazi:10:10
-openstackstatus- NOTICE: The mail server for lists.openstack.org is currently not handling emails. The infra team will investigate and fix during US morning.10:25
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215410:28
*** markguz_ has joined #openstack-containers10:46
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215410:48
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215410:50
openstackgerritThomas Hartland proposed openstack/magnum master: Add node groups documentation  https://review.opendev.org/71234010:54
yankcrimebrtknr: have you noticed on fedora atomic that kube-apiserver.service fails more often than not on bootstrap?11:00
yankcrimeExecStartPre=/usr/bin/podman rm kube-apiserver (code=exited, status=1/FAILURE)11:00
yankcrimeif you restart that service then it's fine, but at that point cluster creation has already failed11:00
brtknryes, kube-apiserver is allowed 10 mins to start, i guess the container download is taking longer than that atm11:01
brtknrit works when you create a cluster with 1 worker and 1 master11:01
brtknrany more workers, the bandwidth appears to struggle11:01
yankcrimeit's not bandwidth, unless we're being throttled - and i'd be surprised if we're triggering something from just a couple of hosts....11:04
yankcrimealso this cluster creation failed in under 10 minutes11:05
*** udesale_ has quit IRC11:17
*** xinliang has quit IRC11:17
*** xinliang has joined #openstack-containers11:17
brtknryankcrime: hmm11:21
brtknryankcrime: how many workers did you try to create?11:23
yankcrimethis failed on the master before any workers11:23
*** ianychoi has quit IRC11:24
brtknrDo we know why it fails?11:26
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215411:27
*** xinliang has quit IRC11:32
yankcrimewell i see:11:32
yankcrime+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart kube-apiserver11:32
yankcrimeJob for kube-apiserver.service failed because a timeout was exceeded.11:32
yankcrimeSee "systemctl status kube-apiserver.service" and "journalctl -xe" for details11:32
yankcrimein the heat logs11:32
*** xinliang has joined #openstack-containers11:32
yankcrimekube-apiserver.service is active / running11:33
*** xinliang has quit IRC11:33
yankcrimebut one of the execstartpre commands failed11:33
yankcrime  Process: 3371 ExecStartPre=/usr/bin/podman rm kube-apiserver (code=exited, status=1/FAILURE)11:33
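
The unit itself ends up active; what failed is the pre-start cleanup, since "podman rm kube-apiserver" exits non-zero when there is no old container to remove. One possible mitigation, shown here only as a sketch and not as the driver's actual unit file, is systemd's "-" prefix, which tolerates a failing ExecStartPre:

# Drop-in override; note that an empty ExecStartPre= clears any existing
# pre-start commands, so any others in the real unit would need repeating here.
mkdir -p /etc/systemd/system/kube-apiserver.service.d
cat > /etc/systemd/system/kube-apiserver.service.d/tolerate-rm.conf <<'EOF'
[Service]
ExecStartPre=
ExecStartPre=-/usr/bin/podman rm kube-apiserver
EOF
systemctl daemon-reload
systemctl restart kube-apiserver
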
*** xinliang has joined #openstack-containers11:34
*** mkuf_ has joined #openstack-containers11:59
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215412:02
*** mkuf has quit IRC12:02
*** udesale has joined #openstack-containers12:02
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215412:03
*** pcaruana has joined #openstack-containers12:15
*** mkuf has joined #openstack-containers12:29
*** mkuf_ has quit IRC12:32
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215412:35
brtknrstrigazi: just testing your patch gimme 5 mins12:50
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215412:55
*** markguz_ has quit IRC12:58
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215413:10
*** irclogbot_2 has quit IRC13:22
*** irclogbot_2 has joined #openstack-containers13:22
*** dave-mccowan has joined #openstack-containers13:42
*** dave-mccowan has quit IRC13:46
*** zigo has quit IRC13:49
*** markguz_ has joined #openstack-containers13:52
markguz_ brtknr: turns out my problems were selinux related. I manually disabled it on the minions and i'm up and running13:53
markguz_brtknr: is there a way to disable it at cluster creation time?13:53
*** dave-mccowan has joined #openstack-containers13:56
*** vishalmanchanda has quit IRC14:00
openstackgerritSpyros Trigazis proposed openstack/magnum master: Add fcct config for coreos user_data  https://review.opendev.org/71215414:18
openstackgerritSpyros Trigazis proposed openstack/magnum master: fcos-podman: Set max size for logging to 50m  https://review.opendev.org/71212714:18
openstackgerritSpyros Trigazis proposed openstack/magnum master: atomic-podman: Set log imit to 50m  https://review.opendev.org/71215314:18
*** xinliang has quit IRC14:21
*** sapd1_x has joined #openstack-containers14:25
*** vishalmanchanda has joined #openstack-containers14:41
*** ykarel is now known as ykarel|away15:13
openstackgerritSpyros Trigazis proposed openstack/magnum master: k8s-fedora: Set max-size tp 10m for containers  https://review.opendev.org/71247515:18
openstackgerritSpyros Trigazis proposed openstack/magnum master: fcos: Disable zincati auto-updates  https://review.opendev.org/71247615:18
openstackgerritSpyros Trigazis proposed openstack/magnum master: k8s-fedora: Set max-size to 10m for containers  https://review.opendev.org/71247515:38
openstackgerritSpyros Trigazis proposed openstack/magnum master: fcos: Disable zincati auto-updates  https://review.opendev.org/71247615:38
openstackgerritSpyros Trigazis proposed openstack/magnum master: fcos: Disable zincati auto-updates  https://review.opendev.org/71247615:39
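
The zincati patch above (https://review.opendev.org/712476) is about stopping the Fedora CoreOS auto-update agent from rebooting cluster nodes underneath Kubernetes. The documented toggle is a config.d snippet; a minimal stand-alone sketch is below, while the actual change ships the setting via the Ignition user_data:

mkdir -p /etc/zincati/config.d
cat > /etc/zincati/config.d/90-disable-auto-updates.toml <<'EOF'
[updates]
enabled = false
EOF
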
*** sapd1_x has quit IRC15:51
*** zigo has joined #openstack-containers15:52
strigazibrtknr: so15:54
strigaziwhat is it going to be?15:54
strigaziwith the logs issue15:54
strigazibrtknr: this question: why are we passing this here and not as kubelet arg?15:55
brtknrstrigazi: what is the problem you're trying to solve?15:56
strigazibrtknr: docker default15:57
strigazihttps://docs.docker.com/config/containers/logging/json-file/15:57
brtknrif the default for kubelet is already 10m and 5 files, why is it necessary to also apply this to docker?15:57
strigazimax-size15:57
strigaziThe maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k, m, or g). Defaults to -1 (unlimited).15:57
strigazi Defaults to -1 (unlimited).15:57
strigaziunlimited15:57
strigaziplease go to this page: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/15:57
strigazisearch for container-log-max-size15:58
strigazithe docs say: This flag can only be used with --container-runtime=remote.15:58
strigaziI can create a reproducer if you want.15:58
brtknrstrigazi: sorry if i'm misunderstanding15:58
strigazidocker is not a remote runtime15:59
brtknrso containerd is remote?15:59
strigaziyes15:59
strigazihttps://github.com/openstack/magnum/blob/fa45002e21ef6de3b4a9da35d590a4c5b3d0d7a4/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-minion.sh#L27416:00
brtknrjust saw the code16:00
brtknrthanks for the clarification16:00
strigazifor completion:16:00
strigaziPeople asked the same from containerd here:16:01
strigazihttps://github.com/containerd/containerd/issues/335116:01
strigaziBut since the design for the container runtime interface16:01
strigaziThe runtime is not responsible for log rotation16:01
strigaziThe orchestrator is16:01
strigaziin our case k8s and kubelet16:02
strigazidocker-daemon is the OG runtime that is not remote16:02
brtknrstrigazi: thanks16:06
brtknrbtw we should do these changes if container_runtime is not containerd right?16:07
strigazithese changes are done in /etc/sysconfig/docker16:11
strigazithe line above my addition is changing /etc/sysconfig/docker16:11
strigaziwe can do that, but it would make the backport strange16:12
strigaziit is better to do it safely16:12
strigaziI will add a check for the existence of the file16:13
strigaziIt does not make sense to check for containerd at the moment.16:13
strigaziI'm working on the reproducer..16:14
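
To summarise the thread: kubelet's --container-log-max-size only applies to remote runtimes such as containerd, while docker's json-file driver rotates nothing by default, so for the docker runtime the limit has to be set on docker itself. A rough sketch of the guarded edit strigazi describes, with the option string being an illustration rather than the patch's exact wording:

# Only touch /etc/sysconfig/docker where it exists; fcos/containerd hosts
# rely on kubelet's --container-log-max-size/--container-log-max-files instead.
if [ -f /etc/sysconfig/docker ]; then
    sed -i -E 's/^OPTIONS="(.*)"/OPTIONS="\1 --log-driver=json-file --log-opt max-size=10m --log-opt max-file=5"/' /etc/sysconfig/docker
    systemctl restart docker
fi
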
*** flwang has quit IRC16:17
*** ricolin has quit IRC16:25
*** udesale has quit IRC16:41
brtknrstrigazi: gotcha, makes sense16:46
strigaziI left a reply in gerrit16:54
strigazihow to reproduce16:55
strigaziand what to check16:55
strigaziI'm testing now with containerd to post in the patch16:55
*** markguz_ has quit IRC16:58
strigazibrtknr: ^^16:59
brtknrthanks will check17:00
brtknrbut maybe tomorrow as I need to head home now17:00
*** ricolin has joined #openstack-containers17:01
brtknrstrigazi: does podman need to be configured in a similar way?17:43
*** vishalmanchanda has quit IRC18:00
*** markguz_ has joined #openstack-containers18:33
markguz_hey. so is there a label or a setting that allows you to disable selinux when spinning up fedora-core based k8s clusters?18:33
markguz_selinux blocks mounting cinder volumes correctly18:34
flwang1markguz_: could you please explain more how the selinux blocks the cinder volume mounting?18:58
markguz_flwang1: so when selinux is set to enforcing in the minion nodes the volumes do not mount correctly to the containers or at all18:59
markguz_with selinux set to permissive, everything works as expected19:00
flwang1markguz_: could you please create a story on story board to track this? i don't think we have a label for selinux now https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/disable-selinux.sh19:01
markguz_the volumes attach ok, i either got a "timeout waiting for volume to attach or mount" or sometimes it did mount but i could not run "chmod" on the mount path to allow non root to write to it19:01
flwang1so it would be nice if we can figure out the root cause, since keeping selinux enabled is preferable for security reasons, does that make sense?19:02
markguz_i also had a problem with nfs mounts, where i was getting "systemctl not found" when the minion was trying to start statd to mount the nfs.19:03
markguz_however if i ssh'd in and ran start-statd directly it worked... path seems correct in the start-statd script19:04
markguz_flwang1: what's the url for the story board?19:04
flwang1markguz_: https://storyboard.openstack.org/#!/dashboard/stories19:05
flwang1select Magnum as the project when creating19:05
flwang1markguz_: are you using stable/train?19:05
markguz_flwang1: no master19:07
flwang1ok19:07
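
For reference, the manual workaround markguz_ describes amounts to putting the minions into permissive mode; this is only a stop-gap sketch, since keeping SELinux enforcing and fixing the volume labelling is the preferable outcome flwang1 mentions:

setenforce 0   # permissive until the next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persist across reboots
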
*** flwang1 has quit IRC19:12
*** iokiwi has quit IRC19:20
*** rcernin has joined #openstack-containers21:28
*** pcaruana has quit IRC21:36
*** dave-mccowan has quit IRC22:05
*** dave-mccowan has joined #openstack-containers22:09
*** tkaprol has joined #openstack-containers22:17
*** tkaprol has quit IRC22:22
*** jrosser has quit IRC22:31
*** andrein has quit IRC22:31
*** jrosser has joined #openstack-containers22:36
*** andrein has joined #openstack-containers22:36
*** guilhermesp has quit IRC22:36
*** mnaser has quit IRC22:36
*** mnaser has joined #openstack-containers22:39
*** guilhermesp has joined #openstack-containers22:41
*** irclogbot_2 has quit IRC22:47
*** markguz_ has quit IRC22:47
*** irclogbot_1 has joined #openstack-containers22:47
*** openstackstatus has quit IRC22:48
*** mnasiadka has quit IRC22:50
*** mnasiadka has joined #openstack-containers22:54
*** lxkong has quit IRC22:56
*** lxkong has joined #openstack-containers23:02
*** ianychoi has joined #openstack-containers23:06
*** dave-mccowan has quit IRC23:08
*** dave-mccowan has joined #openstack-containers23:10
*** KeithMnemonic has quit IRC23:51
*** KeithMnemonic has joined #openstack-containers23:51
