Wednesday, 2020-02-26

03:49 *** hongbin has joined #openstack-containers
04:39 *** hongbin has quit IRC
05:13 *** ykarel|away is now known as ykarel
05:34 *** vishalmanchanda has joined #openstack-containers
05:41 *** udesale has joined #openstack-containers
05:41 *** udesale has quit IRC
05:41 *** udesale has joined #openstack-containers
05:42 *** ramishra has quit IRC
05:50 *** ramishra has joined #openstack-containers
05:55 *** ramishra has quit IRC
05:56 *** ramishra has joined #openstack-containers
06:24 *** udesale has quit IRC
06:25 *** udesale has joined #openstack-containers
06:47 *** rcernin has quit IRC
07:06 <openstackgerrit> Andreas Jaeger proposed openstack/magnum master: WIP: Convert playbooks to Zuul v3 logic  https://review.opendev.org/709956
07:12 *** udesale has quit IRC
07:12 *** udesale has joined #openstack-containers
07:27 <openstackgerrit> Andreas Jaeger proposed openstack/magnum master: WIP: Convert playbooks to Zuul v3 logic  https://review.opendev.org/709956
07:29 <openstackgerrit> Andreas Jaeger proposed openstack/magnum master: WIP: Convert playbooks to Zuul v3 logic  https://review.opendev.org/709956
07:31 *** AJaeger has joined #openstack-containers
07:32 <AJaeger> is there any value in updating the magnum-dib-buildimage jobs? They have all been disabled in April 2019 and are broken. Should we remove them completely?
07:35 <brtknr> AJaeger: I created a PS in 2018 to remove them, still sitting there
07:36 <brtknr> AJaeger: we all use upstream images now
07:38 <openstackgerrit> Andreas Jaeger proposed openstack/magnum master: WIP: Convert playbooks to Zuul v3 logic  https://review.opendev.org/709956
07:39 <AJaeger> brtknr: thanks for the info.
07:40 <AJaeger> brtknr: Is that change still active? Couldn't find it quickly...
07:42 <AJaeger> brtknr: I'll push a new one...
07:45 <openstackgerrit> Andreas Jaeger proposed openstack/magnum master: Remove buildimage jobs  https://review.opendev.org/709956
07:45 <AJaeger> brtknr: let's see whether I have more luck...
07:47 <openstackgerrit> Andreas Jaeger proposed openstack/magnum master: Remove buildimage jobs  https://review.opendev.org/709956
07:49 <cosmicsound> good day
07:49 <cosmicsound> made some progress with magnum
07:50 <cosmicsound> yet now we get this on the 2 remaining tasks that time out: Create Failed: CREATE aborted (Task create from SoftwareDeployment "kube_cluster_deploy" Stack "lambda-24-cluster-x6hsmrxjbnlg" [a7b86128-aeba-4676-95c7-f1a6a1eba3de] Timed out)
07:50 <cosmicsound> the rest of the cluster is created, including master and minions
07:50 <cosmicsound> any ideas on this?
07:54 <cosmicsound> only these 2 tasks fail:
07:54 <cosmicsound> https://mdb.uhlhost.net/uploads/d433c085a2536a9b/image.png
07:55 <cosmicsound> https://mdb.uhlhost.net/uploads/94aaafc60e4d3bd3/image.png this is the current stack topology.
08:03 *** ykarel is now known as ykarel|lunch
08:18 *** udesale has quit IRC
08:22 *** rcernin has joined #openstack-containers
08:39 *** udesale has joined #openstack-containers
08:53 *** flwang1 has joined #openstack-containers
08:53 <flwang1> brtknr: strigazi: hi
08:53 <brtknr> flwang1: hi
08:54 <brtknr> Just me who has things on the agenda? lol
08:55 <flwang1> brtknr: that's cool
08:56 <flwang1> the heat patch has been merged
08:56 <flwang1> i have proposed to cherry-pick it to train
08:56 <brtknr> flwang1: nice :) good to hear
08:57 <strigazi> hello
08:57 <flwang1> strigazi: long time no see
08:58 <brtknr> o/
08:58 <AJaeger> flwang1, strigazi, could you review https://review.opendev.org/709956, please? I'd like to remove the legacy jobs from our configs...
08:59 <flwang1> AJaeger: +2ed
08:59 <strigazi> AJaeger: +2
09:00 <flwang1> #startmeeting magnum
09:00 <openstack> Meeting started Wed Feb 26 09:00:12 2020 UTC and is due to finish in 60 minutes. The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00 *** openstack changes topic to " (Meeting topic: magnum)"
09:00 <openstack> The meeting name has been set to 'magnum'
09:00 <flwang1> #topic 0 worker cluster
09:00 *** openstack changes topic to "0 worker cluster (Meeting topic: magnum)"
09:00 <flwang1> brtknr: ^
09:01 <brtknr> yes, so i have the patch up, it works, just need to fix the tempest test so that functional-api passes
09:01 <brtknr> it allows existing clusters to remove the default-worker too
09:01 <flwang1> brtknr: did you try to add a node group to the 0 worker cluster?
09:02 <strigazi> this needs at least a microversion?
09:02 <flwang1> strigazi: +1
09:02 <brtknr> i microversioned Nodegroups up
09:03 <flwang1> it's an important api change, so a microversion would be good
09:03 <brtknr> what else do i need to microversion?
09:04 <brtknr> see https://review.opendev.org/#/c/709721/2/magnum/objects/nodegroup.py
09:04 <strigazi> brtknr: this is an object version :)
09:04 <brtknr> yes i added a nodegroup to the zero worker cluster
09:04 <flwang1> brtknr: we need an api microversion
09:04 <strigazi> https://review.opendev.org/#/c/709721/2/magnum/api/controllers/v1/cluster.py@531
09:05 <brtknr> nice thing is, a nodegroup with zero workers can linger in the cluster until needed
09:05 <brtknr> which i think is quite nice
09:06 <brtknr> ok i will take a look at that
09:06 <brtknr> but in principle, you are both happy to take it?
09:06 <flwang1> brtknr: https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/cluster_actions.py#L84
09:06 <flwang1> brtknr: i'm happy to get it in
09:07 <strigazi> brtknr: https://github.com/openstack/magnum/blob/master/magnum/api/controllers/versions.py#L48
09:07 <strigazi> these are api versions ^^
09:07 <brtknr> ok cool, thanks for the links :)
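To make strigazi's distinction concrete: an object version (as in nodegroup.py above) tracks the schema of Magnum's internal RPC objects, while an API microversion gates what a client may request. Below is a minimal sketch of a microversion guard for the zero-worker feature, assuming a hypothetical 1.10 cut-off; the names and the check are illustrative, not Magnum's actual code.

```python
# Illustrative sketch only, not Magnum's actual implementation: an API
# microversion guard for the zero-worker feature. The 1.10 cut-off and
# the function are hypothetical; Magnum's real checks live in
# magnum/api/controllers/versions.py (linked above).

MIN_VERSION_FOR_ZERO_NODE_CLUSTERS = (1, 10)  # hypothetical microversion


def validate_node_count(node_count, client_version):
    """Reject node_count=0 for clients on older microversions."""
    if node_count == 0 and client_version < MIN_VERSION_FOR_ZERO_NODE_CLUSTERS:
        raise ValueError(
            "node_count=0 requires OpenStack-API-Version: "
            "container-infra %d.%d" % MIN_VERSION_FOR_ZERO_NODE_CLUSTERS)


validate_node_count(3, (1, 9))    # allowed on any version
validate_node_count(0, (1, 10))   # allowed: client opted in
# validate_node_count(0, (1, 9))  # would raise ValueError
```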
09:07 <flwang1> next topic?
09:07 <brtknr> will post another PS later today
09:08 *** ykarel|lunch is now known as ykarel
09:08 *** pcaruana has joined #openstack-containers
09:08 <flwang1> #topic labels merging
09:08 *** openstack changes topic to "labels merging (Meeting topic: magnum)"
09:08 <brtknr> sure, let's move on
09:08 <flwang1> strigazi: do you know what's the process about the labels merging?
09:08 <brtknr> I'd like to revive this in some way as I expected "merge" to be the default behaviour, but looks like all labels from the cluster template get discarded, which is a bit silly... defeats the point of having a cluster template in the first place. I'd like to see a merge behaviour by default going forward, with an option to replace. Also like the suggestion of a "-" suffix to delete existing labels from the cluster template.
09:09 <brtknr> copy paste from https://review.opendev.org/#/c/621611
09:09 <strigazi> flwang1: we had a discussion internally in December but we never managed to push
09:09 <strigazi> something cleaner than the -- and ++
09:09 <flwang1> strigazi: do you have a private patch for this?
09:09 <strigazi> I can try to dig it up again
09:10 <strigazi> we don't have a patch
09:10 <strigazi> flwang1: fyi our code has always been public
09:10 <flwang1> strigazi: it would be much appreciated if you can follow this
09:10 <flwang1> personally, i'd like to see us get this fixed in this cycle
09:10 <strigazi> flwang1: the team member pushing this is leaving, we need to regroup
09:11 <brtknr> i quite like the label_policy metalabel
09:11 <flwang1> strigazi: ok, i see. and please let me know if there is anything i can help with
09:11 <flwang1> brtknr: good to know you like the idea :)
09:12 <brtknr> we kind of need this soon so happy to update Ricardo's patch if no objections
09:12 <strigazi> label_policy metalabel? omg
09:12 <strigazi> what is this?
09:13 <flwang1> strigazi: do you want to lead this or are you happy to let brtknr take it?
09:13 <strigazi> can you give me two days to discuss with Ricardo?
09:13 <strigazi> if we can't push, do as you want
09:13 <flwang1> strigazi: we don't have to dig into the details at this meeting
09:13 <flwang1> strigazi: sounds great
09:13 <flwang1> brtknr: happy?
09:13 <brtknr> Thanks Feilong. Also not too happy, the code gets weird with this.
09:13 <brtknr> There seem to be a few cases when passing labels on cluster create:
09:14 <brtknr> 1. we want to replace all cluster template labels with this new set
09:14 <brtknr> 2. we want to override some labels, and keep the cluster template values for others
09:14 <brtknr> 3. we want to drop some labels from the cluster template set
09:14 <brtknr> Would this work?
09:14 <brtknr> cluster-template labels: labelx=valuex,labely=valuey
09:14 <brtknr> 1. --labels label_policy=replace,label1=value1,label2=value2 -> label1=value1,label2=value2
09:14 <brtknr> 2. --labels label_policy=merge,labelx=valuexyz,label1=value1 -> labelx=valuexyz,labely=valuey,label1=value1
09:14 <brtknr> 3. --labels label_policy,labelx= -> labelx=,labely=valuey
09:14 <brtknr> There's a corner case between not providing a label at all and providing an empty value - wsme seems to have the Unset type for this case. If we want to follow this, we could add a '-' suffix to the label name to explicitly say the label should be dropped from the list. kubectl label works like this.
09:14 <brtknr> The 'label_policy' approach works and i think i'm ok with it, though it's not perfect.
09:14 <brtknr> sorry, more copy paste
09:14 <brtknr> see the 8th comment on https://review.opendev.org/#/c/621611/3
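A minimal sketch of the label semantics proposed above, assuming a label_policy metalabel with "merge" as the default, "replace" to discard template labels, and a kubectl-style "-" suffix to drop one label; resolve_labels and its defaults are illustrative, not the code under review.

```python
# Illustrative sketch of the proposed label_policy semantics, not the
# actual code under review. Assumes "merge" is the default policy,
# "replace" discards all template labels, and a trailing "-" on a key
# drops that label from the template set (kubectl-style).

def resolve_labels(template_labels, user_labels):
    user_labels = dict(user_labels)  # don't mutate the caller's dict
    policy = user_labels.pop("label_policy", "merge")
    if policy == "replace":
        return user_labels  # template labels discarded entirely
    merged = dict(template_labels)
    for key, value in user_labels.items():
        if key.endswith("-"):        # deletion marker
            merged.pop(key[:-1], None)
        else:
            merged[key] = value
    return merged


template = {"labelx": "valuex", "labely": "valuey"}
resolve_labels(template, {"label_policy": "merge",
                          "labelx": "valuexyz", "label1": "value1"})
# -> {'labelx': 'valuexyz', 'labely': 'valuey', 'label1': 'value1'}
resolve_labels(template, {"label_policy": "replace", "label1": "value1"})
# -> {'label1': 'value1'}
resolve_labels(template, {"labelx-": ""})
# -> {'labely': 'valuey'}
```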
09:15 <flwang1> brtknr: can we let strigazi discuss this with their team members and he will get back to us
09:15 <flwang1> (22:13:24) strigazi: can you give me two days to discuss with Ricardo?
09:15 <flwang1> (22:13:33) strigazi: if we can't push, do as you want
09:15 <brtknr> ah sorry, didn't see that
09:15 <brtknr> yeah sure
09:16 <flwang1> thanks, strigazi
09:16 <flwang1> let's move on
09:16 <flwang1> #topic retag heat-container-agent
09:16 *** openstack changes topic to "retag heat-container-agent (Meeting topic: magnum)"
09:16 <flwang1> brtknr: ^
09:17 <brtknr> thoughts on retagging the current ussuri-dev hca as stable-train?
09:17 <flwang1> brtknr: why?
09:18 <brtknr> for the logs mainly
09:18 <strigazi> I can do it now with train-2
09:18 <strigazi> never retag
09:18 <brtknr> ok sounds good
09:18 <strigazi> ever
09:18 <brtknr> train-2 is fine
09:19 <cosmicsound> is this a replacement for ussuri-dev?
09:19 <flwang1> can we use stable-train.2?
09:19 <strigazi> https://hub.docker.com/layers/openstackmagnum/heat-container-agent/train-stable-1/images/sha256-c8af38b06b38921e5328291acfe92f63a97ad85c806e54531c70aaf58863b64f?context=explore
09:19 <strigazi> train-stable-2
09:20 <flwang1> fine
09:20 <brtknr> cool thanks
09:20 <strigazi> we have train-stable-1 already
09:20 <flwang1> strigazi: all good
09:20 <flwang1> i'm ok with that
09:21 <strigazi> done
09:21 <brtknr> thanks :)
09:21 <flwang1> move on?
09:22 <brtknr> yep
09:22 <flwang1> #topic ansible os_coe_cluster
09:22 *** openstack changes topic to "ansible os_coe_cluster (Meeting topic: magnum)"
09:23 <brtknr> i found a bug in os_coe_cluster where it tries to access a uuid which doesn't exist, not sure how this got through, must be a change in the openstacksdk code
09:24 <flwang1> i will take a look later
09:24 <flwang1> no idea what happened now :)
09:24 <brtknr> ok thanks
09:25 <flwang1> #topic another train release?
09:25 *** openstack changes topic to "another train release? (Meeting topic: magnum)"
09:25 <flwang1> i prefer to use 9.3.0, i'm not a fan of x.y.1
09:26 <brtknr> as you prefer
09:26 <flwang1> and it's a long list which deserves a bigger release version
09:26 <flwang1> strigazi: ^
09:26 <flwang1> brtknr: i will review the list tomorrow
09:26 <strigazi> I +2 all of them
09:26 <brtknr> cool, i am pretty sure i have missed off some of the patches
09:27 <brtknr> from master
09:27 <strigazi> do 9.3, no prob
09:28 <brtknr> ok thanks
09:28 <brtknr> next topic
09:28 <flwang1> #topic Post install manifest https://review.opendev.org/676832
09:28 *** openstack changes topic to "Post install manifest https://review.opendev.org/676832 (Meeting topic: magnum)"
09:28 <flwang1> strigazi: ^ are you still OK with this idea?
09:30 <strigazi> sure, opt-in right?
09:30 <flwang1> strigazi: sure, it's opt-in
09:31 <flwang1> brtknr suggested adding a label along with the config
09:31 <strigazi> and let users do whatever they want?
09:33 <flwang1> technically, the user can install something when creating the cluster with the label
09:33 <strigazi> brtknr: can you defend the label option?
09:33 <flwang1> but it's not in my initial plan
09:33 <strigazi> have both?
09:33 <brtknr> i prefer the label option tbh
09:33 <strigazi> I don't get it
09:34 <flwang1> the original requirement for this is: as a public cloud, we'd like to pre-install the storage class
09:34 <brtknr> i mean, i don't think we'd ever use the config option, just the label option
09:35 <strigazi> the label option for the user is not compatible with the storageclass use case
09:35 <strigazi> in the sense that the user will forget
09:35 <brtknr> but i can see that the label option presents the risk that a user creating the cluster can install anything they want as root
09:35 <strigazi> will have typos in the SC and so on
09:36 <flwang1> and then break the cluster bootstrap
09:36 <flwang1> can we hold the label a bit until we really need it?
09:36 <brtknr> yes we can hold the label
09:36 <strigazi> we can do the config as opt-in first and then we see
09:36 <brtknr> happy to take the config option for now
09:37 <flwang1> cool, thanks, i will post a new patch set
09:37 <flwang1> a very productive meeting when we have strigazi
09:37 <brtknr> lol yeah the last one got cancelled because you weren't here
09:38 <strigazi> this was a good one :) I was sick last week, sorry
09:38 <flwang1> strigazi: i'm sorry to hear that, i hope you're well now
09:38 <brtknr> coronavirus?
09:38 <strigazi> all good now, stand by for coronavirus :)
09:38 <flwang1> strigazi: btw, will you go to Vancouver?
09:38 <strigazi> brtknr: no, just a mild flu
09:38 <strigazi> flwang1: I don't think so
09:39 <strigazi> kubecon AMS if it isn't cancelled
09:39 <flwang1> strigazi: ok, then i probably won't go either if i don't have you and brtknr
09:39 <flwang1> kubecon at Boston?
09:39 <strigazi> the next US one is Boston? I don't know about the US.
09:40 <strigazi> next month is Amsterdam
09:40 <flwang1> ah, i see.
09:40 <flwang1> i'll probably go to the Boston one
09:40 <flwang1> will see
09:40 <strigazi> oh team, I think I found a cli bug yesterday
09:40 <flwang1> strigazi: what's that? do you have a patch?
09:41 <strigazi> openstack coe cluster resize --nodes-to-remove=0,3 doesn't work. openstack coe cluster resize --nodes-to-remove=0 --nodes-to-remove=3 works
09:41 <strigazi> need to double check
09:41 <strigazi> with index or uuid it should be the same.
09:42 <strigazi> keep it in mind. I will test
09:42 <flwang1> cool
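A plausible explanation for the resize behaviour strigazi describes, assuming (this is unverified) that the client declares --nodes-to-remove with argparse's append action: repeating the flag accumulates a list, while a comma-separated value arrives as the single string "0,3". A standalone demo of that parsing difference:

```python
# Standalone demo of the suspected parsing behaviour. The assumption
# that the real CLI uses action="append" for --nodes-to-remove is not
# verified here.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--nodes-to-remove", action="append", dest="nodes")

print(parser.parse_args(["--nodes-to-remove=0,3"]).nodes)
# -> ['0,3']   one opaque string, not two node references
print(parser.parse_args(["--nodes-to-remove=0", "--nodes-to-remove=3"]).nodes)
# -> ['0', '3']  two separate entries, which is why this form works
```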
09:42 <flwang1> guys, anything else?
09:42 <strigazi> I'm good
09:42 <flwang1> cool, thank you for joining
09:43 <strigazi> cheers
09:43 <flwang1> i need to go to bed :)
09:43 <flwang1> #endmeeting
09:43 *** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting"
09:43 <strigazi> good night
09:43 <openstack> Meeting ended Wed Feb 26 09:43:14 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
09:43 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-02-26-09.00.html
09:43 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-02-26-09.00.txt
09:43 <flwang1> o/
09:43 <openstack> Log:            http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-02-26-09.00.log.html
09:47 *** flwang1 has quit IRC
10:00 <openstackgerrit> Merged openstack/magnum master: Remove buildimage jobs  https://review.opendev.org/709956
10:01 <openstackgerrit> Merged openstack/magnum stable/train: [k8s] Fix instance ID issue with podman and autoscaler  https://review.opendev.org/709773
10:04 <openstackgerrit> Andreas Jaeger proposed openstack/magnum stable/train: Remove buildimage jobs  https://review.opendev.org/709997
10:05 <openstackgerrit> Andreas Jaeger proposed openstack/magnum stable/stein: Remove buildimage jobs  https://review.opendev.org/709998
10:08 <openstackgerrit> Andreas Jaeger proposed openstack/magnum stable/rocky: Remove buildimage jobs  https://review.opendev.org/710000
10:11 <openstackgerrit> Andreas Jaeger proposed openstack/magnum stable/queens: Remove buildimage jobs  https://review.opendev.org/710001
10:12 <AJaeger> flwang, strigazi, brtknr, thanks for approving 709956, I backported it everywhere ^
10:13 <AJaeger> thanks, brtknr!
10:13 <brtknr> even queens? we haven't touched the queens branch in a while
10:18 <AJaeger> brtknr: I want to remove the parent job, so I need to touch all branches where it is used ;(
10:59 *** ykarel is now known as ykarel|afk
11:23 <brtknr> AJaeger: I've +2ed them all
11:32 *** ianychoi_ is now known as ianychoi
11:41 <AJaeger> \o/
11:52 *** ykarel|afk is now known as ykarel
12:12 <cosmicsound> good day
12:12 <cosmicsound> this is the closest i've got with the magnum deployment
12:12 <cosmicsound> Create Failed: CREATE aborted (Task create from SoftwareDeployment "kube_cluster_deploy" Stack "rho-12-cluster-fcexzbldrtrt" [0b086cba-3d51-4323-a4d7-71b4d4fc07d9] Timed out)
12:13 <cosmicsound> any ideas how to debug this deeper? all machines were created this time, and the entire stack is green beside this one task
12:13 <strigazi> openstack software deployment output show --all --long <SOFTWARE DEPLOYMENT ID>
12:34 *** udesale_ has joined #openstack-containers
12:36 *** udesale has quit IRC
12:49 <cosmicsound> strigazi, i get nothing
12:50 <cosmicsound> https://mdb.uhlhost.net/uploads/8edb6c32a9d233e8/image.png this is while testing also the other deployments
12:50 <cosmicsound> they were not magnum, or not this cluster, since they had complete status
12:51 <cosmicsound> tried: openstack software deployment output show --all --long 8230a90d-067f-462a-b491-e86746c208ff
12:52 <cosmicsound> Stack Resource ID
12:52 <cosmicsound> kube_cluster_deploy
12:52 <cosmicsound> Resource ID
12:52 <cosmicsound> 8230a90d-067f-462a-b491-e86746c208ff
12:52 *** markguz_ has joined #openstack-containers
13:19 <cosmicsound> runc[1593]: /var/lib/os-collect-config/local-data not found. Skipping
13:19 <cosmicsound> i see this repeating
13:20 <cosmicsound> inside the k8s machine
13:21 <cosmicsound> the red hat site says it can be ignored
13:36 *** ykarel is now known as ykarel|afk
13:51 <openstackgerrit> Merged openstack/magnum stable/train: Remove buildimage jobs  https://review.opendev.org/709997
13:51 <openstackgerrit> Merged openstack/magnum stable/stein: Remove buildimage jobs  https://review.opendev.org/709998
13:51 <openstackgerrit> Merged openstack/magnum stable/rocky: Remove buildimage jobs  https://review.opendev.org/710000
14:02 *** ykarel|afk is now known as ykarel
14:02 *** markguz_ has quit IRC
14:08 *** dasp has quit IRC
14:09 *** jmlowe has quit IRC
14:09 *** jmlowe has joined #openstack-containers
14:25 <brtknr> cosmicsound: you can check the log output inside /var/log/heat-config/
14:25 <brtknr> if you are using the latest branch
14:29 *** jmlowe has quit IRC
14:32 *** jmlowe has joined #openstack-containers
15:06 *** ykarel is now known as ykarel|afk
15:26 *** markguz_ has joined #openstack-containers
15:32 <markguz_> hi. i'm running Train and have followed the docs for setup and install. Is it correct that Fedora Atomic 27 is still the recommended image for k8s?
15:34 <markguz_> It seems a bit old, and following the train install and setup docs it doesn't work...
15:41 <markguz_> it seems to get as far as downloading the container images but fails to do so. The system has unrestricted access to the internet
15:41 <markguz_> I tried the coreos image but that seems to require octavia
15:42 <markguz_> has fedora-coreos been added to the supported images yet?
15:52 *** dasp has joined #openstack-containers
16:10 <markguz_> nvm. i figured it out... i've just moved to ssl for all apis, and it seemed my metadata was not working.
16:24 *** udesale_ has quit IRC
16:36 <brtknr> markguz_: use FedoraAtomic29-20191126
16:36 <brtknr> that's the last FedoraAtomic release
16:36 <brtknr> Fedora CoreOS is supported from Magnum 9.1.0
16:37 <markguz_> brtknr: isn't train 9.1.0?
16:38 <brtknr> yes
16:38 <brtknr> 9.2.0 is also available now
16:38 <brtknr> we are working on releasing 9.3.0 soon with more bug fixes
16:38 <brtknr> markguz_:
16:39 <cosmicsound> brtknr, using latest branch
16:40 <cosmicsound> i use that release you mentioned
16:40 <cosmicsound> brtknr, the logs inside the master of k8s?
16:40 <brtknr> did you see the logs cosmicsound
16:40 <cosmicsound> or? i deploy with kolla-ansible
16:40 <brtknr> yes
16:40 <cosmicsound> let me check
16:40 <brtknr> ssh into the master
16:40 <brtknr> tail -f /var/log/heat-config/...
16:41 <markguz_> actually i'm running 9.2.1-dev6
16:43 <markguz_> ok so i'm trying fedora-coreos and there is no /var/log/heat-config folder
16:43 <markguz_> with fedora-coreos the master configured ok, but the minions failed to configure
16:43 <cosmicsound> brtknr, will do it as soon as the reconfiguration ends
16:53 *** flwang has quit IRC
16:53 *** pcaruana has quit IRC
17:00 *** ykarel|afk is now known as ykarel|away
17:04 <brtknr> markguz_: check /var/log/cloud-init-output.log
17:04 <brtknr> looks like your heat-container-agent failed to deploy
17:06 *** pcaruana has joined #openstack-containers
17:18 <markguz_> brtknr: so i deleted the cluster and updated to the master branch of magnum.
17:18 <markguz_> now i get this error: "Resource CREATE failed: BadRequest: resources.kube_masters.resources[0].resources.docker_volume: Availability zone ''"
17:18 <markguz_> and the cluster doesn't even get off the ground.
17:18 <markguz_> is that a new config option maybe?
17:35 <cosmicsound> brtknr, http://paste.openstack.org/show/790040/ it seems i do not have that specific file yet
17:35 <cosmicsound> maybe it was not yet created, i started a new cluster
17:36 <cosmicsound> journalctl -u heat-container-agent won't be the same?
17:43 <cosmicsound> http://paste.openstack.org/show/790041/ here is the journalctl for the heat container agent
17:44 <cosmicsound> that location was not working on my master, i use fedora 29 latest 29.11
17:44 <brtknr> cosmicsound: Check the cloud-init output file in the logs
17:45 <brtknr> cloud-init-output.log
17:47 *** vishalmanchanda has quit IRC
17:56 <cosmicsound> one second, will upload it
17:58 <cosmicsound> http://paste.openstack.org/show/790042/
18:09 <cosmicsound> http://paste.openstack.org/show/790043/ this is the cloud-init.log, previous was cloud-init-output.log
18:34 *** ramishra has quit IRC
18:45 <cosmicsound> Resource DELETE failed: Conflict: resources.network.resources.private_subnet: Unable to complete operation on subnet 5385abdd-accc-4ae8-b7c4-9955ee7222cc: One or more ports have an IP allocation from this subnet. Neutron server returns request_ids: ['req-89f1895a-f64e-45a0-a905-3fc5168111a1']
18:45 <cosmicsound> now this is the first time i get a FAILED_DELETE
18:52 <brtknr> cosmicsound: i am surprised your output of journalctl -u heat-container-agent is empty
19:00 <cosmicsound> It is not empty
19:00 <cosmicsound> http://paste.openstack.org/show/790041/ here is the journalctl for the heat container agent
19:50 <markguz_> brtknr: so the heat agent is failing in fedora-coreos with "line 276: python: command not found"
19:50 <markguz_> i'm running fedora-coreos-31.20200127.3.0
19:52 <markguz_> the exact error is here: http://paste.openstack.org/show/790045/
19:58 <markguz_> it doesn't look like fedora-coreos-31 has python installed
20:45 *** mgariepy has quit IRC
21:03 <brtknr> markguz_: that's funny, i don't have that issue and have been able to create a cluster fine
21:06 <markguz_> brtknr: could it be a heat thing?
21:06 <markguz_> i'm running the master branch of heat also
21:07 <markguz_> i realized i had installed the master branch of magnum with python2 so i just re-installed it with python 3.6
21:11 *** pcaruana has quit IRC
21:11 <markguz_> hmm, now that i have re-installed with 3.6 i'm getting SSLError(SSLError("bad handshake: SysCallError(-1, 'Unexpected EOF')",),)) when i connect to the port
21:21 *** rcernin has quit IRC
21:38 <openstackgerrit> Feilong Wang proposed openstack/magnum master: [fedora-atomic][k8s] Support post install manifest URL  https://review.opendev.org/676832
22:21 <brtknr> flwang u there?
22:44 <markguz_> has anyone run the latest magnum with ssl on the api enabled with python 3.6? I'm running in a virtualenv and with 2.7 ssl works fine, but with 3.6 it gives Unexpected EOF
22:49 *** rcernin has joined #openstack-containers
23:47 *** markguz_ has quit IRC
