Thursday, 2020-11-26

00:04 *** sshnaidm has quit IRC
00:05 *** sshnaidm has joined #openstack-ansible
00:10 *** jamesdenton has quit IRC
00:10 *** jamesden_ has joined #openstack-ansible
02:04 *** sep has quit IRC
02:19 *** sshnaidm has quit IRC
02:24 *** sep has joined #openstack-ansible
02:25 *** sshnaidm has joined #openstack-ansible
02:43 *** cshen has quit IRC
03:38 *** mcarden has quit IRC
04:06 *** raukadah is now known as chandankumar
05:33 *** evrardjp has quit IRC
05:33 *** evrardjp has joined #openstack-ansible
06:22 *** pto has joined #openstack-ansible
06:23 *** pto has joined #openstack-ansible
06:23 *** yourstruly has joined #openstack-ansible
06:23 <yourstruly> hello
06:24 *** yourstruly has quit IRC
06:27 *** YoursTruly has joined #openstack-ansible
06:29 <YoursTruly> hello
07:11 *** miloa has joined #openstack-ansible
07:26 *** rpittau|afk is now known as rpittau
07:31 <YoursTruly> hmm
07:59 <noonedeadpunk> o/
08:01 *** SiavashSardari has joined #openstack-ansible
08:05 <jrosser> morning
08:10 *** cshen has joined #openstack-ansible
08:27 *** jbadiapa has joined #openstack-ansible
08:30 *** andrewbonney has joined #openstack-ansible
08:37 *** tosky has joined #openstack-ansible
08:49 *** yann-kaelig has joined #openstack-ansible
09:36 <admin0> morning
10:12 <admin0> osa daily -- searching for people who have successfully deployed magnum/k8s
10:14 <jrosser> admin0: that would be guilhermesp, and as far as I know the important thing (a starting point for the cluster template) has already been shared
10:14 <jrosser> but magnum is hard, you've just got to systematically work through the logs in the magnum containers, then in the cloud-init and heat container agent logs in the VM it deploys
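jrosser's debugging path (magnum container logs first, then the boot and heat-container-agent logs on the VM magnum creates) can be sketched as a few commands; the container name pattern, log paths, and addresses below are illustrative, not exact:

```shell
# on the controller: find the magnum LXC container and read its logs
lxc-ls -1 | grep magnum          # e.g. <node>_magnum_container_<hash>
lxc-attach -n <magnum-container> -- journalctl -u magnum-conductor -e

# on the cluster VM that magnum/heat booted (Fedora CoreOS user is "core")
ssh core@<master-ip>
sudo journalctl -u heat-container-agent -f   # heat agent progress
sudo journalctl -b                           # early boot failures
```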
10:26 *** gshippey has joined #openstack-ansible
10:42 <admin0> yes . which is why i feel it's time we should have at least one working osa config for k8s
10:49 <kleini> admin0: I have deployed at least K8s with Magnum in Pike correctly. But I think a lot changed between P and U
11:01 <admin0> if you have some free time, even in an aio .. requesting you to give it a 2nd try :)
11:31 <admin0> what is the difference between member and _member_ ?
11:31 <admin0> in the openstack role
11:31 <admin0> and what is generally used
11:31 <admin0> or are they the same ?
11:34 *** fridtjof[m] has quit IRC
11:35 *** ioni has quit IRC
11:35 *** masterpe has quit IRC
11:35 *** csmart has quit IRC
11:37 <SiavashSardari> admin0 I'm not sure but I think _member_ was part of the old roles in oslo policy. I'm updating my policies on all services and _member_ is not in the code
11:38 <admin0> so better to use member and not "_underscore member underscore_"
11:38 *** pto has quit IRC
11:38 <SiavashSardari> I'm thinking about removing that role, but I need to make sure nothing will break
11:38 <admin0> my chat app (hexchat on ubuntu) does not show _ correctly
11:38 <admin0> shows "_underscore_"
11:39 <SiavashSardari> np
11:39 *** pto has joined #openstack-ansible
11:40 <SiavashSardari> yeah generally I recommend using member without the underscores
11:41 <SiavashSardari> admin0 did you try to use the new policies for services?
11:43 <admin0> SiavashSardari, i just use default osa :D
11:43 <admin0> and the services
11:43 <admin0> "try to use new policies for services?" -- not had a chance to look at or even know them
11:44 <SiavashSardari> admin0 oh OK.
11:44 *** masterpe has joined #openstack-ansible
11:44 <admin0> i think for new tags, we can let go of the _member_ role ( just redundant/confusing ) and put that in the documentation
11:45 <admin0> underscore_member_underscore role*
11:50 <SiavashSardari> this is a good idea, I checked and there is no role assignment in default osa with the _member_ role, so I don't think this change will break anything. just to be sure, I'd like to hear if anyone else has any comments on this matter
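The check SiavashSardari describes (no default role assignments using _member_) can be reproduced with the standard openstack client; a hedged sketch, assuming admin credentials are sourced:

```shell
# list every assignment of the legacy _member_ role;
# an empty result suggests it is safe to retire
openstack role assignment list --role _member_ --names

# compare with the current default "member" role
openstack role assignment list --role member --names
```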
11:52 <SiavashSardari> so while we are kinda on the topic let me ask my question too. I am working on updating our policy files for all openstack services. the problem is that service users have project-scoped admin roles, and some services like neutron need more privileged policies, for example to update nova on port state change. my workaround for that is I added the
11:52 <SiavashSardari> service project_id to system_scope policies. my problem with my workaround is I have to run the openstack_openrc role on almost all of my containers.
11:54 <SiavashSardari> I would appreciate it if anyone can help me with this. and also I think we need to eventually add this functionality to osa
12:02 <admin0> "problem is service users have project scope admin roles, some services like neutron need more privileged policies for example to update nova on port state change" -- why is this an issue .. except ansible and the cluster working internally, no one ever sees the passwords ?
12:09 *** ioni has joined #openstack-ansible
12:09 *** fridtjof[m] has joined #openstack-ansible
12:09 *** csmart has joined #openstack-ansible
12:26 <guilhermesp> it looks like magnum is still a big challenge admin0 ? :)
12:26 <guilhermesp> yes I got magnum successfully working with most of the k8s cool features under ussuri
12:35 *** ianychoi_ has joined #openstack-ansible
12:36 *** ianychoi has quit IRC
12:37 *** rfolco has joined #openstack-ansible
12:51 <SiavashSardari> admin0, that is an issue because the new policies enforce some restrictions on service users, maybe I should talk to the oslo.policy guys. anyways, right now there is no leaking of credentials, and like you said it is internal, I just thought maybe there is a better way of handling this, running the openstack_openrc role on all containers seems a bit
12:51 <SiavashSardari> redundant.
13:02 *** dpaclt has joined #openstack-ansible
13:09 *** rfolco is now known as rfolco|ruck
13:42 *** simondodsley has quit IRC
13:42 *** simondodsley has joined #openstack-ansible
13:45 *** pto_ has joined #openstack-ansible
13:46 *** pto_ has quit IRC
13:46 *** pto_ has joined #openstack-ansible
13:49 *** pto has quit IRC
13:55 <admin0> guilhermesp, please share the exact tag you are using and the docs/command line .. that way, we can try to replicate
13:56 <admin0> i am trying, ThiagoCMC is trying .. djhankb is trying
14:03 <guilhermesp> admin0: ThiagoCMC djhankb this should be enough... https://gist.github.com/guilhermesteinmuller/c23722dcabe5e6175fa722b7c278113a  if you face issues and have access to the master, please share the whole heat-config log
14:15 *** cshen has quit IRC
14:16 *** chandankumar is now known as raukadah
14:30 <openstackgerrit> Rafael Folco proposed openstack/openstack-ansible-os_tempest stable/train: Switch tripleo job to content provider  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/761021
14:34 <openstackgerrit> Merged openstack/openstack-ansible stable/ussuri: Fix octavia tempest tests  https://review.opendev.org/c/openstack/openstack-ansible/+/763048
14:36 <openstackgerrit> Rafael Folco proposed openstack/openstack-ansible-os_tempest stable/train: Switch tripleo job to content provider  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/761021
14:39 <admin0> guilhermesp, thank you .. i will start in the evening and post success/failure/logs :)
14:40 <admin0> you mentioned the branch/tag for magnum: fe35af8ef5d9e65a4074aa3ba3ed3116b7322415 .. is that for info .. or do we need to override and use this specific one
14:40 <admin0> if this is not what i get for any reason, how do i tell osa i need this specific one ?
14:46 <openstackgerrit> Rafael Folco proposed openstack/openstack-ansible-os_tempest stable/ussuri: Switch tripleo jobs to content provider  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/761019
14:53 *** NewJorg has quit IRC
14:59 *** NewJorg has joined #openstack-ansible
15:02 *** frickler is now known as frickler_pto
15:02 *** SiavashSardari has quit IRC
15:11 *** mathlin has joined #openstack-ansible
15:12 <ThiagoCMC> guilhermesp, I'm surprised at how big your line to start the Kubernetes cluster with Magnum is...
15:13 <ThiagoCMC> Any idea why something simpler fails, like: `openstack coe cluster template create k8s-cluster-template --image fedora-coreos-32 --keypair testkey --external-network public --dns-nameserver 8.8.8.8 --flavor ds1G --master-flavor ds2G --docker-volume-size 5 --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes` ?
15:13 <ThiagoCMC> I got it from: https://docs.openstack.org/magnum/latest/contributor/quickstart.html (since the Ussuri branch still points to the old Fedora Atomic).
15:29 *** miloa has quit IRC
15:35 *** YoursTruly has quit IRC
15:37 *** cshen has joined #openstack-ansible
15:46 <jrosser> admin0: quite often you need to be running a very recent version of magnum (i.e. newer than your OSA release) to make it work
15:47 <admin0> how do i get/set it jrosser ?
15:47 <jrosser> stuff seems to change here much quicker than the openstack release lifecycle, which is why it's basically impossible to ship a "known working" config with OSA
15:48 <jrosser> magnum_git_install_branch is a variable to put in user_variables
15:48 <jrosser> it specifies a hash of the magnum git repo to install in the magnum service containers
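Pinning magnum as jrosser describes is a one-line override; a sketch of the user_variables entry, using the hash quoted earlier in the discussion as the example value:

```yaml
# /etc/openstack_deploy/user_variables.yml (sketch)
# pin the magnum service to a specific git commit; after changing
# this, re-run the repo-build and os-magnum playbooks so the
# pinned version is rebuilt and deployed into the containers
magnum_git_install_branch: fe35af8ef5d9e65a4074aa3ba3ed3116b7322415
```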
15:52 <admin0> guilhermesp, in your instructions, the download is qcow2, but the add command is using raw
15:53 <admin0> is that an intentional easter egg :)
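The qcow2-download/raw-upload mismatch admin0 spots usually implies a conversion step in between: raw images let a Ceph-backed glance clone copy-on-write instead of flattening qcow2. A hedged sketch (file names and the os_distro property value are assumptions based on common magnum setups, not taken from the gist):

```shell
# convert the downloaded qcow2 to raw, which suits a ceph-backed glance
qemu-img convert -f qcow2 -O raw \
    fedora-coreos-32.qcow2 fedora-coreos-32.raw

# magnum selects its driver from the image's os_distro property
openstack image create fedora-coreos-32 \
    --disk-format raw --container-format bare \
    --property os_distro=fedora-coreos \
    --file fedora-coreos-32.raw
```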
16:03 *** macz_ has joined #openstack-ansible
16:04 <jrosser> noonedeadpunk: any release blocking patches need looking at before my week is over?
16:06 <noonedeadpunk> that one would be awesome to merge https://review.opendev.org/c/openstack/openstack-ansible/+/763908
16:08 <jrosser> new gerrit is taking some getting used to
16:28 <noonedeadpunk> it does....
16:28 <noonedeadpunk> I tried not to it it this week :(
16:28 <noonedeadpunk> *not to touch
16:28 <admin0> looks like a tubelight theme :) .. not sure if such exists
16:30 <admin0> in AIO .. when you run the tests, do you only create volumes or mount them also
16:31 <noonedeadpunk> we're testing VM creation and whether it is operational
16:31 <noonedeadpunk> eventually we run the tempest basic scenario
16:32 <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-os_swift master: Stop to use the __future__ module.  https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/732883
16:43 *** YoursTruly has joined #openstack-ansible
16:43 <YoursTruly> hello
16:45 <YoursTruly> could I get a small hint how to install openstack-ansible on my single node hp580 g7 with centos7 and 4 x 1GB, 2 x 10GB NICs?
16:49 <noonedeadpunk> YoursTruly: I think you need aio in case it's single node
16:49 <noonedeadpunk> you should look at https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html
16:50 <noonedeadpunk> but ussuri is the latest release where we have support for centos 7
16:50 <YoursTruly> can I have br-mgmt on a private network or does it have to be the internet-facing one?
16:50 <noonedeadpunk> it's designed to be a private network
16:50 <jrosser> YoursTruly: the AIO makes all the choices for you and gives you something that 'just works', but it's not a production cloud
16:51 <noonedeadpunk> well yes^
16:51 <jrosser> i would highly recommend having a play with the AIO to familiarise yourself with how everything is plumbed together
16:51 <jrosser> then re-do it if necessary for something more real
16:51 <YoursTruly> I have 2 other nodes waiting in line to add, so AIO is a death trap :D
16:51 <noonedeadpunk> and, you can deploy basic openstack on it just by cloning the repo and running ./scripts/gate-check-commit.sh
16:51 <jrosser> everyone says that they won't do AIO because they want to end up with something better
16:51 <YoursTruly> will check that ./scripts/gate-check-commit.sh
16:52 <noonedeadpunk> well not really as you can expand it
16:52 <noonedeadpunk> gate-check-commit.sh will deploy aio for you as well
16:52 <jrosser> IMHO AIO will get you to your destination quicker even if you throw it away after learning
16:52 <YoursTruly> hmm I was basing my config on aio
16:53 <noonedeadpunk> you can provide services that need to be deployed with positional arguments, like ./scripts/gate-check-commit.sh aio_heat_ceph
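The AIO path noonedeadpunk outlines reduces to a couple of commands; a minimal sketch, assuming a fresh Ussuri-capable host (the scenario name is just the example from the discussion):

```shell
git clone -b stable/ussuri \
    https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
# deploys a complete all-in-one; the positional scenario
# argument selects extra services on top of the base AIO
./scripts/gate-check-commit.sh aio_heat_ceph
```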
16:53 <YoursTruly> all was going smoothly but some nodes could not connect to the internet
16:53 <jrosser> also be aware that centos-7 is also a dead end
16:53 <jrosser> ussuri is the last release that it is possible to support
16:53 <YoursTruly> what about train?
16:53 <noonedeadpunk> it goes before ussuri so it does support it
16:54 <YoursTruly> oh nice, other distributions only support up to train :D
16:55 <YoursTruly> my hardware will not go with centos8
16:55 <YoursTruly> i tried :(
16:55 <jrosser> OSA deploys from source code so we are not limited by the packages produced by other parties
16:55 <jrosser> so long as the python code works and the required versions of libvirt and stuff are available, things are OK
16:56 *** jbadiapa has quit IRC
16:58 <YoursTruly> hmm I made some ansible playbooks to configure networking on my centos7
16:58 <YoursTruly> would that be fine for bridges?
16:58 <noonedeadpunk> well 8gb of ram is not really sufficient indeed
16:58 <YoursTruly> - interface: br-host  # 0 => bond0 | address: 192.168.8.20 | gateway: 192.168.8.1 | type: br | role: node000 master controller | metric: 0 | defroute: yes
16:58 <YoursTruly> - interface: br-mgmt  # 0 => bond1 | address: 172.29.236.20 | gateway: 172.29.236.1 | type: br | role: management | metric: 200 | defroute: yes
16:58 <YoursTruly> - interface: br-vxlan  # 0 => bond2 | address: 172.29.240.20 | type: br | role: vxlan
16:58 <YoursTruly> # - interface: br-vlan  # 0 => bond2 | type: br-unused | role: vlan
16:58 <YoursTruly> # - interface: br-storage  # 0 => bond3 | type: br-unused | role: storage
16:58 <YoursTruly> damn
16:58 <noonedeadpunk> -> paste.openstack.org
17:06 *** mmethot_ has quit IRC
17:11 <YoursTruly> ok I have made a config for my networking
17:11 <YoursTruly> http://paste.openstack.org/show/800469/
17:11 *** yann-kaelig has quit IRC
17:12 <YoursTruly> and based on that put together this openstack_user_config.yml
17:12 <YoursTruly> http://paste.openstack.org/show/800470/
17:15 <YoursTruly> could you look at it?
17:16 *** dasp_ has quit IRC
17:17 *** tosky has quit IRC
17:22 <YoursTruly> so basically I have networking like http://paste.openstack.org/show/800471/
17:24 <admin0> guilhermesp, it did not even get to the cluster template ( heat stack ) step .. just died with internal:error :)
17:24 <admin0> ThiagoCMC, is yours working yet ?
17:26 <guilhermesp> sorry admin0, regarding the raw: it's because our backend is ceph, so raw
17:26 <admin0> that i figured
17:26 *** dasp has joined #openstack-ansible
17:27 <guilhermesp> admin0: logs from magnum-api/conductor?
17:28 <guilhermesp> ThiagoCMC: probably missing some labels :) as jrosser has been saying too, magnum is a matter of getting the right combination of labels depending on the version you're running :)
17:30 *** YoursTruly has quit IRC
17:34 <admin0> i am reconfiguring it with the tag you mentioned
17:35 <guilhermesp> yes. Add it to your user_variables and re-run the os-magnum playbooks
17:38 <ThiagoCMC> Right, I see... This sounds more complicated than advertised... lol
17:38 <guilhermesp> which errors in specific are you seeing ThiagoCMC ?
17:39 <guilhermesp> yeah unfortunately the docs are not so up-to-date
17:39 <guilhermesp> that's why i say magnum is an adventure
17:39 <guilhermesp> but totally doable
17:40 <ThiagoCMC> I really wanna do it! Let me try again and collect some some...
17:40 <ThiagoCMC> some logs... =P
17:40 *** rfolco|ruck has quit IRC
17:41 <openstackgerrit> Merged openstack/openstack-ansible-os_adjutant master: Make role fit to the OSA standards  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/756313
17:42 <ThiagoCMC> guilhermesp, first command worked: `openstack coe cluster template create k8s-ultra-cluster --image fedora-coreos-32 --keypair default --external-network public --dns-nameserver 1.1.1.1 --master-flavor m1.small --flavor r1.large --docker-volume-size 8 --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes`
17:44 *** YoursTruly has joined #openstack-ansible
17:47 <ThiagoCMC> Then, `openstack coe cluster create k8s-test --cluster-template k8s-ultra-cluster --node-count 1` was accepted.
17:48 <ThiagoCMC> But, by default, it has to copy the fedora-coreos-32 image from Glance (Ceph) to the compute node's /var/lib/; since I'm not using the Ceph `vms` pool by default, it takes time here... That *might* be a problem if it times out, let's see.
17:53 <YoursTruly> hello guys
17:53 <YoursTruly> could you give me a hint about the openstack-ansible networking setup
17:53 <YoursTruly> http://paste.openstack.org/show/800471/
17:53 <YoursTruly> will that work?
17:54 <YoursTruly> nothing long in there
17:54 <YoursTruly> so basically I have networking like:
17:54 <YoursTruly>   br-host (internet, 192.168.8.20) -> bond0 -> nic0
17:54 <YoursTruly>   br-mgmt (private, 172.29.236.20) -> bond1 -> nic1
17:54 <YoursTruly>   br-vxlan (private, 172.29.240.20) -> bond2.10 -> bond2
17:54 <YoursTruly>   br-vlan (private, manual) -> bond2.20 -> bond2
17:54 <YoursTruly>   br-vlan (private, manual) -> bond2 (192.168.8.22) -> nic2, nic3
17:54 <YoursTruly>   br-storage (private 10GB, manual) -> bond4 -> sfp0, sfp1
17:54 <admin0> do i have to nuke the repo containers to have it redownload the magnum of that specific version ?
17:56 <YoursTruly> so the thing is I have 4 x 1GB nics named net0, net1, net2, net3 and 2 x 10GB nics named sfp0, sfp1 - kinda have to set up bridges on them
17:56 <admin0> YoursTruly, why is your br-vlan that complex
17:56 <admin0> others look ok
17:56 <admin0> bridge setup is straightforward
17:56 <admin0> netplan currently ( if on ubuntu )
17:57 <admin0> i would do the br-storage and br-vxlan on the 10g, and would create 2 more bonds: net0 and net2 for br-mgmt, and net1 and net3 for br-vlan
17:57 <admin0> should be enough to get going .. where east-west and storage are on the 10g
17:57 <admin0> and mgmt and north-south on the 2g
17:58 <YoursTruly> ok thanks, I was following https://docs.openstack.org/openstack-ansible/latest/user/network-arch/example.html
17:59 <YoursTruly> and I have centos7 xD
17:59 <YoursTruly> hmm so what about this config with two br-vlan
17:59 <YoursTruly> can I skip that?
17:59 *** andrewbonney has quit IRC
18:00 <YoursTruly> I mean it would be nice to have a flat network too
18:01 <admin0> dunno why .. but i always struggled with flat networks .. 8 years of using osa .. zero flat networks
18:01 <admin0> step 1: create the bridges in centos .. do brctl show and paste again
18:01 <admin0> you need to have only 1 of each bridge
18:03 <YoursTruly> will openvswitch bridges work fine?
18:03 <ThiagoCMC> admin0, guilhermesp, the k8s master node came up but the node didn't. It still shows CREATE_IN_PROGRESS, but I'm not seeing Heat trying to launch the node anywhere.
18:03 <YoursTruly> or do I have to struggle with https://www.openstackfaq.com/openstack-ansible-with-openvswitch/
18:04 <admin0> YoursTruly, i am working on a new set of configs for the site
18:04 <admin0> forget ovs for a moment
18:04 <admin0> use bridges
18:04 <admin0> basically for osa, you need 4 bridges .. it can be 4 interfaces, 40, or a single interface and vlan tags
18:05 <guilhermesp> ok ThiagoCMC so "node came up" means you have a master active and running?
18:05 <admin0> all you need is 4 bridges .. so in your case, you can forget the bonds and everything for a bit and just use 4 interfaces and map them to 4 bridges
18:05 <admin0> what i have observed guilhermesp ThiagoCMC is that the master comes up, does nothing when logged in, but the stack shows the master always in the creation process
18:06 <guilhermesp> are you able to ssh into the master?
18:06 <ThiagoCMC> guilhermesp, yes, the master is up and running but no "node".
18:06 <admin0> ThiagoCMC, does the heat template mark the master node as DONE .. or is it still in the creation process ?
18:06 <guilhermesp> ok i see, you mean the k8s node, ok. That means heat-config is stuck at some point
18:06 <YoursTruly> ok :D how do I assign an IP when my external router gateway is 192.168.8.1 and my host is node000
18:07 <YoursTruly> then br-mgmt will be 192.168.8.21?
18:07 <guilhermesp> next step is: log in to the master, look at what is going on inside /var/log/heat-config
18:07 <admin0> YoursTruly, use the IPs in the document .. for br-mgmt, br-vxlan and br-storage
18:07 <guilhermesp> or check the status of heat-container-agent
18:07 <admin0> as they are un-routed networks
18:07 <admin0> but for br-mgmt, you can use any routable ip range that you can ssh into
18:07 <admin0> or you can use another ethernet to have the ips you need to be able to ssh in
18:08 <ThiagoCMC> guilhermesp, the "master" doesn't have the "default" ssh key, is there a default user/pass? lol
18:08 <YoursTruly> ok, so if my node000 has nic0 with 192.168.8.20, then br-mgmt -> nic0?
18:08 <admin0> in most servers, where you have a 1G port and 4x 10g ports, the 1G will be for ssh, internet connectivity etc .. while the 4 ports could be on a single bond, multiple bonds or separate, and are part of br-mgmt, br-etc
18:08 <guilhermesp> i dont think so ThiagoCMC, keys are injected via ignition
18:09 <admin0> YoursTruly, yes
18:09 <admin0> you have to add nic0 to br-mgmt and set the same IP there
18:09 <ThiagoCMC> The key pair is empty
18:09 <ThiagoCMC> in the Horizon display
18:10 <admin0> YoursTruly, you already have br-host which is the ssh ip, right
18:10 <admin0> leave it like that
18:10 <guilhermesp> ideally, create your cluster with your keys and ssh into the master. Otherwise, you can try to find logs in the heat service. But for me, faster debugging happens when we are able to connect to the master and see how heat-config is doing what it needs to do :)
18:10 <admin0> create the 4 extra bridges on the remaining nics
18:10 <YoursTruly> ok thanks :D
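On CentOS 7 (no netplan), the bridge-per-network setup admin0 describes is done with ifcfg files; a minimal hedged sketch for br-mgmt on top of a bond, with the address from the thread and bonding options chosen only as an illustration:

```
# /etc/sysconfig/network-scripts/ifcfg-br-mgmt (sketch)
DEVICE=br-mgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.29.236.20
PREFIX=22
DELAY=0

# /etc/sysconfig/network-scripts/ifcfg-bond1 -- member port
DEVICE=bond1
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br-mgmt
BONDING_OPTS="mode=active-backup miimon=100"
```

After a network restart, `brctl show` should list br-mgmt with bond1 as its port; repeat the pattern for br-vxlan, br-vlan and br-storage.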
18:10 <admin0> guilhermesp, my scripts are almost done with the rebuild
18:10 <admin0> then i can also get on and paste logs
18:15 <YoursTruly> then what about these two vlan interfaces
18:15 <YoursTruly> - network:  container_bridge: "br-vlan"  container_interface: "eth12"  host_bind_override: "eth12"
18:15 <YoursTruly> what do I put in place of host_bind_override?
18:20 <YoursTruly> if I do 1) br-vlan -> bondX -> nicX + nicY, then 2) br-vlan -> ?
18:21 <YoursTruly> is this 2) even needed?
18:23 <admin0> YoursTruly, the first goal is to get the bonds up
18:23 <admin0> and not look into the configs
18:23 <admin0> forget that you have osa configs or the yaml files
18:23 <admin0> that is not the concern here
18:23 <admin0> first get the bonds and bridges up
18:23 <admin0> when it's up then do a brctl show
18:24 <admin0> you don't touch the configs at all .. they are like that for a reason
18:24 <admin0> you have one network card, or 4 or 10 .. the configs remain the same and are not changed ..
18:24 <admin0> so the first goal for you is, on all your servers, to create the bridges and bonds
18:25 <admin0> or vlans or however you want the servers to talk to each other
18:26 <YoursTruly> ok thank you very much, now it's somehow more clear :D
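For reference, the stanza YoursTruly quotes comes from the stock openstack_user_config.yml example; host_bind_override maps the container-side eth12 to a real interface on the host for flat networks. A hedged sketch (the bond name and extra keys are assumptions based on the standard OSA example, not YoursTruly's paste):

```yaml
- network:
    container_bridge: "br-vlan"
    container_type: "veth"
    container_interface: "eth12"
    host_bind_override: "bond1"   # the physical host interface carrying the flat net
    type: "flat"
    net_name: "flat"
    group_binds:
      - neutron_linuxbridge_agent
```

If no flat provider network is needed, this second br-vlan stanza can simply be left out.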
18:30 <ThiagoCMC> guilhermesp, cool, I could ssh into the "master"! The username is "core".
18:31 <ThiagoCMC> There is no "/var/log/heat-config" within it
18:31 <ThiagoCMC> I just got more hardware, gonna try again later, without the bottleneck lol
18:34 <openstackgerrit> Merged openstack/openstack-ansible stable/ussuri: Add magnum tempest URL  https://review.opendev.org/c/openstack/openstack-ansible/+/763908
18:37 <guilhermesp> oh yeah, forgot to mention that the default username is core :)
18:37 <guilhermesp> so how about
18:38 <guilhermesp> systemctl status heat-container-agent
18:47 <admin0> is boot-volume-type required ?
18:47 <admin0> it just says create failed .. internal error .. without even creating a heat stack
18:47 <admin0> doing a pastebin
18:50 <admin0> guilhermesp, this first step is due to: ERROR magnum.drivers.heat.k8s_fedora_template_def [req-d0c4c196-de3e-44e5-afc3-ec150fd6c264 - - - - -] Failed to load default keystone auth policy: FileNotFoundError: [Errno 2] No such file or directory: '/etc/magnum/keystone_auth_default_policy.json'
18:50 <admin0> if osa supplies the role, should it not also create the policy file ?
18:51 <admin0> i mean all other projects work out of the box
18:52 <openstackgerrit> Merged openstack/openstack-ansible master: Bump SHAs for master  https://review.opendev.org/c/openstack/openstack-ansible/+/762762
18:53 <admin0> there is only policy.json .. no file named keystone_auth_default_policy.json
18:53 <admin0> ThiagoCMC, you got keystone_auth_default_policy.json inside /etc/magnum ?
18:54 <admin0> YoursTruly, done ? are the bridges created ? ready for the next step ?
18:57 *** sep has quit IRC
18:57 <admin0> guilhermesp, can you share your keystone_auth_default_policy.json that is inside /etc/magnum ?
18:58 *** sep has joined #openstack-ansible
19:05 <jrosser> admin0: the OSA auth policy was only recently added https://github.com/openstack/openstack-ansible-os_magnum/commit/200dcd89aaba6b2b3e78a16b7f45f18af34408a8
19:07 *** mmethot has joined #openstack-ansible
19:23 <ThiagoCMC> admin0, nope... I'm installing new bond channels in my cloud, there was a bottleneck hehe
19:24 <ThiagoCMC> gonna try again in about 3 hours, I also got a job interview, wish me luck! :-P
19:27 <admin0> best of luck
19:56 <admin0> log file: https://gist.github.com/a1git/da364eca1793c7e13a82a58a2fff2c46
19:59 *** cshen has quit IRC
20:10 <admin0> jrosser, thanks ..
20:10 *** rfolco has joined #openstack-ansible
20:10 *** cshen has joined #openstack-ansible
20:41 *** sshnaidm has quit IRC
20:44 *** gshippey has quit IRC
20:45 *** rpittau is now known as rpittau|afk
21:09 *** tosky has joined #openstack-ansible
21:37 *** cshen has quit IRC
22:04 *** cshen has joined #openstack-ansible
22:27 *** YoursTruly has quit IRC
23:00 *** sshnaidm has joined #openstack-ansible
23:01 *** sshnaidm is now known as sshnaidm|off
23:32 *** rfolco has quit IRC
23:32 *** rfolco has joined #openstack-ansible
23:42 *** YoursTruly has joined #openstack-ansible

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!