Wednesday, 2019-08-28

01:30 <Kuirong> hi all, I would like to know: if I want to install the OpenStack Stein version on CentOS, should I enable the OpenStack Stein repository?
01:30 <Kuirong> because I didn't find "centos-release-openstack-stein" in the installation guide (https://docs.openstack.org/install-guide/environment-packages-rdo.html)
01:38 <cz2> Kuirong: http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-openstack-stein-1-1.el7.centos.noarch.rpm
01:38 <cz2> Kuirong: it's in the CentOS Extras repo
01:38 <cz2> make sure it's enabled
01:42 <cz2> most likely the documentation is not updated
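For reference, a minimal sketch of the steps cz2 describes, assuming a stock CentOS 7 host where the Extras repo ships enabled:

    # confirm the Extras repo is enabled
    yum repolist enabled | grep -i extras
    # install the release package, which configures the RDO Stein repositories
    sudo yum install -y centos-release-openstack-stein
    sudo yum upgrade -y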
01:44 <Kuirong> thanks cz2
05:46 <gregwork> does tripleo/rdo/osp support policy route configurations with os-net-config?
05:48 <gregwork> I'm trying to figure out how to deploy a network config that will correctly set ip route/rule entries for multi-homed servers
05:49 <gregwork> I've got this lovely asymmetric routing issue: both vlan101 and vlan103 on my controllers can reach vlan71 on my switch via its SVIs, and the default route is via vlan101, so packets come in on vlan103 and go out on vlan101. This of course gets logged as martian traffic and dropped until I set rp_filter to 2, but I'd rather figure out how to push an ip rule/route mapping that will survive a stack update
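A hedged sketch of the policy routing gregwork is after, with hypothetical addresses and table number; on a tripleo node, anything set this way by hand is lost on redeploy unless it is pushed through os-net-config or a deploy-time script:

    # the stopgap mentioned above: loose reverse-path filtering
    sysctl -w net.ipv4.conf.all.rp_filter=2

    # the proper fix: give vlan103 its own routing table so replies
    # leave through the interface they arrived on
    echo "200 vlan103" >> /etc/iproute2/rt_tables
    ip route add default via 192.0.2.1 dev vlan103 table vlan103
    ip rule add from 192.0.2.10/32 table vlan103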
07:13 <yoctozepto> watersj: hey, re: kolla, we have fixed that in https://review.opendev.org/677453 - though it only applies to cluster expansion and some cases of redeployment after failure; clean deploys worked normally
07:13 <yoctozepto> let me know whether that fixes your problem
07:13 <yoctozepto> also, join us on #openstack-kolla, we are more vigilant in there :-)
11:16 <LarsErikP> Is there still no client support for neutron port forwarding?
11:16 <LarsErikP> Can't really find much info, besides this pretty old RFE: https://bugs.launchpad.net/neutron/+bug/1811352
11:16 <openstack> Launchpad bug 1811352 in neutron "[RFE] Include neutron CLI floatingip port-forwarding support" [High, In progress] - Assigned to LIU Yulong (dragon889)
11:18 <slaweq> LarsErikP: it seems that this should be added with https://review.opendev.org/#/c/650062/ but it's still not finished
11:21 <LarsErikP> wow.. that was a pretty long history :p
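Until that client work lands, the floating IP port-forwarding REST API (available since Rocky) can be driven directly; a sketch with placeholder UUIDs, an endpoint assumed in $NEUTRON_URL, and a scoped token assumed in $TOKEN:

    # map external port 2222 on a floating IP to 10.0.0.5:22
    curl -s -X POST \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      "$NEUTRON_URL/v2.0/floatingips/<fip-uuid>/port_forwardings" \
      -d '{"port_forwarding": {"protocol": "tcp",
            "internal_ip_address": "10.0.0.5", "internal_port": 22,
            "internal_port_id": "<port-uuid>", "external_port": 2222}}'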
13:36 <ScrumpyJack> in a swift cluster, a node had to be rebuilt and its drives wiped. it's been out of the cluster for over a week. what's the best way to re-introduce it back into the cluster?
13:37 <ScrumpyJack> the ring files have not been touched since the node was wiped
13:37 <DHE> make sure that you also formatted all the drives referenced in the rings, not just the OS drives
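For each data drive in the rings, that amounts to something like the following (a sketch; the device name and the /srv/node mount layout are the usual conventions, not taken from this cluster):

    mkfs.xfs -f /dev/sdb
    mkdir -p /srv/node/sdb
    mount -o noatime /dev/sdb /srv/node/sdb
    chown -R swift:swift /srv/node/sdb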
13:43 <ScrumpyJack> yes, I have done that
13:44 <ScrumpyJack> but starting up swift on that node seems to put a lot of load on the swift cluster
13:44 <ScrumpyJack> will swift try to move all the containers/objects back to that node?
13:45 <ScrumpyJack> (after weeks of that node being down)
13:52 <ScrumpyJack> should I remove the node from the ring files and have swift treat the rebuilt node as new?
13:56 <DHE> well, it's rebuilding all the data it's missing, so yeah, I can see it getting slammed
13:56 <DHE> maybe you just want to bring the weight down to a small number, like 1/10 of its normal value, and raise it slowly over a few days
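A sketch of that weight ramp with swift-ring-builder; the device id and weights here are hypothetical:

    # for each device on the rebuilt node, start at ~1/10 of full weight
    swift-ring-builder object.builder set_weight d42 10
    swift-ring-builder object.builder rebalance
    # push object.ring.gz to every node, let replication settle, then
    # repeat with higher weights (25, 50, 100) over the following days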
13:57 <ScrumpyJack> is it best to do that on all the disks on that node? or perhaps just add a few disks at a time at 1/10th weight?
14:03 <DHE> depends on your capacity, I guess. this would be a network issue
14:08 <ScrumpyJack> a node is 1PB
14:09 <ScrumpyJack> and swift-recon shows 30 nodes
14:20 <DHE> impressive...
14:20 <ScrumpyJack> the node has been out of the cluster for a long time, so swift has created a new third copy of each object, right? so if I remove the node from the ring files and rebalance, what happens?
14:23 <ScrumpyJack> do those objects (the third copy, presumably offloaded somewhere) move somewhere else again? or does swift just mark those third copies as "good, you're staying here now"?
14:25 <ScrumpyJack> DHE: I used the term exabyte for the first time the other day. It was only half an exabyte, but still ... :)
14:25 <DHE> and here I am looking at my 1/2 petabyte ZFS array and thinking it's big....
14:27 <ScrumpyJack> so when a node dies, after a certain time (a week?) swift will create a new third copy of all the objects on that node, right?
14:27 <ScrumpyJack> onto other nodes in the cluster
14:28 <DHE> the risk is that when a file is deleted, swift writes a tombstone object to disk so replication has something physical to look at to understand the point in time the delete happened. but that tombstone is only kept around for a short while, usually about a week
14:28 <ScrumpyJack> and when I add the node back (blank drives), swift will copy those objects back, right?
14:29 <DHE> so if a machine is down for too long and brought back with its disks intact, you can have zombie objects taking up space that aren't supposed to be there, aren't in container listings, etc.
14:29 <DHE> this is why I said you should wipe all disks, not just the OS
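The "short while" DHE mentions is governed by the reclaim_age setting, which defaults to one week; a sketch of where it lives (exact section placement varies a little by swift version):

    # /etc/swift/object-server.conf (container/account servers have the same knob)
    #   reclaim_age = 604800   # seconds; tombstones older than this are reclaimed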
14:30 <DHE> if a node is down, replication will select a machine to act as the temporary replacement, and when the node returns it will be sync'd up
14:33 <ScrumpyJack> right, but can I make the temp replacement permanent and avoid the sync-up?
14:33 <ScrumpyJack> (when I add the node with blank disks back)
15:07 <timburke> oh, this sounds like a fun conversation :-)
15:08 <timburke> "the node has been out of the cluster for a long time, so swift has created a new third copy of each object right?" yes, as I recall. definitely if the node was up but had the drives unmounted
15:09 <timburke> (the main question in my mind is whether we'd replicate to a handoff if the primary is temporarily unavailable, but certainly for all new data, swift will try to write three replicas wherever it can)
15:11 <timburke> "do those objects (the third copy, presumably offloaded somewhere) move somewhere else again?" yes -- swift's ring has a notion of "primary assignments" for objects; if a primary isn't available for a PUT, it'll start going through the remaining devices in a consistent order and write to a "handoff" location
15:12 <DHE> but you can't really take a host out of the ring and be sure that what used to be handoffs get promoted to primary nodes in the order they would be used
15:12 <timburke> when the replicator comes around and finds the handoff data, it'll try to move it back to the primaries
15:13 <timburke> DHE, absolutely correct
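The primary/handoff ordering for any given object can be inspected with swift-get-nodes (the account, container, and object names here are made up):

    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject
    # lists the primary devices first, then the handoff locations in the
    # consistent order described above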
15:14 <timburke> in short, there's got to be a bunch of movement regardless of what you do
15:14 <DHE> and my weight thing was about making it a bit slower as the drives re-populate with the data they're supposed to have, hopefully keeping the disks from getting too slammed with requests during replication
15:16 <timburke> yeah -- but the rebalance is going to move traffic elsewhere. it'll probably improve your queue depths on the freshly-formatted node, but I'm not sure it'll get the cluster to the desired end state any faster
15:18 <timburke> so, ScrumpyJack -- at this point I'd recommend leaving the rings alone. there's a bunch of load on your swift cluster because there's some pent-up work that's been put off. one way or another, it needs to get resolved
15:21 <timburke> if it starts to impact client-facing traffic, you might consider tuning down the replicators -- but personally, I'd be partial toward treating the client impact as back-pressure that gives the cluster enough headroom to get back to full health
15:22 <timburke> unfortunately, it takes time to move 1PB ;-)
15:26 * ScrumpyJack nods
15:29 <watersj> is there a way to import images contained in ceph into a new openstack deployment?
15:30 <gregwork> the new openstack has ceph?
15:30 <gregwork> if so, then you could rbd-mirror
15:31 <watersj> yes, new openstack, existing ceph
15:31 <watersj> the images in there were from a (cough, messed-up install)
15:31 <gregwork> oh
15:31 <gregwork> ceph is external to openstack, I take it
15:31 <gregwork> e.g. not managed ceph
15:32 <watersj> yes, external storage
15:32 <gregwork> I mean, you define the pools where things are; have you tried pointing at your old pools? (maybe after making a copy of them)
15:33 <watersj> getting ready to re-deploy kolla-ansible
15:33 <watersj> I guess I'll find out soon
15:33 <gregwork> so, full disclosure, I'm only really versed in tripleO-based openstack installs
15:33 <gregwork> so this might not work at all for you :)
15:34 <gregwork> theoretically, though... ceph is ceph
15:34 <watersj> thinking worst case, I might need to do some manual db edits in cinder
15:34 <ScrumpyJack> timburke: are there any docs about "tuning down the replicators"?
15:34 <watersj> or I export and reimport
15:35 <watersj> from ceph
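The export/re-import route watersj mentions would look roughly like this; the pool name, image UUID, and file paths are placeholders (glance stores images as RBD images named by their UUID):

    # on a host with ceph client access
    rbd export images/<image-uuid> /tmp/image.raw
    # then register it with the new deployment's glance
    openstack image create --disk-format raw --container-format bare \
        --file /tmp/image.raw imported-image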
15:36 <watersj> gregwork, why did you choose tripleO as your deployment tool?
15:40 <gregwork> it made a lot of sense for doing this in prod
15:40 <timburke> ScrumpyJack, you can fiddle with concurrency/workers: https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L240-L248 and ionice: https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L309-L319
15:41 <gregwork> template-driven deployment with heat managing ansible/puppet, a big-name vendor using it as their distro base.. the ability to scale up/down easily
15:42 <timburke> ScrumpyJack, various replication-related settings on the server may be useful, too: https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L170-L194
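A sketch of what "tuning down" might look like in the [object-replicator] section of object-server.conf; the values are illustrative only, not recommendations:

    #   concurrency = 1                    # fewer parallel replication jobs
    #   rsync_bwlimit = 50M                # cap per-rsync bandwidth
    #   ionice_class = IOPRIO_CLASS_IDLE   # yield disk I/O to client requests
    # restart the object-replicator after changing these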
15:42 <ScrumpyJack> and if I were to leave the ring files alone and just add the node back, to keep the load low, should I add, say, 10 disks at a time?
15:43 <watersj> kolla seemed to have the ability to scale easily; the issue I had was that the mariadb cluster was hosed
15:46 <timburke> ScrumpyJack, IME more spindles make for a happier cluster -- I'd be inclined to just add them all back. you could start by mounting a few at a time, though, and see how the system reacts
15:46 <timburke> do you have a separate replication network?
15:50 <ScrumpyJack> no separate network
15:55 <timburke> might be worth looking into. the big advantage there is that you can saturate the replication network without impacting client traffic much
15:55 <timburke> out of curiosity, what are you using swift for? I love hearing about people's use-cases :-)
16:37 <gregwork> if you deploy a stack with a particular template version, can you update the template version of that stack with a stack update?
16:37 <gregwork> I'm thinking no, given my limited testing
16:37 <gregwork> but was looking for other opinions
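For context, the attempt gregwork describes would look roughly like this (the stack and template names are hypothetical); whether heat accepts a changed heat_template_version on update is exactly the open question here:

    # new.yaml is the same template with a newer
    #   heat_template_version: 2018-08-31
    openstack stack update -t new.yaml mystack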
18:16 <dsneddon> gregwork, Hi, I see in my scrollback that you were asking about policy-based routing with os-net-config last night. Did you get your answer?
18:17 <dsneddon> gregwork, I added policy-based routing fairly recently, so you'll need a recent build. I can find the minimum package that has the code if you want. It's not really fully tested yet.
19:42 <gregwork> dsneddon: no, I didn't
19:43 <gregwork> dsneddon: hmm, so for now I guess I need a post-config hook in my overcloud deploy
19:43 <gregwork> I'm using Red Hat OSP 13 (queens)
19:43 <gregwork> to create the ip route / ip rule entries
19:44 <dsneddon> gregwork, yeah, let me check and see if we backported the PBR support to OSP 13.
19:44 <dsneddon> gregwork, OK, it looks like it only made it into 15+.
19:47 <dsneddon> gregwork, there were some workarounds, like adding the route table identifier in the "route_options" of the route itself. then you need to update the /etc/iproute2/rt_tables file separately, manually or via a script that gets run during deploy.
19:47 <dsneddon> gregwork, I know a few production OSP 13 deployments used that method, before one user requested that we make it possible to create route tables in os-net-config.
19:49 <gregwork> yeah
19:49 <gregwork> my network is breaking pretty badly without a proper policy config
19:49 <gregwork> asymmetric routing and all that
19:49 <gregwork> I've had to set rp_filter to 2
19:49 <gregwork> for the time being
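A sketch of the OSP 13 workaround dsneddon outlines; the table number, name, and subnet are hypothetical, and the rt_tables edit has to happen outside os-net-config (e.g. in a post-deploy script), since only newer releases can create route tables themselves:

    # deploy-time script: register the table id the nic-config route references
    grep -q '^200 storage' /etc/iproute2/rt_tables || \
        echo '200 storage' >> /etc/iproute2/rt_tables
    # the route in the nic-config template would then carry
    #   route_options: "table 200"
    # and the matching rule can be added the same way:
    ip rule add from 192.0.2.0/24 table storage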

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!