Wednesday, 2019-05-29

*** jaypipes has quit IRC00:09
*** takashin has joined #openstack-placement00:17
*** efried has quit IRC00:37
*** efried has joined #openstack-placement00:45
*** tetsuro has joined #openstack-placement01:12
*** tetsuro has quit IRC02:45
*** tetsuro has joined #openstack-placement02:51
openstackgerritMerged openstack/placement master: Allow [a-zA-Z0-9_-]{1,64} for request group suffix  https://review.opendev.org/65741902:53
openstackgerritMerged openstack/placement master: Avoid traversing summaries in _check_traits_for_alloc_request  https://review.opendev.org/66069102:53
openstackgerritMerged openstack/placement master: Use trait strings in ProviderSummary objects  https://review.opendev.org/66069202:53
*** tetsuro has quit IRC02:55
*** tetsuro has joined #openstack-placement02:55
*** tetsuro has quit IRC03:01
*** tetsuro has joined #openstack-placement03:09
*** tetsuro has quit IRC03:12
*** tetsuro has joined #openstack-placement03:20
*** tetsuro has quit IRC03:24
*** altlogbot_3 has quit IRC03:44
*** altlogbot_2 has joined #openstack-placement03:46
*** tetsuro has joined #openstack-placement03:59
*** tetsuro has quit IRC04:04
*** tetsuro has joined #openstack-placement04:26
*** altlogbot_2 has quit IRC04:38
*** altlogbot_2 has joined #openstack-placement04:40
*** altlogbot_2 has quit IRC04:40
*** altlogbot_2 has joined #openstack-placement04:42
*** tetsuro has quit IRC05:12
*** tetsuro has joined #openstack-placement06:02
*** tetsuro has quit IRC06:07
*** tetsuro has joined #openstack-placement06:19
*** tetsuro has quit IRC07:16
*** tetsuro has joined #openstack-placement07:22
*** tetsuro has quit IRC07:41
*** tetsuro has joined #openstack-placement07:47
*** helenafm has joined #openstack-placement07:50
*** tetsuro has quit IRC07:56
*** takashin has left #openstack-placement07:57
*** e0ne has joined #openstack-placement07:59
openstackgerritChris Dent proposed openstack/placement master: Optionally run a wsgi profiler when asked  https://review.opendev.org/64326908:37
*** cdent has joined #openstack-placement09:05
openstackgerritChris Dent proposed openstack/placement master: Modernize CORS config and setup  https://review.opendev.org/66192209:32
*** jaypipes has joined #openstack-placement10:57
*** helenafm has quit IRC12:05
*** e0ne has quit IRC12:16
*** helenafm has joined #openstack-placement12:55
sean-k-mooneycdent: replied to your replies :) but I think this might be the most interesting https://review.opendev.org/#/c/658510/4/doc/source/specs/train/approved/2005575-nested-magic.rst@36313:05
sean-k-mooneyI'm not actually sure we need can_split if we do ^13:05
sean-k-mooney"... always report vcpus and memory_mb (4k memory only) on the compute node RP and report hugepages and PCPUs always on the numa nodes."13:06
sean-k-mooneycdent: oh and I did not say it in the review but yes I'm fine with same_subtree, resourceless request groups, and verbose suffixes13:08
sean-k-mooneyso at least on my part it was not just because can_split was first in the document13:09
cdentcool, thanks13:19
*** mriedem has joined #openstack-placement13:20
*** e0ne has joined #openstack-placement13:24
efriedcdent, sean-k-mooney: o/13:40
sean-k-mooneyefried: o/13:40
efriedI didn't manage to catch up with all the can_split comments even as of yesterday.13:40
efriedbut the one thing that stands out as possibly bacon-saving is:13:40
efriedif we gotta have NUMA and non-NUMA guests on the same host, put VCPUs and non-huge pages on the root RP, and PCPUs and hugepages on the NUMA RPs.13:41
sean-k-mooneyefried: yeah, so if we do that I think we don't need can_split13:42
efriedright13:42
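
(Purely as an illustration of the layout efried and sean-k-mooney sketch above — nothing here is merged behaviour, and how hugepage inventory would be modelled is still an open question in the spec:)

    compute node RP        -> VCPU, MEMORY_MB (4k / "small" pages)
      +-- NUMA node 0 RP   -> PCPU, hugepage inventory
      +-- NUMA node 1 RP   -> PCPU, hugepage inventory
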
efriedand cdent, to answer your question, no, I don't think existing flavors need to change.13:42
sean-k-mooneywe do need to handle that one edge case where you ask for hw:mem_page_size=4k|small13:42
efriedWe were going to need to add code to interpret them anyway13:43
sean-k-mooneybut we can do that in the prefilter that constructs the placement request13:43
efriedsean-k-mooney: is that a done thing?13:43
sean-k-mooneyyes13:43
efriedCan we have an equivalent of cpu_dedicated_set for memory pages?13:43
sean-k-mooneyyou can explicitly request small pages but it also gives the vm a numa topology of 113:43
sean-k-mooneyso we already track memory pages in the resource tracker even for 4k13:44
sean-k-mooneywe just don't use that unless you set hw:mem_page_size=4k or small13:44
sean-k-mooneywhen you set that we both correctly do the accounting and claim it in the resource tracker, and affinitize the vm to the numa node it was allocated from13:45
sean-k-mooneybut as I said that is only a problem if you set that + requested pinned cpus13:46
efriedIs there a config option that sets those things up, or do we slice 'em on the fly?13:46
sean-k-mooneyset which up? the tracking in the resource tracker etc.13:46
sean-k-mooneythat is always done, we just don't always claim it, as for non-numa guests we allow the kernel to float both the memory and the cpus across numa nodes as it sees fit13:47
efriedsean-k-mooney: So what we're suggesting here is to allow the operator to decide which CPUs are going to be for NUMA (cpu_dedicated_set) and non-NUMA (cpu_shared_set) guests.13:49
efriedThat's a new (mostly) config setup per stephenfin's spec13:49
sean-k-mooneyno, cpu_shared_set does not define that13:49
sean-k-mooneywe would have to update the spec to state that if we want to use it that way13:49
efriedsean-k-mooney: I'm saying that's what we're talking about doing to solve the problem at hand.13:49
sean-k-mooneynot quite13:50
efriedisn't it? Or did I miss13:50
efriedapparently I missed13:50
sean-k-mooneyI am saying at the placement level, yes13:50
sean-k-mooneyI'm not saying that you can have numa affinity with floating cpus13:50
sean-k-mooneythat would be a big regression13:50
efriedwhat does that mean, NUMA affinity with floating CPUs?13:51
sean-k-mooneybut that affinity would still be provided by the numa topology filter, not placement13:51
efriedyou mean "my guest can float across all the CPUs in the NUMA node, it's not pinned to specific ones"?13:51
sean-k-mooneyif you request hugepages and nothing else13:51
efriedBut it's still restricted to only CPUs in the NUMA node13:51
sean-k-mooneywe restrict your cores to the cores that the hugepage memory is allocated from and create a numa topology of 113:51
sean-k-mooneyso they still float, but just within the 1 numa node your hugepage memory came from13:52
efriedthat ^ is how it's done today?13:52
sean-k-mooneyyes13:52
sean-k-mooneywhen you set hw:mem_page_size to anything we treat it as if you also set hw:numa_nodes=113:53
efriedso in placement-land, if we put cpu_shared_set on the root RP, we would satisfy ^ this by allocating some number of those VCPUs from the root RP, and the memory thingies from a NUMA node.13:53
sean-k-mooneyyep13:54
sean-k-mooneyand checking the affinity of that would be up to the numa topology filter13:54
sean-k-mooneyi.e. not placement's problem13:54
efriedBut that brings us back to the scenario where we have to fail late if there's not enough VCPUs in one NUMA node to fulfil the request.13:54
efriedhow frequently is this ^ done, vs other NUMA-y configurations?13:55
sean-k-mooneytechnically that could happen but if we claim the cpus in the resource tracker in the conductor that also solves that issue13:55
sean-k-mooneyjust requesting hugepages?13:56
sean-k-mooneyand not pinning13:56
sean-k-mooneyit's done anytime you have a DPDK network backend as it's required for it to work13:56
sean-k-mooneybut cpu pinning is not13:56
efriedAnd how much of a regression is it really if we float *all* the CPUs in that case?13:56
sean-k-mooneyI have raised the idea of breaking that coupling many times and always been shut down13:57
efriedokay13:57
efriedby whom, out of curiosity?13:58
efriedSounds like they would be good people to ask these questions we've been noodling.13:58
sean-k-mooneyintel, windriver and telcos13:59
efriedbtw, I had another idea to mitigate "candidate explosion"13:59
efried(cdent edleafe)13:59
efriedallow an optional :$chunk_size syntax in the value, so the caller can decide on a per-call basis.14:00
efriedcan_split=VCPU,MEMORY_MB:102414:00
cdentmleh14:00
cdenthow smart are we expecting the caller to be14:00
efriedthat's halfway between a 'meh' and a 'bleh'?14:00
cdentthis has been a constant concern on my part14:00
cdentpretty much14:01
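
(For illustration, a minimal sketch of how the proposed can_split value might be parsed — the parameter name and the VCPU,MEMORY_MB:1024 grammar come straight from the discussion above; none of this is merged placement API:)

    def parse_can_split(value):
        """Parse a hypothetical can_split query value such as
        'VCPU,MEMORY_MB:1024' into {resource_class: chunk_size}.
        A chunk size of None means "use some default chunk size"."""
        chunks = {}
        for item in value.split(','):
            rc, sep, size = item.partition(':')
            chunks[rc] = int(size) if sep else None
        return chunks

    # parse_can_split('VCPU,MEMORY_MB:1024') -> {'VCPU': None, 'MEMORY_MB': 1024}
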
sean-k-mooneyefried: by the way, what I think would be fair is not to have affinity for hw:mem_page_size but always have it if you set hw:numa_nodes14:01
efriedWell, the caller in this case is nova. I'm *not* expecting that syntax ever to be put in place by a human.14:01
sean-k-mooneyhum interesting14:01
sean-k-mooneyhow would the operator decide that? or specify that14:01
efriedso for example cdent, hw:mem_page_size translates directly to $chunk_size for MEMORY_MB14:01
sean-k-mooneyefried: that is not enough to make hugepages work in placement14:02
efried(course, it's normally being used to define a NUMA topology, which can_split is specifically not, but...)14:02
sean-k-mooneywe should have separate inventories of each size14:02
cdentbut then you would have to pre-partition14:02
efriedI don't understand how hugepages work. Are they preallocated in some way?14:02
efriedyeah, that.14:02
sean-k-mooneywell it depends on whether you said "I want 1G pages" or "any large page"14:02
sean-k-mooneyefried: yes, preallocated in the kernel, usually on the kernel command line14:03
sean-k-mooneywe do not have to select specific hugepages to give to the guest, but we have pools of them per numa node and just allocate x units of them for a given size14:03
efriedsean-k-mooney: and if my system is set up that way and I just ask for a random amount of MEMORY_MB in a non-NUMA configuration, does it actually consume whole pages?14:04
efriedOr does the memory somehow get to "float" like the CPUs do14:04
sean-k-mooneyefried: no, the default hw:mem_page_size is small, not any14:04
sean-k-mooneyso if that is not set it always uses the smallest page size, which is typically 4k14:05
sean-k-mooneysince the kernel cannot work with only hugepage memory you will never have a case where the only memory is hugepages14:05
sean-k-mooneyif you want guests to only use hugepages on a host then you set the host reserved memory equal to the total amount of 4k memory on the host14:06
sean-k-mooneywhen memory is allocated as hugepages, even when they are not used by anything, it is removed from the free memory reported on the host at the kernel level14:07
sean-k-mooneyso if you allocate 60 1G pages on a 64G host then free will say you only have 4G free14:08
sean-k-mooneywell, minus whatever the OS is using14:08
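
(A tiny sketch of the accounting described above: hugepages are preallocated, e.g. on the kernel command line, so they are carved out of "free" memory whether or not any guest is using them yet. The numbers are the example from the log:)

    total_mb = 64 * 1024            # 64G host
    hugepage_mb = 60 * 1024         # 60 x 1G pages preallocated at boot
    small_page_mb = total_mb - hugepage_mb
    print(small_page_mb)            # 4096 -> roughly the 4G that `free` reports,
                                    # minus whatever the OS itself is using
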
sean-k-mooneyanyway, back to chunk_size14:08
sean-k-mooneydid you envision that in the flavor somewhere? or a config?14:08
efriedI figured the sched filter would add it to the request14:10
cdent[t aPG6]14:10
purplerbot<efried> Sounds like they would be good people to ask these questions we've been noodling. [2019-05-29 13:58:18.035937] [n aPG6]14:10
cdentbecause we keep coming up with edges and corners where _something_ is going to change for someone, no matter what we do, unless we continue to say "for now, don't co-host numa and not", whether we do can_split or not14:10
efriedbased on either something already in the flavor (like a "page size" of some sort - not sure what's there); or perhaps calculated on the fly based on the total amount of memory being requested; or perhaps even a config option that defaults to something "sensible".14:11
sean-k-mooneyefried: well even for 4k pages the minimum allocation in the flavor is 1MB chunks14:12
efriedcdent: I agree with you14:12
sean-k-mooneyand 1G for disk14:12
efriedsean-k-mooney: that's useful information, I would think that should be fed into the step_size in the inventory setup.14:12
sean-k-mooneyso there is a natural minimum granularity in most cases14:12
efriedBut it's not today14:12
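
(For reference, placement inventory records already carry a step_size field; a hypothetical MEMORY_MB inventory encoding that minimum granularity could look like the sketch below — all values are illustrative, and hugepage-backed inventory would presumably want a larger step, e.g. 1024 for 1G pages:)

    memory_mb_inventory = {
        'total': 65536,          # 64G of RAM, counted in MB
        'reserved': 4096,        # held back for the host
        'min_unit': 1,
        'max_unit': 65536,
        'step_size': 1,          # MEMORY_MB is already counted in 1MB units
        'allocation_ratio': 1.0,
    }
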
efriedcdent: The issue I have is even knowing what are the reasonable subset of questions to ask.14:13
efriedIf we could identify a small handful of knowledgeable stakeholders, and have an actual sit-down (hangout) with them for a couple of hours, we could get somewhere.14:13
cdentMaybe we don't have to ask all the questions at once. Have you had a chance to look at what I said on the etherpad?14:13
efriedBut we've speculated on sooo many things at this point, if we put it all in an email, is anyone actually going to read and process it?14:13
efriedyes14:14
cdentMy concern with a "small handful" of stakeholders is that they are often representative of a small but powerful segment that does not represent the commonweal. We need both.14:14
* edleafe catches up14:14
efriedfair enough14:14
edleafeSounds like the classic "i want" vs. "we need"14:15
cdentmaybe start with "how much will you hate it if we continue to say 'do not mix numa and non-numa' and see where that goes"14:15
cdentedleafe: yes14:15
efriedcdent: At this point I'm tempted to say we should do something of limited scope that supports a certain known subset... yes, kind of that --^14:15
edleafecdent: when you say "mix": do you mean on the same host, or having numa/non-numa in the same deployment?14:15
cdentthat's where I'm at too, which is why I was trying to determine if "everything can_split" is a workable thing14:16
cdentedleafe: same host14:16
edleafeugh - no, that's dumb14:16
edleafeI think we agreed that that wasn't going to happen back in Atlanta14:16
cdentdamn big typo: "every feature _but_ can_split, from the spec"14:16
cdentedleafe: are you up to date on the vast commentary on https://review.opendev.org/658510 ? if not, probably best to catch up there first14:17
edleafecdent: no, I'm not up to date on much lately. :)14:17
sean-k-mooneyedleafe: the people that care about numa and non-numa on the same host are typically the edge folks, for context, but even then I'm not totally sold on that argument as they tend to tune everything to use every last resource14:18
cdentefried: so yeah: maybe an option, if it is not damaging, is: same_subtree, resourceless providers, mappings in allocations, complex suffixes and NOT can_split14:19
edleafesean-k-mooney: I'm not surprised14:19
sean-k-mooneyso in general I personally would be fine with saying please continue not to mix numa and non-numa instances14:19
efriedcdent: and a per-compute config option saying use_placement_for_numa14:19
sean-k-mooneyand as cdent mentioned earlier the nova config option for what resources to report under numa nodes is enough to enable that14:19
cdentefried: yeah, something like14:20
sean-k-mooneyefried: that is in the numa in placement spec14:20
efriedand when nova sees a flavor that asks for numa-y things, it adds a required trait for HW_NUMA_ROOT in its own group so we're sure to land on such a host.14:20
sean-k-mooneythere is a per-compute config option that lists the resource classes to report under a numa node14:20
efriedI need to read that spec again...14:20
efriedI've lost track of it.14:21
sean-k-mooneyI'll get the link to the right bit, one sec, I have it open14:21
sean-k-mooneyhttps://review.opendev.org/#/c/552924/14/specs/stein/approved/numa-topology-with-rps.rst@31114:22
sean-k-mooney  [numa]14:22
sean-k-mooney  resource_classes = [VCPU, MEMORY_MB, VGPU]14:22
sean-k-mooneythe default was proposed as empty so you had to opt in to things being reported as numa resources14:22
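
(A minimal sketch of how such an option would typically be declared with oslo.config — the [numa]/resource_classes option is only proposed in the spec linked above, not merged; the names here follow that proposal:)

    from oslo_config import cfg

    numa_group = cfg.OptGroup('numa', title='NUMA resource reporting')
    numa_opts = [
        cfg.ListOpt('resource_classes',
                    default=[],
                    help='Resource classes the virt driver should report on '
                         'NUMA node child providers rather than on the root '
                         'compute node provider.'),
    ]

    def register_opts(conf):
        conf.register_group(numa_group)
        conf.register_opts(numa_opts, group=numa_group)
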
efried++14:23
sean-k-mooneyalthough I think PCPU should be in the list by default14:23
efriedNah14:23
sean-k-mooneygiven that you can only consume it if you have a numa topology14:23
sean-k-mooneybut I'm fine with it being empty too14:23
efriedI can think of a couple use cases right off the top that care about PCPU but don't care about NUMA.14:23
sean-k-mooneyand nova does not allow it today14:24
sean-k-mooneyI have been trying to get nova to allow it since before we introduced cpu pinning14:24
sean-k-mooneythe only reason we haven't is I have not convinced people it's ok to break backwards compatibility, since they can get the same behavior by also setting hw:numa_nodes=114:25
sean-k-mooneyanyway I need to go work on a static cache allocation spec...14:27
efriedcdent: Okay, so if we do this (no mixing, use_numa_for_placement type config), there should be a way for operators to continue to do things the way they are today, i.e. no NUMA topo in placement, rely completely on the NUMATopologyFilter. Conductor-level config opt use_placement_for_numa=$bool too?14:27
* cdent parses14:27
sean-k-mooneyefried: the idea was to not enable the prefilter14:27
sean-k-mooneyi.e. the prefilter is what transforms the request into the numa version14:28
sean-k-mooneybut it will be a cluster-wide thing14:28
efriedyes, exactly sean-k-mooney. if the scheduler (sorry, not conductor) option is use_placement_for_numa=True, we use the prefilter to translate the request into the numa-y placement-y version. If not, not.14:29
efriediow operator has to opt into placement-for-numa by14:29
efried1) setting the compute-level conf opt on each host they want NUMA instances to land on14:29
efried2) setting the sched-level opt to True14:29
sean-k-mooneyefried: sure but the prefilter was also going to have a bool flag14:29
efriedbut they don't have to change their flavors at all, which I think is important.14:29
efriedsean-k-mooney: That *is* the bool flag.14:29
sean-k-mooneyefried: no, my point was all prefilters have a bool flag to enable them14:30
efriedwe must be talking about different prefilters14:30
sean-k-mooneyso rather than use_placement_for_numa there would be an enable_numa_prefilter config14:30
efriedI'm talking about nova.scheduler.request_filter14:30
efriedsure, whatever you want to call it.14:31
sean-k-mooneyefried: the numa spec proposed a prefilter14:31
sean-k-mooneyefried: we were going to add a new one14:31
sean-k-mooneyjust for numa14:31
efriedone boolean at the scheduler level that tells us whether to use placement-y numa-y GET /a_c or not.14:31
sean-k-mooneyand we were going to add a new one for cpu pinning too14:31
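
(A rough sketch of the decision logic for the kind of prefilter being discussed — the flag, helper, and flavor keys checked here are placeholders chosen for illustration; nova's real request filters live in nova.scheduler.request_filter and operate on a context plus a RequestSpec:)

    def flavor_wants_numa(extra_specs):
        """Placeholder check: does the flavor ask for NUMA-y things?"""
        return any(key in extra_specs
                   for key in ('hw:numa_nodes', 'hw:mem_page_size', 'hw:cpu_policy'))

    def numa_in_placement_prefilter(enabled, extra_specs, required_traits):
        """Hypothetical prefilter: opt the request into the NUMA-aware
        placement query only when the cluster-wide flag is on and the
        flavor actually asks for NUMA-y things."""
        if not enabled or not flavor_wants_numa(extra_specs):
            return False
        # Only land on hosts that model NUMA in placement at all.
        required_traits.add('HW_NUMA_ROOT')
        return True
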
cdentit's a shame that we have to turn on "use placementy numay" instead of the presence of appropriate inventory being enough. But I guess we have to translate request specs whatever we do...14:34
efriedyeah, the conductor has to have a way to know that the computes are representing NUMA topo in placement shapes.14:35
*** artom has quit IRC14:35
*** artom has joined #openstack-placement14:35
efriedIf there were a way to use the *same* GET /a_c request syntax to request NUMA-y stuff whether the computes are treed or not... we'd be done.14:36
* cdent nods14:36
efriedcdent: procedurally, we clearly aren't holding up code work pending merging the spec, since arbitrary suffixes already merged...14:36
cdentyeah, this is sparta, or something14:37
efried...but (with all dramatic irony) do you think it would be a good or bad idea to split can_split into its own spec?14:37
cdentI was thinking the same thing earlier today, if what we want is to be able to move independently. If that does happen I would keep can_split on the same gerrit review and move all the other stuff to the new one14:38
efriedyeah, to preserve the massive history14:41
efriedcdent: I don't know what the f to do now. Reached a point of thorough analysis paralysis.14:42
cdenthuh, I was thinking we just made something approaching a plan14:43
cdentwhich is:14:43
cdenta) split can_split to own spec14:43
cdentb) say to ops our next steps on numa (pointing to both specs, indicating can_split is something we'd prefer not to do, ask if they will live)14:44
cdentc) write code14:44
cdentd) align nova's queries with whatever needs to happen (which has to happen anyway)14:45
efriedokay14:46
edleafe[t hiCo]14:47
purplerbot<efried> If there were a way to use the *same* GET /a_c request syntax to request NUMA-y stuff whether the computes are treed or not... we'd be done. [2019-05-29 14:36:00.313264] [n hiCo]14:47
cdentyou give off the vague air of someone not entirely convinced (which is probably totally appropriate)14:47
edleafeefried: You seem to be teasing me...14:48
cdentedleafe: c'mon, you're high, this problem doesn't go away with a different database14:48
edleafecdent: Whether I'm high or not is irrelevant :)14:48
cdentefried: which of those tasks would you like me to take?14:50
efriedcdent: Thanks, I was about to ask14:51
efrieda), and possibly b)?14:51
efriededleafe: I'm not teasing you. Graph db does not figure into my thinking at all, in any context. Because regardless of whether the theory is workable, I don't believe there is a practical path to making it a reality. (And also what cdent said.)14:52
cdentefried: yeah can do both, but tomorrow14:52
efriedcdent: Okay, cool, thank you. I'm pretty far "behind" (cf. previously discussed definition) and that'll help me "catch up".14:53
cdentrad14:53
edleafeefried: And I don't believe that there is a practical path to making nested/shared/etc. a reality with sqla14:53
efriedI may be way off here, but I don't feel like our challenge right now is around "how would we make this work"14:54
efriedit's around defining what we want "this" to look like at all.14:54
cdentyes14:54
edleafeefried: I get that. The difficulty in making a particular choice is always influenced by the dread of implementing it, though14:55
efriedmeh, I'm not worried about that. Mainly because I have faith in tetsuro :)14:55
*** artom has quit IRC14:55
edleafeefried: ...and magic15:00
efriedsean-k-mooney: In your commentary you say that we recommend not mixing numa and non-numa instances - can you point me to the doc that says that?15:15
efriedcause it seems to me that gives us a pretty decent case for saying we're going to simply not allow it at all anymore.15:15
sean-k-mooneyefried: I know it's in some of the downstream NFV tuning guides, I will check and see if it's upstream15:25
sean-k-mooneystephenfin: ^ do you know where this would have been documented upstream?15:25
stephenfinI've no idea if we document it or not15:27
stephenfinIf it was anywhere, it would be in https://docs.openstack.org/nova/latest/admin/cpu-topologies.html15:28
*** altlogbot_2 has quit IRC15:35
*** altlogbot_0 has joined #openstack-placement15:36
*** irclogbot_0 has quit IRC15:36
*** irclogbot_3 has joined #openstack-placement15:38
*** helenafm has quit IRC15:49
cdentefried, edleafe: anyone else, just a proof of concept for now but go to https://cdent.github.io/placeview/ and make uri something like https://burningchrome.com/placement/allocation_candidates?resources=DISK_GB:5&resources1=VCPU:1,MEMORY_MB:1024&resources2=VCPU:1,MEMORY_MB:1024&group_policy=isolate and auth 'admin'15:51
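
(A minimal sketch of issuing cdent's example query directly with python-requests — the URL and parameters are taken from the line above; the token value and microversion are just what this particular demo endpoint expects:)

    import requests

    resp = requests.get(
        'https://burningchrome.com/placement/allocation_candidates',
        params={
            'resources': 'DISK_GB:5',
            'resources1': 'VCPU:1,MEMORY_MB:1024',
            'resources2': 'VCPU:1,MEMORY_MB:1024',
            'group_policy': 'isolate',
        },
        headers={
            'x-auth-token': 'admin',
            'openstack-api-version': 'placement 1.25',  # numbered groups + group_policy need >= 1.25
        },
    )
    print(resp.json()['allocation_requests'])
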
*** artom has joined #openstack-placement16:08
efriedneat16:15
cdentefried: I'll do the email and spec stuff tomorrow morning after any overnight fixups16:18
cdentbut for now, I'm away16:18
efriedo/16:21
*** cdent has quit IRC16:22
sean-k-mooneyefried: we have the note below about separating hosts for pinned instances from non-pinned https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies16:51
sean-k-mooney"Host aggregates should be used to separate pinned instances from unpinned instances as the latter will not respect the resourcing requirements of the former."16:51
sean-k-mooneybut that really should be more general16:52
*** e0ne has quit IRC16:58
openstackgerritKashyap Chamarthy proposed openstack/os-traits master: hw: cpu: Rework the directory layout; add missing traits  https://review.opendev.org/65519317:11
mriedemcan someone just ram this osc-placement stable/queens test only change through? https://review.opendev.org/#/c/556635/17:49
*** dklyle has quit IRC19:21
efriedmriedem: Looking...19:28
efriedmriedem: Why is osc-placement a required-project?19:28
*** dklyle has joined #openstack-placement19:29
mriedemthis is (a) a backport and (b) the job definition stuff was originally pulled from openstack-zuul-jobs to move in-tree so this is what was in that repo when i originally copied it19:30
efriedso it's not actually necessary? Or we want to keep it there in case... some other project picks it up??19:31
efriedmriedem: rammed19:34
mriedemidk19:35
mriedemif we remove it, we should remove it from master19:35
mriedemrealize that openstack/nova is in required-projects for nova's zuul yaml19:35
mriedemas is placement for placement-perfload19:35
*** mriedem has quit IRC19:48
*** mriedem has joined #openstack-placement19:55
openstackgerritMerged openstack/osc-placement stable/queens: Migrate legacy-osc-placement-dsvm-functional job in-tree  https://review.opendev.org/55663520:10
openstackgerritEric Fried proposed openstack/placement master: Add RequestGroupSearchContext class  https://review.opendev.org/65877820:23
openstackgerritEric Fried proposed openstack/placement master: Move search functions to the research context file  https://review.opendev.org/66004820:23
openstackgerritEric Fried proposed openstack/placement master: Cache provider ids in requested aggregates  https://review.opendev.org/66004920:23
openstackgerritEric Fried proposed openstack/placement master: Remove normalize trait map func  https://review.opendev.org/66005020:23
openstackgerritEric Fried proposed openstack/placement master: Move seek providers with resource to context  https://review.opendev.org/66005120:23
openstackgerritEric Fried proposed openstack/placement master: Reuse cache result for sharing providers capacity  https://review.opendev.org/66005220:23
*** mriedem has quit IRC21:36
*** jaypipes has quit IRC21:41
*** efried has quit IRC21:49
*** takashin has joined #openstack-placement21:49
*** efried has joined #openstack-placement21:51
*** artom has quit IRC23:05
*** artom has joined #openstack-placement23:42

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!