Thursday, 2017-04-20

*** VW has joined #openstack-operators00:05
*** emagana has quit IRC00:14
*** emagana has joined #openstack-operators00:28
*** Apoorva_ has joined #openstack-operators00:30
*** rebase has quit IRC00:31
*** emagana has quit IRC00:32
*** Apoorva has quit IRC00:34
*** Apoorva_ has quit IRC00:35
*** VW has quit IRC00:36
*** catintheroof has quit IRC00:52
*** gyee has quit IRC01:03
*** jamesden_ has joined #openstack-operators01:16
*** jamesden_ has quit IRC01:20
*** mriedem has quit IRC01:22
*** jamesden_ has joined #openstack-operators01:23
*** jamesden_ has quit IRC01:31
*** VW has joined #openstack-operators01:39
*** cemason1 has joined #openstack-operators03:07
*** cemason has quit IRC03:07
*** emagana has joined #openstack-operators03:15
*** emagana has quit IRC03:20
*** emagana has joined #openstack-operators03:28
*** emagana has quit IRC03:32
*** fandi has joined #openstack-operators03:55
*** fandi has quit IRC03:56
*** fandi has joined #openstack-operators03:57
*** fandi has quit IRC03:59
*** fandi has joined #openstack-operators04:00
*** Rockyg has quit IRC04:01
*** fandi has quit IRC04:01
*** fandi has joined #openstack-operators04:03
*** fragatin_ has joined #openstack-operators04:15
*** fragatina has quit IRC04:19
*** fragatin_ has quit IRC04:20
*** fragatina has joined #openstack-operators04:35
*** fragatina has quit IRC04:39
*** fragatina has joined #openstack-operators04:55
*** fragatina has quit IRC04:57
*** fragatina has joined #openstack-operators04:58
*** simon-AS5591 has joined #openstack-operators05:07
*** fragatina has quit IRC05:17
*** aojea has joined #openstack-operators05:21
*** jbadiapa has quit IRC05:34
*** yprokule has joined #openstack-operators05:39
*** aojea has quit IRC05:39
*** emagana has joined #openstack-operators05:40
*** emagana has quit IRC05:45
*** fandi has quit IRC05:45
*** emagana has joined #openstack-operators05:56
*** simon-AS5591 has quit IRC05:58
*** rarcea has joined #openstack-operators05:59
*** Oku_OS-away is now known as Oku_OS06:04
*** cemason1 is now known as cemason06:06
*** rcernin has joined #openstack-operators06:28
*** slaweq has joined #openstack-operators06:37
*** pcaruana has joined #openstack-operators06:40
*** jbadiapa has joined #openstack-operators06:52
*** tesseract has joined #openstack-operators06:56
*** d0ugal has quit IRC06:56
*** aojea has joined #openstack-operators07:16
*** aojea has quit IRC07:18
*** aojea has joined #openstack-operators07:18
*** manheim has joined #openstack-operators07:20
*** arnewiebalck_ has joined #openstack-operators07:35
*** manheim has quit IRC07:37
*** manheim has joined #openstack-operators07:38
*** arnewiebalck_ has quit IRC07:44
*** slaweq has quit IRC07:48
*** slaweq has joined #openstack-operators07:49
*** bjolo has joined #openstack-operators08:00
*** racedo has joined #openstack-operators08:01
*** arnewiebalck_ has joined #openstack-operators08:02
*** slaweq has quit IRC08:08
*** slaweq has joined #openstack-operators08:09
*** paramite has joined #openstack-operators08:15
*** vinsh_ has quit IRC08:22
*** vinsh has joined #openstack-operators08:27
*** arnewiebalck_ has quit IRC08:36
*** cartik has joined #openstack-operators08:59
*** cartik has quit IRC09:07
*** dbecker has quit IRC09:07
*** dbecker has joined #openstack-operators09:08
*** electrofelix has joined #openstack-operators09:20
*** derekh has joined #openstack-operators09:33
*** cartik has joined #openstack-operators09:39
*** rmart04 has joined #openstack-operators09:51
*** electrofelix has quit IRC10:05
*** electrofelix has joined #openstack-operators10:08
*** aojea has quit IRC10:08
*** slaweq has quit IRC10:19
*** aojea has joined #openstack-operators10:22
*** aojea has quit IRC10:39
*** rmart04 has quit IRC10:45
*** fragatina has joined #openstack-operators10:47
*** aojea has joined #openstack-operators10:47
*** rmart04 has joined #openstack-operators10:50
*** slaweq has joined #openstack-operators11:00
*** fragatina has quit IRC11:03
*** fragatina has joined #openstack-operators11:03
*** markvoelker has quit IRC11:06
*** markvoelker has joined #openstack-operators11:06
*** markvoelker has quit IRC11:11
*** rarcea has quit IRC11:11
*** alexpilotti has quit IRC11:26
*** alexpilotti has joined #openstack-operators11:27
*** dalees has quit IRC11:30
*** alexpilotti has quit IRC11:31
*** dalees has joined #openstack-operators11:33
*** Miouge has joined #openstack-operators11:37
*** alexpilotti has joined #openstack-operators11:43
*** alexpilotti has quit IRC11:47
*** emagana has quit IRC11:54
*** dalees has quit IRC11:57
*** benj_ has quit IRC12:00
*** alexpilotti has joined #openstack-operators12:00
*** cartik has quit IRC12:01
*** alexpilotti has quit IRC12:04
*** alexpilotti has joined #openstack-operators12:05
*** dalees has joined #openstack-operators12:08
*** alexpilotti has quit IRC12:09
*** zenirc369 has joined #openstack-operators12:19
*** fragatina has quit IRC12:22
*** fragatina has joined #openstack-operators12:23
*** catintheroof has joined #openstack-operators12:27
*** catintheroof has quit IRC12:30
*** catintheroof has joined #openstack-operators12:30
*** benj_ has joined #openstack-operators12:32
*** markvoelker has joined #openstack-operators12:39
*** catintheroof has quit IRC12:40
*** catintheroof has joined #openstack-operators12:41
*** catintheroof has quit IRC12:41
*** catintheroof has joined #openstack-operators12:41
*** catintheroof has quit IRC12:43
*** catintheroof has joined #openstack-operators12:43
*** pontusf3 has quit IRC12:45
*** pontusf3 has joined #openstack-operators12:45
*** liverpooler has joined #openstack-operators12:46
*** zenirc369 has quit IRC12:55
*** alexpilotti has joined #openstack-operators13:01
*** slaweq has quit IRC13:03
*** bjolo has quit IRC13:15
*** alexpilo_ has joined #openstack-operators13:19
*** mriedem has joined #openstack-operators13:20
*** dalees has quit IRC13:21
*** slaweq has joined #openstack-operators13:22
*** slaweq has quit IRC13:22
*** manheim has quit IRC13:22
*** alexpilotti has quit IRC13:22
*** erhudy has joined #openstack-operators13:23
*** alexpilotti has joined #openstack-operators13:24
*** VW_ has joined #openstack-operators13:24
*** alexpilo_ has quit IRC13:26
*** VW has quit IRC13:28
*** dalees has joined #openstack-operators13:29
*** VW_ has quit IRC13:29
*** ig0r_ has joined #openstack-operators13:34
*** cartik has joined #openstack-operators13:42
<erhudy> anyone have any feedback (good/bad) on block live migration these days?  13:43
*** jamesden_ has joined #openstack-operators13:44
<erhudy> as of right now I can't use it because in Liberty it doesn't seem to work with LVM, but I'm interested to hear if people have positive experiences with it now in m/n/o  13:45
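For context, block live migration is requested explicitly at migration time; a minimal sketch with the clients of that era (server IDs and hostnames below are placeholders):

    # nova client: copy local disks over the wire instead of requiring shared storage
    nova live-migration --block-migrate <server-id> <target-host>
    # or with the unified client
    openstack server migrate --live <target-host> --block-migration <server-id>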
*** cartik has quit IRC13:48
*** manheim has joined #openstack-operators13:48
*** chlong has joined #openstack-operators13:51
*** maishsk has joined #openstack-operators14:03
*** manheim_ has joined #openstack-operators14:14
*** manheim has quit IRC14:17
<mnaser> how much memory does everyone reserve for compute nodes these days?  14:17
<mnaser> we have 8 GB reserved and we're seeing some VMs OOM :\  14:18
*** slaweq has joined #openstack-operators14:22
*** fragatina has quit IRC14:23
*** alexpilotti has quit IRC14:25
*** alexpilotti has joined #openstack-operators14:26
*** slaweq has quit IRC14:27
<mrhillsman> mnaser: depends on what you are running on the compute node  14:28
<mdorman> mnaser: I think we do 3 or 4 GB. I’m surprised 8 isn’t enough, what else are you running?  14:28
<mrhillsman> I would think 2 is probably good  14:28
<mnaser> mrhillsman / mdorman: 2 ceph osd processes, but they are only consuming 1 GB of RAM each (hence the 8)  14:29
<mrhillsman> but depends on a few factors indeed  14:29
<mnaser> do you have swap on your compute nodes?  14:29
<mnaser> wonder if that might help  14:29
<cnf> maybe something is leaking  14:29
<mnaser> the other thing is we have rbd cache.. I wonder if the rbd cache on each instance is increasing the memory used by each VM  14:30
<cnf> cache should be marked as releasable  14:30
*** jbadiapa has quit IRC14:30
<mnaser> I haven't yet done *heavy* investigating  14:30
<mnaser> but I've seen two cases of OOMs  14:30
<cnf> out of mana sucks :/  14:31
<mnaser> the only pattern is that they are high-mem instances (one was 96 GB, one was 64 GB)  14:31
<mnaser> do you run swap on compute nodes?  I wonder if that would help  14:32
<mnaser> [12842635.802674] Killed process 25433 (qemu-kvm) total-vm:68897952kB, anon-rss:67541888kB, file-rss:0kB  14:33
<mnaser> so that one had around 4 GB more  14:35
<mdorman> mnaser: one thing we’ve thought about doing is adjusting the oom_score_adj setting for qemu-kvm processes, to hopefully prevent VMs from getting OOM killed. haven’t actually set that up yet, but the theory is that VMs would normally not be the cause of an OOM situation (assuming you’re not doing crazy oversubscription, etc.) and it’s usually something else on the HVs that’s ballooning memory. so by forcing the OOM killer to never kill  14:45
<mdorman> qemu-kvm processes, theoretically it would choose to kill the thing that’s actually the source of the problem. http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html is one article that kind of explains how.  14:45
<mdorman> we have thought about doing that for rmq as well, b/c we’ve had situations when rmq gets OOM killed, too. we’ve seen a handful of VMs get OOM killed as well, but it's not a systematic problem for us.  14:45
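A rough sketch of the oom_score_adj approach mdorman describes; the procfs interface is standard Linux, but how you wire it in (cron job, libvirt hook, systemd unit) is an assumption:

    # bias the OOM killer away from qemu-kvm guests; -1000 exempts a process
    # entirely, a smaller negative value merely deprioritizes it
    for pid in $(pgrep -f qemu-kvm); do
        echo -1000 > /proc/${pid}/oom_score_adj
    done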
*** marst has quit IRC14:48
*** marst has joined #openstack-operators14:55
*** maishsk has quit IRC14:58
*** rcernin has quit IRC15:01
<klindgren> mnaser, how much space are you giving the HV? i.e. how much are you reserving in your nova configs?  15:02
*** jamemxx has joined #openstack-operators15:03
*** JillS has joined #openstack-operators15:03
<jamemxx> Hi  15:03
<klindgren> hi  15:04
<mrhillsman> hi  15:04
*** cemason has quit IRC15:04
<jamemxx> I'm here for the LDT meeting. But I can see from the ML that Matt could not attend  15:05
<jamemxx> He suggested next Thursday, 4/27, same time, so I advocate for that as well.  15:07
*** Oku_OS is now known as Oku_OS-away15:07
*** cemason has joined #openstack-operators15:07
*** zenirc369 has joined #openstack-operators15:07
<jamemxx> I'll respond in the ML.  15:08
<mnaser> mdorman: that worries me; instead, a ceph osd process will be killed, or openvswitch... so I'm not sure what would be better to kill  15:10
<mnaser> klindgren: reserving 8 GB in nova.conf  15:10
<mdorman> mnaser: yeah, damned if you do, damned if you don’t, hah.  15:11
<mnaser> mdorman: do you have swap set up on your compute nodes?  15:11
<mnaser> I wonder if that can alleviate some of the mess  15:11
<mdorman> jamemxx: next week is better for me too  15:11
<mdorman> mnaser: yes, but we do our best to make sure it’s not used  15:11
<mdorman> mnaser: does seem like that could help your situation a little. at least you’d swap instead of oomkill  15:12
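The "reserving 8 GB in nova.conf" above refers to the host memory reservation; a minimal sketch (values are illustrative, not a recommendation):

    # /etc/nova/nova.conf on the compute node
    [DEFAULT]
    # memory (MB) withheld from the scheduler for the host OS, ceph OSDs,
    # openvswitch, qemu overhead, etc.
    reserved_host_memory_mb = 8192
    # keeping RAM oversubscription at 1.0 avoids compounding the problem
    ram_allocation_ratio = 1.0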
*** alexpilo_ has joined #openstack-operators15:14
*** alexpilotti has quit IRC15:14
*** pcaruana has quit IRC15:14
<mnaser> mdorman: this looks really weird in the oom kill (unless I'm misunderstanding things)  15:16
<mnaser> 359055 total pagecache pages, 0 pages in swap cache  15:16
<mnaser> an oom kill with that much page cache?  15:16
<mnaser> getconf PAGESIZE => 4096 bytes .. 359055 * 4096 = 1470689280 bytes => ~1.47 GB of page cache  15:18
<mnaser> I guess that's reasonable during an oom kill  15:18
*** dminer has joined #openstack-operators15:21
*** alexpilotti has joined #openstack-operators15:23
<mnaser> so based on my simple math: I'm seeing an average of 1-2 GB, up to 4 GB, of extra memory per VM  15:24
<mnaser> with 4 GB reserved I can imagine how that can easily OOM  15:24
*** rmart04 has quit IRC15:24
*** alexpilo_ has quit IRC15:26
<klindgren> I'd have thought swapping would easily kill the box as well  15:27
<zioproto> hey folks, for those of you testing the upgrade to Newton... have a look at this: https://bugs.launchpad.net/nova/+bug/1684861  15:27
<openstack> Launchpad bug 1684861 in OpenStack Compute (nova) "Database online_data_migrations in newton fail due to missing keypairs" [Undecided,New]  15:27
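For reference, the step that bug concerns is the online data migration phase of a Newton upgrade, roughly:

    # after upgrading the packages: schema first, then data migrations
    nova-manage db sync
    nova-manage db online_data_migrations
    # per the bug title, the keypairs migration can fail here when instances
    # reference key pairs that no longer exist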
*** simon-AS559 has joined #openstack-operators15:28
<mnaser> klindgren: the way I'm thinking is that unused memory would be swapped out  15:28
<mnaser> giving us more memory to work with  15:29
<mnaser> I'm pretty sure in Linux swap doesn't mean "use when you're out of memory" but rather "work with this extra scratch space"  15:29
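If swap is kept only as a safety valve, the usual knob is vm.swappiness; a sketch (the value is illustrative):

    # prefer dropping page cache over swapping anonymous (guest) memory,
    # so swap absorbs cold pages instead of thrashing under iowait
    sysctl vm.swappiness=1
    echo 'vm.swappiness = 1' > /etc/sysctl.d/90-swappiness.conf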
*** slaweq has joined #openstack-operators15:29
*** simon-AS5591 has joined #openstack-operators15:32
*** simon-AS559 has quit IRC15:32
<klindgren> the issue with a situation in which the oomkiller gets triggered is that you can easily run into a problem where the server is paging processes into/out of swap  15:33
<klindgren> which causes the box to slow down due to iowait  15:33
<mnaser> klindgren: agreed, that's valid as well  15:34
<mnaser> maybe start by figuring out why there is such a huge overhead  15:34
<mnaser> I really wonder if it's the rbd cache  15:34
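If the librbd cache is the suspect, its per-volume footprint is bounded by the client-side cache settings (around 32 MB per volume by default, if memory serves); a ceph.conf sketch for shrinking it while testing:

    # /etc/ceph/ceph.conf on the hypervisors (librbd / qemu client side)
    [client]
    rbd cache = true
    # per-volume writeback cache; lower it to see whether librbd caching
    # is what inflates qemu-kvm RSS (values here are illustrative)
    rbd cache size = 16777216
    rbd cache max dirty = 8388608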
*** alexpilo_ has joined #openstack-operators15:36
*** VW has joined #openstack-operators15:36
*** alexpilo_ has quit IRC15:39
*** alexpilotti has quit IRC15:39
*** alexpilotti has joined #openstack-operators15:39
*** VW has quit IRC15:40
*** VW has joined #openstack-operators15:40
*** VW has quit IRC15:45
*** Miouge has quit IRC15:45
*** aojea has quit IRC15:52
*** chyka has joined #openstack-operators15:54
<erhudy> mnaser: we do 16 GB, but with converged ceph; 5 was not enough and 16 has given us a little more headroom  15:57
<erhudy> we also don't oversub memory  15:57
*** ig0r_ has quit IRC16:01
*** liverpooler has quit IRC16:01
*** newmember has joined #openstack-operators16:03
*** liverpooler has joined #openstack-operators16:03
*** VW has joined #openstack-operators16:06
*** manheim_ has quit IRC16:06
*** zenirc369 has quit IRC16:07
*** VW has quit IRC16:09
*** VW has joined #openstack-operators16:09
*** rebase has joined #openstack-operators16:09
*** VW has quit IRC16:13
<erhudy> that's with no swap, we turn swap off on HVs  16:15
<erhudy> never had an oomkill  16:15
*** newmember has quit IRC16:22
*** newmember has joined #openstack-operators16:23
*** yprokule has quit IRC16:24
*** paramite has quit IRC16:24
<logan-> mnaser: no swap here, but like yourself we are heavy ceph users and rbd cache is turned on. also I have seen some RAM overconsumption on memory-heavy instances. that's an interesting suspicion though, with rbd cache maybe causing the overconsumption.  16:24
<logan-> we run converged, 1-2 OSDs per blade, and reserve 8 GB I believe  16:26
<logan-> erhudy: regarding the block migrates, I don't have any bad stories on xenial/newton w/ libvirt 1.3.1 yet. but we've only been on it a month or two. also I think LVM block migration is not a thing yet, even in Pike  16:30
*** chyka has quit IRC16:30
*** chyka has joined #openstack-operators16:31
<logan-> that's been the big showstopper for me to even look seriously at replacing file-based disks with LVM. from the performance side it seems like a no-brainer  16:32
*** tesseract has quit IRC16:34
*** chyka_ has joined #openstack-operators16:35
*** chyka has quit IRC16:35
<erhudy> yeah, I'm trying to strategize how to offer people local disks while still having some instance mobility if we need to evac  16:37
<logan-> yep - sparsity and live migration are the big reqs for me. file-backed is the only way I know of to get that currently. but the performance is atrocious  16:39
<erhudy> no preallocate?  16:39
<logan-> nope  16:40
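The knobs being weighed here map roughly to the following nova.conf settings; a sketch, not a recommendation, and note that nova's preallocate_images=space (fallocate-based) is not quite the same as full qcow2 preallocation:

    # /etc/nova/nova.conf -- local ephemeral storage options
    [DEFAULT]
    # fallocate backing files up front: trades sparsity for steadier writes
    preallocate_images = space

    [libvirt]
    # file-backed qcow2: sparse and block-live-migratable, but slower
    images_type = qcow2
    # LVM-backed: faster, but block live migration of LVM disks was not
    # supported at the time of this discussion
    # images_type = lvm
    # images_volume_group = nova-ephemeral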
*** manheim has joined #openstack-operators16:41
*** zenirc369 has joined #openstack-operators16:43
<logan-> this is some testing I did a while back: https://docs.google.com/spreadsheets/d/1_A3SYBvUObZS4ZYaU0gGkF2j-zeynLhGXP4yGKw1vC4/edit?usp=sharing  16:43
<logan-> I have not tested preallocate=full  16:43
<logan-> the LVM/qcow2 tests were performed on instances backed by the local storage used for the "metal" tests  16:44
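The actual test command lines live in the linked spreadsheet; purely as an illustration of the kind of comparison, a typical fio random-write run against a test disk looks like this (device path and parameters are placeholders, not logan-'s settings):

    # illustrative only -- not the spreadsheet's exact commands
    fio --name=randwrite-4k --filename=/dev/vdb --direct=1 \
        --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
        --numjobs=1 --runtime=60 --time_based --group_reporting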
*** fragatina has joined #openstack-operators16:45
<erhudy> seems very sensitive to block sizes  16:45
<erhudy> were you setting the block size inside the instance or on the HV?  16:45
*** manheim has quit IRC16:45
<erhudy> er, sorry, I'm conflating that with readahead  16:46
<erhudy> mixing it up with some testing of a similar nature I did a while back  16:46
*** dminer has quit IRC16:46
*** derekh has quit IRC16:48
*** zenirc369 has quit IRC16:49
*** zenirc369 has joined #openstack-operators16:49
*** manheim has joined #openstack-operators16:49
*** newmember has quit IRC16:56
*** alexpilo_ has joined #openstack-operators16:57
<dmsimard> mnaser: I've dealt with RAM overhead issues before, i.e. 32 GB VMs really using 35-ish  16:58
*** simon-AS5591 has quit IRC16:58
<erhudy> doing it with preallocate=full would be interesting  16:59
<dmsimard> It's been a while though... IIRC there was no easy solution and we ended up reducing the amount of RAM in the flavors to be less "pretty" but account for a small amount of overhead  16:59
<dmsimard> KSM is not super reliable either  16:59
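The workaround dmsimard describes (shaving flavor RAM to leave slack for per-VM overhead) looks roughly like this; the numbers are made up for illustration:

    # advertise a "64 GB" flavor that actually schedules 62 GB, leaving
    # ~2 GB of headroom per instance for qemu/librbd overhead
    openstack flavor create m1.xlarge-64 --vcpus 16 --ram 63488 --disk 80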
*** vinsh has quit IRC17:00
*** alexpilotti has quit IRC17:01
<erhudy> we have RBD caching on and I can't say we've ever had memory issues, but that might be because of page merging balancing the overhead out  17:01
*** alexpilo_ has quit IRC17:01
*** vinsh has joined #openstack-operators17:02
*** liverpooler has quit IRC17:03
*** liverpooler has joined #openstack-operators17:06
<erhudy> on my hardest-working HV with 420 GB of instances scheduled, about 300 GB of memory is actually wired, and if my napkin math is right KSM has merged about 30 GB of pages together on that system  17:06
<erhudy> so that seems like a pretty good rate of recovery to me  17:06
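That napkin math can be reproduced from the KSM counters in sysfs; pages_sharing times the page size approximates the memory KSM is saving (a sketch):

    # approximate memory reclaimed by KSM, in GiB
    pages=$(cat /sys/kernel/mm/ksm/pages_sharing)
    echo "$(( pages * $(getconf PAGESIZE) / 1024 / 1024 / 1024 )) GiB saved"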
*** dminer has joined #openstack-operators17:08
<mnaser> looks like a large # of operators do converged :)  17:10
<logan-> I didn't notice the cell notes didn't copy correctly, but I just added them all back. the test cmd lines and raw results are in there now  17:10
<erhudy> I am trying to deconverge because it's a pain operationally  17:10
<mnaser> but, thanks for the comments everyone.. I still haven't done a thorough investigation, but 4 GB of RAM on top of a 64 GB instance is a lot, and 2 GB on top of 16 GB is even more (proportionally)  17:11
<mnaser> erhudy: same here to be honest, it's not the #1 priority on the list  17:11
<logan-> mnaser: would be really interested to hear of any developments if you research it further.  17:11
<mnaser> we used to have 32 GB of memory reserved, but that was a lot of wasted space  17:11
<mnaser> given that we run 2x 1TB OSDs (SSDs)  17:12
<mnaser> so we dropped it to 8 as a start, and then these issues started happening  17:12
<erhudy> what kernel are you running?  17:12
<mnaser> stock centos  17:13
<erhudy> anecdotally, at the same time we changed from 5 to 16 GB reserved, we also moved from 3.13 to 4.4 on trusty  17:13
<erhudy> and 4.4 made _such_ a difference  17:13
*** racedo has quit IRC17:13
<erhudy> if you're on centos I have no idea  17:13
<mnaser> I hear new kernels are a nice luxury, but  17:13
<mnaser> we like to stick with rdo's packaging  17:13
<mnaser> and we're not about to start rolling out our own kernels, seems like a crazy path  17:13
<logan-> the nodes where I'm seeing similar overconsumption are 4.4  17:13
<logan-> ubuntu xenial  17:13
<erhudy> we used to have a constant stream of blocked ops warnings from ceph that totally disappeared with 4.4  17:13
<logan-> ceph jewel  17:14
<erhudy> better network performance, etc  17:14
<erhudy> jewel? lucky you  17:14
<erhudy> we're still on hammer  17:14
<logan-> :(  17:14
<mnaser> we're on jewel, but one thing that we did that uncovered a lot of issues  17:14
<mnaser> was changing the blocked ops timeout  17:14
<mnaser> 32 seconds is a really really REALLY long time for an i/o to complete  17:14
<mnaser> so we dropped it down to 2 seconds and hello ssd hell  17:15
<mnaser> helped us identify a lot of bad drives  17:15
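The "blocked ops timeout" being lowered here is, as far as I know, the OSD op complaint threshold (default around 30 seconds); a sketch of setting it to 2 seconds persistently and at runtime:

    # /etc/ceph/ceph.conf
    [osd]
    osd op complaint time = 2

    # apply to running OSDs without a restart
    ceph tell osd.* injectargs '--osd_op_complaint_time 2'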
<logan-> do you monitor ceph osd perf at all? I wonder if high perf readings would correlate with the bad drives you found  17:16
<mnaser> logan-: we tried. but there's just SO much data and so many numbers  17:16
<mnaser> we couldn't make anything useful out of it :\  17:16
<logan-> yeah, that's a struggle I've had with ceph metrics too  17:16
<mnaser> ceph -s / ceph health detail was the most productive, useful thing for us at the end of the day  17:16
<logan-> 'ceph osd perf' is pretty concise though  17:16
<mnaser> oh, that's an interesting one  17:17
*** aojea has joined #openstack-operators17:17
*** manheim has quit IRC17:18
<mnaser> I completely forgot about this command, thanks for bringing it back to mind logan-  17:18
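For reference, 'ceph osd perf' prints one line of commit/apply latency per OSD; the output below is invented to show the shape (column names vary a bit by release), with a slow OSD standing out:

    $ ceph osd perf
    osd  commit_latency(ms)  apply_latency(ms)
      0                   2                  3
      1                  41                 56    <-- outlier worth checking
      2                   1                  2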
*** alexpilotti has joined #openstack-operators17:18
*** manheim has joined #openstack-operators17:19
*** jamemxx has quit IRC17:21
<mnaser> anyone have recommendations on SSDs for ceph that they've been using? the ones we usually get (S3520 960GB) are sold out everywhere :\  17:21
*** aojea has quit IRC17:22
<logan-> newer deploys use 3520's, older ones have some micron 510dc's  17:22
<mnaser> yeah, our usual vendor and the few other ones seem to be sold out on the s3520s :<  17:23
*** manheim has quit IRC17:25
*** ig0r_ has joined #openstack-operators17:27
*** manheim has joined #openstack-operators17:28
*** Apoorva has joined #openstack-operators17:28
*** rebase has quit IRC17:31
*** rebase has joined #openstack-operators17:32
*** electrofelix has quit IRC17:37
*** aojea has joined #openstack-operators17:37
*** rebase has quit IRC17:40
*** aojea has quit IRC17:42
*** dtrainor has quit IRC17:48
*** dtrainor has joined #openstack-operators17:49
*** VW has joined #openstack-operators17:51
*** alexpilotti has quit IRC17:53
*** VW has quit IRC17:56
*** VW has joined #openstack-operators17:57
*** fragatina has quit IRC17:58
*** Caterpillar has joined #openstack-operators18:05
<erhudy> I think we're doing 3610s or 3710s now  18:08
<erhudy> next-gen stuff we want to do on NVMe  18:08
*** VW has quit IRC18:16
<yankcrime> mnaser: we managed to secure some stock of SM863's, but yeah - availability is generally poor and the price just keeps on going up  18:21
<yankcrime> 3.13 was a _terrible_ kernel for us, fwiw  18:22
<erhudy> 3.13 got the job done, but now that we're on 4.4 I can see the places where 3.13 was holding us back  18:23
*** eqvist has joined #openstack-operators18:47
*** alexpilotti has joined #openstack-operators18:52
*** simon-AS559 has joined #openstack-operators18:56
*** alexpilotti has quit IRC18:57
*** manheim has quit IRC19:04
*** fragatina has joined #openstack-operators19:16
*** eqvist1 has joined #openstack-operators19:20
*** eqvist has quit IRC19:22
*** eqvist1 has left #openstack-operators19:22
*** rarcea has joined #openstack-operators19:22
*** newmember has joined #openstack-operators19:23
*** newmember has quit IRC19:27
*** newmember has joined #openstack-operators19:28
<erhudy> question for the other ceph operators: do any of you have cinder deployed with multiple independent RBD AZs?  19:30
<erhudy> I'm trying to work out how COW images would work in this setup, because right now the COW between glance/cinder happens across pools in the same ceph cluster  19:31
<erhudy> but if glance and cinder were in different ceph clusters entirely, I don't know how that would work, e.g. if glance can cache images from a master ceph cluster in other clusters and then have cinder shallow-clone from the cached copies  19:32
*** newmember has quit IRC19:34
*** newmember has joined #openstack-operators19:35
*** shasha___ has joined #openstack-operators19:45
*** bollig has quit IRC19:47
*** fragatina has quit IRC19:55
*** chyka_ has quit IRC19:55
*** chyka has joined #openstack-operators19:57
*** bollig has joined #openstack-operators19:59
*** aojea has joined #openstack-operators20:06
*** aojea has quit IRC20:10
<logan-> erhudy: I haven't tried it, but I think there is a check based on the cluster fsid that will fall back from COW clones to a straight image copy when the glance fsid doesn't match the target  20:11
<erhudy> ideally it would copy the image into a cache pool on the target ceph cluster and then COW from that, but I suspect that is hoping for Too Much  20:11
<logan-> the same way it'll always do an image copy when you don't have the rbd:// path exposed in glance  20:11
*** chyka has quit IRC20:16
*** liverpooler has quit IRC20:16
*** chyka has joined #openstack-operators20:17
<erhudy> right now on all our existing clusters glance's pool is in RBD, and images are either COW cloned to another ceph pool or copied to disk for local storage; I have no experience with glance and cinder being in different ceph clusters entirely  20:17
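A sketch of what the cinder side of multiple independent RBD backends usually looks like (section names, pools, and paths are placeholders); whether a volume is COW-cloned from glance or fully copied then hinges on whether the image store and the backend share a cluster fsid, as logan- describes above:

    # /etc/cinder/cinder.conf
    [DEFAULT]
    enabled_backends = rbd-az1,rbd-az2
    # AZ separation is typically done by running one cinder-volume service
    # per cluster, each with its own storage_availability_zone

    [rbd-az1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-az1
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/az1.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid-for-az1>

    [rbd-az2]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-az2
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/az2.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid-for-az2>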
*** chyka has quit IRC20:22
*** simon-AS559 has quit IRC20:24
*** chyka has joined #openstack-operators20:27
*** aojea_ has joined #openstack-operators20:27
*** simon-AS559 has joined #openstack-operators20:28
*** aojea_ has quit IRC20:31
*** simon-AS5591 has joined #openstack-operators20:32
*** simon-AS559 has quit IRC20:32
*** rarcea has quit IRC20:32
*** simon-AS559 has joined #openstack-operators20:34
*** shasha___ has quit IRC20:34
*** simon-AS5591 has quit IRC20:35
*** simon-AS5591 has joined #openstack-operators20:40
*** simon-AS559 has quit IRC20:40
<dmsimard> In a past life where the ceph cache layer was a thing, I meant to have a 16x 3.5" chassis with spindle drives in pairs of RAID 0's (8 OSDs) with hardware RAID cache/writeback, and then have those nice Intel NVMe drives (P3600's?) for the cache pool - it felt like a nice balance of price/cost/performance -- you don't need as much RAM/CPU to handle 8 OSDs as you'd need for 16, etc.  20:42
<dmsimard> er, not price/cost/performance -- I meant price/storage/performance  20:42
<dmsimard> but I haven't been directly involved in operating ceph for a while; I hear the cache pool thing got axed  20:43
*** simon-AS559 has joined #openstack-operators20:43
*** simon-AS5591 has quit IRC20:43
*** simon-AS5591 has joined #openstack-operators20:46
*** simon-AS559 has quit IRC20:46
*** aojea has joined #openstack-operators20:46
*** aojea has quit IRC20:51
*** fragatina has joined #openstack-operators21:01
*** dminer has quit IRC21:14
*** dminer has joined #openstack-operators21:15
*** catintheroof has quit IRC21:34
*** manheim has joined #openstack-operators21:35
*** manheim has quit IRC21:36
*** manheim has joined #openstack-operators21:41
*** zenirc369 has quit IRC21:45
*** jamesden_ has quit IRC21:46
*** Apoorva_ has joined #openstack-operators22:08
*** Caterpillar has quit IRC22:09
*** Apoorva has quit IRC22:11
*** marst_ has joined #openstack-operators22:17
*** marst_ has quit IRC22:18
*** marst_ has joined #openstack-operators22:18
*** dminer has quit IRC22:20
*** marst has quit IRC22:21
*** VW has joined #openstack-operators22:22
*** marst_ has quit IRC22:25
*** VW has quit IRC22:31
*** VW has joined #openstack-operators22:32
*** Apoorva_ has quit IRC22:41
*** Apoorva has joined #openstack-operators22:41
*** manheim has quit IRC22:48
*** vinsh has quit IRC23:07
*** slaweq has quit IRC23:20
*** slaweq has joined #openstack-operators23:20
*** simon-AS5591 has quit IRC23:21
*** simon-AS559 has joined #openstack-operators23:22
*** slaweq has quit IRC23:24
*** chyka has quit IRC23:33
*** markvoelker has quit IRC23:39
*** simon-AS5591 has joined #openstack-operators23:49
*** simon-AS5591 has quit IRC23:49
*** simon-AS559 has quit IRC23:53

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!