Thursday, 2017-08-31

*** b1airo has quit IRC  [00:03]
*** jmlowe has quit IRC  [02:00]
*** jmlowe has joined #scientific-wg  [02:01]
*** jmlowe has quit IRC  [03:13]
*** jmlowe has joined #scientific-wg  [03:14]
*** rbudden has quit IRC  [03:34]
masber: good morning, we run a "basic" HPC cluster, which means no MPI or InfiniBand, just a bunch of servers (~25). Software is deployed through Rocks cluster and we use SGE as the job scheduler; we use Panasas as a high-performance storage appliance and would like to change it to Ceph. Has anyone used Ceph for HPC who can give me some thoughts regarding performance? thank you  [06:19]
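The usual quick answer to the raw-throughput half of that question is Ceph's built-in "rados bench" tool. Below is a minimal sketch of the same kind of measurement via the python-rados bindings; the pool name "scratch", the object size, and the object count are illustrative assumptions, not anything from the discussion.

    import time
    import rados  # python-rados bindings shipped with Ceph

    # Pool name, object size, and count are assumptions for illustration.
    POOL = "scratch"
    SIZE = 4 * 1024 * 1024   # 4 MiB per object
    COUNT = 64

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx(POOL)

    # Time sequential full-object writes as a rough throughput check.
    payload = b"x" * SIZE
    start = time.time()
    for i in range(COUNT):
        ioctx.write_full("bench-obj-%d" % i, payload)
    elapsed = time.time() - start
    print("wrote %d MiB in %.1fs (%.1f MiB/s)"
          % (COUNT * SIZE // 2**20, elapsed, COUNT * SIZE / 2**20 / elapsed))

    # Remove the benchmark objects afterwards.
    for i in range(COUNT):
        ioctx.remove_object("bench-obj-%d" % i)
    ioctx.close()
    cluster.shutdown()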
*** trandles has quit IRC  [07:53]
*** priteau has joined #scientific-wg  [10:04]
*** priteau has quit IRC  [10:28]
*** priteau has joined #scientific-wg  [10:31]
*** rbudden has joined #scientific-wg  [11:28]
jmlowe: bollig might have an opinion, we use Ceph to back our OpenStack cluster for Jetstream  [13:33]
jmlowe: with active-active MDS being supported now, I'm very interested in how it stacks up against Lustre  [13:34]
jmlowe: Kansas State, I think, uses it for HPC cluster home directories  [13:34]
jmlowe: There's a guy at one of the Federal Reserve banks that uses it for HPC  [13:35]
jmlowe: There are deep HPC roots in Ceph; Sage formulated the idea for Ceph while working on his PhD at one of the national labs  [13:37]
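On the active-active MDS point: since Luminous, CephFS runs multiple active metadata servers by raising max_mds on the filesystem. A minimal sketch driving the stock ceph CLI from Python, assuming a filesystem named "cephfs"; Luminous-era clusters also gate this behind the allow_multimds flag, which later releases dropped.

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Filesystem name "cephfs" is an assumption; substitute your own.
    # allow_multimds is needed on Luminous; later releases removed it.
    ceph("fs", "set", "cephfs", "allow_multimds", "true")
    ceph("fs", "set", "cephfs", "max_mds", "2")   # two active MDS ranks
    print(ceph("fs", "status", "cephfs"))

With max_mds raised, standby MDS daemons are promoted to fill the extra ranks and the metadata tree is partitioned across them, which is what makes the comparison against Lustre's metadata servers interesting.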
*** trandles has joined #scientific-wg  [15:21]
*** simon-AS559 has joined #scientific-wg  [15:42]
jmlowe: So, who's in for Berlin in 2018?  [15:52]
*** simon-AS559 has quit IRC  [16:39]
rbudden: jmlowe: hopefully me!  [18:12]
*** priteau has quit IRC  [21:49]
*** rbudden has quit IRC  [23:06]
*** rbudden has joined #scientific-wg  [23:11]
*** b1airo has joined #scientific-wg  [23:34]
*** priteau has joined #scientific-wg  [23:49]
*** priteau has quit IRC  [23:54]
