*** b1airo has quit IRC | 00:03 | |
*** jmlowe has quit IRC | 02:00 | |
*** jmlowe has joined #scientific-wg | 02:01 | |
*** jmlowe has quit IRC | 03:13 | |
*** jmlowe has joined #scientific-wg | 03:14 | |
*** rbudden has quit IRC | 03:34 | |
masber | good morning, we run a "basic" HPC cluster, meaning no MPI or InfiniBand, just a bunch of servers (~25). Software is deployed through Rocks cluster and we use SGE as the job scheduler. We use Panasas as a high-performance storage appliance and would like to replace it with Ceph. Has anyone used Ceph for HPC who can give me some thoughts regarding performance? thank you | 06:19 |
*** trandles has quit IRC | 07:53 | |
*** priteau has joined #scientific-wg | 10:04 | |
*** priteau has quit IRC | 10:28 | |
*** priteau has joined #scientific-wg | 10:31 | |
*** rbudden has joined #scientific-wg | 11:28 | |
jmlowe | bollig might have an opinion; we use Ceph to back our OpenStack cluster for Jetstream | 13:33 |
jmlowe | with active-active MDS being supported now, I'm very interested in how it stacks up against Lustre | 13:34 |
jmlowe | Kansas State, I think, uses it for HPC cluster home directories | 13:34 |
jmlowe | There's a guy at one of the Federal Reserve banks who uses it for HPC | 13:35 |
jmlowe | There are deep HPC roots in Ceph; Sage formulated the idea for Ceph while working on his PhD at one of the national labs | 13:37 |
*** trandles has joined #scientific-wg | 15:21 | |
*** simon-AS559 has joined #scientific-wg | 15:42 | |
jmlowe | So, who's in for Berlin in 2018? | 15:52 |
*** simon-AS559 has quit IRC | 16:39 | |
rbudden | jmlowe: hopefully me! | 18:12 |
*** priteau has quit IRC | 21:49 | |
*** rbudden has quit IRC | 23:06 | |
*** rbudden has joined #scientific-wg | 23:11 | |
*** b1airo has joined #scientific-wg | 23:34 | |
*** priteau has joined #scientific-wg | 23:49 | |
*** priteau has quit IRC | 23:54 |