*** crushil has quit IRC | 00:16 | |
*** goldenfri has quit IRC | 08:44 | |
*** zhipeng has joined #openstack-cyborg | 13:46 | |
*** jkilpatr has quit IRC | 14:03 | |
*** jkilpatr has joined #openstack-cyborg | 14:16 | |
zhipeng | jkilpatr ttk2[m] I've written a rough draft for my idea mentioned last team meeting | 14:17 |
zhipeng | https://etherpad.openstack.org/p/cyborg-feature-tag | 14:17 |
*** zhipeng has quit IRC | 14:17 | |
ttk2[m] | zhipeng: so let's say we do this and people can schedule based on types, is their application now responsible for handling all the different types of accelerators they may get with the same request? | 14:19 |
ttk2[m] | I guess you can narrow it down | 14:19 |
ttk2[m] | deep learning + CUDA X support = get different types of GPUs, but the code always works | 14:20 |
zhipengh[m] | That is another important part of the equation that needs to be solved | 14:24 |
zhipengh[m] | But for HPC and NFV, it should work without too much trouble | 14:25 |
zhipengh[m] | For preloaded FPGA programs | 14:26 |
ttk2[m] | well yes, provided we handle compatibility checking behind the scenes some. | 14:29 |
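[Editor's note: a minimal sketch of the idea discussed above, i.e. scheduling against feature tags rather than a specific device name/type, so any compatible accelerator can satisfy a request. This is not Cyborg code; the tag names and the inventory below are hypothetical.]

    # Match an accelerator request, expressed as feature tags, against the
    # capability tags each device advertises. Any device whose tags cover
    # the request is acceptable; the workload does not care which model it gets.
    def find_compatible(requested_tags, devices):
        """Return names of devices whose tags are a superset of requested_tags."""
        return [name for name, tags in devices.items()
                if requested_tags <= tags]

    # Hypothetical inventory of accelerators and their advertised feature tags.
    inventory = {
        "gpu-v100":   {"deep-learning", "cuda", "fp16"},
        "gpu-p4":     {"deep-learning", "cuda"},
        "fpga-nfv-1": {"nfv", "preloaded-bitstream"},
    }

    # "deep learning + CUDA support" can land on either GPU.
    print(find_compatible({"deep-learning", "cuda"}, inventory))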
*** crushil has joined #openstack-cyborg | 14:59 | |
zhipengh[m] | ttk2: the main point here is about scalability. Name and type won't get us far for accelerators, that is the fundamental problem for us | 15:35 |
ttk2[m] | how so? too specific? | 15:36 |
zhipengh[m] | Yep, especially for FPGA | 15:37 |
*** goldenfri has joined #openstack-cyborg | 17:40 | |
*** crushil has quit IRC | 18:26 | |
*** crushil has joined #openstack-cyborg | 18:49 |