Like Google's MapReduce framework, MR describes a way of implementing parallelism using a Map function, which splits a large dataset into multiple key-value pairs.
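The map/reduce split described above can be sketched in a few lines of Python. This is a minimal single-process illustration, not Google's implementation: the word-count task, function names, and toy input are all chosen here for demonstration.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: split the input data into (key, value) pairs.
    Here each word in a document becomes the pair (word, 1)."""
    pairs = []
    for doc in documents:
        for word in doc.split():
            pairs.append((word, 1))
    return pairs

def reduce_phase(pairs):
    """Reduce step: aggregate all values that share the same key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["map reduce", "map filter"]
result = reduce_phase(map_phase(docs))  # {'map': 2, 'reduce': 1, 'filter': 1}
```

In a real MapReduce system the pairs emitted by the map phase would be shuffled across machines by key before reduction; the sequential loop above only shows the data flow.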
How does the launch latency of a kernel function differ when it is called from the CPU host versus launched from the GPU via dynamic parallelism?
Based on the idea of data parallelism, a parallel training model for an RBF (radial basis function) neural network in time-series prediction is proposed to improve training speed.
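The data-parallel training idea can be sketched as follows: the training set is partitioned into shards, each worker computes the gradient contribution of its shard independently, and the partial gradients are summed before the weight update. This is a generic illustration, not the cited paper's model; the Gaussian RBF layer, the sequential "workers", and all names (`partial_gradient`, `train_step`, `lr`, `sigma`) are assumptions made for the sketch.

```python
import math

def rbf_features(x, centers, sigma=1.0):
    """Gaussian RBF hidden layer for a scalar input x."""
    return [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]

def partial_gradient(shard, w, centers):
    """Gradient of the squared error over one data shard (one worker's job)."""
    grad = [0.0] * len(w)
    for x, y in shard:
        phi = rbf_features(x, centers)
        err = sum(wj * pj for wj, pj in zip(w, phi)) - y
        for j, pj in enumerate(phi):
            grad[j] += err * pj
    return grad

def train_step(data, w, centers, n_workers=2, lr=0.1):
    """One data-parallel gradient step: shard, compute in parallel, sum, update.
    The shards are processed sequentially here for simplicity."""
    shards = [data[i::n_workers] for i in range(n_workers)]
    grads = [partial_gradient(s, w, centers) for s in shards]
    total = [sum(g[j] for g in grads) for j in range(len(w))]
    return [wj - lr * gj / len(data) for wj, gj in zip(w, total)]
```

Because the squared-error gradient is a sum over samples, the sharded update is numerically identical to the full-batch update, which is what makes this decomposition safe to parallelize.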