For reasons that concern the implementation of floating-point arithmetic, we decided to train our net with these twenty counts divided by a normalizing factor.
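As a minimal sketch of this preprocessing step (the original text does not say which normalizing factor was used; dividing by the total of the counts is an assumption made here purely for illustration), the twenty counts could be scaled before being fed to the net as follows:

```c
#include <stdio.h>

#define NUM_COUNTS 20

/* Divide the raw counts by a normalizing factor so every network input
 * lies in a small, comparable range. Using the sum of the counts as the
 * factor is an assumption; the source only says "a normalizing factor". */
static void normalize_counts(const double raw[NUM_COUNTS],
                             double out[NUM_COUNTS])
{
    double total = 0.0;
    for (int i = 0; i < NUM_COUNTS; i++)
        total += raw[i];
    if (total == 0.0)
        total = 1.0;                  /* guard against division by zero */
    for (int i = 0; i < NUM_COUNTS; i++)
        out[i] = raw[i] / total;      /* normalized input for the net */
}

int main(void)
{
    double raw[NUM_COUNTS] = { 3, 7, 0, 12, 5, 1, 9, 4, 2, 6,
                               8, 0, 3, 5, 7, 2, 1, 4, 6, 10 };
    double inputs[NUM_COUNTS];

    normalize_counts(raw, inputs);
    for (int i = 0; i < NUM_COUNTS; i++)
        printf("%.4f ", inputs[i]);
    printf("\n");
    return 0;
}
```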
A traditional floating-point fused multiply-add unit (FMAF) follows a "multiply - add - normalize - round" structure.
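As an illustrative software sketch of this behaviour (not a description of the hardware itself), the C99 fma() function performs the multiply and add as one fused operation, with normalization and rounding applied only once at the end, whereas writing a * b + c rounds the product and the sum separately:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Values chosen so the exact product a*b = 1 - 2^-54 is not
     * representable in double precision.                          */
    double a = 1.0 + 0x1.0p-27;   /* 1 + 2^-27 */
    double b = 1.0 - 0x1.0p-27;   /* 1 - 2^-27 */
    double c = -1.0;

    /* Separate multiply then add: the product is rounded to 1.0 first,
     * then the sum is rounded again (two roundings), giving 0.0.
     * (Whether the compiler contracts this into an fma instruction
     * depends on flags such as -ffp-contract.)                        */
    double separate = a * b + c;

    /* Fused multiply-add: multiply, add, normalize, round happen as a
     * single operation with one rounding, giving the exact -2^-54.    */
    double fused = fma(a, b, c);

    printf("a*b + c      = %a\n", separate);  /* typically 0x0p+0 */
    printf("fma(a, b, c) = %a\n", fused);     /* -0x1p-54         */
    return 0;
}
```

The single final rounding is exactly what the "multiply - add - normalize - round" pipeline provides: the intermediate product is never normalized or rounded on its own.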