In practical applications of neural-network-based short-term load forecasting, the main problem is the large number of training samples, which leads to long training times and slow convergence.
To address the conventional BP algorithm's slow convergence and its tendency to get trapped in local minima, radial basis function (RBF) networks have been proposed.
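A minimal sketch of the RBF idea mentioned above (illustrative only, not from the source): with fixed Gaussian centers, only the linear output weights need training, and they can be found by a closed-form least-squares solve, which sidesteps BP's slow iterative descent and local minima. The toy target `sin(x)`, the center placement, and the width are all assumptions for illustration.

```python
# Minimal RBF-network sketch (assumed setup): fixed Gaussian centers,
# linear output weights solved in closed form by least squares.
import numpy as np

def rbf_features(x, centers, width=1.0):
    # One Gaussian basis function per center.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

x = np.linspace(-3, 3, 50)
y = np.sin(x)                                # toy target function

centers = np.linspace(-3, 3, 10)             # fixed here; k-means is common in practice
Phi = rbf_features(x, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights, no iterative descent

pred = Phi @ w                               # fitted values
```

In practice the centers are often chosen by clustering the training data; only the output layer is then a linear problem.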
Conventional PID control can hardly meet the requirements of precise finger position control, while a single-neuron PID controller designed according to BPNN theory is also unsatisfactory, owing to its low learning rate and slow convergence.
For a mixing sequence of random variables, the paper proves a theorem of complete convergence under the conditions of a weak mixing rate and non-identical distributions.
However, a significant problem with interactive genetic algorithms (IGA) is that when the genetic operations converge slowly, users have to evaluate a large number of individuals; especially when the individuals are highly similar, user fatigue sets in easily.
The back-propagation (BP) algorithm is often used to train neural network weights, but its convergence is slow.
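The slow convergence noted above can be seen even in the smallest case. A minimal sketch (assumed toy setup, not from the source): plain gradient descent on one sigmoid neuron, where a small fixed learning rate leaves a much larger error after the same number of epochs.

```python
# Gradient descent on a single sigmoid neuron (toy illustration of
# why a small fixed learning rate makes vanilla BP converge slowly).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(lr, epochs):
    # Toy task: learn output y = 1 for input x = 1 (one weight, no bias).
    w, x, y = 0.0, 1.0, 1.0
    for _ in range(epochs):
        out = sigmoid(w * x)
        grad = (out - y) * out * (1.0 - out) * x  # chain rule on squared error
        w -= lr * grad
    return abs(sigmoid(w * x) - y)                # remaining error

slow = train(lr=0.1, epochs=200)   # small step size: error shrinks slowly
fast = train(lr=1.0, epochs=200)   # larger step size: same epochs, less error
```

Adaptive learning rates and momentum are the usual remedies for exactly this behavior.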
Applying the Solow-Swan model to panel data, it analyzes conditional convergence across the regions of Fujian Province, estimates the conditional convergence speed, and finds that the province's backward regions are indeed catching up with the developed ones, slowly but steadily.
Q-learning is a typical reinforcement learning (RL) method, but its learning efficiency is low and its convergence slow, especially as the state space and action space grow.
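The scaling problem noted above comes from the tabular update itself. A minimal sketch (toy MDP assumed, not from the source): tabular Q-learning on a 5-state chain where only the rightmost state pays reward; the table has one entry per state-action pair, which is why large state and action spaces slow convergence.

```python
# Tabular Q-learning on a toy 5-state chain (assumed example).
# Action 1 moves right, action 0 moves left; reaching state 4 pays 1.
import random

N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA = 0.5, 0.9
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                          # episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)            # random behavior policy
        s2, r = step(s, a)                    # (Q-learning is off-policy)
        # The core Q-learning update rule:
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, moving right dominates in every non-terminal state.
```

With |S| states and |A| actions the table has |S|·|A| entries, each of which must be visited many times, so the state- and action-space sizes directly govern the convergence speed.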