模型的激励函数采用了模糊模式识别模型。
The fuzzy pattern recognition model developed by Chen Shouyu is used as the activation function of the hidden nodes.
新的网络激励函数和训练算法切实满足过程控制的需要。
It is proved that the new network activation function and the improved BP training algorithm practically meet the requirements of process control.
用周期函数,有限项傅立叶级数,作为激励函数来获取训练样本。
A periodic function, a finite-term Fourier series, is used as the excitation function to obtain training samples.
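To illustrate the idea in this example pair, here is a minimal sketch of sampling a finite-term Fourier series as an excitation signal; the function name, the coefficient values, and the base frequency are all hypothetical choices for illustration, not taken from the cited work:

```python
import math

def fourier_excitation(t, coeffs, base_freq=1.0):
    """Finite-term Fourier series used as a periodic excitation signal.

    coeffs is a list of (a_k, b_k) pairs for the k-th harmonic; the
    coefficient values below are hypothetical, for illustration only.
    """
    return sum(
        a * math.sin(2 * math.pi * k * base_freq * t)
        + b * math.cos(2 * math.pi * k * base_freq * t)
        for k, (a, b) in enumerate(coeffs, start=1)
    )

# Sample one period of the excitation to obtain training inputs.
coeffs = [(1.0, 0.0), (0.5, 0.2), (0.0, 0.1)]
samples = [fourier_excitation(t / 100.0, coeffs) for t in range(100)]
```

Because every harmonic is a multiple of the base frequency, the signal repeats with period 1/base_freq, so one period of samples already covers the whole excitation.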
使用了高斯函数作为神经网络的激励函数,并以最小二乘准则对字符进行识别。
The Gaussian function is used as the neural network's activation function, and the least-squares criterion is used to recognize the characters.
以JK触发器为例,提出了一种基于触发器行为的J、K激励函数的最小化技术。
Taking the JK flip-flop as an example, a minimization technique for the J and K excitation functions based on flip-flop behavior is proposed.
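In this example pair, "excitation function" has its sequential-logic meaning: the J, K inputs required to drive a given state transition. A small sketch of the standard JK excitation table (the don't-care entries are what a minimization technique exploits); the names and the behavioural check are illustrative, not the cited paper's method:

```python
# JK flip-flop excitation table: for a desired transition Q -> Q_next,
# the (J, K) inputs that realise it; 'x' marks a don't-care, which is
# what gives a minimizer freedom when deriving the excitation functions.
EXCITATION = {
    (0, 0): ('0', 'x'),
    (0, 1): ('1', 'x'),
    (1, 0): ('x', '1'),
    (1, 1): ('x', '0'),
}

def jk_next(q, j, k):
    """Behavioural model of a JK flip-flop (next state on the clock edge)."""
    return {(0, 0): q, (0, 1): 0, (1, 0): 1, (1, 1): 1 - q}[(j, k)]

# Sanity check: every concrete assignment of a don't-care still realises
# the intended transition.
for (q, q_next), (j, k) in EXCITATION.items():
    for jj in ([0, 1] if j == 'x' else [int(j)]):
        for kk in ([0, 1] if k == 'x' else [int(k)]):
            assert jk_next(q, jj, kk) == q_next
```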
在对神经网络的激励函数的三个假设下,研究了具有离散时滞的神经网络的稳定性。
We analyze the global stability of a class of neural networks with discrete delays under three assumptions on the activation functions.
从激励函数、权值修正方法、目标函数三方面对多个改进算法进行收敛性能的比较。
The convergence performance of several improved algorithms is compared in three respects: activation function, weight-update method, and objective function.
本文提出基于新的激励函数BP算法建立误差预测模型,修正新型广义预测算法的预测输出。
This paper proposes an error prediction model based on a BP algorithm with a new activation function, which is used to correct the predicted output of a new generalized predictive control algorithm.
经数值计算结果表明,选择径向基函数作为隐层的激励函数,可以得到较好的样本拟合效果。
Numerical results show that using a radial basis function as the activation function of the hidden layer gives a good fit to the training samples.
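A minimal sketch of what this example pair describes: a hidden layer of Gaussian radial basis functions whose output weights are fitted by least squares. The target function, node centres, and width are hypothetical choices for illustration:

```python
import numpy as np

def gaussian(x, c, width=0.3):
    """Radial basis (Gaussian) activation centred at c."""
    return np.exp(-((x - c) ** 2) / (2.0 * width ** 2))

# Hypothetical 1-D fitting problem: hidden layer of Gaussian nodes,
# output weights solved by linear least squares.
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(np.pi * x)                       # sample target function
centers = np.linspace(-1.0, 1.0, 9)         # assumed hidden-node centres
H = gaussian(x[:, None], centers[None, :])  # hidden-layer activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)   # least-squares output weights
fit_error = np.max(np.abs(H @ w - y))
```

Because the output layer is linear in the weights, fitting reduces to one least-squares solve; this locality of the Gaussian basis is the usual reason RBF hidden layers fit samples well.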
结论反向传播网络在函数逼近方面差的原因是激励函数的全局性、隐层结点数目的不确定性。
Conclusion: The back-propagation network performs poorly at function approximation because of the global nature of its activation function and the uncertainty in the number of hidden-layer nodes.
同时在激励函数单调递增的条件减弱的情况下,给出了两条渐近稳定的定理,并给了严格的数学证明。
Meanwhile, under a weakened version of the condition that the activation functions are monotonically increasing, two asymptotic stability theorems are given, together with rigorous mathematical proofs.
并且详细叙述了神经网络结构参数如隐含层神经元个数、激励函数、网络收敛精度等的确定原则和方法。
The principles and methods for determining the network's structural parameters, such as the number of hidden-layer neurons, the activation function, and the convergence accuracy, are described in detail.
利用具有不同激励函数的分组方法提高网络性能,并利用随机梯度算法确保学习过程不会陷入局部极值。
Network performance is improved by a grouping method that uses different activation functions, and a stochastic gradient algorithm ensures that the learning process does not get trapped in local extrema.
以往的BP算法调节神经元网络的权值,其网络的隐层结点数、网络学习快慢程度及网络的泛化能力都与网络的激励函数有关的。
The conventional BP algorithm adjusts the weights of the neural network; the number of hidden nodes, the learning speed, and the generalization ability of the network all depend on the network's activation function.