梯度下降法与混沌优化法均具有各自的缺点。
The gradient descent method and the chaos optimization method each have their own shortcomings.
但BP神经网络本质是梯度下降法,容易陷入局部最优。
However, the BP neural network is in essence a gradient descent method and therefore tends to get trapped in local optima.
该模型采用五层前向模糊神经网络,学习算法为梯度下降法。
The model is a five-layer feedforward fuzzy neural network whose learning algorithm is gradient descent.
然后利用梯度下降法推导了基于最优模式中心的NLDA算法。
Then gradient descent is used to derive an NLDA algorithm based on the optimal class centers.
最简单的梯度下降算法通常在参数变化和有噪声时也很稳定,有效。
Even the simplest gradient descent algorithm is usually stable and effective in the presence of parameter variations and noise.
然而基于梯度下降的BP网络存在收敛速度慢、易陷入局部极小的缺陷。
However, the gradient-descent-based BP network suffers from slow convergence and a tendency to fall into local minima.
实现了一种新的基于遗传算法和梯度下降方法的快速模糊系统学习算法。
A new fast learning algorithm for fuzzy systems, based on both genetic algorithms and gradient descent, is implemented.
有两种方法可以用来设计网络,一是能量下降法,另一个是梯度下降法。
Two methods can be used to design the network: one is the energy descent method, and the other is the gradient descent method.
目前使用的最多的学习算法仍然是基于梯度下降的BP算法和遗传算法。
The most widely used learning algorithms at present are still the gradient-descent-based BP algorithm and the genetic algorithm.
将权矩阵的学习过程归结为用梯度下降法求一组矛盾线性方程组的过程;
The learning of the weight matrix is reduced to the process of solving an inconsistent system of linear equations by gradient descent;
基于梯度下降思想提出离散修复算子,提高算法对非线性约束的处理能力。
A discrete repair operator based on the idea of gradient descent is proposed to improve the algorithm's ability to handle nonlinear constraints.
就随机并行梯度下降(SPGD)最优化算法在光束净化系统中的应用展开研究。
This paper studies the application of the stochastic parallel gradient descent (SPGD) optimization algorithm to beam cleanup systems.
随机并行梯度下降(SPGD)算法可以对系统性能指标直接优化来校正畸变波前。
The stochastic parallel gradient descent (SPGD) algorithm can correct wavefront aberrations by directly optimizing the system performance metrics.
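The SPGD scheme mentioned in the pair above can be sketched as follows. This is an illustrative minimal example, not taken from any of the cited works: all parameter values are assumed, and the quadratic metric stands in for a real wavefront-quality signal.

```python
import random

def spgd(metric, u, gamma=0.5, delta=0.05, steps=2000, seed=0):
    """Minimize metric(u) by stochastic parallel gradient descent:
    perturb all parameters simultaneously with random +/-delta,
    measure the metric change, and step against it."""
    rng = random.Random(seed)
    for _ in range(steps):
        perturb = [delta if rng.random() < 0.5 else -delta for _ in u]
        j_plus = metric([ui + pi for ui, pi in zip(u, perturb)])
        j_minus = metric([ui - pi for ui, pi in zip(u, perturb)])
        dj = j_plus - j_minus          # two-sided metric difference
        # each parameter moves opposite the measured metric change
        u = [ui - gamma * dj * pi for ui, pi in zip(u, perturb)]
    return u

# Toy metric: squared distance from a target control vector [1.0, -2.0]
u = spgd(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, [0.0, 0.0])
```

Note that SPGD never evaluates an analytic gradient; it only needs the scalar metric, which is why it suits adaptive-optics systems where only an overall beam-quality measurement is available.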
首先研究了离线训练的滑模控制器,然后,给出了利用梯度下降法的在线训练方法。
First, a sliding mode controller trained off-line is studied; then an on-line training method using gradient descent is presented.
并提出分别采用共轭梯度法、二元自适应收缩法以及梯度下降法对以上优化问题求解。
The above optimization problems are solved by the conjugate gradient method, the bivariate adaptive shrinkage method, and the gradient descent method, respectively.
其实质是采用梯度下降法使权值的改变总是朝着误差变小的方向改进,最终达到最小误差。
Its essence is to use gradient descent so that the weight updates always move in the direction of decreasing error, eventually reaching the minimum error.
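The update rule described in the pair above — weights always changing in the direction of decreasing error — can be sketched with a single weight. This is a minimal illustrative example with assumed names and values, not code from any of the cited works:

```python
def train_weight(x, y, w=0.0, lr=0.1, steps=100):
    """Gradient descent on one weight w so that w * x approximates y,
    minimizing E(w) = 0.5 * (w * x - y) ** 2."""
    for _ in range(steps):
        error = w * x - y   # prediction error
        grad = error * x    # dE/dw
        w -= lr * grad      # step opposite the gradient: error decreases
    return w

w = train_weight(x=2.0, y=6.0)  # the error-minimizing weight is 3.0
```

A full BP network applies exactly this rule to every weight, with the gradients obtained layer by layer via the chain rule.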
本文研究了BP网络,实现了“梯度下降法”的网络训练方法,获得了较传统方法好的效果。
This paper studies the BP network, implements the gradient descent training method, and obtains better results than traditional methods.
运用最优梯度下降法使目标的计算值与期望值误差最小来迭代优选权重,建立迭代算法模型。
The optimal gradient descent method is used to iteratively select the weights by minimizing the error between the computed and expected target values, and an iterative algorithm model is established.
目前BP网络采用误差逆传播算法学习训练神经网络,该算法是基于网络误差函数梯度下降的。
At present the BP network is trained with the error back-propagation algorithm, which is based on gradient descent of the network's error function.
基于梯度下降的神经网络训练算法易于陷入局部最小,从而使网络不能对输入模式进行准确分类。
Gradient-descent-based neural network training algorithms tend to become trapped in local minima, so the network may fail to classify input patterns accurately.
对小波神经网络采用最速梯度下降法优化网络参数,并对学习率采用自适应学习速率方法自动调节。
In the wavelet neural network (WNN), the steepest gradient descent method is used to optimize the network parameters, and the learning rate is adjusted automatically by an adaptive learning-rate method.
RBF神经网络采用离线学习在线修正权值和阈值,为加快收敛速度,应用带惯性项的梯度下降法。
The RBF neural network is trained off-line, with its weights and thresholds adjusted on-line; to speed up convergence, gradient descent with an inertia (momentum) term is applied.
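The inertia term mentioned in the pair above can be sketched as follows: the previous weight change, scaled by a momentum factor, is added to the current gradient step. This is an illustrative example with assumed values, not the cited RBF implementation:

```python
def momentum_descent(grad, w=0.0, lr=0.1, momentum=0.9, steps=200):
    """Gradient descent with an inertia (momentum) term."""
    delta = 0.0
    for _ in range(steps):
        # inertia: keep a fraction of the previous change, add new step
        delta = momentum * delta - lr * grad(w)
        w += delta
    return w

# Example: minimize f(w) = (w - 4)^2, whose gradient is 2 * (w - 4)
w = momentum_descent(lambda w: 2 * (w - 4))
```

The inertia term smooths oscillations and lets the update accumulate speed along consistent gradient directions, which is why it accelerates convergence.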
然后通过梯度下降法和最小二乘法相结合的混合学习算法,对控制器参数进行调整以提高其控制精度。
Then the controller parameters are tuned by a hybrid learning algorithm combining gradient descent and least squares estimation (LSE) so as to attain better control precision.
它对一定时间间隔内两次采样得到的图象进行运算,用最陡梯度下降法直接迭代出图象位移的估计值。
The algorithm operates on two images sampled a fixed time interval apart and iterates the steepest descent method to directly estimate the image displacement.
在遗传算法中嵌入一个梯度下降算子,使得混合算法既有较快的收敛性,又能以较大概率得到全局极值。
By embedding a gradient descent operator into the genetic algorithm, the hybrid algorithm achieves both fast convergence and a high probability of finding the global optimum.
梯度下降算法是训练多层前向神经网络的一种有效方法,该算法可以以增量或者批量两种学习方式实现。
The gradient descent algorithm is an effective method for training multilayer feedforward neural networks, and it can be implemented in either batch or incremental learning mode.
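The two learning modes named in the pair above can be contrasted on a linear model y = w * x. This is a minimal sketch with assumed data and rates: batch mode accumulates the gradient over all samples before one update; incremental (online) mode updates after every sample.

```python
def batch_step(w, data, lr):
    # batch mode: average the gradient over the whole data set, then update once
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def incremental_epoch(w, data, lr):
    # incremental mode: update immediately after each sample
    for x, y in data:
        w -= lr * (w * x - y) * x
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # consistent with w = 2
w_batch = w_inc = 0.0
for _ in range(200):
    w_batch = batch_step(w_batch, data, lr=0.05)
    w_inc = incremental_epoch(w_inc, data, lr=0.05)
```

On consistent data both modes converge to the same weight; they differ in update granularity, and the incremental variant introduces per-sample noise that can help escape shallow minima.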