发现基本蚁群优化算法存在慢收敛且易停滞等问题。
The basic ant colony optimization algorithm is found to suffer from slow convergence and a tendency to stagnate.
论述了TCP/IP网络中距离向量(V D)算法中的慢收敛问题,以及一些解决方法的不足。
The thesis discusses the slow-convergence (count-to-infinity) problem of the distance-vector (D-V) algorithm in TCP/IP networks and points out the shortcomings of some existing solutions.
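The count-to-infinity behaviour named above can be sketched in a few lines. This is a minimal, hypothetical model (a three-node chain A–B–C with destination C and a RIP-style infinity of 16), not the thesis's own setup:

```python
# Count-to-infinity sketch: routers A and B hold converged distances to
# destination C; then the B-C link fails. B falls back on A's stale
# advertisement, and the two distances leapfrog upward until "infinity".
INF = 16                      # RIP-style unreachable threshold
dist = {"A": 2, "B": 1}       # converged hop counts to C before the failure

rounds = 0
while dist["B"] < INF:
    dist["B"] = min(INF, dist["A"] + 1)   # B now routes "via A" (stale info)
    dist["A"] = min(INF, dist["B"] + 1)   # A still routes via B
    rounds += 1

print(rounds, dist)  # distances slowly "count" up to INF
```

Each exchange raises both distances by 2, which is exactly why the convergence after a failure is so slow; remedies such as split horizon suppress B learning the stale route from A in the first place.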
分析了BP算法的基本原理,指出了BP算法具有收敛速度慢、易陷入局部极小点等缺陷以及这些缺陷产生的根源。
The basic principle of the BP algorithm is analyzed; its defects, such as slow convergence and a tendency to fall into local minima, are pointed out, and the root causes of these defects are identified.
计算阻抗矩阵元素时,由于被积函数振荡性很强,收敛慢,难于计算。
Because the integrand is highly oscillatory and converges slowly, the impedance matrix elements are difficult to evaluate.
在神经网络负荷预测实际应用中,突出的问题是训练样本大、训练时间长、收敛速度慢。
In practical applications of neural-network-based load forecasting, the main problems are large training sample sets, long training times, and slow convergence.
补偿后收敛速度比架空线路慢很多,对其原因也进行了探讨。
After compensation, the convergence is much slower than that of the overhead line; the causes of this are also discussed.
对于传统BP算法存在的收敛速度慢和易陷入局部极小值问题,人们提出了径向基函数网络。
Radial basis function networks were proposed to address the slow convergence and the tendency to fall into local minima of the conventional BP algorithm.
新算法选择很广一类的隐层神经元函数,可以直接求得全局最小点,不存在BP算法的局部极小、收敛速度慢等问题。
The new algorithm admits a very wide class of hidden-neuron functions and obtains the global minimum directly, so it does not suffer from the local minima and slow convergence of the BP algorithm.
由于坐标测量机几何误差变化规律复杂,采用一般的BP神经网络模型算法,速度慢且难以收敛。
Because the geometric error of a CMM varies in a complicated way, an ordinary BP neural network model is slow and difficult to converge.
但这些方法通常存在计算量大、收敛慢及参数敏感等不足。
However, these methods generally suffer from heavy computation, slow convergence, and sensitivity to parameters.
确定性信号的收敛速率慢于不相关信号的速率。
The convergence rate for deterministic signals is slower than that for uncorrelated signals.
该算法克服了DFS算法收敛性差和模拟退火(SA)算法收敛速度慢的缺点。
The proposed algorithm overcomes the poor convergence of the DFS algorithm and the slow convergence of the simulated annealing (SA) algorithm.
本文介绍了动态对角递归网络,并针对BP算法收敛慢的缺点,将递推预报误差学习算法应用到神经网络权值和域值的训练。
This paper introduces the dynamic diagonal recurrent network and, to overcome the slow convergence of the BP algorithm, applies the recursive prediction error algorithm to train both the weights and the thresholds of the network.
由于PBD算法需要在估计样本函数的同时估计PSF的参数,一般采用的PSF的模型较为复杂,计算量大,收敛慢;
Since the PBD algorithm must estimate the specimen function and the PSF parameters simultaneously, and the PSF models typically used are complicated, it is computationally expensive and converges slowly.
它对收敛慢的大型线性计算特别有效。
This method is particularly effective for large, slowly convergent linear computations.
并对以前神经网络中的两个难点:局部极小和收敛速度慢的问题进行分析。
Two long-standing difficulties of neural networks, convergence to local minima and slow convergence, are also analyzed.
使用常规pid控制很难满足手指精确位置控制的要求,而采用依据BPNN原理设计成的常规单神经元pid控制器又因学习速率低,收敛速度慢,控制效果不能令人满意。
Conventional PID control can hardly meet the requirements of precise finger position control, while a conventional single-neuron PID controller designed on BPNN principles is also unsatisfactory because of its low learning rate and slow convergence.
本文针对BP神经网络收敛速度慢的缺点,提出了改进方案。
This paper proposes an improved scheme to address the slow convergence of the BP neural network.
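One common remedy for slow BP convergence (a generic illustration, not necessarily this paper's scheme) is adding a momentum term to the gradient step. The sketch below compares plain and momentum-smoothed gradient descent on an ill-conditioned quadratic, a stand-in for the elongated error surfaces that slow BP training down; the learning rate and momentum factor are illustrative assumptions:

```python
import numpy as np

A = np.diag([1.0, 50.0])      # condition number 50: a narrow error valley
grad = lambda w: A @ w        # gradient of the cost 0.5 * w^T A w

def steps_to_converge(beta, lr=0.018, tol=1e-6, max_iter=20000):
    """Iterations until ||w|| < tol; beta=0 gives plain gradient descent."""
    w, v = np.array([1.0, 1.0]), np.zeros(2)
    for k in range(max_iter):
        v = beta * v - lr * grad(w)   # momentum-smoothed update
        w = w + v
        if np.linalg.norm(w) < tol:
            return k + 1
    return max_iter

plain = steps_to_converge(beta=0.0)
momentum = steps_to_converge(beta=0.9)
print(plain, momentum)  # momentum needs markedly fewer iterations
```

The momentum term accumulates velocity along the shallow direction of the valley while damping oscillation across it, which is exactly the failure mode that makes plain gradient descent (and hence BP) slow.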
针对多层前馈网络的误差反传算法存在的收敛速度慢,且易陷入局部极小的缺点,提出了采用微粒群算法(PSO)训练多层前馈网络权值的方法。
A particle swarm optimization (PSO) method is proposed for training the weights of multilayer feedforward networks, addressing the slow convergence and local-minima drawbacks of the error back-propagation algorithm.
反向传播(BP)算法常常用于神经网络的权值训练中,但是BP算法收敛慢。
The back-propagation (BP) algorithm is often used to train neural network weights, but it converges slowly.
然而基于梯度下降的BP网络存在收敛速度慢、易陷入局部极小的缺陷。
However, the gradient-descent-based BP network suffers from slow convergence and is prone to falling into local minima.
经验证(PSO)优化算法可以有效地克服BP神经网络存在的学习效率低,收敛速度慢以及容易陷入局部极小点等固有缺点。
It is confirmed that the PSO algorithm can effectively overcome the intrinsic shortcomings of BP neural networks, including low learning efficiency, slow convergence, and a tendency to fall into local minima.
但盲估计算法,如子空间分解法等,需要较大的样本值,收敛速率慢,不利于实时信道估计。
However, blind estimation algorithms such as subspace decomposition require large sample sizes and converge slowly, which makes them ill-suited to real-time channel estimation.
由核密度估计推导获得的高斯核均值漂移算法因收敛速度慢在应用中效率不高。
The Gaussian-kernel mean-shift algorithm derived from kernel density estimation is inefficient in applications because of its slow convergence.
MD接收机的应用受到了LMP算法收敛速度慢的限制。
The application of the adaptive MD receiver is limited by the slow convergence of the LMP algorithm.
针对传统常数模算法收敛速度慢的缺点,提出了一种基于动量算法的常数模算法。
To overcome the slow convergence of the traditional constant modulus algorithm (CMA), a momentum-based constant modulus algorithm (MCMA) is proposed.
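The idea of a momentum-augmented CMA can be sketched as follows: the standard CMA gradient step is smoothed with a momentum term. The channel, step size, momentum factor, and tap count below are illustrative assumptions, not the cited work's setup:

```python
# MCMA sketch: blind equalization of a QPSK stream through a mild ISI
# channel, minimizing the constant-modulus cost E[(|y|^2 - R2)^2] with a
# momentum-smoothed stochastic gradient update.
import numpy as np

rng = np.random.default_rng(0)
N = 5000
s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N) / np.sqrt(2)  # unit-modulus QPSK
x = np.convolve(s, [1.0, 0.25], mode="full")[:N]                 # mild ISI channel

R2 = 1.0                        # constant-modulus target for |s| = 1
L, mu, beta = 7, 0.01, 0.5      # taps, step size, momentum factor (assumed)
w = np.zeros(L, complex); w[L // 2] = 1.0   # centre-spike initialization
v = np.zeros(L, complex)

for n in range(L, N):
    u = x[n - L:n][::-1]                   # regressor, most recent sample first
    y = w.conj() @ u                       # equalizer output
    e = y * (abs(y) ** 2 - R2)             # CMA error term
    v = beta * v - mu * e.conjugate() * u  # momentum-smoothed gradient step
    w = w + v

# dispersion of the output modulus should shrink after adaptation
y_out = np.array([w.conj() @ x[n - L:n][::-1] for n in range(L, N)])
disp0 = np.mean((np.abs(x[L:]) ** 2 - R2) ** 2)         # before equalization
disp = np.mean((np.abs(y_out[-500:]) ** 2 - R2) ** 2)   # after adaptation
print(disp0, disp)
```

As with momentum in gradient descent, the velocity term `v` averages successive stochastic gradients, which is what speeds up the otherwise slow CMA adaptation.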
但由于定点数运算会引起量化累积误差,均衡器的收敛速度比浮点数运算均衡器的收敛速度要慢,收敛后的稳定性也较差。
However, since fixed-point arithmetic can introduce cumulative quantization error, the fixed-point equalizer converges more slowly than its floating-point counterpart and is less stable after convergence.
对于高维复杂函数,传统的确定性算法易陷入局部最小,而单一的全局随机搜索算法收敛速度慢。
For complex high-dimensional functions, conventional deterministic methods are easily trapped in local minima, while a pure global random search converges slowly.