A hybrid neural network model is constructed using a BP algorithm improved with a dynamic learning rate.
This is accomplished by introducing a parameter alpha (0 < alpha < 1), which is called the learning rate.
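As a hedged illustration of how such a parameter is typically used (a minimal sketch assuming a plain gradient-descent weight update; the function and variable names are illustrative, not taken from any of the cited works):

```python
import numpy as np

def gradient_descent_step(w, grad, alpha=0.1):
    """One weight update; alpha (0 < alpha < 1) is the learning rate."""
    return w - alpha * grad

# Toy example: minimize f(w) = ||w||^2, whose gradient is 2 * w.
w = np.array([1.0, -2.0])
for _ in range(50):
    w = gradient_descent_step(w, 2 * w, alpha=0.1)
# w is now close to [0, 0]; a smaller alpha would converge more slowly.
```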
The serious cold disturbed my daily routine and even put my learning progress badly behind.
Here I attempt a basic mathematical estimate of how much this book's methods improve the rate of learning the piano.
Here we offer some suggestions on choosing the number of hidden-layer nodes and the learning rate.
This paper presents an improved BP algorithm that adapts the network's learning rate using the golden-section method.
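As a rough sketch of the idea (assuming the golden-section method serves as a one-dimensional line search for the step size along the negative gradient; the exact formulation of the cited paper is not given here, and all names below are illustrative):

```python
import numpy as np

def golden_section_lr(loss, w, grad, lo=0.0, hi=1.0, tol=1e-4):
    """Choose a learning rate eta in [lo, hi] that minimizes loss(w - eta * grad)."""
    phi = (np.sqrt(5) - 1) / 2              # golden-ratio factor ~0.618
    a, b = lo, hi
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    while b - a > tol:
        if loss(w - c * grad) < loss(w - d * grad):
            b = d                            # minimum lies in [a, d]
        else:
            a = c                            # minimum lies in [c, b]
        c = b - phi * (b - a)
        d = a + phi * (b - a)
    return (a + b) / 2

# Example on a quadratic loss; the selected step is near the exact minimizer (0.5).
loss = lambda w: np.sum(w ** 2)
w = np.array([3.0, -1.0])
grad = 2 * w
eta = golden_section_lr(loss, w, grad)
w = w - eta * grad
```

In such a scheme the returned eta would replace a fixed learning rate at each training step.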
Results show that the gain, learning rate, and momentum are critical parameters for network convergence and stability.
The most common way to evaluate the learning rate is to plot prediction quality against the number of items.
The new algorithm effectively improves the learning convergence speed of the neural network, achieving better convergence performance and a faster learning rate than the conventional BP algorithm.
Building on existing compression algorithms, the new algorithm improves how the learning-rate factor and the winning neuron are determined.
Compared with the standard BP algorithm, this system forms an improved BP algorithm by combining the additional-momentum method with an adaptive learning rate.
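A minimal sketch of combining the two ideas (assuming the common rule of enlarging the learning rate when the training error falls and shrinking it otherwise; the constants and names are illustrative, not taken from the cited work):

```python
def bp_update(w, grad, velocity, lr, err, prev_err,
              momentum=0.9, lr_inc=1.05, lr_dec=0.7):
    """One weight update with additional momentum and an adaptive learning rate."""
    # Adapt the learning rate based on how the training error changed.
    lr = lr * lr_inc if err < prev_err else lr * lr_dec
    # Additional-momentum step: reuse part of the previous weight change.
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity, lr
```

The momentum term smooths successive updates, while the adaptive rule keeps the step size matched to the current error behavior.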
During the neural network's self-learning process, an adaptive learning rate and batch error processing are introduced to speed up training.
Finally, a BP algorithm with an adaptive learning rate is evaluated and compared with the standard BP algorithm on the XOR problem.
The influence of the number of hidden-layer nodes on convergence speed and recognition accuracy, and of the learning rate on convergence speed, is examined, and the network training algorithm is improved accordingly.
The control algorithm introduces a self-adjusting learning rate and layered control during the initial stage of learning.
Methods are given for preprocessing the sample data and for determining the learning rate, momentum coefficient, number of hidden-layer nodes, and other parameters.
To address this problem, this paper proposes a parameter model under evidence loss and derives an EM updating algorithm that incorporates a learning rate.
The learning rate is an important parameter controlling the learning process of a neural network (NN); it affects both the stability and the speed of the network.
A self-adaptive learning rate and momentum coefficient are used in training the wavelet neural network to avoid getting trapped in local minima.
In the wavelet neural network (WNN), the steepest gradient-descent method is used to optimize the network parameters, and the learning rate is adjusted automatically by an adaptive learning-rate method.
In this paper, using Kalman filtering, we derive a new back-propagation algorithm whose learning rate is computed from a Riccati difference equation.
It is often tempting to hurry the child beyond his natural learning rate, but this can set up dangerous feelings of failure and states of anxiety in the child.
The effects of neural network parameters, including the gain, learning rate, and momentum, on network convergence and on the results of derivative pulse voltammetry (DPV) computations have been investigated.
First, this paper investigates the effect of the initial weight range, the learning rate, and the regularization coefficient on generalization performance and learning speed.
During the neural network's self-learning process, an adaptive learning rate and a momentum term are introduced to accelerate convergence and improve identification accuracy.
The two-dimensional Kohonen network algorithm is improved in several respects, such as the neighborhood function and the learning-rate adjustment, and is applied to the dynamic classification of tobacco leaves.
There are quite a few parameters to optimize, such as the network layer structure, the number of iterations, the learning rate, and the momentum term, to name just a few.
Simulation experiments and an application to servo motor speed control show that the new algorithm learns quickly, converges well, and effectively overcomes the defects of the traditional PID algorithm.