对网络进行学习训练,并进行了网络结构及其学习参数优化;
The network was trained, and its structure and learning parameters were optimized.
随后,动态优化学习参数算法动态地调整和选取优化的学习参数。
Subsequently, the dynamic learning-parameter optimization algorithm adjusts the learning parameters on the fly and selects the optimal values.
神经网络模型的学习参数为0.01,网络训练迭代次数为500。
The learning parameter of the neural network model is set to 0.01, and the number of training iterations is 500.
提出了根据算法亲合度自适应调节学习参数的方法,以提高算法的全局寻优能力。
A method is proposed for adaptively adjusting the learning parameter according to the affinity, so as to improve the algorithm's global search ability.
针对BP神经网络的缺点,研究了一种动态自适应调整学习参数的改进型BP算法。
To address the shortcomings of BP neural networks, an improved BP algorithm that dynamically and adaptively adjusts the learning parameters is studied.
一种是动态优化法,每次学习使用的学习参数由单纯形优化器在之前的学习过程中进行预测。
One is the dynamic optimization method, in which the learning parameters used at each step are predicted by a simplex optimizer from the preceding learning process.
我来举个例子,解释为什么我们用向量来学习参数曲线,毕竟我们学了这么多的都只能在坐标上完成。
OK, so let me show you a nice example of why we might want to use vectors to study parametric curves because, after all, a lot of what's here you can just do in coordinates.
自生成神经网络(SGNN)是一类自组织神经网络,它不需要用户指定网络结构和学习参数,而且不需要迭代学习,是一类特点突出的神经网络。
A self-generating neural network (SGNN) is a self-organizing neural network whose structure and learning parameters need not be specified by the user and whose learning requires no iteration, which makes it a distinctive class of neural network.
您既然已经了解了如何添加一个数据参数到一个规则扩展中,那么您就已经准备好学习如何在您的代码中使用那些值了。
Now that you know how to add custom data parameters to a rule extension, you're ready to learn how to use those values in your code.
在本文的后面部分,您将学习如何使用php为这些脚本传递参数。
Later in this article, you'll learn how to pass arguments to these scripts using PHP.
我们马上就要学习直线的参数方程。
OK, so we are going to learn about parametric equations of lines.
还要学习如何对部署的Web服务施加限制,比如限制其他用户可以使用的方法和参数。
You'll also learn how to deploy your Web service restrictively, limiting the methods and parameters available for other users to work with.
该参数实际上是另一个bean,稍后您将学习如何配置该bean。
This parameter is actually another bean, which you will learn how to configure in just a moment.
学习了这些升级影响、系统目录变更、新onconfig参数、以及降级影响之后,现在您就可以充分利用Version 11.70中的新功能了。
After learning about this upgrade impact, the system catalog changes, new ONCONFIG parameters, and reversion impact, you are now ready to take full advantage of all the new features in version 11.70.
学习至今,您知道了Groovy闭包是代码块,可以被引用、带参数、作为方法参数传递、作为返回值从方法调用返回。
From your studies so far, you know that Groovy closures are code blocks that can be referenced, parameterized, passed as a method parameter, and delivered as the return value from a method call.
第一步,您应该学习shell的特性:Unix将命令行参数传递给Perl的方式及这些参数的Perl解释方法。
For the first step, you should learn your shell's peculiarities: the way that Unix passes command-line arguments to Perl, and Perl's interpretation of those arguments.
然后您学习了如何传递参数到您的服务器端方法,以及如何从服务器中返回更复杂的数据。
You then learned how to pass arguments to your server-side methods, and how to return more complex data in your responses from the server.
31岁那年,我开始了在法国图卢兹大学学习数学的新生活,在这之后我完成了我的博士论文,是关于 参数化
Then, at the age of 31, I started a new life with studies in mathematics at the University of Toulouse, France, after which I completed my doctoral thesis on parametrization …
然后通过梯度下降法和最小二乘法相结合的混合学习算法,对控制器参数进行调整以提高其控制精度。
Then the controller parameters are tuned by a hybrid learning algorithm combining gradient descent and the least-squares method, so as to improve control precision.
在本章中稍后的部分,我们将会学习关于路由参数的更多知识。
We'll learn more about route parameters later in this chapter.
结果表明,网络的增益、学习速率和动量是影响网络收敛和稳定性的关键参数。
Results show that the gain, learning rate, and momentum are the key parameters affecting network convergence and stability.
该网络通过学习并记忆PID参数调整规则,实现了在线调整PID参数。
By learning and memorizing the PID parameter tuning rules, the network adjusts the PID parameters online.
在神经网络的训练当中存在“过学习”现象以及参数难以选择的困难。
When training a neural network, the "over-learning" phenomenon occurs and suitable parameters are difficult to select.
第二层计算局部节点的加权响应和,混合参数作为学习加权。
The second layer computes a weighted sum of the responses of these local nodes, with the mixing parameters serving as the learned weights.
最后讨论了离散尺度与小波核函数的构造,核函数选择与核参数学习。
Finally, the construction of discrete scaling and wavelet kernels, kernel selection, and kernel parameter learning are discussed.
分离系统的线性部分和非线性部分参数学习都采用自然梯度算法。
The natural gradient method is applied to the parameter learning of the linear and nonlinear parts of the separating system.
并采用具有模式增强输入的BP网络进行决策参数估计,加快学习的收敛。
A BP neural network with pattern-enhanced inputs is used to estimate the decision parameters, which speeds up the convergence of learning.
分析了动态递归神经网络系统辨识的参数学习算法。
The parameter learning algorithm for system identification with dynamic recurrent neural networks is analyzed.
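Taken together, several of the examples above describe the same recurring technique: training a network by gradient descent with a learning parameter around 0.01 over a few hundred iterations, while adjusting that parameter adaptively as the error evolves. The following is a minimal Python sketch of that idea; the toy data, the linear model, and the "bold driver" style adjustment rule are illustrative assumptions, not code taken from any of the cited sources.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # toy inputs
y = X @ np.array([0.5, -0.2, 0.1])     # toy targets from a known linear map
w = np.zeros(3)                        # parameters to be learned

lr = 0.01                              # initial learning parameter
prev_err = float("inf")
for step in range(500):                # number of training iterations
    grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad                     # gradient-descent update
    err = np.mean((X @ w - y) ** 2)
    # Adaptive rule: grow the learning parameter while the error keeps falling,
    # shrink it sharply when the error rises (a common heuristic for BP training).
    lr = lr * 1.05 if err < prev_err else lr * 0.5
    prev_err = err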