XML节点到关系列的映射隐含在xmltable函数中。
The mapping of XML nodes to relational columns is implicit in the XMLTABLE function.
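The node-to-column mapping mentioned above can be sketched with a rough Python analogue of what XMLTABLE does; the document names, element names, and helper function below are illustrative only.

```python
# A rough standard-library analogue of the implicit node-to-column mapping
# that SQL/XML's XMLTABLE performs: each matched row element becomes one
# relational row, and each named child node supplies one column value.
import xml.etree.ElementTree as ET

doc = """
<books>
  <book><title>SQL/XML</title><year>2006</year></book>
  <book><title>XQuery</title><year>2010</year></book>
</books>
"""

def xml_to_rows(xml_text, row_path, columns):
    root = ET.fromstring(xml_text)
    rows = []
    for node in root.findall(row_path):
        # The node-to-column mapping is implicit in the column name list.
        rows.append(tuple(node.findtext(col) for col in columns))
    return rows

print(xml_to_rows(doc, "book", ["title", "year"]))
# → [('SQL/XML', '2006'), ('XQuery', '2010')]
```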
(注意显式命名空间Tk::在这里是必需的,因为画布部件定义了自己的bind函数,隐藏了继承而来的那个)。
Note that the explicit Tk:: namespace qualifier is necessary here, since the Canvas widget defines its own bind function that hides the inherited one.
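The hiding described above can be illustrated with a minimal Python sketch: a subclass defines its own bind(), hiding the inherited one, so reaching the base-class version requires explicit qualification (Widget.bind here plays the role of the explicit Tk:: namespace; the class names are hypothetical, not the real Tk API).

```python
# Minimal name-hiding sketch: Canvas.bind hides the inherited Widget.bind,
# so the base-class version must be reached by explicit qualification.
class Widget:
    def bind(self):
        return "Widget.bind"

class Canvas(Widget):
    def bind(self):          # hides the inherited bind
        return "Canvas.bind"

c = Canvas()
print(c.bind())              # the hiding definition wins: Canvas.bind
print(Widget.bind(c))        # explicit qualification reaches the hidden one
```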
这些函数要么在返回值和参数上缺乏一致性,要么隐含着所谓的“截断误差”(truncation errors)错误,要么无法提供足够强大的功能。
These functions either have inconsistent return values and parameters, suffer from so-called truncation errors, or fail to provide sufficiently powerful functionality.
并且详细叙述了神经网络结构参数如隐含层神经元个数、激励函数、网络收敛精度等的确定原则和方法。
The principles and methods for determining neural-network structure parameters, such as the number of neurons in the hidden layer, the activation function, and the convergence accuracy, are described in detail.
确切的说,对于某一天的期权报价,隐含波动率是执行价格和存续期的二元函数,呈现曲面的形态。
More precisely, for the option quotes of a given day, the implied volatility is a function of two variables, the strike price and the time to maturity, and therefore forms a surface.
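The two-parameter view above can be sketched in Python; the parametric smile below is purely illustrative (a hypothetical formula, not a market model), just to show volatility as a function sigma(K, T) evaluated on a grid.

```python
# Illustrative sketch: implied volatility as a function of strike K and
# maturity T for a fixed quote date. The smile formula is hypothetical.
import math

def implied_vol(strike, maturity, atm=0.20, skew=0.5, spot=100.0):
    # Hypothetical smile: vol rises away from the money, flattens with T.
    moneyness = math.log(strike / spot)
    return atm + skew * moneyness ** 2 / math.sqrt(maturity)

# Evaluating on a (strike, maturity) grid traces out the volatility surface.
surface = [[implied_vol(K, T) for K in (80, 100, 120)] for T in (0.25, 1.0)]
```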
通过定义所有上面的函数变种,你隐含的创建了N - 1个函数。
By defining all of the function variants above, you implicitly create N-1 functions.
而SVM(支持向量机)引进核函数隐含的映射把低维特征空间中的样本数据映射到高维特征空间来实现分类。
The SVM (Support Vector Machine) classifies samples by using a kernel function to implicitly map data from a low-dimensional feature space into a high-dimensional feature space.
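Why the mapping is "implicit" can be shown concretely: for 2-D inputs, the polynomial kernel k(x, y) = (x . y)^2 equals an inner product in a 3-D feature space, yet the feature map phi is never computed during training. The vectors below are arbitrary illustrative values.

```python
# Kernel trick sketch: the kernel value equals the inner product of the
# explicit quadratic feature maps, without ever computing phi in the SVM.
import math

def kernel(x, y):
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel in 2-D.
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x, y = (1.0, 2.0), (3.0, 0.5)
print(kernel(x, y))            # → 16.0
print(dot(phi(x), phi(y)))     # → 16.0 (same value, computed explicitly)
```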
由于消极方法使用很多不同的局部线性函数来形成对目标函数隐含的全局逼近,具有比积极方法更丰富的假设空间。
Because lazy methods use many different local linear functions to form an implicit global approximation of the target function, they have a richer hypothesis space than eager methods.
由于模板构造函数终究不是拷贝构造函数,因此这种模板的出现并不会隐藏原来隐含的拷贝构造函数之声明。
Because a template constructor is never a copy constructor, the presence of such a template does not suppress the implicit declaration of a copy constructor.
而遗传算法不需要待优化函数具有连续可微性,并具有很强的通用性和隐含并行性。
The genetic algorithm, by contrast, does not require the function being optimized to be continuously differentiable, and it offers strong generality and implicit parallelism.
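The point that the objective only needs to be evaluable, not differentiable, can be illustrated with a minimal genetic-algorithm sketch; the population size, mutation scale, and generation count below are arbitrary illustrative choices.

```python
# Minimal GA sketch minimising a non-differentiable objective: selection
# keeps the fittest half, mutation perturbs them; no gradients are used.
import random

def f(x):
    return abs(x - 3.0)      # not differentiable at x = 3

random.seed(0)
pop = [random.uniform(-10, 10) for _ in range(30)]
for _ in range(100):
    pop.sort(key=f)                                        # selection
    parents = pop[:15]                                     # elitism
    children = [p + random.gauss(0, 0.5) for p in parents]  # mutation
    pop = parents + children

best = min(pop, key=f)       # converges near the minimiser x = 3
```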
径向基函数(RBF)神经网络隐含层R由一组径向基函数构成。
The hidden layer of an RBF (radial basis function) neural network consists of a set of radial basis functions.
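Such a hidden layer can be sketched directly: each hidden unit is one radial basis function, here a Gaussian centred at c with width sigma. The centres and width are illustrative values.

```python
# RBF hidden-layer sketch: one Gaussian basis function per hidden unit.
import math

def rbf(x, centre, sigma=1.0):
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

centres = [-1.0, 0.0, 1.0]               # one centre per hidden unit

def hidden_layer(x):
    return [rbf(x, c) for c in centres]

print(hidden_layer(0.0))   # the unit centred at 0 responds most strongly
```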
概念不清、忽略定义域、轻视值域和遗漏隐含条件是函数问题常见错误。
Common mistakes in function problems include unclear concepts, ignoring the domain, neglecting the range, and omitting implicit conditions.
研究步骤为:首先将离散的隐含波动率数据转换为函数形式,然后进行函数型数据分析。
The research proceeds as follows: first, the discrete implied-volatility data are converted into functional form; then functional data analysis is carried out.
径向基函数神经网络的隐含层到输出层的线性连接权值,则是由最小二乘法来计算得到的。
The linear connection weights between the hidden layer and the output layer of the RBF neural network are obtained by the least-squares method.
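The linear step described above can be sketched as follows: with the hidden activations fixed, the hidden-to-output weights solve a linear least-squares problem; for two hidden units the 2x2 normal equations can be solved by Cramer's rule. The data, centres, and width are illustrative.

```python
# Least-squares output weights for a tiny RBF network: build the design
# matrix of hidden activations, then solve the normal equations (H^T H) w
# = H^T y by Cramer's rule (only two weights, so a 2x2 system).
import math

def rbf(x, centre, sigma=1.0):
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

centres = [0.0, 2.0]
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0, 0.9, 0.5, 0.2, 0.1]           # illustrative training targets

# Design matrix H: one row per sample, one column per hidden unit.
H = [[rbf(x, c) for c in centres] for x in xs]

# Entries of H^T H and H^T y for the 2-weight case.
a = sum(h[0] * h[0] for h in H)
b = sum(h[0] * h[1] for h in H)
d = sum(h[1] * h[1] for h in H)
p = sum(h[0] * y for h, y in zip(H, ys))
q = sum(h[1] * y for h, y in zip(H, ys))

det = a * d - b * b
w = [(p * d - q * b) / det, (a * q - b * p) / det]   # output weights
```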
提出一种基于正交基函数的小波神经网络设计方法,采用多分辨率学习确定隐含层结构,并用收敛较快的阻尼最小二乘法训练权值。
A design method for wavelet neural networks based on orthogonal basis functions is proposed: the hidden-layer structure is determined by multiresolution learning, and the weights are trained by the fast-converging damped least-squares method.