In this paper, bounds on the rate of uniform convergence of learning processes on possibility space are discussed on the basis of classical statistical learning theory.
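For reference, one classical form of such a bound on an ordinary probability space (Vapnik's bound for indicator function classes of VC dimension $h$; the possibility-space result discussed in the paper is its analogue, not this exact inequality) states that, with probability at least $1-\eta$, simultaneously for every function $f$ in the class,

\[
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln\frac{2l}{h}+1\right)-\ln\frac{\eta}{4}}{l}},
\]

where $R(f)$ is the expected risk, $R_{\mathrm{emp}}(f)$ the empirical risk over $l$ i.i.d. samples, and the square-root term bounds how fast the empirical risk converges uniformly to the expected risk as $l$ grows.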
Finally, the key theorem of statistical learning theory based on birandom samples is proved, and bounds on the rate of uniform convergence of the learning process are discussed.
In this paper, a truncated nonparametric estimator of the failure rate is constructed from censored samples, and its locally uniform rates of convergence in mean square and of strong consistency are given.
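As background, a minimal sketch of a standard nonparametric estimator under right censoring, the Nelson-Aalen estimate of the cumulative hazard, is given below; it is an illustrative stand-in, not the truncated failure-rate estimator proposed in the paper, and the simulated data are assumptions.

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard under right censoring.

    times:  observed times (failure or censoring).
    events: 1 if the time is an observed failure, 0 if censored.
    Returns (distinct failure times, cumulative hazard at those times).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    fail_times, hazard, cum = [], [], 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i                  # subjects still under observation at t
        if d == 1:
            cum += 1.0 / at_risk         # one failure among `at_risk` subjects
            fail_times.append(t)
            hazard.append(cum)
    return np.array(fail_times), np.array(hazard)

if __name__ == "__main__":
    # Illustrative censored sample: exponential(1) failures, uniform censoring.
    rng = np.random.default_rng(3)
    t_fail = rng.exponential(1.0, 500)
    t_cens = rng.uniform(0.0, 2.0, 500)
    times = np.minimum(t_fail, t_cens)
    events = (t_fail <= t_cens).astype(int)
    ft, H = nelson_aalen(times, events)
    # For a constant failure rate of 1, H(t) should be close to t.
    print("H(1) estimate:", H[ft <= 1.0][-1])
```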
Support vector machine (SVM) is a new general-purpose learning machine which, from the perspective of structural risk minimization, analyzes the consistency, convergence rate, and related properties of the learning process.
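As a toy illustration of the structural-risk-minimization idea behind SVMs, regularized empirical hinge loss, the sketch below trains a linear SVM by subgradient descent with NumPy; the data, step size, and regularization constant are illustrative assumptions, not values from the source.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize (lam/2)*||w||^2 + mean hinge loss by subgradient descent.

    The ||w||^2 penalty controls capacity (the "structural" part of the risk);
    the hinge loss is the empirical part.  Labels y take values in {-1, +1}.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # margin-violating points
        grad_w = lam * w - (y[active][:, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two Gaussian blobs in the plane (illustrative, roughly separable data).
    X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)
    w, b = train_linear_svm(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```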
In this paper, a new class of semiparametric regression models is considered; under a set of fairly basic conditions, improved uniform strong convergence rates are obtained for the proposed estimators.
Computer simulation results agree with the theoretical analysis and confirm that the new algorithm converges faster than the usual complementary-pair algorithm and the conventional LMS algorithm.
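For orientation, a minimal conventional LMS adaptive filter, the baseline the new algorithm is compared against, is sketched below in NumPy; the step size, tap count, and system-identification setup are illustrative assumptions.

```python
import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """Standard LMS adaptive filter (baseline sketch, not the paper's variant).

    x: input signal, d: desired signal, mu: step size controlling the
    trade-off between convergence speed and steady-state error.
    Returns the filter output y and the error e = d - y.
    """
    n = len(x)
    w = np.zeros(num_taps)                       # adaptive tap weights
    y, e = np.zeros(n), np.zeros(n)
    for k in range(num_taps - 1, n):
        u = x[k - num_taps + 1:k + 1][::-1]      # x[k], x[k-1], ..., x[k-M+1]
        y[k] = w @ u
        e[k] = d[k] - y[k]
        w += mu * e[k] * u                       # LMS weight update
    return y, e

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(2000)
    h = np.array([0.5, -0.3, 0.2])                  # unknown system to identify
    d = np.convolve(x, h, mode="full")[:len(x)]     # desired = system output
    _, e = lms_filter(x, d, num_taps=4, mu=0.05)
    print("final mean squared error:", np.mean(e[-200:] ** 2))
```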
The third section presents the main results of this paper, namely the uniform strong convergence rates of each of the estimators.
The heuristic method not only improves the consistency of the results produced by the clique-partitioning algorithm for the functional-unit allocation problem, but also accelerates the algorithm's convergence.
Experiments show that when the proposed "mixed proportional division method" is adopted, fast convergence and good uniform convergence can be obtained simultaneously.
In this paper, we introduce the notion of the rate of strong uniform convergence of nearest-neighbor density estimates on an arbitrary compact set and obtain some improved convergence rates.
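A minimal sketch of the classical one-dimensional k-nearest-neighbor density estimate, f_hat(x) = k / (2 n R_k(x)) with R_k(x) the distance from x to its k-th nearest sample point, is shown below; the sample data and the choice of k are illustrative assumptions.

```python
import numpy as np

def knn_density_1d(samples, x_grid, k=10):
    """Classical 1-D k-nearest-neighbor density estimate.

    f_hat(x) = k / (2 * n * R_k(x)), where R_k(x) is the distance from x
    to its k-th nearest sample point.
    """
    samples = np.asarray(samples, dtype=float)
    x_grid = np.asarray(x_grid, dtype=float)
    n = len(samples)
    dists = np.abs(x_grid[:, None] - samples[None, :])   # (m, n) distances
    r_k = np.sort(dists, axis=1)[:, k - 1]               # k-th nearest distance
    return k / (2.0 * n * r_k)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    data = rng.standard_normal(1000)              # illustrative N(0, 1) sample
    grid = np.linspace(-3, 3, 7)
    est = knn_density_1d(data, grid, k=30)
    true = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
    for x, f_hat, f in zip(grid, est, true):
        print(f"x = {x:+.1f}  estimate = {f_hat:.3f}  true = {f:.3f}")
```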
The idea of uniform-cost search is also used to improve the search process, which increases the convergence speed and accuracy of the algorithm.
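For reference, a standard uniform-cost search over a weighted graph (the frontier node with the lowest accumulated path cost is expanded first, essentially Dijkstra's algorithm driven by a priority queue) is sketched below; the example graph is an illustrative assumption, not the problem instance from the source.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search: always expand the cheapest frontier node.

    graph: dict mapping node -> list of (neighbor, edge_cost).
    Returns (total_cost, path) or (None, None) if the goal is unreachable.
    """
    frontier = [(0, start, [start])]      # (path cost, node, path so far)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return None, None

if __name__ == "__main__":
    # Small illustrative graph.
    graph = {
        "A": [("B", 1), ("C", 4)],
        "B": [("C", 2), ("D", 5)],
        "C": [("D", 1)],
        "D": [],
    }
    print(uniform_cost_search(graph, "A", "D"))   # expected: (4, ['A', 'B', 'C', 'D'])
```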
The self-organizing BP network algorithm can automatically adjust the learning rate according to the current convergence state, so that the network's convergence speed stays consistent with the change in the learning rate.
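One simple, generic way to tie the learning rate to the current convergence state is the "bold driver" heuristic: grow the rate while the loss keeps decreasing and cut it back sharply when the loss rises. The sketch below is an illustrative stand-in for the paper's self-organizing BP rule, whose exact update is not given here; the toy quadratic objective and the growth/shrink factors are assumptions.

```python
def adapt_learning_rate(lr, prev_loss, curr_loss, up=1.05, down=0.5):
    """Bold-driver heuristic: grow lr while the loss decreases, shrink it otherwise."""
    return lr * up if curr_loss < prev_loss else lr * down

if __name__ == "__main__":
    # Toy 1-D "training" loop: minimize f(w) = (w - 3)^2 by gradient descent.
    w, lr = 0.0, 0.1
    prev_loss = (w - 3.0) ** 2
    for step in range(50):
        grad = 2.0 * (w - 3.0)
        w -= lr * grad
        curr_loss = (w - 3.0) ** 2
        lr = adapt_learning_rate(lr, prev_loss, curr_loss)
        prev_loss = curr_loss
    print(f"w = {w:.4f}, final lr = {lr:.4f}")
```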