This paper presents a training algorithm for probabilistic neural networks based on the minimum classification error (MCE) criterion.
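The sentence names the technique without detail, so the following is only a rough illustration, not the paper's actual algorithm: MCE training replaces the non-differentiable 0/1 classification error with a smooth surrogate, comparing the true-class score against the strongest competing class and passing the difference through a sigmoid. All names here (mce_loss, scores, xi) are illustrative assumptions.

```python
import numpy as np

def mce_loss(scores, labels, xi=1.0):
    """Smoothed minimum-classification-error loss (illustrative sketch only).

    scores : (n_samples, n_classes) discriminant values g_j(x)
    labels : (n_samples,) integer class indices
    xi     : sigmoid slope; larger values approximate the 0/1 error more closely
    """
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[0]
    true_score = scores[np.arange(n), labels]
    # Mask out the true class, then take the strongest competitor.
    rival = scores.copy()
    rival[np.arange(n), labels] = -np.inf
    best_rival = rival.max(axis=1)
    # Misclassification measure: positive when the sample is misclassified.
    d = best_rival - true_score
    # Sigmoid smoothing turns the error count into a differentiable loss.
    return 1.0 / (1.0 + np.exp(-xi * d))
```

Minimizing the mean of this loss by gradient descent approximately minimizes the empirical classification error, which is the core idea behind MCE-style training.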
A corollary of this principle is that a learning algorithm should never be evaluated by its results on the training set, because those results give no evidence of its ability to generalize to unseen instances.
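A minimal sketch of the practice this sentence prescribes: fit the model on one subset and report accuracy only on data it never saw. The dataset and estimator (iris and LogisticRegression from scikit-learn) are arbitrary placeholders.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# Hold out 25% of the data; the model never sees it during fitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # typically optimistic
print("test accuracy:", model.score(X_test, y_test))     # evidence of generalization
```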
This gives you a much larger training set for each trial, meaning that your algorithm has enough data to learn from, while also providing a fairly large number of tests (20 instead of 5 or 10).
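The trade-off the sentence describes is that of 20-fold cross-validation: each trial trains on 19/20 (95%) of the data, yet there are still 20 independent test estimates to average. A minimal sketch, again assuming scikit-learn with placeholder dataset and estimator:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# 20 folds: each trial trains on 95% of the data and tests on the remaining 5%,
# yielding 20 accuracy estimates instead of the 5 or 10 of coarser schemes.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=20)
print("mean accuracy:", scores.mean(), "std:", scores.std())
```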