This paper analyzes three kinds of improved algorithms based on K-NN classification: (1) editing techniques, (2) boundary extraction, and (3) boundary patching.
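The improved algorithms above all build on the plain K-NN classifier: assign a query the majority label among its k nearest training points. A minimal sketch (the data, function names, and Euclidean-distance choice are illustrative assumptions, not from the paper):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Baseline k-NN: majority vote among the k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Usage: two well-separated one-dimensional classes
train = [((0.0,), "A"), ((1.0,), "A"), ((2.0,), "A"), ((5.0,), "B"), ((6.0,), "B")]
print(knn_classify(train, (1.5,), k=3))  # → A
```

Editing techniques then prune this training set (e.g. removing points misclassified by their own neighbors), while boundary extraction/patching keep only points near the class boundary; all reuse the same vote rule above.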
By revising a K-NN classification method based on evidence theory, a new K-NN classification method based on an evidential reasoning model is obtained, which makes the classification results more accurate.
The 1-Nearest-Neighbor rule (1-NN) is the simplest and most natural classification rule.
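The 1-NN rule reduces to a single step: return the label of the one closest training point. A minimal sketch (function names and sample data are illustrative assumptions):

```python
import math

def one_nn_classify(train, query):
    """1-NN rule: classify `query` by the label of its single nearest
    training point. `train` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(train, key=lambda pair: dist(pair[0], query))
    return nearest[1]

# Usage: the query at 0.5 is closest to class "A" points
train = [((0.0,), "A"), ((1.0,), "A"), ((5.0,), "B"), ((6.0,), "B")]
print(one_nn_classify(train, (0.5,)))  # → A
```

Its simplicity is also its weakness: a single noisy training point can flip the decision, which is what editing and boundary-based refinements try to mitigate.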
Among lazy learning algorithms, the most widely used is the nearest-neighbor (NN) classification algorithm.
To validate the effectiveness of the features, nearest-neighbor (NN) and probabilistic neural network (PNN) classifiers were used for target recognition, and satisfactory recognition rates were obtained.
A learning algorithm based on a hard limiter for feedforward neural networks (NN) is presented and applied to classification problems on separable convex sets and disjoint sets.
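A hard limiter is a step activation function: the unit outputs 1 when its weighted input sum is non-negative and 0 otherwise. The sketch below is not the paper's algorithm but the classic perceptron rule with a hard-limiter activation, trained on a linearly separable set (data, names, and learning rate are illustrative assumptions):

```python
def hard_limiter(x):
    # Step activation: fires (1) when the weighted sum is non-negative
    return 1 if x >= 0 else 0

def train_perceptron(samples, labels, epochs=100, lr=0.1):
    """Perceptron learning with a hard-limiter activation on a linearly
    separable two-class problem. Returns weights; the last entry is the bias."""
    dim = len(samples[0])
    w = [0.0] * (dim + 1)  # feature weights plus a bias weight
    for _ in range(epochs):
        errors = 0
        for x, t in zip(samples, labels):
            xi = list(x) + [1.0]  # constant bias input
            y = hard_limiter(sum(wi * v for wi, v in zip(w, xi)))
            if y != t:
                errors += 1
                # Shift the decision boundary toward the misclassified point
                for i in range(dim + 1):
                    w[i] += lr * (t - y) * xi[i]
        if errors == 0:  # converged: all points classified correctly
            break
    return w

# Usage: AND-like data, linearly separable in two dimensions
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w = train_perceptron(X, y)
pred = [hard_limiter(sum(wi * v for wi, v in zip(w, list(x) + [1.0]))) for x in X]
print(pred)  # → [0, 0, 0, 1]
```

On separable convex or disjoint sets such as this one, the perceptron convergence theorem guarantees the loop terminates with zero errors.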