The word-based approach is the mainstream for Japanese word segmentation.
The word-based method is the mainstream approach to Japanese word segmentation.
To address the unsatisfactory performance of Chinese word segmentation on email texts, an improved maximum-match-based approach is presented.
Given the relatively poor segmentation quality on email texts, a method using an improved maximum matching algorithm for Chinese word segmentation is proposed.
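The sketch below shows plain forward maximum matching, the baseline such improved methods start from; the tiny dictionary and the maximum word length are illustrative assumptions, and none of the paper's improvements are reproduced.

```python
# Forward maximum matching (FMM) segmentation: at each position, greedily
# take the longest dictionary word; fall back to a single character.
def fmm_segment(text, dictionary, max_len=4):
    words, i = [], 0
    while i < len(text):
        matched = None
        # Try the longest candidate first, shrinking until a dictionary hit.
        for j in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + j]
            if candidate in dictionary:
                matched = candidate
                break
        if matched is None:
            matched = text[i]          # unknown character, emit as-is
        words.append(matched)
        i += len(matched)
    return words

# Toy dictionary (hypothetical entries).
dictionary = {"中文", "分词", "邮件", "文本"}
print(fmm_segment("邮件文本中文分词", dictionary))
# -> ['邮件', '文本', '中文', '分词']
```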
In the corpus-based approach, concordance data is extracted from a corpus on a random basis and examined within a colligational framework to generalize the collocational behaviour of a key word.
The data-based approach takes corpus concordances as its basic evidence, examining and generalizing the collocations of lexical items within a traditional syntactic framework.
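As a rough illustration of random concordance extraction, the sketch below samples keyword-in-context (KWIC) lines for a key word from a toy corpus; the corpus, window size, and whitespace tokenization are assumptions for illustration only.

```python
import random
import re

# Sample up to n_lines concordance (KWIC) lines for a keyword at random.
def concordance_sample(corpus, keyword, window=4, n_lines=3, seed=0):
    tokens = re.findall(r"\w+", corpus.lower())
    hits = [i for i, tok in enumerate(tokens) if tok == keyword]
    random.seed(seed)
    sample = random.sample(hits, min(n_lines, len(hits)))
    lines = []
    for i in sorted(sample):
        left = " ".join(tokens[max(0, i - window):i])
        right = " ".join(tokens[i + 1:i + 1 + window])
        lines.append(f"{left:>30} | {tokens[i]} | {right}")
    return lines

corpus = ("The approach is based on data. The approach was evaluated on a "
          "corpus. A corpus-based approach extracts concordance lines.")
for line in concordance_sample(corpus, "approach"):
    print(line)
```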
Based on this model, and combining the advantages of the rule-based approach in word sense selection and lexical inflection, a new analogy-based translation generation method for EBMT was implemented.
Building on this computational model and drawing on the strengths of the rule-based method in word sense selection and handling lexical inflection, a new analogy-based translation construction method for EBMT was realized.
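A minimal sketch of the analogy step alone, under toy assumptions: retrieve the stored example closest to the input and substitute the one word that differs via a small bilingual lexicon. The example base, the lexicon, and the single-word-substitution restriction are illustrative assumptions and do not reproduce the paper's model or its rule-based sense selection.

```python
import difflib

# Toy translation examples and bilingual lexicon (illustrative only).
examples = {"i drink tea": "我喝茶", "she reads books": "她看书"}
lexicon = {"tea": "茶", "coffee": "咖啡", "books": "书", "magazines": "杂志"}

def translate_by_analogy(source):
    """Retrieve the closest stored example, then patch the word that differs."""
    best = max(examples,
               key=lambda ex: difflib.SequenceMatcher(None, ex, source).ratio())
    target = examples[best]
    removed = [w for w in best.split() if w not in source.split()]
    added = [w for w in source.split() if w not in best.split()]
    # Handle only the simplest case: exactly one word substituted.
    if len(removed) == 1 and len(added) == 1 and added[0] in lexicon:
        target = target.replace(lexicon[removed[0]], lexicon[added[0]])
    return target

print(translate_by_analogy("i drink coffee"))  # -> 我喝咖啡
```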
The approach is based on the belief that the meaning of a word can be divided into meaning components, which are called semantic features.
The approach rests on the view that word meaning can be divided into distinct meaning components, called semantic features.
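A small sketch of this componential view: represent each word's meaning as a set of semantic features and compare words by the features they share. The feature inventory below is a textbook-style illustration, not a standard or exhaustive set.

```python
# Word meanings as sets of semantic features (componential analysis).
FEATURES = {
    "man":   {"+human", "+adult", "+male"},
    "woman": {"+human", "+adult", "-male"},
    "boy":   {"+human", "-adult", "+male"},
    "mare":  {"-human", "+adult", "-male"},
}

def shared_features(w1, w2):
    """Return the semantic features two words have in common."""
    return FEATURES[w1] & FEATURES[w2]

print(shared_features("man", "boy"))     # {'+human', '+male'}
print(shared_features("woman", "mare"))  # {'+adult', '-male'}
```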
In this paper, we extend word-based trigram modeling to Chinese word segmentation and Chinese named entity recognition by proposing a unified approach to statistical language modeling (SLM).
In this paper we propose a unified statistical language modeling method for Chinese word segmentation and Chinese named entity recognition, which substantially extends the word-based trigram language model.
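The sketch below shows only the basic idea of scoring candidate segmentations with a word trigram model; the miniature "segmented corpus", add-one smoothing, and hand-listed candidates are assumptions for illustration, and the unified SLM approach in the paper is considerably richer.

```python
from collections import Counter

# Train trigram/bigram counts on a tiny pre-segmented corpus.
corpus = [["<s>", "<s>", "中文", "分词", "很", "重要", "</s>"],
          ["<s>", "<s>", "命名", "实体", "识别", "很", "重要", "</s>"]]

tri, bi, vocab = Counter(), Counter(), set()
for sent in corpus:
    vocab.update(sent)
    for i in range(2, len(sent)):
        tri[(sent[i - 2], sent[i - 1], sent[i])] += 1
        bi[(sent[i - 2], sent[i - 1])] += 1

def trigram_prob(w1, w2, w3):
    """Add-one smoothed P(w3 | w1, w2)."""
    return (tri[(w1, w2, w3)] + 1) / (bi[(w1, w2)] + len(vocab))

def score(segmentation):
    """Probability the trigram model assigns to a candidate word sequence."""
    words = ["<s>", "<s>"] + segmentation + ["</s>"]
    p = 1.0
    for i in range(2, len(words)):
        p *= trigram_prob(words[i - 2], words[i - 1], words[i])
    return p

# Pick the candidate segmentation with the higher model probability.
candidates = [["中文", "分词", "很", "重要"], ["中", "文分", "词很", "重要"]]
print(max(candidates, key=score))  # -> ['中文', '分词', '很', '重要']
```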
It discusses and analyzes the current state of Chinese word segmentation and describes an approach to automatically correcting Chinese word segmentation based on rules.
By learning from machine-segmented corpora and manually proofread corpora, the method automatically acquires segmentation correction rules for Chinese text and applies them to automatically correct the machine segmentation output.
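A minimal sketch of the learn-and-apply idea, under toy assumptions: align machine-segmented output with its manually corrected counterpart, record each differing token span as a rewrite rule, and apply the rules to new machine output. The rule format (context-free span rewrites) and the example sentences are assumptions; a real system would condition rules on context.

```python
import difflib

def learn_rules(machine_sents, gold_sents):
    """Record 'wrong token span -> corrected token span' rewrites."""
    rules = {}
    for machine, gold in zip(machine_sents, gold_sents):
        sm = difflib.SequenceMatcher(None, machine, gold)
        for op, i1, i2, j1, j2 in sm.get_opcodes():
            if op == "replace":                  # locally differing span
                rules[tuple(machine[i1:i2])] = gold[j1:j2]
    return rules

def apply_rules(tokens, rules):
    """Rewrite any token span that matches a learned rule."""
    out, i = [], 0
    while i < len(tokens):
        for lhs, rhs in rules.items():
            if tuple(tokens[i:i + len(lhs)]) == lhs:
                out.extend(rhs)
                i += len(lhs)
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

machine = [["我", "喜欢", "研究生", "命", "科学"]]   # machine output (toy)
gold    = [["我", "喜欢", "研究", "生命", "科学"]]   # human correction (toy)
rules = learn_rules(machine, gold)
print(apply_rules(["学者", "研究生", "命", "科学"], rules))
# -> ['学者', '研究', '生命', '科学']
```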