汉语自动分词是计算机中文信息处理中的难题,也是文献内容分析中必须解决的关键问题之一。
Automatic Chinese word segmentation is one of the most difficult problems in Chinese information processing by computer, and one of the key problems that must be solved in document content analysis.
汉语自动分词是进行中文信息处理的基础。
Automatic Chinese word segmentation is the basis of Chinese information processing.
汉语自动分词是中文信息处理的首要工作。
Automatic Chinese word segmentation is the first step in Chinese information processing.
汉语的自动分词,是计算机中文信息处理领域中一个基础而困难的课题。
Automatic word segmentation for the Chinese language is a fundamental and difficult problem in the field of computer Chinese language information processing.
汉语分词是信息检索、机器翻译、文本校对等中文信息处理重要领域的基础。
Chinese word segmentation is the foundation of important areas of Chinese information processing such as information retrieval, machine translation, and text proofreading.
对中文文本挖掘中的词汇处理技术进行了较深入的探讨,提出了针对汉语语言特点的无词典分词算法。
The word-processing techniques used in Chinese text mining are discussed in depth, and a dictionary-free word segmentation algorithm tailored to the characteristics of the Chinese language is proposed.
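The sentence above mentions dictionary-free segmentation. One common family of such methods cuts between adjacent characters whose statistical association, estimated from a reference corpus, is weak. The sketch below illustrates that general idea with pointwise mutual information (PMI); it is a minimal illustration under assumed smoothing and threshold choices, not the specific algorithm proposed in the cited paper.

```python
from math import log
from collections import Counter

def segment_no_dict(text, corpus, threshold=0.0):
    """Dictionary-free segmentation sketch: place a word boundary
    between two adjacent characters when their PMI, estimated from
    a reference corpus, falls below a threshold."""
    unigrams = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())

    def pmi(a, b):
        p_ab = bigrams.get(a + b, 0.5) / n_bi   # 0.5 = crude smoothing for unseen pairs
        p_a = unigrams.get(a, 0.5) / n_uni
        p_b = unigrams.get(b, 0.5) / n_uni
        return log(p_ab / (p_a * p_b))

    words, current = [], text[0]
    for i in range(1, len(text)):
        if pmi(text[i - 1], text[i]) >= threshold:
            current += text[i]        # strong association: stay inside the word
        else:
            words.append(current)     # weak association: cut here
            current = text[i]
    words.append(current)
    return words
```

With a toy corpus in which 分词 and 你好 each always co-occur, `segment_no_dict("分词你好", "分词分词分词你好你好你好", threshold=0.7)` yields `["分词", "你好"]`. Real systems estimate the statistics from large corpora and often combine PMI with other association measures.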
现代汉语文本自动分词是中文信息处理的重要基石,为此提供一个通用的分词接口是非常重要的。
Automatic word segmentation of modern Chinese text is an important cornerstone of Chinese information processing, so providing a general-purpose word segmentation interface is very important.
汉语自动分词是中文信息处理中的基础课题。
Automatic Chinese word segmentation is a basic task in Chinese information processing.
汉语自动分词是中文信息处理的重要基石。
Automatic Chinese word segmentation is an important cornerstone of Chinese information processing.
汉语分词软件已经从我办公室的电脑上卸载了,目前我的笔记本电脑(苹果 MacBook)上还没有可用的中文分词软件。
The Chinese word segmentation software has been uninstalled from my office computers, and there is currently no Chinese word segmentation software available for my laptop (an Apple MacBook).
在本文中,我们提出了一种统一的统计语言模型方法用来汉语自动分词和中文命名实体识别,这种方法对基于词的三元语言模型进行了很好的扩展。
In this paper, we propose a unified approach to statistical language modeling (SLM) for Chinese word segmentation and Chinese named entity recognition, which extends word-based trigram modeling.
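The sentence above describes segmentation with a word-based statistical language model. The core mechanism is a dynamic-programming search over candidate word lattices, scored by the language model. The sketch below shows that search with a unigram score and a made-up toy lexicon for brevity; the paper's unified SLM would replace the unigram score with word trigram probabilities (and named-entity models), but the search is the same in spirit.

```python
from math import log

# Toy lexicon with illustrative relative frequencies (assumed values).
LEXICON = {"中文": 0.04, "信息": 0.05, "处理": 0.05, "信息处理": 0.01,
           "中": 0.02, "文": 0.01, "信": 0.005, "息": 0.002}

def best_segmentation(text):
    """Viterbi-style dynamic programming over a unigram word model:
    best[i] holds the highest log-probability of any segmentation of
    text[:i]; back[i] records where the last word of that path starts."""
    best = [0.0] + [float("-inf")] * len(text)
    back = [0] * (len(text) + 1)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - 4), i):   # assume max word length 4
            w = text[j:i]
            if w in LEXICON:
                score = best[j] + log(LEXICON[w])
                if score > best[i]:
                    best[i], back[i] = score, j
    # Recover the word sequence by following the back-pointers.
    words, i = [], len(text)
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]
```

On the toy lexicon, `best_segmentation("中文信息处理")` prefers `["中文", "信息处理"]` over the three-word split, because two moderately probable words score higher than three under this model. A trigram model additionally conditions each word's probability on the two preceding words.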