Has anybody compared these stemmers from Lucene (package org.tartarus.snowball.ext): EnglishStemmer, PorterStemmer, LovinsStemmer? What are the strong/weak points of the algorithms behind them? When should each of them be used? Or maybe there are more algorithms available for stemming English words?
Thanks.
The Lovins stemmer is a very old algorithm that is not of much practical use, since the Porter stemmer is much stronger. Based on some quick skimming of the source code, it seems PorterStemmer implements Porter's original (1980) algorithm, while EnglishStemmer implements his updated version (often called Porter2), which should be better.
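If you want to compare them yourself, all three classes are generated from Snowball definitions and share the same small interface (setCurrent / stem / getCurrent), so a side-by-side run is easy. A minimal sketch, assuming a Lucene analyzers jar on the classpath; the sample words are just illustrative, and note that in recent Lucene versions the common base class is SnowballStemmer rather than SnowballProgram:

```java
import org.tartarus.snowball.SnowballProgram;
import org.tartarus.snowball.ext.EnglishStemmer;
import org.tartarus.snowball.ext.LovinsStemmer;
import org.tartarus.snowball.ext.PorterStemmer;

public class StemmerComparison {
    // Stem a single word with any Snowball-generated stemmer.
    static String stem(SnowballProgram stemmer, String word) {
        stemmer.setCurrent(word);
        stemmer.stem();
        return stemmer.getCurrent();
    }

    public static void main(String[] args) {
        String[] words = {"generously", "running", "conditional"};
        SnowballProgram[] stemmers = {
            new PorterStemmer(), new EnglishStemmer(), new LovinsStemmer()
        };
        for (String w : words) {
            System.out.printf("%-12s", w);
            for (SnowballProgram s : stemmers) {
                System.out.printf("  %s=%s",
                        s.getClass().getSimpleName(), stem(s, w));
            }
            System.out.println();
        }
    }
}
```

Running a word list like this through all three quickly shows where Lovins over-stems and where Porter and Porter2 disagree.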
A stronger stemming algorithm (actually a lemmatizer) is available in the Stanford NLP tools. A Lucene-Stanford NLP bridge by yours truly is available here (API docs).
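For context, a lemmatizer maps each token to its dictionary form using part-of-speech information, rather than stripping suffixes by rule, so it handles irregular forms that no Snowball stemmer can. A minimal sketch of lemmatization with Stanford CoreNLP directly (assuming the stanford-corenlp jar and English models are on the classpath; the sentence is illustrative):

```java
import java.util.Properties;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class LemmaDemo {
    public static void main(String[] args) {
        // Lemmatization needs tokenization, sentence splitting and POS tags first.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        CoreDocument doc = new CoreDocument("The geese were running");
        pipeline.annotate(doc);
        for (CoreLabel tok : doc.tokens()) {
            // Irregular forms such as "geese" -> "goose" or "were" -> "be"
            // are beyond any suffix-stripping stemmer.
            System.out.println(tok.word() + " -> " + tok.lemma());
        }
    }
}
```

The price is that the pipeline is much heavier than a Snowball stemmer (model loading, POS tagging per token), which matters at Lucene indexing throughput.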
See also Manning, Raghavan & Schütze for general info about stemming and lemmatization.