        Example of using WikipediaTokenizer in Lucene

        Date: 2023-09-29

                  Question


                  I want to use WikipediaTokenizer in a Lucene project - http://lucene.apache.org/java/3_0_2/api/contrib-wikipedia/org/apache/lucene/wikipedia/analysis/WikipediaTokenizer.html - but I have never used Lucene. I just want to convert a Wikipedia string into a list of tokens. However, I see that there are only four methods available in this class: end, incrementToken, reset, and reset(reader). Can someone point me to an example of how to use it?

                  Thanks.

                  Answer


                  In Lucene 3.0, the next() method was removed. You should now use incrementToken to iterate through the tokens; it returns false when you reach the end of the input stream. To obtain each token, you should use the methods of the AttributeSource class. Depending on the attributes you want to obtain (term, type, payload, etc.), you need to register the class type of the corresponding attribute with your tokenizer using the addAttribute method.
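
                  The pattern described above can be sketched as a minimal standalone program. This is an illustrative sketch, not code from the answer: it assumes the Lucene 3.0.x core and contrib-wikipedia jars are on the classpath, and the `wikiText` sample string is made up for the example.

                  ```java
                  import java.io.StringReader;

                  import org.apache.lucene.analysis.tokenattributes.TermAttribute;
                  import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
                  import org.apache.lucene.wikipedia.analysis.WikipediaTokenizer;

                  public class WikipediaTokenizerDemo {
                      public static void main(String[] args) throws Exception {
                          // Hypothetical sample input: wiki markup with bold and a link.
                          String wikiText = "'''Lucene''' is a [[search engine]] library.";
                          WikipediaTokenizer tokenizer =
                              new WikipediaTokenizer(new StringReader(wikiText));

                          // Register the attributes we want to read for each token.
                          TermAttribute termAtt = tokenizer.addAttribute(TermAttribute.class);
                          TypeAttribute typeAtt = tokenizer.addAttribute(TypeAttribute.class);

                          // incrementToken() advances to the next token and
                          // returns false at the end of the stream.
                          while (tokenizer.incrementToken()) {
                              System.out.println(termAtt.term() + " / " + typeAtt.type());
                          }
                          tokenizer.end();
                          tokenizer.close();
                      }
                  }
                  ```

                  Each line printed is a token's text plus its type (plain word, bold, category, internal link, and so on), which is the "list of tokens" the question asks for.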


                  The following partial code sample is from the test class of WikipediaTokenizer, which you can find in the Lucene source distribution.

                  ...
                  // `test` (the input wiki markup) and `tcm` (a Map from expected
                  // token text to expected token type) are defined earlier in the test class.
                  WikipediaTokenizer tf = new WikipediaTokenizer(new StringReader(test));
                  int count = 0;
                  int numItalics = 0;
                  int numBoldItalics = 0;
                  int numCategory = 0;
                  int numCitation = 0;
                  TermAttribute termAtt = tf.addAttribute(TermAttribute.class);
                  TypeAttribute typeAtt = tf.addAttribute(TypeAttribute.class);

                  while (tf.incrementToken()) {
                    String tokText = termAtt.term();
                    //System.out.println("Text: " + tokText + " Type: " + typeAtt.type());
                    String expectedType = (String) tcm.get(tokText);
                    assertTrue("expectedType is null and it shouldn't be for: " + tf.toString(), expectedType != null);
                    assertTrue(typeAtt.type() + " is not equal to " + expectedType + " for " + tf.toString(), typeAtt.type().equals(expectedType));
                    count++;
                    if (typeAtt.type().equals(WikipediaTokenizer.ITALICS)) {
                      numItalics++;
                    } else if (typeAtt.type().equals(WikipediaTokenizer.BOLD_ITALICS)) {
                      numBoldItalics++;
                    } else if (typeAtt.type().equals(WikipediaTokenizer.CATEGORY)) {
                      numCategory++;
                    } else if (typeAtt.type().equals(WikipediaTokenizer.CITATION)) {
                      numCitation++;
                    }
                  }
                  ...
