How to extract document term vectors in Lucene 3.5.0

Date: 2023-09-29
This article explains how to extract the term vectors of documents in Lucene 3.5.0.

Problem description

I am using Lucene 3.5.0 and I want to output the term vectors of each document. For example, I want to know the frequency of a term across all documents and in each specific document. My indexing code is:

                import java.io.BufferedReader;
                import java.io.File;
                import java.io.FileFilter;
                import java.io.FileReader;
                import java.io.IOException;

                import org.apache.lucene.analysis.standard.StandardAnalyzer;
                import org.apache.lucene.document.Document;
                import org.apache.lucene.document.Field;
                import org.apache.lucene.index.IndexWriter;
                import org.apache.lucene.store.Directory;
                import org.apache.lucene.store.FSDirectory;
                import org.apache.lucene.util.Version;

                public class Indexer {
                    public static void main(String[] args) throws Exception {
                        if (args.length != 2) {
                            throw new IllegalArgumentException("Usage: java " + Indexer.class.getName() + " <index dir> <data dir>");
                        }

                        String indexDir = args[0];
                        String dataDir = args[1];
                        long start = System.currentTimeMillis();
                        Indexer indexer = new Indexer(indexDir);
                        int numIndexed;
                        try {
                            numIndexed = indexer.index(dataDir, new TextFilesFilter());
                        } finally {
                            indexer.close();
                        }
                        long end = System.currentTimeMillis();
                        System.out.println("Indexing " + numIndexed + " files took " + (end - start) + " milliseconds");
                    }

                    private IndexWriter writer;

                    public Indexer(String indexDir) throws IOException {
                        Directory dir = FSDirectory.open(new File(indexDir));
                        writer = new IndexWriter(dir,
                            new StandardAnalyzer(Version.LUCENE_35),
                            true,
                            IndexWriter.MaxFieldLength.UNLIMITED);
                    }

                    public void close() throws IOException {
                        writer.close();
                    }

                    public int index(String dataDir, FileFilter filter) throws Exception {
                        File[] files = new File(dataDir).listFiles();
                        for (File f : files) {
                            if (!f.isDirectory() &&
                                !f.isHidden() &&
                                f.exists() &&
                                f.canRead() &&
                                (filter == null || filter.accept(f))) {
                                // Read from the File itself, not f.getName(), which would
                                // resolve relative to the current working directory.
                                BufferedReader inputStream = new BufferedReader(new FileReader(f));
                                String url = inputStream.readLine();
                                inputStream.close();
                                indexFile(f, url);
                            }
                        }
                        return writer.numDocs();
                    }

                    private static class TextFilesFilter implements FileFilter {
                        public boolean accept(File path) {
                            return path.getName().toLowerCase().endsWith(".txt");
                        }
                    }

                    protected Document getDocument(File f, String url) throws Exception {
                        Document doc = new Document();
                        doc.add(new Field("contents", new FileReader(f)));
                        doc.add(new Field("urls", url, Field.Store.YES, Field.Index.NOT_ANALYZED));
                        doc.add(new Field("filename", f.getName(), Field.Store.YES, Field.Index.NOT_ANALYZED));
                        doc.add(new Field("fullpath", f.getCanonicalPath(), Field.Store.YES, Field.Index.NOT_ANALYZED));
                        return doc;
                    }

                    private void indexFile(File f, String url) throws Exception {
                        System.out.println("Indexing " + f.getCanonicalPath());
                        Document doc = getDocument(f, url);
                        writer.addDocument(doc);
                    }
                }
                

Can anybody help me write a program to do that? Thanks.

Accepted answer

First of all, you don't need to store term vectors just to know the frequency of a term in documents. Lucene stores these numbers anyway for use in TF-IDF calculations. You can access this information by calling IndexReader.termDocs(term) and iterating over the result.
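As a rough sketch of that approach (assuming an index built by the code above, with its "contents" field, and a term passed on the command line), the per-document and total frequencies could be read like this with the Lucene 3.5 API:

```java
import java.io.File;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.store.FSDirectory;

public class TermFreqDump {
    public static void main(String[] args) throws Exception {
        // args[0] = index directory, args[1] = term text (already analyzed/lowercased)
        IndexReader reader = IndexReader.open(FSDirectory.open(new File(args[0])));
        try {
            Term term = new Term("contents", args[1]);
            // docFreq: how many documents contain the term at all.
            System.out.println("document frequency = " + reader.docFreq(term));

            // termDocs: iterate (docId, within-doc frequency) pairs for the term.
            TermDocs td = reader.termDocs(term);
            long total = 0;
            while (td.next()) {
                System.out.println("doc " + td.doc() + ": freq = " + td.freq());
                total += td.freq();
            }
            td.close();
            System.out.println("total occurrences = " + total);
        } finally {
            reader.close();
        }
    }
}
```

Note that the term text must match what the analyzer produced at index time (StandardAnalyzer lowercases, for example), or termDocs will find nothing.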

If you have some other purpose in mind and you actually need to access the term vectors, then you need to tell Lucene to store them by passing Field.TermVector.YES as the last argument of the Field constructor. Then you can retrieve the vectors, e.g. with IndexReader.getTermFreqVector().
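For instance, if the "contents" field were indexed with term vectors enabled (a change to the original getDocument(), sketched below, not part of it), dumping each document's vector could look like this:

```java
import java.io.File;
import java.io.FileReader;

import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermFreqVector;
import org.apache.lucene.store.FSDirectory;

public class TermVectorDump {
    // At indexing time, the Reader-based Field constructor accepts a
    // TermVector flag as its last argument:
    //   doc.add(new Field("contents", new FileReader(f), Field.TermVector.YES));

    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open(FSDirectory.open(new File(args[0])));
        try {
            for (int docId = 0; docId < reader.maxDoc(); docId++) {
                // Null if the doc has no stored vector for this field
                // (or the slot belongs to a deleted document).
                TermFreqVector tfv = reader.getTermFreqVector(docId, "contents");
                if (tfv == null) {
                    continue;
                }
                String[] terms = tfv.getTerms();
                int[] freqs = tfv.getTermFrequencies();
                System.out.println("doc " + docId + ":");
                for (int i = 0; i < terms.length; i++) {
                    System.out.println("  " + terms[i] + " : " + freqs[i]);
                }
            }
        } finally {
            reader.close();
        }
    }
}
```

getTerms() and getTermFrequencies() are parallel arrays, so terms[i] occurred freqs[i] times in that document.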

