Print a Lucene index in inverted index format

3
As I understand it, Lucene uses an inverted index. Is there a way to extract/print a Lucene index (Lucene 6) in its inverted index format:
term1   <doc1, doc100, ..., doc555>
term2   <doc1, ..., doc100, ..., doc89>
term3   <doc3, doc2, doc5, ...>
.
.
.
termn   <doc10, doc43, ..., dock>
3 Answers

1
You can use a TermsEnum to iterate over the terms in the inverted index. Then, for each term, you use its PostingsEnum to iterate over the postings. The following code works if your index has a single segment (Lucene version: 6_5_1):
import java.io.IOException;
import java.nio.file.Paths;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;

String indexPath = "your_index_path";
String field = "your_index_field";
try (FSDirectory directory = FSDirectory.open(Paths.get(indexPath));
        IndexReader reader = DirectoryReader.open(directory)) {
    Terms terms = MultiFields.getTerms(reader, field);
    final TermsEnum it = terms.iterator();
    BytesRef term = it.next();
    while (term != null) {
        String termString = term.utf8ToString();
        System.out.print(termString + ": ");
        for (LeafReaderContext lrc : reader.leaves()) {
            LeafReader lr = lrc.reader();
            PostingsEnum pe = lr.postings(new Term(field, termString));
            if (pe == null) {
                continue; // the term does not occur in this segment
            }
            int docId = pe.nextDoc();
            while (docId != PostingsEnum.NO_MORE_DOCS) {
                System.out.print(docId + " ");
                Document doc = lr.document(docId);
                // here you can print your document title, id, etc.
                docId = pe.nextDoc();
            }
        }
        System.out.println();
        term = it.next();
    }
} catch (IOException e) {
    e.printStackTrace();
}

If your index has more than one segment, reader.leaves() will return readers that have other readers as their leaves (think of it as a tree of index readers). In that case you should traverse the tree down to the leaf readers and repeat the code inside the for loop for each leaf; a sketch of that traversal follows.
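A minimal sketch of that per-leaf traversal, reusing the reader, field, and termString variables from the code above (assuming Lucene 6.x; LeafReaderContext.docBase maps a segment-local docId to a global one):

// Sketch: visit each leaf (segment) reader and print its postings for one term.
for (LeafReaderContext lrc : reader.leaves()) {
    LeafReader leaf = lrc.reader();
    Terms leafTerms = leaf.terms(field); // terms of this segment only; may be null
    if (leafTerms == null) {
        continue;
    }
    TermsEnum leafIt = leafTerms.iterator();
    if (leafIt.seekExact(new BytesRef(termString))) {
        PostingsEnum pe = leafIt.postings(null, PostingsEnum.NONE);
        for (int d = pe.nextDoc(); d != PostingsEnum.NO_MORE_DOCS; d = pe.nextDoc()) {
            System.out.print((lrc.docBase + d) + " "); // global docId
        }
    }
}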

1

I am using Lucene 6.x.x, and I am not sure whether there is an easy way, but a solution is better than none. Something like the following, using MatchAllDocsQuery, worked for me.

import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

private static void printWholeIndex(IndexSearcher searcher) throws IOException {
    MatchAllDocsQuery query = new MatchAllDocsQuery();
    TopDocs hits = searcher.search(query, Integer.MAX_VALUE);

    Map<String, Set<Integer>> invertedIndex = new HashMap<>();

    if (null == hits.scoreDocs || hits.scoreDocs.length <= 0) {
        System.out.println("No Hits Found with MatchAllDocsQuery");
        return;
    }

    for (ScoreDoc hit : hits.scoreDocs) {
        Document doc = searcher.doc(hit.doc);
        List<IndexableField> allFields = doc.getFields();

        for (IndexableField field : allFields) {
            // Single-document inverted index: requires term vectors for the field.
            Terms terms = searcher.getIndexReader().getTermVector(hit.doc, field.name());
            if (terms != null) {
                TermsEnum termsEnum = terms.iterator();
                while (termsEnum.next() != null) {
                    String term = termsEnum.term().utf8ToString();
                    if (invertedIndex.containsKey(term)) {
                        invertedIndex.get(term).add(hit.doc);
                    } else {
                        Set<Integer> docs = new TreeSet<>();
                        docs.add(hit.doc);
                        invertedIndex.put(term, docs);
                    }
                }
            }
        }
    }

    System.out.println("Printing Inverted Index:");
    invertedIndex.forEach((key, value) -> System.out.println(key + ":" + value));
}

Two points to note:

1. The maximum number of documents supported is Integer.MAX_VALUE. I have not tried it, but this limit can probably be removed by using the searcher's searchAfter method and performing repeated searches (see the sketch after these notes).

2. doc.getFields() returns only stored fields. If none of your indexed fields are stored, you may need to keep a static array of field names, since the line Terms terms = searcher.getIndexReader().getTermVector(hit.doc, field.name()); works for non-stored fields as well.
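A rough sketch of both workarounds (the page size of 1000 and the use of MultiFields.getFields to enumerate indexed field names are my own illustrative choices, not part of the answer):

// Note 1: page through all documents with searchAfter instead of one huge search.
ScoreDoc last = null;
while (true) {
    TopDocs page = (last == null)
            ? searcher.search(new MatchAllDocsQuery(), 1000)  // page size is arbitrary
            : searcher.searchAfter(last, new MatchAllDocsQuery(), 1000);
    if (page.scoreDocs.length == 0) {
        break;
    }
    for (ScoreDoc hit : page.scoreDocs) {
        // process hit.doc exactly as inside printWholeIndex(...)
    }
    last = page.scoreDocs[page.scoreDocs.length - 1];
}

// Note 2: enumerate indexed field names without relying on stored fields
// (works even when no field is stored; Lucene 6.x).
for (String fieldName : MultiFields.getFields(searcher.getIndexReader())) {
    // fieldName can replace field.name() in the getTermVector call
}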


Note that this solution is not efficient (e.g., it takes very long for an index of 3 days of tweets). - sareem
Whether it is "3 days of tweets" is irrelevant; just mention the number of documents. Also, I already made clear that I am not familiar with the logic you are asking about and have not thought about the performance side; I will look into it further. If this works correctly for a small document collection, you can then look into scaling the logic up. - Sabir Khan

0

I got a version working for Lucene 6.6 that prints docId:tokenPos.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.BiFunction;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Fields;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.BytesRef;

Directory directory = new RAMDirectory();
Analyzer analyzer = new StandardAnalyzer();
IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
iwc.setOpenMode(OpenMode.CREATE);
IndexWriter writer = new IndexWriter(directory, iwc);

// Term vectors (with positions and offsets) must be stored to read them back below.
FieldType type = new FieldType();
type.setStoreTermVectors(true);
type.setStoreTermVectorPositions(true);
type.setStoreTermVectorOffsets(true);
type.setIndexOptions(IndexOptions.DOCS);

Field fieldStore = new Field("text", "We hold that proof beyond a reasonable doubt is required.", type);
Document doc = new Document();
doc.add(fieldStore);
writer.addDocument(doc);

fieldStore = new Field("text", "We hold that proof requires reasonable preponderance of the evidence.", type);
doc = new Document();
doc.add(fieldStore);
writer.addDocument(doc);

writer.close();

DirectoryReader reader = DirectoryReader.open(directory);
IndexSearcher searcher = new IndexSearcher(reader);

MatchAllDocsQuery query = new MatchAllDocsQuery();
TopDocs hits = searcher.search(query, Integer.MAX_VALUE);

Map<String, Set<String>> invertedIndex = new HashMap<>();
// Builds a singleton set holding "docId:position" (docIds printed 1-based).
BiFunction<Integer, Integer, Set<String>> mergeValue =
    (docId, pos) -> { TreeSet<String> s = new TreeSet<>(); s.add((docId + 1) + ":" + pos); return s; };

for (ScoreDoc scoreDoc : hits.scoreDocs) {
    Fields termVs = reader.getTermVectors(scoreDoc.doc);
    Terms terms = termVs.terms("text");
    TermsEnum termsIt = terms.iterator();
    PostingsEnum docsAndPosEnum = null;
    BytesRef bytesRef;
    while ((bytesRef = termsIt.next()) != null) {
        docsAndPosEnum = termsIt.postings(docsAndPosEnum, PostingsEnum.ALL);
        docsAndPosEnum.nextDoc();
        int pos = docsAndPosEnum.nextPosition(); // only the first position is recorded
        String term = bytesRef.utf8ToString();
        invertedIndex.merge(
            term,
            mergeValue.apply(scoreDoc.doc, pos),
            (s1, s2) -> { s1.addAll(s2); return s1; }
        );
    }
}
System.out.println(invertedIndex);
reader.close();
directory.close();
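For reference, with the two example documents above and StandardAnalyzer's default stop-word handling, the printed map should contain entries along the lines of proof=[1:3, 2:3] and doubt=[1:7]: docIds are printed 1-based because of the docId + 1 in mergeValue, stop words such as "that" still consume a position, only the first position of each term per document is recorded, and the overall ordering depends on HashMap iteration.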
