Full Text Index

(4Q18)


This article provides information on configuring and using eXist-db's full text index.

Introduction

The full text index module is based on Apache Lucene.

The full-text index module is tightly integrated with eXist-db's modularized indexing architecture: the index behaves like a plug-in which adds itself to the database's index pipelines. Once configured, the index will be notified of relevant events, like adding/removing a document, removing a collection or updating single nodes. No manual re-indexing is required to keep the index up-to-date.

The full-text index module also implements common interfaces which are shared with other indexes, for instance for highlighting matches (see KWIC). It is easy to switch between the Lucene index and, for instance, the ngram index without rewriting much XQuery code.

Enabling the Lucene Module

The Lucene full text index is enabled by default (since eXist-db version 1.4). In case it is not enabled in your installation, here's how to get it up and running:

  1. Enable it according to the instructions in the article on index modules.

  2. Then (re-)build eXist-db using the provided build.sh or build.bat script. The build process downloads the required Lucene jars automatically. If everything builds OK, you'll find a jar named exist-lucene-module.jar in the lib/extensions directory.

  3. Edit the main configuration file, conf.xml, and un-comment the Lucene-related section:

    <modules>
        <module id="lucene-index" class="org.exist.indexing.lucene.LuceneIndex" buffer="32"/>
        ...
    </modules>

The index has a single configuration parameter on the <modules>/<module> element called buffer. It defines the amount of memory (in megabytes) Lucene will use for buffering index entries before they are written to disk. See the Lucene Javadocs.

Configuring the Index

Like other indexes, you create a Lucene index by configuring it in a collection.xconf document, as explained in the indexing documentation. For example:

<collection xmlns="http://exist-db.org/collection-config/1.0">
    <index xmlns:wiki="http://exist-db.org/xquery/wiki" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:atom="http://www.w3.org/2005/Atom">
        <!-- Disable the old full text index -->
        <fulltext default="none" attributes="false"/>
        <!-- Lucene index is configured below -->
        <lucene>
            <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
            <analyzer id="ws" class="org.apache.lucene.analysis.core.WhitespaceAnalyzer"/>
            <text qname="TITLE" analyzer="ws"/>
            <text qname="p">
                <inline qname="em"/>
            </text>
            <text match="//foo/*"/>
            <!-- "inline" and "ignore" can be specified globally or per-index as
                 shown above -->
            <inline qname="b"/>
            <ignore qname="note"/>
        </lucene>
    </index>
</collection>
collection.xconf for version 2.2
<collection xmlns="http://exist-db.org/collection-config/1.0">
    <index xmlns:wiki="http://exist-db.org/xquery/wiki" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:atom="http://www.w3.org/2005/Atom">
        <!-- Lucene index is configured below -->
        <lucene>
            <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
            <analyzer id="ws" class="org.apache.lucene.analysis.core.WhitespaceAnalyzer"/>
            <text qname="TITLE" analyzer="ws"/>
            <text qname="p">
                <inline qname="em"/>
            </text>
            <text match="//foo/*"/>
            <!-- "inline" and "ignore" can be specified globally or per-index as
                 shown above -->
            <inline qname="b"/>
            <ignore qname="note"/>
        </lucene>
    </index>
</collection>
collection.xconf for version 3.0 and above.

You can define a Lucene index on a single element or attribute (qname="...") or a node path with wildcards (match="...", see below).

It is important to choose the right context for an index; it has to be the same context you use in your query. To better understand this, let's have a look at how index creation is handled by eXist-db and Lucene. For example:

<text qname="SPEECH"/>

This creates an index on <SPEECH> only. What is passed to Lucene is the string value of <SPEECH>, which also includes the text of all its descendant text nodes (except those filtered out by an optional <ignore>).

Consider the fragment:

<SPEECH>
    <SPEAKER>Second Witch</SPEAKER>
    <LINE>Fillet of a fenny snake,</LINE>
    <LINE>In the cauldron boil and bake;</LINE>
</SPEECH>

If you have an index on <SPEECH>, Lucene will use the text "Second Witch Fillet of a fenny snake, In the cauldron boil and bake;" and index it. eXist-db internally links this Lucene document to the <SPEECH> node, but Lucene itself has no knowledge of that (it doesn't know anything about XML nodes).

Given this, take the following query:

//SPEECH[ft:query(., 'cauldron')]

This searches the index and finds the text, which eXist-db can trace back to the <SPEECH> node in the XML document.

Note, however, that you must use the same context (<SPEECH>) for creating and for querying the index. For instance:

//SPEECH[ft:query(LINE, 'cauldron')]

This will not return anything, even though <LINE> is a child of <SPEECH> and cauldron was indexed. This particular cauldron is linked to its ancestor <SPEECH>, not its parent <LINE>.

However, you are free to give the user both options, i.e. allow both <SPEECH> and <LINE> as query contexts. To do this, define a second index on <LINE>:

<text qname="SPEECH"/>
<text qname="LINE"/>
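
With both indexes in place, either element can serve as the query context. A quick sketch, reusing the <SPEECH> fragment shown above:

(: both expressions are now backed by a Lucene index :)
//SPEECH[ft:query(., 'cauldron')],
//SPEECH[ft:query(LINE, 'cauldron')]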

Let's use a different example to illustrate this. Assume you have a document with encoded place names:

<p>He loves <placeName>Paris</placeName>.</p>

For a general query you probably want to search through all paragraphs. However, you may also want to provide an advanced search option, which allows the user to restrict his/her queries to place names. To make this possible, simply define an index on <placeName> as well:

<lucene>
    <text qname="p"/>
    <text qname="placeName"/>
</lucene>

Based on this setup, you'll be able to query for the word 'Paris' anywhere in a paragraph:

//p[ft:query(., 'paris')]

And also on 'Paris' occurring within a <placeName>:

//p[ft:query(placeName, 'paris')]

Using match="..."

In addition to defining an index on a given qualified name, you can also specify a "path" with wildcards. This feature might be subject to change, so please be careful when using it.

Assume you want to define an index on all the possible elements below <SPEECH>. You can do this by creating one index for every element:

<text qname="LINE"/>
<text qname="SPEAKER"/>

As a shortcut, you can use a match attribute with a wildcard:

<text match="//SPEECH/*"/>

This will create a separate index on each child element of SPEECH it encounters. Please note that the argument to match is a simple path pattern, not a full XPath expression. For the time being, it only allows the following (see the example after this list):

  • / and // to denote child or descendant steps,

  • * wildcard selector to match an arbitrary element,

  • matching a single attribute's value, e.g. foo[@bar = 'xyz']
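
For example, a sketch combining a wildcard path with an attribute predicate (the note element and its type attribute are illustrative and not part of the sample data above):

<lucene>
    <!-- index every element directly below SPEECH -->
    <text match="//SPEECH/*"/>
    <!-- index note elements, but only where the type attribute equals 'gloss' -->
    <text match="//note[@type = 'gloss']"/>
</lucene>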

As explained above, you have to figure out which parts of your document will likely be interesting as context for a full text query. The full text index works best if the context isn't too narrow. For example, if you have a document structure with section <div>s, headings and paragraphs, you would probably want to create an index on the <div>s and maybe on the headings, so the user can differentiate between the two.
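
A sketch of such a configuration (the element names div and head stand in for whatever your vocabulary uses for sections and headings):

<lucene>
    <!-- broad context: search across whole sections -->
    <text qname="div"/>
    <!-- separate index so searches can be restricted to headings -->
    <text qname="head"/>
</lucene>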

In some cases, you could decide to put the index on the paragraph level. Then you don't need the index on the section, since you can always get from the paragraph back to the section.

If you query a larger context, you can use the KWIC module to show the user text surrounding each match. Or you can ask eXist-db to highlight each match with an <exist:match> tag, which you can later use to locate the matches within the text.
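
For example, a minimal sketch using util:expand to materialize that match markup (assuming the index on <SPEECH> used throughout this article):

for $hit in //SPEECH[ft:query(., 'cauldron')]
(: util:expand returns an in-memory copy of the node with matches wrapped in <exist:match> :)
return util:expand($hit)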

Whitespace Treatment and Ignored Content

Inlined elements

By default, eXist-db's indexer assumes that element boundaries also mark word or token boundaries. For example, if you have an element:

<size><width>12</width><height>8</height></size>

You want 12 and 8 to be indexed as separate tokens, even though there's no whitespace between the elements. eXist-db will pass the content of the two elements to Lucene as separate strings and Lucene will see two tokens (instead of just 128).
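
A quick way to check this behaviour, assuming an index such as <text qname="size"/> is configured (a sketch):

(
    //size[ft:query(., '12')],  (: matches: "12" was indexed as its own token :)
    //size[ft:query(., '128')]  (: no match: "128" never existed as a single token :)
)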

However, you usually don't want this behaviour for mixed content nodes. For example:

<p>This is <b>un</b>clear.</p>

In this case, you want unclear to be indexed as a single word. This can be done by telling eXist-db which nodes are inline nodes. The example configuration above uses:

<inline qname="b"/>

The <inline> option can be specified either globally or per-index:

<text qname="p">
    <inline qname="em"/>
</text>
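
With such a configuration in place, a query for the whole word matches across the element boundary, e.g. (a sketch, assuming an index on <p>):

(: 'un' and 'clear' were joined into the single token 'unclear' at index time :)
//p[ft:query(., 'unclear')]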

Ignored elements

It is sometimes necessary to skip the content of an inline element. Notes are a good example:

<p>This is a paragraph
<note>containing an inline note</note>.</p>

Use an <ignore> element in the collection configuration to have eXist-db ignore the note:

<ignore qname="note"/>

Basically, <ignore> simply allows you to hide a chunk of text from Lucene.

Like the <inline> tag, <ignore> may appear either globally or within a single index definition.

The <ignore> only applies to descendants of an indexed element. You can still create another index on the ignored element itself. For example, you can have index definitions for <p> and <note>:

<lucene>
    <text qname="p"/>
    <text qname="note"/>
    <ignore qname="note"/>
</lucene>

If <note> appears within <p>, it will not be added to the index on <p>, only to the index on <note>. For example:

//p[ft:query(., "note")]

This will not return a hit if the word "note" only occurs within a <note>, while the following query does find a match:

//p[ft:query(note, "note")]

Boost

A boost value can be assigned to an index to give it a higher score. The score of each match will be multiplied by the boost factor (the default is 1.0). For example, you may want to rank matches in titles higher than other matches.

Here's how to configure the documentation search indexes in eXist-db:

<lucene>
    <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
    <text qname="section">
        <ignore qname="title"/>
        <ignore qname="programlisting"/>
        <ignore qname="screen"/>
        <ignore qname="synopsis"/>
    </text>
    <text qname="para"/>
    <text qname="title" boost="2.0"/>
    <ignore qname="title"/>
</lucene>

The <title> index gets a boost of 2.0 to make sure that its matches receive a higher score. Since the <title> element occurs within <section>, we add an ignore rule to the index definition on section and create a separate index on title. We also ignore titles occurring inside paragraphs. Without these rules, text in titles would be matched twice.

Because the title is now indexed separately, we need to query it explicitly. For example, to search the section and the title at the same time, one could issue the following query:

for $sect in /book//section[ft:query(., "ngram")] | /book//section[ft:query(title, "ngram")]
order by ft:score($sect) descending 
return $sect

Attribute boost

Starting with eXist-db 3.0, a boost value can also be assigned to an index based on attributes. This can be used to weight your search results, even if you have flat data structures with the same attribute-value pairs throughout your documents. Two flavours of dynamic weighting are available through the new pairs of child elements <match-sibling-attr>/<has-sibling-attr> and <match-attr>/<has-attr> in the full-text index configuration.

If you have data in Lexical Markup Framework (LMF) format, you will recognize these repeated structures of <feat> elements with att and val attributes within <LexicalEntry> elements, for instance <feat att='writtenForm' val='LMF feature value'/>. Attribute boosting allows you to weight the results based on the value of the att attribute, so that hits in definitions come before hits in comments and examples. This behaviour is enabled by adding a <match-sibling-attr> child to a Lucene configuration <text> element. An example index configuration looks like this:

<text qname="@val">
    <match-sibling-attr boost="25" qname="att" value="writtenForm"/>
</text>

This means that the ft:score#1 function will boost hits in val attributes by a factor of 25 when the sibling att attribute has the value writtenForm.
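
For example, a minimal sketch of a query that benefits from this boost (the search term is illustrative; it assumes the index on @val shown above):

(: @val attributes whose sibling att attribute equals 'writtenForm' score 25 times higher :)
for $v in //feat/@val[ft:query(., 'house')]
order by ft:score($v) descending
return $v/parent::feat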

In the same way, <match-attr> is used when the <text> element indexes an element qname rather than an attribute.

If you do not care about the attribute's value, use the <has-attr> variant of the index configuration. An example index configuration with <has-attr> looks like this:

<text qname="feat">
    <has-attr boost="0" qname="xml:lang"/>
</text>

This means that if a <feat> element has an xml:lang attribute, its hits will be scored zero and pushed to the end of the results, which can be useful for demoting hits in features in languages other than the main entry language.

In the same way, <has-sibling-attr> is used when the <text> element indexes an attribute.

Analyzers

One of the strengths of Lucene is that it allows the developer to determine nearly every aspect of text analysis. This is done through analyzer classes, which combine a tokenizer with a chain of filters to post-process the tokenized text. eXist-db's Lucene module already allows different analyzers to be used for different indexes.

<lucene>
    <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
    <analyzer id="ws" class="org.apache.lucene.analysis.core.WhitespaceAnalyzer"/>
    <text match="//SPEECH//*"/>
    <text qname="TITLE" analyzer="ws"/>
</lucene>

In the example above, we define that Lucene's StandardAnalyzer should be used by default (the <analyzer> element without id attribute). We provide an additional analyzer and assign it the id ws, by which the analyzer can be referenced in the actual index definitions.

The whitespace analyzer is the most basic one. As the name implies, it tokenizes the text at white space characters, but treats all other characters - including punctuation - as part of the token. The tokens are not converted to lower case and there's no stopword filter applied.
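
With the ws analyzer assigned to <TITLE> above, matching is therefore exact and case-sensitive, e.g. (a sketch; the search term is illustrative):

(: finds titles containing the exact token 'Macbeth', but not 'macbeth' or 'Macbeth,' :)
//TITLE[ft:query(., 'Macbeth')]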

Configuring the Analyzer

You can pass configuration parameters to the analyzer when it is instantiated. These parameters must match a constructor signature of the underlying Java class of the analyzer, so please review the Javadoc of the analyzer that you wish to configure.

We currently support passing the following types:

  • String (default if no type is specified)

  • java.io.FileReader (since Lucene 4) or file

  • java.lang.Boolean or boolean

  • java.lang.Integer or int

  • org.apache.lucene.analysis.util.CharArraySet or set

  • java.lang.reflect.Field

The value Version#LUCENE_CURRENT is always added as the first parameter of the analyzer constructor (a fallback mechanism is in place for older analyzers). The previously valid types java.io.File and java.util.Set cannot be used since Lucene 4.

For instance, to add a stopword list, use one of the following constructions:

<analyzer id="stdstops" class="org.apache.lucene.analysis.standard.StandardAnalyzer">
    <param name="stopwords" type="java.io.FileReader" value="/tmp/stop.txt"/>
</analyzer>
<analyzer id="stdstops" class="org.apache.lucene.analysis.standard.StandardAnalyzer">
    <param name="stopwords" type="org.apache.lucene.analysis.util.CharArraySet">
        <value>the</value>
        <value>this</value>
        <value>and</value>
        <value>that</value>
    </param>
</analyzer>

Using the Snowball analyzer requires you to add additional libraries to lib/user.

<analyzer id="sbstops" class="org.apache.lucene.analysis.snowball.SnowballAnalyzer">
    <param name="name" value="English"/>
    <param name="stopwords" type="org.apache.lucene.analysis.util.CharArraySet">
        <value>the</value>
        <value>this</value>
        <value>and</value>
        <value>that</value>
    </param>
</analyzer>

Defining Fields

Sometimes you want to define different Lucene indexes on the same set of elements, for instance to use a different analyzer. eXist-db allows you to name an index using the field attribute:

<text field="title" qname="title" analyzer="en"/>

Such an index is called a named index. See Query a Named Index on how to query these indexes.

Querying the Index

Querying full text from XQuery is straightforward. For example:

for $m in //SPEECH[ft:query(., "boil bubble")]
order by ft:score($m) descending
return $m

The query function takes a query string in Lucene's default query syntax. It returns a set of nodes which are relevant with respect to the query. Lucene assigns a relevance score or rank (a decimal number) to each match. This score is preserved by eXist-db and can be accessed through the score function.

The higher the score, the more relevant the text. You can use Lucene's features to "boost" a certain term in the query: give it a higher or lower influence on the final rank.
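
For example, using the ^ operator of Lucene's query syntax to boost a term (a sketch):

(: matches containing 'boil' rank higher, because the term is boosted by a factor of 5 :)
for $m in //SPEECH[ft:query(., 'boil^5 bubble')]
order by ft:score($m) descending
return $m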

Please note that the score is computed relative to the root context of the index. If you created an index on <SPEECH>, all scores will be computed based on text in <SPEECH> nodes, even though your actual query may only return <LINE> children of <SPEECH>.
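
For example (a sketch, assuming the index on <SPEECH> from above):

for $s in //SPEECH[ft:query(., 'cauldron')]
(: the score reflects all text inside SPEECH, even though only the LINE children are returned :)
order by ft:score($s) descending
return $s/LINE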

The Lucene module is fully supported by eXist-db's query-rewriting optimizer. This means that the query engine can rewrite the XQuery expression to make best use of the available indexes. All the rules and hints given in the tuning guide fully apply to the Lucene index.

To present search results in a Keywords in Context format, you may want to have a look at eXist-db's KWIC module.

Query a Named Index

To query a named index (see Defining Fields), use the ft:query-field($fieldName, $query) function instead of ft:query:

ft:query-field("title", "xml")

ft:query-field works exactly like ft:query, except that the set of nodes to search is determined by the nodes in the named index. The function returns the nodes selected by the query, which would be <title> elements in the example above.

You can use ft:query-field with an XPath filter expression, just as you would call ft:query:

//section[ft:query-field("title", "xml")]

Describing Queries in XML

Lucene's default query syntax does not provide access to all available features. However, eXist-db's ft:query function also accepts a description of the query in XML as an alternative to passing a query string. The XML description closely mirrors Lucene's query API. It is transformed into an internal tree of query objects, which is passed directly to Lucene for execution. This has several advantages; for example, you can specify whether the order of terms should be relevant for a phrase query:

let $query :=
    <query>
        <near ordered="no">miserable nation</near>
    </query>
return
    //SPEECH[ft:query(., $query)]

The following elements may occur within a query description:

<term>

Defines a single term to be searched in the index. If the root query element contains a sequence of term elements, wrap them in <bool/>; they will be combined into a boolean "or" query. For example:

let $query :=
    <query>
        <bool><term>nation</term><term>miserable</term></bool>
    </query>
return
//SPEECH[ft:query(., $query)]

This finds all <SPEECH> elements containing either nation or miserable or both.

<wildcard>

A string with a * wildcard in it. This will be matched against the terms of a document. Can be used instead of a <term> element. For example:

let $query :=
    <query>
        <bool><term>nation</term><wildcard>miser*</wildcard></bool>
    </query>
return
//SPEECH[ft:query(., $query)]

<regex>

A regular expression which will be matched against the terms of a document. Can be used instead of a <term> element. For example:

let $query :=
    <query>
        <bool><term>nation</term><regex>miser.*</regex></bool>
    </query>
return
//SPEECH[ft:query(., $query)]

<bool>

Constructs a boolean query from its children. Each child element may have an occurrence indicator, which could be either must, should or not:

must

this part of the query must be matched

should

this part of the query should be matched, but doesn't need to

not

this part of the query must not be matched

For instance:

let $query :=
    <query>
        <bool><term occur="must">boil</term><term occur="should">bubble</term></bool>
    </query>
return //SPEECH[ft:query(LINE, $query)]

<phrase>

Searches for a group of terms occurring in the correct order. The element may either contain explicit <term> elements or text content. Text will be automatically tokenized into a sequence of terms. For example:

let $query :=
    <query>
        <phrase>cauldron boil</phrase>
    </query>
return //SPEECH[ft:query(., $query)]

This has the same effect as:

let $query :=
    <query>
        <phrase><term>cauldron</term><term>boil</term></phrase>
    </query>
return //SPEECH[ft:query(., $query)]

The attribute slop can be used for a proximity search: Lucene will try to find terms which are within the specified distance:

let $query :=
    <query>
        <phrase slop="10"><term>frog</term><term>dog</term></phrase>
    </query>
return //SPEECH[ft:query(., $query)]

<near>

<near> is a powerful alternative to <phrase> and one of the features not available through the standard Lucene query parser.

If the element has text content only, it will be tokenized into terms and the expression behaves like <phrase>. Otherwise it may contain any combination of <term>, <first> and nested <near> elements. This makes it possible to search for two sequences of terms which are within a specific distance. For example:

let $query :=
    <query>
        <near slop="20"><term>snake</term><near slop="1">tongue dog</near></near>
    </query>
return //SPEECH[ft:query(., $query)]

Element <first> matches a span against the start of the text in the context node. It takes an optional attribute end to specify the maximum distance from the start of the text. For example:

let $query :=
    <query>
        <near slop="50"><first end="2"><near>second witch</near></first><near
slop="1">tongue dog</near></near>
    </query>
    return //SPEECH[ft:query(., $query)]

As shown above, the content of <first> can again be text, a <term> or <near>.

Unlike <phrase>, <near> can be told to ignore the order of its components. Use the attribute ordered="yes|no" to change this behaviour. For example:

let $query :=
    <query>
        <near slop="100" ordered="no"><term>bubble</term><term>fillet</term></near>
    </query>
return //SPEECH[ft:query(., $query)]

All elements in a query may have an optional boost parameter (float). The score of the nodes matching the corresponding query part will be multiplied by this factor.
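
For example, a sketch giving one term twice the weight of the other within a boolean query:

let $query :=
    <query>
        <bool>
            <term occur="should" boost="2.0">boil</term>
            <term occur="should">bubble</term>
        </bool>
    </query>
return //SPEECH[ft:query(., $query)]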

Additional parameters

The ft:query function accepts an optional third parameter for passing additional settings to the query engine. This parameter must be an XML fragment which lists the configuration properties to be set as child elements:

let $options :=
    <options>
        <default-operator>and</default-operator>
        <phrase-slop>1</phrase-slop>
        <leading-wildcard>no</leading-wildcard>
        <filter-rewrite>yes</filter-rewrite>
    </options>
return
    //SPEECH[ft:query(., $query, $options)]

The meaning of these properties is as follows:

filter-rewrite

Controls how terms are expanded for wildcard or regular expression searches. If set to yes, Lucene will use a filter to pre-process matching terms. If set to no, all matching terms will be added to a single boolean query which is then executed. This may generate a "too many clauses" exception when applied to large data sets. Setting filter-rewrite to yes avoids those issues.

default-operator

The default operator with which multiple terms will be combined. Allowed values: or, and.

phrase-slop

Sets the default slop for phrases. If 0, then exact phrase matches are required. Default value is 0.

leading-wildcard

When set to yes, * or ? are allowed as the first character of a PrefixQuery and WildcardQuery. Note that this can produce very slow queries on big indexes.
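
For example, a minimal sketch that enables leading wildcards (the search term is illustrative):

let $options :=
    <options>
        <leading-wildcard>yes</leading-wildcard>
    </options>
return
    (: by default the leading * would be rejected :)
    //SPEECH[ft:query(., "*ubble", $options)]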

Adding Constructed Fields to a Document

This feature allows you to add arbitrary fields to a binary or XML document and have them indexed with Lucene. It was developed as part of the content extraction framework to attach metadata extracted from, for instance, a PDF to the binary document. It works equally well for XML documents, though, and is an efficient way to attach computed fields to a document, containing information which does not exist in the XML as such.

The field indexes are not configured via collection.xconf. Instead, we add fields programmatically from an XQuery (which could be run via a trigger):

ft:index("/db/demo/test.xml", <doc>
    <field name="title" store="yes">Indexing</field>
    <field name="author" store="yes">Me</field>
    <field name="date" store="yes">2013</field>
</doc>)

The store attribute indicates that the field's content should be stored as a string. Without this attribute, the content will be indexed for search, but you won't be able to retrieve it.

To get the contents of a field, use the ft:get-field function:

ft:get-field("/db/demo/test.xml", "title")

To query this index, use the ft:search function:

ft:search("/db/demo/test.xml", "title:indexing and author:me")

Custom field indexes are automatically deleted when their parent document is removed. If you want to update fields without removing the document, though, you need to delete the old fields first. This can be done using the ft:remove-index function:

ft:remove-index("/db/demo/test.xml")
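
For example, to update the fields of a document, remove the old fields and re-index in one go (a sketch, reusing the hypothetical fields from above):

(: drop the previously attached fields, then attach a fresh set :)
ft:remove-index("/db/demo/test.xml"),
ft:index("/db/demo/test.xml", <doc>
    <field name="title" store="yes">Indexing, revised</field>
    <field name="author" store="yes">Me</field>
</doc>)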