
A deep dive in MeiliDB

On the 15th of May 2019.

MeiliDB is a full text search engine based on a finite-state transducer named fst and a key-value store named sled. The goal of a search engine is to store data and to respond to queries as accurately and as fast as possible. To achieve this it must save the matching words in an inverted index.

Where is the data stored?

MeiliDB is entirely backed by a key-value store, like any good database (e.g. Postgres, MySQL). This brings great flexibility in the way documents can be stored and updates handled over time.

sled brings some of the ACID properties to help us be sure the saved data is consistent.

What does the key-value store contain?

It contains the inverted word index, the schema and the document fields.

The inverted word index

The inverted word index is a sled Tree dedicated to storing and giving access to all the documents that contain a specific word. The information stored under a word is simply a big ordered array of where in the documents the word has been found. In other words, a big list of DocIndex.
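
To give an idea of what such an entry can carry, here is a minimal sketch of a DocIndex-like structure; the field names and integer sizes are assumptions made for this example, not MeiliDB's actual layout.

```rust
// A hypothetical sketch of one entry of the inverted word index:
// which document matched, in which attribute, and where.
// Field names and integer sizes are assumptions, not MeiliDB's real layout.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct DocIndex {
    /// The identifier of the matching document.
    document_id: u64,
    /// The attribute (schema field) in which the word was found.
    attribute: u16,
    /// The position of the word inside that attribute, counted in words.
    word_index: u32,
    /// The byte offset and length of the word, useful for highlighting.
    char_index: u32,
    char_length: u16,
}
```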

A finite state transducer

...also abbreviated fst

This is the first entry point of the engine. You can read more about how it works in the beautiful blog post by @BurntSushi, Index 1,600,000,000 Keys with Automata and Rust.

To make it short, it is a powerful way to store all the words that are present in the indexed documents. You construct it by giving it all the words you want to index. When you want to search in it you can provide any automaton you want; in MeiliDB a custom Levenshtein automaton is used.
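
As a rough sketch of the idea (not MeiliDB's actual code), here is how the fst crate can build a set of words and stream the ones accepted by a Levenshtein automaton; note that, depending on the fst version, the Levenshtein automaton is provided by the crate itself behind a feature flag or by a companion crate.

```rust
use fst::{IntoStreamer, Set};
// Depending on the fst version, this type lives in the fst crate itself
// (behind its `levenshtein` feature) or in a companion crate.
use fst::Levenshtein;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The words must be given in lexicographic (byte) order.
    let words = vec!["chateau", "chateaus", "chateaux", "château"];
    let set = Set::from_iter(words)?;

    // An automaton accepting every word at a Levenshtein distance
    // of at most 1 from "chateau".
    let lev = Levenshtein::new("chateau", 1)?;

    // Stream all the words of the set accepted by the automaton.
    let stream = set.search(lev).into_stream();
    let matching_words = stream.into_strs()?;
    println!("{:?}", matching_words);

    Ok(())
}
```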

Document indexes

The fst will only return the words that match the search automaton, but the goal of the search engine is to retrieve all the matches in all the documents when a query is made. You want it to return some sort of position in an attribute of a document, information about where the given word matched.

To make that possible we retrieve all of the DocIndexes corresponding to the matching words coming out of the fst: we use the WordsIndex Tree to get the DocIndexes associated with each word.
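
A minimal sketch of what such a lookup could look like with a recent sled API; the tree name, key encoding and value format are assumptions for the example, not MeiliDB's real on-disk layout.

```rust
use sled::{Db, IVec};

// A sketch of fetching the postings of one word from a sled Tree.
// "words-index" and the value format are assumptions for this example.
fn doc_indexes_for_word(db: &Db, word: &str) -> sled::Result<Option<IVec>> {
    // Each word is a key of the Tree; its value would be the serialized,
    // ordered array of DocIndexes in which the word appears.
    let words_index = db.open_tree("words-index")?;
    // Decoding the returned bytes into a Vec<DocIndex> is left out here.
    words_index.get(word.as_bytes())
}
```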

The schema

The schema is a data structure that represents which document attributes should be stored and which should be indexed. It is stored under the MainIndex Tree and given to MeiliDB only at the creation of an index.

Each document attribute is associated with a unique 16 bit number named SchemaAttr.
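
A schema could be modeled along these lines; the names and the exact set of properties are illustrative assumptions, not the real MeiliDB types.

```rust
use std::collections::HashMap;

// A hypothetical model of the schema; names and properties are
// illustrative only, not MeiliDB's actual types.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct SchemaAttr(u16); // each attribute gets a unique 16 bit number

#[derive(Debug, Clone, Copy)]
struct AttrProperties {
    stored: bool,  // the field value is saved and can be returned with results
    indexed: bool, // the field content is tokenized and put in the word index
}

#[derive(Debug, Clone, Default)]
struct Schema {
    // Maps an attribute name to its number and its properties.
    attributes: HashMap<String, (SchemaAttr, AttrProperties)>,
}
```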

In the future, this schema type could be given along with updates: the database could be able to handle a new schema and reindex the stored documents according to the new one.

Document attributes

When the engine handles a query, what the requester wants back is a document, not only the Matches associated with it: the fields of the original document must be returned too.

So MeiliDB once again uses the power of the underlying key-value store and saves the document attributes marked as STORE in the schema. The Tree dedicated to this information is the DocumentsIndex.

When a document field is saved in the key-value store, its value is binary encoded using MessagePack, so a document must be serializable using serde.
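
A minimal sketch of that encoding using serde and the rmp-serde crate; the document type here is made up for the example.

```rust
use serde::{Deserialize, Serialize};

// A made-up document type, only to show that any serde-serializable
// struct can be binary encoded to MessagePack before being stored.
#[derive(Debug, Serialize, Deserialize)]
struct Movie {
    id: u64,
    title: String,
    overview: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let movie = Movie {
        id: 42,
        title: "Carol".into(),
        overview: "A young photographer falls for an older woman.".into(),
    };

    // Encode the document to MessagePack bytes, ready to be written under
    // the DocumentsIndex Tree, and decode it back when answering a query.
    let bytes = rmp_serde::to_vec(&movie)?;
    let decoded: Movie = rmp_serde::from_slice(&bytes)?;
    println!("{} bytes, decoded title: {}", bytes.len(), decoded.title);

    Ok(())
}
```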

How is a request processed?

Now that we have our inverted index, we are able to return results based on a query. In the MeiliDB universe a query is a simple string containing words.

Query lexemes

The first step needed to be able to call the underlying structures is to split the query into words; for that we use a custom tokenizer. Note that a tokenizer is specialized for a human language, and this is the hard part.
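
As a deliberately naive illustration of the idea (a real tokenizer is much more language-aware), a query could be split like this:

```rust
// A deliberately naive tokenizer, only here to illustrate the idea of
// splitting a query into words. A real tokenizer must care about the
// human language: unicode segmentation, CJK scripts, and so on.
fn tokenize(query: &str) -> Vec<String> {
    query
        .split(|c: char| !c.is_alphanumeric())
        .filter(|word| !word.is_empty())
        .map(str::to_lowercase)
        .collect()
}

fn main() {
    let words = tokenize("The quick brown fox, jumps!");
    assert_eq!(words, vec!["the", "quick", "brown", "fox", "jumps"]);
}
```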

Automatons and query index

So to query the fst we need an automaton; in MeiliDB we use a Levenshtein automaton. This automaton is constructed from a string and a maximum distance. Following Algolia's blog post, we create the DFAs with different settings.

Thanks to the power of the fst library it is possible to union multiple automatons on the same fst set. The resulting Stream is able to return all the matching words. We use these words to find the whole list of associated DocIndexes.
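
Here is a sketch of unioning several searches over the same set, with the same caveat as above about where the Levenshtein automaton comes from in your fst version.

```rust
use fst::{Set, Streamer};
// See the earlier note: the Levenshtein automaton comes either from the
// fst crate itself or from a companion crate, depending on the version.
use fst::Levenshtein;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The words must be given in lexicographic (byte) order.
    let set = Set::from_iter(vec!["cart", "cat", "chat", "dog", "dot"])?;

    // One automaton per query word, each with its own allowed distance.
    let lev_cat = Levenshtein::new("cat", 1)?;
    let lev_dog = Levenshtein::new("dog", 1)?;

    // Union both searches on the same fst set and stream every word
    // matching at least one of the automatons.
    let mut stream = set
        .op()
        .add(set.search(lev_cat))
        .add(set.search(lev_dog))
        .union();

    while let Some(word) = stream.next() {
        println!("{}", String::from_utf8_lossy(word));
    }

    Ok(())
}
```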

With all this information it is possible to reconstruct the list of all the DocIndexes associated with the queried words.

Sort by criteria

Now that we are able to get a big list of DocIndexes, it is not enough to sort them by criteria: we need more information, like the Levenshtein distance or the fact that a query word matches exactly the word stored in the fst. So we enrich it a little bit and aggregate all these Matches for each document. This way it becomes easy to sort a simple vector of documents using a bunch of functions.

With this big list of documents and associated matches we are able to sort only the part of the slice that we want, using bucket sorting. Each criterion is evaluated on each subslice without copy, thanks to GroupByMut which, I hope, will soon be merged.
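
To illustrate the principle, here is a simplified, self-contained sketch of bucket sorting by successive criteria, where each criterion only reorders the documents left tied by the previous ones; the Document type and the criteria are made up for the example and this is not MeiliDB's implementation.

```rust
// A simplified sketch of bucket sorting by successive criteria: each
// criterion only reorders the documents that the previous criteria left
// tied. The Document type and the criteria themselves are made up here.
#[derive(Debug)]
struct Document {
    id: u64,
    typo_count: u32,
    words_matched: u32,
}

type Criterion = fn(&Document, &Document) -> std::cmp::Ordering;

fn bucket_sort(documents: &mut [Document], criteria: &[Criterion]) {
    fn sort_by_criteria(slice: &mut [Document], criteria: &[Criterion]) {
        let Some((criterion, rest)) = criteria.split_first() else { return };
        slice.sort_by(|a, b| criterion(a, b));

        // Recurse on every group of documents still considered equal,
        // so the next criterion only has to break the remaining ties.
        let mut start = 0;
        while start < slice.len() {
            let mut end = start + 1;
            while end < slice.len()
                && criterion(&slice[start], &slice[end]) == std::cmp::Ordering::Equal
            {
                end += 1;
            }
            sort_by_criteria(&mut slice[start..end], rest);
            start = end;
        }
    }
    sort_by_criteria(documents, criteria);
}

fn main() {
    let mut docs = vec![
        Document { id: 1, typo_count: 1, words_matched: 2 },
        Document { id: 2, typo_count: 0, words_matched: 1 },
        Document { id: 3, typo_count: 0, words_matched: 2 },
    ];
    let criteria: [Criterion; 2] = [
        |a, b| a.typo_count.cmp(&b.typo_count),       // fewer typos first
        |a, b| b.words_matched.cmp(&a.words_matched), // more matched words first
    ];
    bucket_sort(&mut docs, &criteria);
    assert_eq!(docs.iter().map(|d| d.id).collect::<Vec<_>>(), vec![3, 2, 1]);
}
```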

Note that it is possible to customize the criteria by using the QueryBuilder::with_criteria constructor; this way you can implement some custom ranking based on the document attributes, using the appropriate structure and the document method.

At this point, MeiliDB's work is over 🎉