
A deep dive into pentium

On the 21st of October 2018.

Pentium is a full-text search engine based on a finite state transducer, the fst library, and a key-value store, RocksDB. The goal of a search engine is to store data and to respond to queries as accurately and as fast as possible. To achieve this it must save the data as an inverted index.

What is an index?

For pentium, an index is composed of a finite state transducer, a document-indexes file and some key-values.

The finite state transducer

This is the first entry point of the engine. You can read more about how it works in burntsushi's beautiful blog post, Index 1,600,000,000 Keys with Automata and Rust.

To make it short, it is a powerful way to store all the words present in the indexed documents. You construct it by giving it all the words you want to index, each associated with a value that, for the moment, can only be a u64. When you want to search in it you can provide any automaton you want; in pentium a custom Levenshtein automaton is used.

Note that the number under each word is auto-incremented: each new word gets a number greater than the previous one.

Another powerful feature of fst is that it uses almost no RAM and can be streamed to disk. The catch is that the keys must always be added in lexicographic order, so they must be sorted beforehand; for the moment pentium uses a BTreeMap.
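The sorting step can be sketched with the standard library alone. This is a minimal illustration, not pentium's actual code; `number_words` is a hypothetical helper:

```rust
use std::collections::BTreeSet;

/// Sort words lexicographically (as the fst builder requires) and assign
/// each one an auto-incremented u64, greater than the previous one.
fn number_words<'a>(words: &[&'a str]) -> Vec<(&'a str, u64)> {
    // A BTreeSet deduplicates the words and yields them in sorted order.
    let sorted: BTreeSet<&str> = words.iter().copied().collect();
    sorted.into_iter().enumerate().map(|(i, w)| (w, i as u64)).collect()
}

fn main() {
    let numbered = number_words(&["search", "engine", "full", "text"]);
    assert_eq!(numbered, [("engine", 0), ("full", 1), ("search", 2), ("text", 3)]);
}
```

With the pairs in this order, they can be streamed into the fst builder without ever violating the lexicographic constraint.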

The document indexes

As specified above, the fst can only store a number (a u64) under each word, but the goal of the search engine is to retrieve matches in documents when a query is made. We want it to return some sort of position in an attribute of a document: information about where the given word matches.

To make this possible, a custom data structure has been developed: the document indexes are stored in a file. This file is composed of two arrays; the first contains ranges (i.e. start and end) that tell where to read all the DocIndexes corresponding to a given number/word. The data structure is pretty simple to construct and to read. Another advantage is that the slices are accessible in O(1) once you know the number associated with the word.
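A stdlib-only sketch of that layout follows. The `DocIndex` fields here are assumptions for illustration; the real on-disk format may differ:

```rust
/// A single match position (hypothetical fields; the real pentium
/// layout may differ).
#[derive(Debug, Clone, Copy, PartialEq)]
struct DocIndex {
    document_id: u64,
    attribute: u8,
    word_position: u32,
}

/// The document-indexes file: one array of ranges and one flat array of
/// DocIndexes. The word number indexes the ranges array directly, so
/// looking up the slice is O(1).
struct DocIndexes {
    ranges: Vec<(usize, usize)>, // (start, end) into `indexes`
    indexes: Vec<DocIndex>,
}

impl DocIndexes {
    fn get(&self, word_number: u64) -> &[DocIndex] {
        let (start, end) = self.ranges[word_number as usize];
        &self.indexes[start..end]
    }
}

fn main() {
    let store = DocIndexes {
        ranges: vec![(0, 2), (2, 3)], // word 0 has two matches, word 1 has one
        indexes: vec![
            DocIndex { document_id: 0, attribute: 1, word_position: 4 },
            DocIndex { document_id: 7, attribute: 0, word_position: 1 },
            DocIndex { document_id: 3, attribute: 2, word_position: 8 },
        ],
    };
    assert_eq!(store.get(0).len(), 2);
    assert_eq!(store.get(1)[0].document_id, 3);
}
```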

(figure: the doc-indexes file layout)

The key-value file

When the engine handles a query, the result the requester wants is a document, not only the matches associated with it; the fields of the original document must be returned too.

So pentium is backed by a key-value store named RocksDB. At index time, the key-values of the documents are stored (if marked to be stored) using keys of the form {document id}-{field name}. We wanted the index to be manipulable, and RocksDB has a file format that allows us to compute the index in advance.
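The key scheme can be illustrated with a BTreeMap standing in for RocksDB (like an SST file, it keeps its keys ordered). The field names and values below are made up for the example:

```rust
use std::collections::BTreeMap;

/// Build a key of the form {document id}-{field name}, as described above,
/// and store the field's value under it. A BTreeMap stands in for RocksDB
/// here: like an SST file, it keeps its keys in order.
fn store_field(store: &mut BTreeMap<String, String>, doc_id: u64, field: &str, value: &str) {
    store.insert(format!("{}-{}", doc_id, field), value.to_string());
}

fn main() {
    let mut store = BTreeMap::new();
    store_field(&mut store, 42, "title", "A deep dive into pentium");
    store_field(&mut store, 42, "body", "Pentium is a full-text search engine");

    // Retrieving a field of document 42 is a single point lookup.
    assert_eq!(
        store.get("42-title").map(String::as_str),
        Some("A deep dive into pentium"),
    );
}
```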

The SST file has the same constraint as the fst: it needs its keys to be inserted in order.

How is a query handled?

Now that we have our index we are able to return results based on a query; in the pentium universe a query is a single string.

As described above, the data structures nest logically: the fst is queried with an automaton, the automaton returns words associated with numbers, and each number gives us document indexes. We will not talk about the key-value store here.

Query lexemes

The first step to be able to query the underlying structures is to split the query into words; for that we use a custom tokenizer that is not finished for the moment (there is an open issue). Note that a tokenizer depends on a specific language, which makes this hard.
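A naive, language-agnostic tokenizer can be sketched in a few lines; this is only an illustration of the splitting step, far simpler than what pentium's real tokenizer aims for:

```rust
/// Split a query on anything that is not alphanumeric and lowercase the
/// resulting words. Real tokenization is language-dependent and much
/// harder than this.
fn tokenize(query: &str) -> Vec<String> {
    query
        .split(|c: char| !c.is_alphanumeric())
        .filter(|w| !w.is_empty())
        .map(|w| w.to_lowercase())
        .collect()
}

fn main() {
    assert_eq!(tokenize("Hello, search engine!"), ["hello", "search", "engine"]);
}
```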

Automatons and query index

So to query the fst we need an automaton; in pentium we use a Levenshtein automaton, constructed from a string and a maximum distance. Following Algolia's blog post, we create the DFAs with different settings.
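The distance such an automaton encodes is the classic Levenshtein edit distance. For reference, here is the plain dynamic-programming version; the point of the automaton is precisely to accept all words within a maximum distance without running this computation against every word:

```rust
/// Classic dynamic-programming Levenshtein edit distance: the minimum
/// number of insertions, deletions and substitutions turning `a` into `b`.
fn levenshtein(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    // prev holds the previous row of the DP matrix.
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut curr = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            // substitution, deletion, insertion
            curr.push((prev[j] + cost).min(prev[j + 1] + 1).min(curr[j] + 1));
        }
        prev = curr;
    }
    prev[b.len()]
}

fn main() {
    assert_eq!(levenshtein("kitten", "sitting"), 3);
    assert_eq!(levenshtein("hello", "helo"), 1);
}
```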

Thanks to the power of the fst library, it is possible to union multiple automatons over the same index, which lets us know which automaton matched a word according to its index. The resulting Stream returns all the numbers associated with the matching words in the fst.

We use each number to find the whole list of DocIndexes associated with it and perform a set operation. For the moment, the only operation used is the union of all the DocIndexes (all set operations are supported by sdset). This means that only positive indexes are supported, not negative ones.
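The union operation itself is an ordinary merge of sorted sequences. Here is a stdlib-only stand-in for what sdset does, over plain u64 document ids to keep the sketch short (sdset operates on the full DocIndexes):

```rust
/// Merge two already-sorted slices into a sorted, deduplicated union.
/// This is a sketch of the set operation, not sdset's actual code.
fn union(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut out = Vec::with_capacity(a.len() + b.len());
    let (mut i, mut j) = (0, 0);
    while i < a.len() && j < b.len() {
        if a[i] < b[j] {
            out.push(a[i]); i += 1;
        } else if a[i] > b[j] {
            out.push(b[j]); j += 1;
        } else {
            // present in both inputs: keep a single copy
            out.push(a[i]); i += 1; j += 1;
        }
    }
    out.extend_from_slice(&a[i..]);
    out.extend_from_slice(&b[j..]);
    out
}

fn main() {
    assert_eq!(union(&[1, 3, 5], &[2, 3, 6]), [1, 2, 3, 5, 6]);
}
```

Because both inputs are sorted, the union runs in linear time, which is why the index keeps everything ordered in the first place.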

With all this information it is possible to reconstruct the list of all the DocIndexes associated with the queried words.

Sort by criteria

Now that we are able to get a big list of DocIndexes, it is still not enough to sort them by criteria; we need more information, like the Levenshtein distance or whether the word matched exactly. So we enrich it a little and aggregate all these Matches for each document. This way it is easy to sort a simple vector of documents using a bunch of functions.

With this big list of documents and their associated matches, we are able to sort only the part of the slice we need using bucket sort; currently the algorithm is not optimal. Each criterion is evaluated on each sub-slice without copying, thanks to GroupByMut which, I hope, will soon be merged.
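The bucket-sorting idea can be sketched as follows: sort by the first criterion, then re-sort in place only the groups that are still tied, criterion by criterion. The `Document` fields and criteria here are illustrative assumptions, not pentium's real ones, and the manual run-finding loop plays the role of GroupByMut:

```rust
/// Illustrative document with two hypothetical ranking criteria.
#[derive(Debug, PartialEq)]
struct Document {
    id: u64,
    typo_count: u32,    // e.g. total edit distance of the matches
    exact_matches: u32, // e.g. number of words matched exactly
}

/// Sort by successive criteria: first fewest typos, then, inside each
/// group still tied on typo_count, most exact matches. Each tied group
/// is re-sorted as a sub-slice in place, without copying.
fn sort_by_criteria(docs: &mut [Document]) {
    docs.sort_by_key(|d| d.typo_count);

    let mut start = 0;
    while start < docs.len() {
        let mut end = start + 1;
        while end < docs.len() && docs[end].typo_count == docs[start].typo_count {
            end += 1;
        }
        docs[start..end].sort_by(|a, b| b.exact_matches.cmp(&a.exact_matches));
        start = end;
    }
}

fn main() {
    let mut docs = vec![
        Document { id: 1, typo_count: 1, exact_matches: 2 },
        Document { id: 2, typo_count: 0, exact_matches: 1 },
        Document { id: 3, typo_count: 1, exact_matches: 5 },
    ];
    sort_by_criteria(&mut docs);
    let ids: Vec<u64> = docs.iter().map(|d| d.id).collect();
    assert_eq!(ids, [2, 3, 1]);
}
```

A real implementation would also stop early once the requested range of results is fully ordered, which is what makes bucket sorting cheaper than sorting the whole list.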

🎉 pentium's work is over 🎉