MeiliSearch/benchmarks
bors[bot] ea4bb9402f
Merge #483
483: Enhance matching words r=Kerollmops a=ManyTheFish

# Summary

Enhance the milli word matcher, making it handle both match computing and cropping.

# Implementation

## Computing best matches for cropping

Before, we considered the first match in the attribute to be the best one. This was accurate when only one word was searched, but missed the target when more than one word was searched.

Now we search for the best matches interval to crop around; the chosen interval is the one (a short sketch of this comparison follows the list):
1) that has the highest count of unique matches
> for example, if we have the query `split the world`, then the interval `the split the split the` has 5 matches but only 2 unique matches (1 for `split` and 1 for `the`), whereas the interval `split of the world` has 3 matches and 3 unique matches. So the interval `split of the world` is considered better.
2) that has the minimum distance between matches
> for example, if we have the query `split the world`, then the interval `split of the world` has a distance of 3 (2 between `split` and `the`, and 1 between `the` and `world`), whereas the interval `split the world` has a distance of 2. So the interval `split the world` is considered better.
3) that has the highest count of ordered matches
> for example, if we have the query `split the world`, then the interval `the world split` has 2 ordered words, whereas the interval `split the world` has 3. So the interval `split the world` is considered better.
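
This selection can be seen as a lexicographic comparison over the three criteria. The following Rust sketch only illustrates that comparison; `Match`, `word_id`, `position`, and the pair-based "ordered" count are made up for illustration and are not milli's actual code:

```rust
use std::collections::HashSet;

/// Hypothetical match representation: `word_id` is the index of the matched
/// word in the query, `position` is the match's word position in the attribute.
struct Match {
    word_id: usize,
    position: usize,
}

/// Score an interval of matches by the three criteria above:
/// (unique matches, -distance, ordered pairs). Comparing these tuples
/// lexicographically picks the best interval.
fn score(interval: &[Match]) -> (usize, isize, usize) {
    // 1) count of unique query words matched inside the interval.
    let unique = interval.iter().map(|m| m.word_id).collect::<HashSet<_>>().len();

    // 2) sum of distances between consecutive matches; smaller is better,
    //    so it is negated to keep "bigger tuple wins" semantics.
    let distance: usize = interval.windows(2).map(|w| w[1].position - w[0].position).sum();

    // 3) count of consecutive match pairs that appear in query order
    //    (a rough proxy for the "ordered words" count described above).
    let ordered = interval.windows(2).filter(|w| w[1].word_id > w[0].word_id).count();

    (unique, -(distance as isize), ordered)
}

/// Pick the best interval among the candidate intervals.
fn best_interval(candidates: &[Vec<Match>]) -> Option<&Vec<Match>> {
    candidates.iter().max_by_key(|interval| score(interval.as_slice()))
}
```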

## Cropping around the best matches interval

Before, we cropped around the interval without checking the context.

Now we crop around words that share the same context as the matching words (a rough sketch follows the example below).
This means that we keep words that are farther from the matching words but in the same phrase, rather than words that are nearer but separated by a dot.

> For instance, for the matching word `Split`, the text:
`Natalie risk her future. Split The World is a book written by Emily Henry. I never read it.`
will be cropped like:
`…. Split The World is a book written by Emily Henry. …`
and not like:
`Natalie risk her future. Split The World is a book …`
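
One way to picture this is to give every token a crossing cost when growing the crop window, with hard separators (like a dot) costing much more than soft ones (like a space or a comma). The Rust sketch below only illustrates that idea; `Token`, `SeparatorKind`, and the costs are hypothetical and not milli's implementation:

```rust
/// Hypothetical token model: the real tokenizer is richer than this.
enum Token {
    Word(String),
    Separator(SeparatorKind),
}

enum SeparatorKind {
    Soft, // e.g. a space or a comma: stays in the same phrase
    Hard, // e.g. a dot: starts a new context
}

/// Grow the crop window around the matches interval `[start, end)`.
/// Crossing a hard separator consumes much more of the remaining budget
/// than crossing a soft one, so words in the same phrase are kept in
/// preference to nearer words that sit behind a dot.
fn crop_bounds(tokens: &[Token], start: usize, end: usize, budget: usize) -> (usize, usize) {
    let cost = |token: &Token| match token {
        Token::Word(_) => 1,
        Token::Separator(SeparatorKind::Soft) => 1,
        Token::Separator(SeparatorKind::Hard) => 8, // hypothetical penalty
    };

    let (mut first, mut last) = (start, end);
    let mut remaining = budget;

    // Alternate between extending the window on the right and on the left
    // while the budget allows it.
    loop {
        let mut extended = false;
        if last < tokens.len() && cost(&tokens[last]) <= remaining {
            remaining -= cost(&tokens[last]);
            last += 1;
            extended = true;
        }
        if first > 0 && cost(&tokens[first - 1]) <= remaining {
            remaining -= cost(&tokens[first - 1]);
            first -= 1;
            extended = true;
        }
        if !extended {
            break;
        }
    }

    (first, last)
}
```

With costs like these, crossing the dot after `Natalie risk her future.` quickly exhausts the budget, while the rest of the phrase around `Split The World` stays cheap to include.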


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-04-19 11:42:32 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| benches | Merge #483 | 2022-04-19 11:42:32 +00:00 |
| scripts | improve the comparison script | 2021-09-16 11:25:51 +02:00 |
| src | move the benchmarks to another crate so we can download the datasets automatically without adding overhead to the build of milli | 2021-06-02 11:11:50 +02:00 |
| .gitignore | add a gitignore to avoid pushing the autogenerated file | 2021-06-02 17:03:30 +02:00 |
| build.rs | Add a new movies benchmark to test multi batch indexing | 2022-02-23 16:20:29 +01:00 |
| Cargo.toml | Add first benchmarks on formatting | 2022-04-12 16:31:58 +02:00 |
| README.md | Merge #357 | 2021-09-21 16:08:06 +00:00 |

Benchmarks


Run the benchmarks

On our private server

The Meili team self-hosts its own GitHub runner to run the benchmarks on our dedicated bare-metal server.

To trigger the benchmark workflow:

  • Go to the Actions tab of this repository.
  • Select the Benchmarks workflow on the left.
  • Click on Run workflow in the blue banner.
  • Select the branch on which you want to run the benchmarks and select the dataset you want (default: songs).
  • Finally, click on Run workflow.

This GitHub workflow will run the benchmarks and push the critcmp report to a DigitalOcean Space (= S3).

The name of the uploaded file is displayed in the workflow.

More about critcmp.

💡 To compare the just-uploaded benchmark with another one, check out the next section.

On your machine

To run all the benchmarks (~5h):

cargo bench

To run only the search_songs (~1h), search_wiki (~3h), search_geo (~20m) or indexing (~2h) benchmark:

cargo bench --bench <dataset name>

By default, the datasets are downloaded and uncompressed automatically into the target directory.
If you don't want to download the datasets every time you update something in the code, you can specify a custom directory with the environment variable MILLI_BENCH_DATASETS_PATH:

mkdir ~/datasets
MILLI_BENCH_DATASETS_PATH=~/datasets cargo bench --bench search_songs # the four datasets are downloaded
touch build.rs
MILLI_BENCH_DATASETS_PATH=~/datasets cargo bench --bench search_songs # the code is compiled again but the datasets are not downloaded
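
The download itself happens in the crate's build.rs, which is why `touch build.rs` forces that step to re-run. As an illustration only, a build script honoring such a variable could look roughly like the sketch below; the file names are placeholders and this is not the crate's actual build.rs:

```rust
// Sketch of a build.rs, not the real one: download the datasets into
// MILLI_BENCH_DATASETS_PATH if it is set, otherwise into OUT_DIR (inside
// the target directory), and skip files that are already present.
use std::{env, fs, path::PathBuf};

fn main() {
    // Re-run this build script when the variable changes.
    println!("cargo:rerun-if-env-changed=MILLI_BENCH_DATASETS_PATH");

    let datasets_dir = env::var("MILLI_BENCH_DATASETS_PATH")
        .unwrap_or_else(|_| env::var("OUT_DIR").expect("OUT_DIR is set by cargo"));
    let datasets_dir = PathBuf::from(datasets_dir);
    fs::create_dir_all(&datasets_dir).expect("cannot create the datasets directory");

    // Placeholder names: the real build.rs knows the actual files and URLs.
    for name in ["smol-songs.csv.gz", "smol-wiki-articles.csv.gz"] {
        let dest = datasets_dir.join(name);
        if dest.exists() {
            continue; // already downloaded: nothing to do
        }
        // download_and_uncompress(name, &dest) would fetch the file here.
    }
}
```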

Comparison between benchmarks

The benchmark reports we push are generated with critcmp. Thus, we use critcmp to show the result of a benchmark, or compare results between multiple benchmarks.

We provide a script to download and display the comparison report.

Requirements:

List the available files in the DO Space:

./benchmarks/scripts/list.sh
songs_main_09a4321.json
songs_geosearch_24ec456.json
search_songs_main_cb45a10b.json

Run the comparison script:

# we get the result of ONE benchmark; this gives you an idea of how much time an operation took
./benchmarks/scripts/compare.sh songs_geosearch_24ec456.json
# we compare two benchmarks
./benchmarks/scripts/compare.sh songs_main_09a4321.json songs_geosearch_24ec456.json
# we compare three benchmarks
./benchmarks/scripts/compare.sh songs_main_09a4321.json songs_geosearch_24ec456.json search_songs_main_cb45a10b.json

Datasets

The benchmarks use the following datasets:

  • smol-songs
  • smol-wiki
  • movies
  • smol-all-countries

Songs

smol-songs is a subset of the songs.csv dataset.

It was generated with this command:

xsv sample --seed 42 1000000 songs.csv -o smol-songs.csv

Download the generated smol-songs dataset.

Wiki

smol-wiki is a subset of the wikipedia-articles.csv dataset.

It was generated with the following command:

xsv sample --seed 42 500000 wiki-articles.csv -o smol-wiki-articles.csv

Download the smol-wiki dataset.

Movies

movies is a really small dataset that we use as the example in our getting started guide.

Download the movies dataset.

All Countries

smol-all-countries is a subset of the all-countries.csv dataset. It has been converted to JSON Lines and then edited so it matches our format for the _geo field.

It was generated with the following command:

bat all-countries.csv.gz | gunzip | xsv sample --seed 42 1000000 | csv2json-lite | sd '"latitude":"(.*?)","longitude":"(.*?)"' '"_geo": { "lat": $1, "lng": $2 }' | sd '\[|\]|,$' '' | gzip > smol-all-countries.jsonl.gz
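
As a side note, the `_geo` rewrite done by the `sd` calls above can also be expressed in Rust; the sketch below (using serde_json, not part of the benchmarks) only restates the mapping from the flat latitude/longitude columns to the nested `_geo` object:

```rust
use serde_json::{json, Map, Value};

/// Rough Rust equivalent of the `sd` rewrite above: move the flat
/// `latitude`/`longitude` columns into the nested `_geo` object,
/// parsing them as numbers (the shell command drops the quotes).
fn to_geo_document(mut record: Map<String, Value>) -> Map<String, Value> {
    let as_number = |v: Value| match v {
        Value::String(s) => s.parse::<f64>().map(Value::from).unwrap_or(Value::String(s)),
        other => other,
    };
    if let (Some(lat), Some(lng)) = (record.remove("latitude"), record.remove("longitude")) {
        record.insert("_geo".to_string(), json!({ "lat": as_number(lat), "lng": as_number(lng) }));
    }
    record
}
```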

Download the smol-all-countries dataset.