Commit Graph

2365 Commits

Author SHA1 Message Date
bors[bot]
2fdf520271
Merge #514
514: Stop flattening every field r=Kerollmops a=irevoire

We need to flatten a document when:
* the primary key contains a `.`, or
* some fields need to be flattened.

Instead of flattening the whole object, and thus creating a lot of allocations with the `flatten_serde_json` crate, we now generate a minimal sub-object containing only the fields that need to be flattened.
That should create fewer allocations and thus index faster.
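
Here is a minimal sketch of the idea using plain `serde_json`; the helper names (`needs_flattening`, `flatten_required_fields`), the nested/array rule, and the stand-in flattening closure are illustrative, not the actual milli implementation:

```rust
use serde_json::{Map, Value};

/// Illustrative rule: a field needs flattening when its value is
/// a nested object or an array.
fn needs_flattening(value: &Value) -> bool {
    matches!(value, Value::Object(_) | Value::Array(_))
}

/// Flatten only the fields that need it, leaving the flat ones untouched.
/// `flatten` stands in for the whole-object flattening routine.
fn flatten_required_fields(
    document: &Map<String, Value>,
    flatten: impl Fn(&Map<String, Value>) -> Map<String, Value>,
) -> Map<String, Value> {
    // Gather only the nested fields into a minimal sub-object.
    let sub_object: Map<String, Value> = document
        .iter()
        .filter(|(_, value)| needs_flattening(value))
        .map(|(key, value)| (key.clone(), value.clone()))
        .collect();

    // Keep the already-flat fields as they are...
    let mut result: Map<String, Value> = document
        .iter()
        .filter(|(_, value)| !needs_flattening(value))
        .map(|(key, value)| (key.clone(), value.clone()))
        .collect();

    // ...and pay the flattening cost only for the small sub-object.
    result.extend(flatten(&sub_object));
    result
}

fn main() {
    let doc: Map<String, Value> =
        serde_json::from_str(r#"{ "id": 1, "title": "wolf", "geo": { "lat": 12.0, "lng": 42.0 } }"#).unwrap();
    // Only `geo` is handed to the flattening routine here; the closure is a
    // placeholder for the real flattening function.
    let flat = flatten_required_fields(&doc, |object| object.clone());
    assert_eq!(flat.len(), 3);
}
```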

---------

```
group                                                             indexing_main_e1e362fa                 indexing_stop-flattening-every-field_40d1bd6b
-----                                                             ----------------------                 ---------------------------------------------
indexing/Indexing geo_point                                       1.99      23.7±0.23s        ? ?/sec    1.00      11.9±0.21s        ? ?/sec
indexing/Indexing movies in three batches                         1.00      18.2±0.24s        ? ?/sec    1.01      18.3±0.29s        ? ?/sec
indexing/Indexing movies with default settings                    1.00      17.5±0.09s        ? ?/sec    1.01      17.7±0.26s        ? ?/sec
indexing/Indexing songs in three batches with default settings    1.00      64.8±0.47s        ? ?/sec    1.00      65.1±0.49s        ? ?/sec
indexing/Indexing songs with default settings                     1.00      54.9±0.99s        ? ?/sec    1.01      55.7±1.34s        ? ?/sec
indexing/Indexing songs without any facets                        1.00      50.6±0.62s        ? ?/sec    1.01      50.9±1.05s        ? ?/sec
indexing/Indexing songs without faceted numbers                   1.00      54.0±1.14s        ? ?/sec    1.01      54.7±1.13s        ? ?/sec
indexing/Indexing wiki                                            1.00     996.2±8.54s        ? ?/sec    1.02   1021.1±30.63s        ? ?/sec
indexing/Indexing wiki in three batches                           1.00    1136.8±9.72s        ? ?/sec    1.00    1138.6±6.59s        ? ?/sec
```

So basically everything slowed down a little bit, except the dataset with a nested field, which got twice as fast.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-26 11:50:33 +00:00
Tamo
f19d2dc548
Only flatten the required fields
apply review comments

Co-authored-by: Kerollmops <kero@meilisearch.com>
2022-04-26 12:33:46 +02:00
bors[bot]
5adeac8047
Merge #516
516: Fix the indexing fuzzer r=irevoire a=irevoire



Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-26 08:35:03 +00:00
Clémentine Urquizar
7cb7643565
Make nightly CI run every week
Update CI

Fix CI
2022-04-25 18:52:27 +02:00
Clémentine Urquizar
d138b3c704
Update version 2022-04-25 18:43:46 +02:00
Tamo
fa6f495662
fix the indexing fuzzer 2022-04-25 18:32:06 +02:00
bors[bot]
8cc86d5a8d
Merge #515
515: Improve the README r=curquiza a=Kerollmops

This PR closes #512 by adding more content to the README. We listed all of the subcrates of the repository, changed their descriptions, and added a simple usage example to the README.

Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-04-25 16:15:12 +00:00
Clémentine Urquizar - curqui
5e562ffecf
Update README.md 2022-04-25 18:14:43 +02:00
Clémentine Urquizar - curqui
2277172f9c
Update README.md 2022-04-25 18:14:39 +02:00
Clémentine Urquizar - curqui
2db3d60259
Update README.md 2022-04-25 18:14:35 +02:00
Kerollmops
7e19bf1c0e
Add an example usage of the library in the README 2022-04-25 17:25:46 +02:00
Kerollmops
fb192aaa9f
Update the list of milli's subcrates 2022-04-25 15:55:38 +02:00
bors[bot]
e1e362fa43
Merge #509
509: Remove pr_status from bors settings r=Kerollmops a=curquiza

Because of multiple issues we had with bors:
https://github.com/bors-ng/bors-ng/issues/1492

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-25 11:45:37 +00:00
Clémentine Urquizar
08753d002a
Remove pr_status from bors settings 2022-04-25 13:39:45 +02:00
Clément Renault
8d15ae37a1
Merge pull request #503 from meilisearch/improve-flatten-fuzzer
Improve the fuzzer of the flatten crate
2022-04-25 13:38:43 +02:00
Clément Renault
3e53791de3
Merge pull request #508 from meilisearch/contributing
First version of new CONTRIBUTING.md
2022-04-25 13:36:41 +02:00
bors[bot]
8010eca9c7
Merge #505
505: normalize exact words r=curquiza a=MarinPostma

Normalize the exact words, as specified in the specification.


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-25 09:35:32 +00:00
Clémentine Urquizar
dc0d4addd9
First version of new CONTRIBUTING.md 2022-04-21 19:02:22 +02:00
Clément Renault
71414630fc
Merge pull request #504 from meilisearch/test-long-words
Add a test to make sure that long words are handled
2022-04-21 16:06:13 +02:00
ad hoc
2e0089d5ff
normalize exact words 2022-04-21 15:38:40 +02:00
ad hoc
3a2451fcba
add test normalize exact words 2022-04-21 13:52:09 +02:00
Clément Renault
eb5830aa40
Add a test to make sure that long words are handled 2022-04-21 13:45:28 +02:00
Tamo
d81a3f4a74
improve the fuzzer of the flatten crate 2022-04-20 16:11:23 +02:00
bors[bot]
c7d0097c97
Merge #498
498: Get rid of the threshold when comparing benchmarks r=curquiza a=irevoire

It just hides things

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-19 14:04:11 +00:00
Tamo
152a10344c
Get rid of the threshold when comparing benchmarks
It just hides things
2022-04-19 15:39:58 +02:00
bors[bot]
04eb32e539
Merge #499
499: fix min-word-len-for-typo not reset properly r=Kerollmops a=MarinPostma

fix min word len for typo not resetting properly, as reported in https://github.com/meilisearch/meilisearch/issues/2330


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-19 13:22:19 +00:00
ad hoc
8b14090927
fix min-word-len-for-typo not reset properly 2022-04-19 15:20:16 +02:00
bors[bot]
ea4bb9402f
Merge #483
483: Enhance matching words r=Kerollmops a=ManyTheFish

# Summary

Enhance the milli word-matcher, making it handle match computing and cropping.

# Implementation

## Computing best matches for cropping

Before, we considered the first match in the attribute to be the best one. This was accurate when only one word was searched, but missed the target when more than one word was searched.

Now we are searching for the best matches interval to crop around (a small sketch of this scoring follows the list below). The chosen interval is the one that:
1) has the highest count of unique matches
> for example, if we have a query `split the world`, then the interval `the split the split the` has 5 matches but only 2 unique matches (1 for `split` and 1 for `the`), whereas the interval `split of the world` has 3 matches and 3 unique matches. So the interval `split of the world` is considered better.
2) has the minimum distance between matches
> for example, if we have a query `split the world`, then the interval `split of the world` has a distance of 3 (2 between `split` and `the`, and 1 between `the` and `world`), whereas the interval `split the world` has a distance of 2. So the interval `split the world` is considered better.
3) has the highest count of ordered matches
> for example, if we have a query `split the world`, then the interval `the world split` has 2 ordered words, whereas the interval `split the world` has 3. So the interval `split the world` is considered better.

## Cropping around the best matches interval

Before, we were cropping around the interval without checking the context.

Now we are cropping around words that are in the same context as the matching words.
This means that we prefer to keep words that are farther from the matching words but in the same phrase, over words that are nearer but separated by a dot (a small sketch follows the example below).

> For instance, for the matching word `Split` the text:
`Natalie risk her future. Split The World is a book written by Emily Henry. I never read it.`
will be cropped like:
`…. Split The World is a book written by Emily Henry. …`
and  not like:
`Natalie risk her future. Split The World is a book …`
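
Below is a toy sketch of the "never cross a dot" part of that idea; the real matcher also limits the crop size around the matches, so this is not the actual implementation:

```rust
/// Illustrative helper: crop to the dot-separated phrase that contains
/// `word`, so the crop never crosses a sentence boundary.
fn crop_in_same_sentence(text: &str, word: &str) -> Option<String> {
    let sentences: Vec<&str> = text
        .split('.')
        .map(str::trim)
        .filter(|sentence| !sentence.is_empty())
        .collect();
    let index = sentences.iter().position(|sentence| sentence.contains(word))?;

    // Everything outside the matching phrase is replaced by an ellipsis.
    let prefix = if index > 0 { "… " } else { "" };
    let suffix = if index + 1 < sentences.len() { ". …" } else { "" };
    Some(format!("{}{}{}", prefix, sentences[index], suffix))
}

fn main() {
    let text = "Natalie risk her future. Split The World is a book written by Emily Henry. I never read it.";
    assert_eq!(
        crop_in_same_sentence(text, "Split").as_deref(),
        Some("… Split The World is a book written by Emily Henry. …")
    );
}
```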


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-04-19 11:42:32 +00:00
ManyTheFish
f1115e274f Use Copy impl of FormatOption instead of cloning 2022-04-19 10:35:50 +02:00
Clémentine Urquizar - curqui
a68e3a79fb
Merge pull request #497 from meilisearch/v0.26.1
Update version for the next release (v0.26.1)
2022-04-14 11:53:31 +02:00
Clémentine Urquizar
8d630a6f62
Update version for the next release (v0.26.1) 2022-04-14 11:44:06 +02:00
Clémentine Urquizar - curqui
d362278a41
Merge pull request #494 from meilisearch/flatten-what-is-needed
Only flatten the required objects
2022-04-14 11:43:28 +02:00
Tamo
00f78d6b5a
Apply code suggestions
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-04-14 11:14:08 +02:00
Tamo
399fba16bb
only flatten an object if it's nested 2022-04-14 11:14:08 +02:00
Tamo
c2469b6765
create the json-depth-checker crate 2022-04-14 11:14:08 +02:00
bors[bot]
7791ef90e7
Merge #493
493: Use smartstring to store the external id in our hashmap r=Kerollmops a=irevoire

We need to store all the external ids (primary keys) in a hashmap
associated with their internal ids.
The smartstring crate removes heap allocations / reduces memory usage and should
improve the cache locality.

I ran the benchmarks to measure the impact of this PR on the indexing time.
I think we should merge it whatever happens, though, because it'll decrease the memory consumption.
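
For reference, a minimal sketch of the data structure, assuming the `smartstring` crate as a dependency (the alias and the `DocumentId` type here are illustrative):

```rust
use std::collections::HashMap;

// `smartstring::alias::String` is `SmartString<LazyCompact>`, which stores
// short strings inline instead of on the heap.
use smartstring::alias::String as SmartString;

// Illustrative internal id type.
type DocumentId = u32;

fn main() {
    let mut external_to_internal: HashMap<SmartString, DocumentId> = HashMap::new();

    // Short external ids fit inline in the SmartString, so inserting them
    // avoids a heap allocation per key.
    external_to_internal.insert(SmartString::from("doc-42"), 0);
    external_to_internal.insert(SmartString::from("doc-43"), 1);

    assert_eq!(external_to_internal.get(&SmartString::from("doc-42")), Some(&0));
}
```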

---------

This improves the performance only really slightly, but it improves the memory usage, thus it should be merged.
```
group                                                             indexing_main_6b073738                 indexing_use-smartsring_3f343511
-----                                                             ----------------------                 --------------------------------
indexing/Indexing geo_point                                       1.02      25.2±0.20s        ? ?/sec    1.00      24.8±0.13s        ? ?/sec
indexing/Indexing movies in three batches                         1.00      18.2±0.10s        ? ?/sec    1.00      18.2±0.23s        ? ?/sec
indexing/Indexing movies with default settings                    1.00      17.5±0.09s        ? ?/sec    1.01      17.7±0.11s        ? ?/sec
indexing/Indexing songs in three batches with default settings    1.00      68.3±1.01s        ? ?/sec    1.00      68.0±0.95s        ? ?/sec
indexing/Indexing songs with default settings                     1.00      63.2±0.78s        ? ?/sec    1.00      63.0±0.58s        ? ?/sec
indexing/Indexing songs without any facets                        1.02      59.6±1.00s        ? ?/sec    1.00      58.5±1.03s        ? ?/sec
indexing/Indexing songs without faceted numbers                   1.00      62.8±0.38s        ? ?/sec    1.00      62.6±1.02s        ? ?/sec
indexing/Indexing wiki                                            1.01   1009.2±25.25s        ? ?/sec    1.00    998.1±11.27s        ? ?/sec
indexing/Indexing wiki in three batches                           1.01    1142.0±9.97s        ? ?/sec    1.00   1134.4±11.21s        ? ?/sec
```

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-13 20:28:28 +00:00
Tamo
ee64f4a936
Use smartstring to store the external id in our hashmap
We need to store all the external ids (primary keys) in a hashmap
associated with their internal ids.
The smartstring crate removes heap allocations / reduces memory usage and should
improve the cache locality.
2022-04-13 21:22:07 +02:00
bors[bot]
456887a54a
Merge #496
496: Improve the performances of the flattening subcrate r=irevoire a=Kerollmops

This PR adds some benchmarks to the _flatten-serde-json_ crate. This crate is responsible for transforming the original documents into flat versions that the engine can understand. It can probably be sped up, which is why I added benchmarks to it.

I made some interesting performance improvements when I replaced the `json!` macro calls.
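
For illustration, here is the kind of substitution this refers to: the same object built once with the `json!` macro and once with direct `Map` method calls (a toy example, not the actual diff):

```rust
use serde_json::{json, Map, Value};

fn main() {
    // One way: build the value through the `json!` macro.
    let with_macro: Value = json!({ "a.b": 1, "a.c": "x" });

    // The other way: build the Map directly with method calls; this is the
    // kind of change made in the flattening hot path.
    let mut map = Map::new();
    map.insert("a.b".to_string(), Value::from(1));
    map.insert("a.c".to_string(), Value::from("x"));
    let direct = Value::Object(map);

    // Both ways produce the same object.
    assert_eq!(with_macro, direct);
}
```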

```
flatten/simple          time:   [452.44 ns 453.31 ns 454.18 ns]
                        change: [-15.036% -14.751% -14.473%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

flatten/complex         time:   [1.0101 us 1.0131 us 1.0160 us]
                        change: [-18.001% -17.775% -17.536%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  5 (5.00%) high mild
  1 (1.00%) high severe
```

---

_I removed this particular commit from this PR._ The reason is that the two other commits already gave this PR enough impact to be merged. We will continue to explore where we can gain performance later.

But when I changed the flattening function to accept an owned version of the objects, we lost a lot of performance. Yes, I rewrote the benchmarks (locally) to clone the input object (and measured both the previous and the new version with the cloning benchmarks). Maybe cloning the benchmark inputs is not the right thing to do...
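
For context, a tiny sketch of the two signatures discussed above; the bodies are placeholders, only the ownership of the argument matters here:

```rust
use serde_json::{Map, Value};

// Borrowed input: the caller keeps ownership of its document.
fn flatten_borrowed(object: &Map<String, Value>) -> Map<String, Value> {
    object.clone()
}

// Owned input: a caller that still needs its document afterwards has to
// clone the whole object up front just to make the call.
fn flatten_owned(object: Map<String, Value>) -> Map<String, Value> {
    object
}

fn main() {
    let doc: Map<String, Value> = serde_json::from_str(r#"{"a": {"b": 1}}"#).unwrap();
    let _flat = flatten_borrowed(&doc); // `doc` is still usable afterwards
    let _flat = flatten_owned(doc.clone()); // extra caller-side clone
    assert_eq!(doc.len(), 1);
}
```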

```
flatten/simple          time:   [746.46 ns 749.59 ns 752.70 ns]
                        change: [+40.082% +40.714% +41.347%] (p = 0.00 < 0.05)
                        Performance has regressed.

flatten/complex         time:   [1.7311 us 1.7342 us 1.7368 us]
                        change: [+40.976% +41.398% +41.807%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) low mild
```

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-04-13 11:14:29 +00:00
Kerollmops
b3cec1a383
Prefer using direct method calls instead of using the json macros 2022-04-13 13:12:57 +02:00
Kerollmops
436d2032c4
Add benchmarks to the flatten-serde-json subcrate 2022-04-13 13:12:57 +02:00
bors[bot]
3828635fb2
Merge #489
489: fix distinct count bug r=curquiza a=MarinPostma

fix https://github.com/meilisearch/meilisearch/issues/2152

I think the issue was that we didn't remove the excluded candidates from the initial candidates when returning the candidates with the search result.
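
For illustration, the set operation involved looks like this with the `roaring` crate (variable names and values are made up):

```rust
use roaring::RoaringBitmap;

fn main() {
    // Candidates and excluded documents are sets of internal document ids,
    // kept as roaring bitmaps.
    let initial_candidates: RoaringBitmap = [1u32, 2, 3, 4, 5].into_iter().collect();
    let excluded_candidates: RoaringBitmap = [2u32, 4].into_iter().collect();

    // Removing the excluded candidates from the initial candidates is a set
    // difference, so excluded documents no longer inflate the count.
    let returned_candidates = &initial_candidates - &excluded_candidates;
    assert_eq!(returned_candidates.len(), 3);
}
```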


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-13 10:15:30 +00:00
ad hoc
dda28d7415
exclude excluded candidates from search result candidates 2022-04-13 12:10:35 +02:00
ad hoc
cd83014fff
add test for distinct nb hits 2022-04-13 12:10:35 +02:00
ad hoc
bbb6728d2f
add distinct attributes to cli 2022-04-13 12:10:35 +02:00
bors[bot]
49fbbacafc
Merge #492
492: Add the new `Specify breaking` check to bors.toml r=curquiza a=curquiza

Should prevent this problem: https://github.com/meilisearch/milli/pull/489#issuecomment-1094988060

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-04-13 08:59:40 +00:00
Clémentine Urquizar - curqui
7ad582f39f
Update bors.toml 2022-04-13 10:56:56 +02:00
Clémentine Urquizar - curqui
aa896f0e7a
Update bors.toml 2022-04-13 10:56:56 +02:00
Clémentine Urquizar - curqui
0261a0e3cf
Add the new Specify breaking check to bors.toml 2022-04-13 10:56:55 +02:00
ManyTheFish
5809d3ae0d Add first benchmarks on formatting 2022-04-12 16:31:58 +02:00
ManyTheFish
827cedcd15 Add format option structure 2022-04-12 13:42:14 +02:00