Commit Graph

1756 Commits

Author SHA1 Message Date
Clémentine Urquizar - curqui
2db3d60259
Update README.md 2022-04-25 18:14:35 +02:00
Kerollmops
7e19bf1c0e
Add an example usage of the library in the README 2022-04-25 17:25:46 +02:00
Kerollmops
fb192aaa9f
Update the list of milli's subcrates 2022-04-25 15:55:38 +02:00
bors[bot]
e1e362fa43
Merge #509
509: Remove pr_status from bors settings r=Kerollmops a=curquiza

Because of multiple issues we had with bors.
https://github.com/bors-ng/bors-ng/issues/1492

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-25 11:45:37 +00:00
Clémentine Urquizar
08753d002a
Remove pr_status from bors settings 2022-04-25 13:39:45 +02:00
Clément Renault
8d15ae37a1
Merge pull request #503 from meilisearch/improve-flatten-fuzzer
Improve the fuzzer of the flatten crate
2022-04-25 13:38:43 +02:00
Clément Renault
3e53791de3
Merge pull request #508 from meilisearch/contributing
First version of new CONTRIBUTING.md
2022-04-25 13:36:41 +02:00
bors[bot]
8010eca9c7
Merge #505
505: normalize exact words r=curquiza a=MarinPostma

Normalize the exact words, as specified in the specification.
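
A minimal illustration of the idea (the `normalize` function here is a hypothetical stand-in for the tokenizer's normalization pipeline): the exact words from the settings must be normalized the same way as the indexed words, otherwise they could never match.

```rust
// Hypothetical stand-in for the tokenizer's normalization pipeline.
fn normalize(word: &str) -> String {
    word.to_lowercase() // the real pipeline also handles diacritics, unicode, etc.
}

fn main() {
    let exact_words = ["Paris", "Meilisearch"];
    let normalized: Vec<String> = exact_words.iter().map(|w| normalize(w)).collect();
    assert_eq!(normalized, ["paris", "meilisearch"]);
}
```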


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-25 09:35:32 +00:00
Clémentine Urquizar
dc0d4addd9
First version of new CONTRIBUTING.md 2022-04-21 19:02:22 +02:00
Clément Renault
71414630fc
Merge pull request #504 from meilisearch/test-long-words
Add a test to make sure that long words are handled
2022-04-21 16:06:13 +02:00
ad hoc
2e0089d5ff
normalize exact words 2022-04-21 15:38:40 +02:00
ad hoc
3a2451fcba
add test normalize exact words 2022-04-21 13:52:09 +02:00
Clément Renault
eb5830aa40
Add a test to make sure that long words are handled 2022-04-21 13:45:28 +02:00
Tamo
d81a3f4a74
improve the fuzzer of the flatten crate 2022-04-20 16:11:23 +02:00
bors[bot]
c7d0097c97
Merge #498
498: Get rid of the threshold when comparing benchmarks r=curquiza a=irevoire

It just hides things

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-19 14:04:11 +00:00
Tamo
152a10344c
Get rid of the threshold when comparing benchmarks
It just hides things
2022-04-19 15:39:58 +02:00
bors[bot]
04eb32e539
Merge #499
499: fix min-word-len-for-typo not reset properly r=Kerollmops a=MarinPostma

Fix min-word-len-for-typo not resetting properly, as reported in https://github.com/meilisearch/meilisearch/issues/2330


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-19 13:22:19 +00:00
ad hoc
8b14090927
fix min-word-len-for-typo not reset properly 2022-04-19 15:20:16 +02:00
bors[bot]
ea4bb9402f
Merge #483
483: Enhance matching words r=Kerollmops a=ManyTheFish

# Summary

Enhance the milli word-matcher, making it handle match computing and cropping.

# Implementation

## Computing best matches for cropping

Previously, we considered the first match in the attribute to be the best one. This was accurate when only one word was searched, but missed the target when several words were searched.

Now we are searching for the best matches interval to crop around; the chosen interval is the one (see the sketch after this list):
1) that has the highest count of unique matches
> For example, with the query `split the world`, the interval `the split the split the` has 5 matches but only 2 unique matches (1 for `split` and 1 for `the`), whereas the interval `split of the world` has 3 matches and 3 unique matches. So the interval `split of the world` is considered better.
2) that has the minimum distance between matches
> For example, with the query `split the world`, the interval `split of the world` has a distance of 3 (2 between `split` and `the`, and 1 between `the` and `world`), whereas the interval `split the world` has a distance of 2. So the interval `split the world` is considered better.
3) that has the highest count of ordered matches
> For example, with the query `split the world`, the interval `the world split` has 2 ordered words, whereas the interval `split the world` has 3. So the interval `split the world` is considered better.
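
A minimal sketch (all names hypothetical) of how this ranking can be expressed: score every candidate interval on the three criteria and compare lexicographically, maximizing unique and ordered matches while minimizing the distance.

```rust
use std::cmp::Reverse;

// Hypothetical score of a candidate crop interval.
struct IntervalScore {
    unique_matches: usize,  // criterion 1: higher is better
    distance: usize,        // criterion 2: lower is better
    ordered_matches: usize, // criterion 3: higher is better
}

impl IntervalScore {
    /// Returns true if `self` is a better interval to crop around than `other`.
    fn is_better_than(&self, other: &Self) -> bool {
        (self.unique_matches, Reverse(self.distance), self.ordered_matches)
            > (other.unique_matches, Reverse(other.distance), other.ordered_matches)
    }
}

fn main() {
    // From the examples above: `split of the world` vs `split the world`.
    let split_of_the_world = IntervalScore { unique_matches: 3, distance: 3, ordered_matches: 3 };
    let split_the_world = IntervalScore { unique_matches: 3, distance: 2, ordered_matches: 3 };
    assert!(split_the_world.is_better_than(&split_of_the_world));
}
```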

## Cropping around the best matches interval

Previously, we cropped around the interval without checking the context.

Now we crop around words in the same context as the matching words.
This means we prefer to keep words that are farther from the matching words but in the same phrase over words that are nearer but separated by a dot.

> For instance, for the matching word `Split` the text:
`Natalie risk her future. Split The World is a book written by Emily Henry. I never read it.`
will be cropped like:
`…. Split The World is a book written by Emily Henry. …`
and not like:
`Natalie risk her future. Split The World is a book …`


Co-authored-by: ManyTheFish <many@meilisearch.com>
2022-04-19 11:42:32 +00:00
ManyTheFish
f1115e274f Use Copy impl of FormatOption instead of cloning 2022-04-19 10:35:50 +02:00
Clémentine Urquizar - curqui
a68e3a79fb
Merge pull request #497 from meilisearch/v0.26.1
Update version for the next release (v0.26.1)
2022-04-14 11:53:31 +02:00
Clémentine Urquizar
8d630a6f62
Update version for the next release (v0.26.1) 2022-04-14 11:44:06 +02:00
Clémentine Urquizar - curqui
d362278a41
Merge pull request #494 from meilisearch/flatten-what-is-needed
Only flatten the required objects
2022-04-14 11:43:28 +02:00
Tamo
00f78d6b5a
Apply code suggestions
Co-authored-by: Clément Renault <clement@meilisearch.com>
2022-04-14 11:14:08 +02:00
Tamo
399fba16bb
only flatten an object if it's nested 2022-04-14 11:14:08 +02:00
Tamo
c2469b6765
create the json-depth-checker crate 2022-04-14 11:14:08 +02:00
bors[bot]
7791ef90e7
Merge #493
493: Use smartstring to store the external id in our hashmap r=Kerollmops a=irevoire

We need to store all the external ids (primary keys) in a hashmap
associated with their internal ids.
Smartstring removes the heap allocation, reducing memory usage, and
should improve the cache locality.

I ran the benchmarks to measure the impact of this PR on the indexing time.
I think we should merge it whatever happens though, because it'll decrease the memory consumption.
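
As a rough sketch of the idea (names hypothetical): `smartstring`'s `String` alias stores short strings inline, so typical primary keys never hit the heap.

```rust
use std::collections::HashMap;
use smartstring::alias::String as SmartString;

fn main() {
    // Hypothetical mapping from external document ids (primary keys) to
    // internal ids. SmartString keeps strings of up to 23 bytes inline,
    // avoiding one heap allocation per key and helping cache locality
    // when hashing and comparing keys.
    let mut external_to_internal: HashMap<SmartString, u32> = HashMap::new();
    external_to_internal.insert(SmartString::from("movie-40021"), 0);
    external_to_internal.insert(SmartString::from("movie-40022"), 1);

    // SmartString borrows as &str, so lookups work like with std::string::String.
    assert_eq!(external_to_internal.get("movie-40021"), Some(&0));
}
```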

---------

This improves the performances only really sliiiiiightly, but it improves the memory usage, thus it should be merged.
```
group                                                             indexing_main_6b073738                 indexing_use-smartsring_3f343511
-----                                                             ----------------------                 --------------------------------
indexing/Indexing geo_point                                       1.02      25.2±0.20s        ? ?/sec    1.00      24.8±0.13s        ? ?/sec
indexing/Indexing movies in three batches                         1.00      18.2±0.10s        ? ?/sec    1.00      18.2±0.23s        ? ?/sec
indexing/Indexing movies with default settings                    1.00      17.5±0.09s        ? ?/sec    1.01      17.7±0.11s        ? ?/sec
indexing/Indexing songs in three batches with default settings    1.00      68.3±1.01s        ? ?/sec    1.00      68.0±0.95s        ? ?/sec
indexing/Indexing songs with default settings                     1.00      63.2±0.78s        ? ?/sec    1.00      63.0±0.58s        ? ?/sec
indexing/Indexing songs without any facets                        1.02      59.6±1.00s        ? ?/sec    1.00      58.5±1.03s        ? ?/sec
indexing/Indexing songs without faceted numbers                   1.00      62.8±0.38s        ? ?/sec    1.00      62.6±1.02s        ? ?/sec
indexing/Indexing wiki                                            1.01   1009.2±25.25s        ? ?/sec    1.00    998.1±11.27s        ? ?/sec
indexing/Indexing wiki in three batches                           1.01    1142.0±9.97s        ? ?/sec    1.00   1134.4±11.21s        ? ?/sec
```

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-13 20:28:28 +00:00
Tamo
ee64f4a936
Use smartstring to store the external id in our hashmap
We need to store all the external ids (primary keys) in a hashmap
associated with their internal ids during indexing.
Smartstring removes the heap allocation, reducing memory usage, and
should improve the cache locality.
2022-04-13 21:22:07 +02:00
bors[bot]
456887a54a
Merge #496
496: Improve the performance of the flattening subcrate r=irevoire a=Kerollmops

This PR adds some benchmarks to the _flatten-serde-json_ crate. This crate is responsible for transforming the original documents into flat versions that the engine can understand. It can probably be sped up, and this is why I added benchmarks to it.

I made some interesting performance improvements by replacing the `json!` macro calls.
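
For illustration (not the crate's actual code), the kind of change involved looks roughly like this: building a `serde_json::Value` with direct method calls instead of going through the `json!` macro.

```rust
use serde_json::{json, Map, Value};

fn main() {
    // Via the json! macro: convenient, but every value goes through the
    // macro's generic conversion machinery.
    let with_macro = json!({ "a.b": 1 });

    // Direct method calls: insert into a Map and wrap it, which the
    // flattening hot path can do since it already holds Values.
    let mut object = Map::new();
    object.insert("a.b".to_string(), Value::from(1));
    let direct = Value::Object(object);

    assert_eq!(with_macro, direct);
}
```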

```
flatten/simple          time:   [452.44 ns 453.31 ns 454.18 ns]
                        change: [-15.036% -14.751% -14.473%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

flatten/complex         time:   [1.0101 us 1.0131 us 1.0160 us]
                        change: [-18.001% -17.775% -17.536%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  5 (5.00%) high mild
  1 (1.00%) high severe
```

---

_I removed this particular commit from this PR._ The reason is that the two other commits were enough for this PR to have enough impact and be merged. We will continue to explore where we can gain performance later.

But when I changed the flattening function to accept an owned version of the objects, we lost a lot of performance. Yes, I rewrote the benchmarks (locally) to clone the input object (and measured both the previous and new versions with the cloning benchmarks). Maybe cloning the benchmark inputs is not the right thing to do...
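
For illustration only (the real signatures may differ), the two variants compare roughly like this; with the owned variant, every call site that still needs its object must clone it up front, which is what the regressed benchmarks below measured.

```rust
use serde_json::{Map, Value};

// Borrowed input: the function reads the object and only allocates its output.
fn flatten(object: &Map<String, Value>) -> Map<String, Value> {
    object.clone() // flattening logic elided; clone stands in for the real work
}

// Owned input: consumes the object passed to it.
fn flatten_owned(object: Map<String, Value>) -> Map<String, Value> {
    object // flattening logic elided
}

fn main() {
    let mut object = Map::new();
    object.insert("a".to_string(), Value::from(1));

    let _flat = flatten(&object);              // `object` is still usable afterwards
    let _flat = flatten_owned(object.clone()); // extra clone needed to keep `object`
}
```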

```
flatten/simple          time:   [746.46 ns 749.59 ns 752.70 ns]
                        change: [+40.082% +40.714% +41.347%] (p = 0.00 < 0.05)
                        Performance has regressed.

flatten/complex         time:   [1.7311 us 1.7342 us 1.7368 us]
                        change: [+40.976% +41.398% +41.807%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) low mild
```

Co-authored-by: Kerollmops <clement@meilisearch.com>
2022-04-13 11:14:29 +00:00
Kerollmops
b3cec1a383
Prefer using direct method calls instead of using the json macros 2022-04-13 13:12:57 +02:00
Kerollmops
436d2032c4
Add benchmarks to the flatten-serde-json subcrate 2022-04-13 13:12:57 +02:00
bors[bot]
3828635fb2
Merge #489
489: fix distinct count bug r=curquiza a=MarinPostma

fix https://github.com/meilisearch/meilisearch/issues/2152

I think the issue was that we didn't remove the excluded candidates from the initial candidates when returning the candidates with the search result.
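
A minimal sketch of that idea with the `roaring` bitmaps milli uses for candidate sets (the values are hypothetical): the candidates returned with the search result are the initial ones minus the excluded ones.

```rust
use roaring::RoaringBitmap;

fn main() {
    // Hypothetical document ids: candidates matching the query, and documents
    // excluded by the distinct attribute.
    let initial_candidates: RoaringBitmap = [1u32, 2, 3, 4, 5].into_iter().collect();
    let excluded_candidates: RoaringBitmap = [2u32, 4].into_iter().collect();

    // Without this subtraction, the excluded documents inflate the hit count.
    let candidates = &initial_candidates - &excluded_candidates;
    assert_eq!(candidates.len(), 3);
}
```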


Co-authored-by: ad hoc <postma.marin@protonmail.com>
2022-04-13 10:15:30 +00:00
ad hoc
dda28d7415
exclude excluded candidates from search result candidates 2022-04-13 12:10:35 +02:00
ad hoc
cd83014fff
add test for distinct nb hits 2022-04-13 12:10:35 +02:00
ad hoc
bbb6728d2f
add distinct attributes to cli 2022-04-13 12:10:35 +02:00
bors[bot]
49fbbacafc
Merge #492
492: Add the new `Specify breaking` check to bors.toml r=curquiza a=curquiza

Should prevent this problem: https://github.com/meilisearch/milli/pull/489#issuecomment-1094988060

Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2022-04-13 08:59:40 +00:00
Clémentine Urquizar - curqui
7ad582f39f
Update bors.toml 2022-04-13 10:56:56 +02:00
Clémentine Urquizar - curqui
aa896f0e7a
Update bors.toml 2022-04-13 10:56:56 +02:00
Clémentine Urquizar - curqui
0261a0e3cf
Add the new Specify breaking check to bors.toml 2022-04-13 10:56:55 +02:00
ManyTheFish
5809d3ae0d Add first benchmarks on formatting 2022-04-12 16:31:58 +02:00
ManyTheFish
827cedcd15 Add format option structure 2022-04-12 13:42:14 +02:00
ManyTheFish
011f8210ed Make compute_matches more rust idiomatic 2022-04-12 10:19:02 +02:00
bors[bot]
6b0737384b
Merge #491
491: remove the unused key warning r=curquiza a=irevoire

When I copy-pasted my flatten crate, I forgot to remove the key used to publish the package, and that threw a warning.

Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-11 16:55:25 +00:00
Tamo
e153418b8a
remove the unused key warning 2022-04-11 14:52:41 +02:00
bors[bot]
c8306616e0
Merge #490
490: Enforce labelling for the PRs r=curquiza a=curquiza

- Enforce one of the following labels to make the CI pass: `no breaking`, `DB breaking`, `API breaking` (milli API, not the Meilisearch API of course), or `skip changelog`. This new CI is now `Required` in the GitHub settings for merging a PR.
- Adapt the release drafter to these new labels
- Rename `skip-changelog` to `skip changelog` according to the new label name

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-11 08:24:23 +00:00
Clémentine Urquizar
9383629d13
Enforce labelling for the PRs 2022-04-09 23:47:06 +02:00
ManyTheFish
a16de5de84 Simplify format and remove intermediate function 2022-04-08 11:20:41 +02:00
ManyTheFish
a769e09dfa Make token_crop_bounds more rust idiomatic 2022-04-07 20:15:14 +02:00
bors[bot]
9ac2fd1c37
Merge #487
487: Update version (v0.26.0) r=Kerollmops a=curquiza

breaking because of #458 

Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
2022-04-07 17:10:24 +00:00
bors[bot]
80ae020bee
Merge #458
458: Nested fields r=Kerollmops a=irevoire

For the following document:
```json
{
  "id": 1,
  "person": {
    "name": "tamo",
    "age": 25,
  }
}
```
Suppose the user sets `person` as a filterable attribute. We need to store `person` in the filterable attributes, _obviously_. But we also need to keep track of `person.name` and `person.age` somewhere.
That's where I changed the logic of the engine a little bit.

Currently, we have a function called `faceted_field` that returns the union of the filterable and sortable attributes.
I renamed this function to `user_defined_faceted_field`. And now, when we finish indexing documents, we look at all the fields and see if they "match" a `user_defined_faceted_field`.
So in our case:
- does `id` match `person`: 🔴 
- does `person.name` match `person`: 🟢 
- does `person.age` match `person`: 🟢 

And thus, we insert the following faceted fields in the database: `person, person.name, person.age`.

The good thing about that solution is that we generate everything during the indexing phase; then, during the search, we can access our fields without recomputing too much globbing.
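
A minimal sketch of that matching rule (the function name is hypothetical): a field matches a user-defined faceted field when it is that field itself or one of its nested sub-fields.

```rust
// Hypothetical helper: does `field` match the user-defined faceted field?
fn matches_faceted_field(field: &str, user_defined: &str) -> bool {
    field == user_defined
        || field
            .strip_prefix(user_defined)
            .map_or(false, |rest| rest.starts_with('.'))
}

fn main() {
    assert!(!matches_faceted_field("id", "person"));         // 🔴
    assert!(matches_faceted_field("person.name", "person")); // 🟢
    assert!(matches_faceted_field("person.age", "person"));  // 🟢
}
```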

-----

Now the bad thing is that I had to create a new db.

And if it were only one db, that would be OK, but actually I need to do the same for the:
- Displayed attributes
- Attributes to retrieve
- Attributes to highlight
- Attribute to crop

`@Kerollmops` 
Do you think there is a better way to do it?
Apart from all the code, could we run into problems because we have too many dbs?

Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
2022-04-07 16:26:09 +00:00