When a document deletion occurs, instead of deleting the document, we mark it as deleted
in the new “soft deleted” bitmap. It is then excluded from search results and from all the other
endpoints.
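For illustration, here is a minimal sketch of that idea using the `roaring` crate; the `Index` struct and the method names are hypothetical stand-ins, not milli's actual types:

```rust
use roaring::RoaringBitmap;

struct Index {
    /// Every document id currently stored in the index.
    documents: RoaringBitmap,
    /// Document ids that were "deleted" but whose data is still on disk.
    soft_deleted: RoaringBitmap,
}

impl Index {
    /// Deleting a document only flips a bit: no data is rewritten.
    fn delete_document(&mut self, docid: u32) {
        self.soft_deleted.insert(docid);
    }

    /// Every read path (search, documents routes, ...) filters the
    /// soft-deleted ids out, so the documents are invisible everywhere.
    fn alive_documents(&self) -> RoaringBitmap {
        &self.documents - &self.soft_deleted
    }
}

fn main() {
    let mut index = Index {
        documents: (0..5).collect(),
        soft_deleted: RoaringBitmap::new(),
    };
    index.delete_document(2);
    assert!(!index.alive_documents().contains(2));
}
```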
564: Rename the limitedTo parameter to maxTotalHits r=curquiza a=Kerollmops
This PR is related to https://github.com/meilisearch/meilisearch/issues/2542; it renames the `limitedTo` parameter to `maxTotalHits`.
Co-authored-by: Kerollmops <clement@meilisearch.com>
563: Improve the `estimatedNbHits` when a `distinctAttribute` is specified r=irevoire a=Kerollmops
This PR is related to https://github.com/meilisearch/meilisearch/issues/2532 but doesn't fix it entirely. It improves the situation by computing the excluded documents (the ones with an already-seen distinct value) before stopping the loop; I think it was a mistake and it should always have been this way.
The reason it doesn't fix the issue is that Meilisearch is deliberately lazy, to be sure not to compute too many things and take too much time to answer. When we deduplicate the documents by their distinct value, we must do it on the fly: every time we see a new document, we check that its distinct value doesn't collide with that of an already returned document.
The reason we can see the correct result when enough documents are fetched is that we were lucky enough to see all the different distinct values present in the dataset; all the deduplication was already done, so no further document could be returned.
If we wanted a correct `estimatedNbHits` every time, we would have to do a pass over the whole set of possible distinct values for the distinct attribute and compute a big intersection, which could cost a lot of CPU cycles.
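As a rough sketch of the on-the-fly deduplication described above (the types are simplified stand-ins, not milli's internals):

```rust
use std::collections::HashSet;

struct Hit {
    docid: u32,
    distinct_value: String,
}

/// Walk the candidates in ranking order, skip any document whose distinct
/// value was already returned, and count the excluded ones so the estimate
/// can account for them before the loop stops at `limit`.
fn search_with_distinct(candidates: &[Hit], limit: usize) -> (Vec<u32>, usize) {
    let mut seen = HashSet::new();
    let mut results = Vec::new();
    let mut excluded = 0;
    for hit in candidates {
        // `insert` returns false when the distinct value was already seen.
        if !seen.insert(hit.distinct_value.clone()) {
            excluded += 1;
            continue;
        }
        results.push(hit.docid);
        if results.len() == limit {
            break;
        }
    }
    (results, excluded)
}
```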
Co-authored-by: Kerollmops <clement@meilisearch.com>
552: Fix escaped quotes in filter r=Kerollmops a=irevoire
Will fix https://github.com/meilisearch/meilisearch/issues/2380
The issue was that, in the evaluation of the filter, I was using the `Deref` implementation instead of calling the `value` method of my token.
To avoid the problem happening again, I removed the `Deref` implementation; now you need to call either the `lexeme` or the `value` method and can't rely on a "default" implementation to get a string out of a token.
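A sketch of the idea: with no `Deref<Target = str>` on the token, callers must choose explicitly between the raw slice and the unescaped value. The names mirror the description above but the type is simplified:

```rust
struct Token<'a> {
    raw: &'a str,
}

impl<'a> Token<'a> {
    /// The exact slice of the input, escape sequences included.
    fn lexeme(&self) -> &str {
        self.raw
    }

    /// The semantic value, with escaped quotes resolved.
    fn value(&self) -> String {
        self.raw.replace("\\\"", "\"").replace("\\'", "'")
    }
}

fn main() {
    let token = Token { raw: r#"hello \"world\""# };
    assert_eq!(token.lexeme(), r#"hello \"world\""#);
    assert_eq!(token.value(), r#"hello "world""#);
}
```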
Co-authored-by: Tamo <tamo@meilisearch.com>
547: Update version for next release (v0.29.1) r=Kerollmops a=curquiza
A new milli version will be released once this PR is merged https://github.com/meilisearch/milli/pull/543
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
541: Update version for next release (v0.29.0) r=ManyTheFish a=curquiza
We need to update the version since #540 was merged and is breaking.
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
535: Reintroduce the max values by facet limit r=ManyTheFish a=Kerollmops
This PR reintroduces the max values by facet limit; it is related to https://github.com/meilisearch/meilisearch/issues/2349.
~I would like some help deciding whether I should keep the default of 100 max values in milli and set up the `FacetDistribution` settings in Meilisearch to use 1000 as the new value; I expose `max_values_by_facet` for this purpose.~
I changed the default value to 1000 and the max to 10000; thank you `@ManyTheFish` for the help!
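A hypothetical sketch of how such a limit caps the number of values returned per facet; the constant mirrors the 1000 default mentioned above, but the real milli API differs:

```rust
use std::collections::BTreeMap;

const DEFAULT_VALUES_PER_FACET: usize = 1000;

struct FacetDistribution {
    max_values_per_facet: usize,
}

impl FacetDistribution {
    fn new() -> Self {
        Self { max_values_per_facet: DEFAULT_VALUES_PER_FACET }
    }

    /// Keep only the first `max_values_per_facet` values of every facet,
    /// so a facet with millions of distinct values stays cheap to return.
    fn truncate(&self, facets: &mut BTreeMap<String, Vec<(String, u64)>>) {
        for values in facets.values_mut() {
            values.truncate(self.max_values_per_facet);
        }
    }
}
```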
Co-authored-by: Kerollmops <clement@meilisearch.com>
538: speedup exact words r=Kerollmops a=MarinPostma
This PR makes `exact_words` return an `Option` instead of an empty set, since set creation is costly, as noticed by `@Kerollmops`.
I was not convinced that this was the cause of all the performance drop we measured, and then realized that the methods that initialized it were called recursively, which caused initialization times to add up. While the first fix solves the issue when exact words are not used, using exact words remained way more expensive than it should be. To address this, the exact words are cached in the `Context`, so they are only initialized once.
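A simplified sketch of the two fixes described above: return `None` instead of building an empty set, and compute the set only once per `Context`. The types here are stand-ins, not milli's:

```rust
use std::cell::OnceCell;
use std::collections::BTreeSet;

struct Context {
    raw_exact_words: Vec<String>,
    exact_words_cache: OnceCell<Option<BTreeSet<String>>>,
}

impl Context {
    /// Returns `None` when there are no exact words: callers skip the
    /// lookup entirely instead of probing a set rebuilt on every call,
    /// and the cache guarantees the set is built at most once.
    fn exact_words(&self) -> Option<&BTreeSet<String>> {
        self.exact_words_cache
            .get_or_init(|| {
                if self.raw_exact_words.is_empty() {
                    None
                } else {
                    Some(self.raw_exact_words.iter().cloned().collect())
                }
            })
            .as_ref()
    }
}
```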
Co-authored-by: ad hoc <postma.marin@protonmail.com>
525: Simplify the error creation with thiserror r=irevoire a=irevoire
I introduced [`thiserror`](https://docs.rs/thiserror/latest/thiserror/) to implement all the `Display` traits and most of the `impl From<xxx> for yyy` conversions in far fewer lines.
And then I introduced a cute macro to implement the transitive `impl From<X> for Z where Y: From<X>, Z: From<Y>` conversions more easily.
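A minimal sketch of the `thiserror` pattern described above; the error variants are invented for illustration:

```rust
use thiserror::Error;

// `#[error(...)]` derives `Display` and `#[from]` derives the
// `impl From<std::io::Error> for InternalError`, so neither is hand-written.
#[derive(Debug, Error)]
enum InternalError {
    #[error("I/O error: {0}")]
    Io(#[from] std::io::Error),
    #[error("invalid document id: {0}")]
    InvalidDocumentId(String),
}

#[derive(Debug, Error)]
enum Error {
    #[error(transparent)]
    Internal(#[from] InternalError),
}

// The transitive conversion the macro generates by hand here: anything
// convertible into `InternalError` also converts into `Error` through it.
fn read(path: &str) -> Result<String, Error> {
    let content = std::fs::read_to_string(path).map_err(InternalError::from)?;
    Ok(content)
}
```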
Co-authored-by: Tamo <tamo@meilisearch.com>
523: Improve geosearch error messages r=irevoire a=irevoire
This PR improves the geosearch error messages (#488).
It also tries to parse the string as specified in https://github.com/meilisearch/meilisearch/issues/2354.
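A hedged sketch of parsing a `lat,lng` string with explicit error messages; the exact format and wording in the linked issue may differ:

```rust
fn parse_geo_point(s: &str) -> Result<(f64, f64), String> {
    // Expect exactly one comma separating the latitude and the longitude.
    let (lat, lng) = s
        .split_once(',')
        .ok_or_else(|| format!("`{s}` is not a valid `lat,lng` pair"))?;
    let lat: f64 = lat
        .trim()
        .parse()
        .map_err(|_| format!("`{lat}` is not a valid latitude"))?;
    let lng: f64 = lng
        .trim()
        .parse()
        .map_err(|_| format!("`{lng}` is not a valid longitude"))?;
    // Reject coordinates outside the valid WGS84 ranges.
    if !(-90.0..=90.0).contains(&lat) {
        return Err(format!("latitude `{lat}` must be between -90 and 90"));
    }
    if !(-180.0..=180.0).contains(&lng) {
        return Err(format!("longitude `{lng}` must be between -180 and 180"));
    }
    Ok((lat, lng))
}
```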
Co-authored-by: Tamo <tamo@meilisearch.com>
520: fix mistake in Settings initialization r=irevoire a=MarinPostma
Fix settings not being correctly initialized, and add a test to make sure they are in the future.
fix https://github.com/meilisearch/meilisearch/issues/2358
Co-authored-by: ad hoc <postma.marin@protonmail.com>
518: Return facets even when there is no value associated to it r=Kerollmops a=Kerollmops
This PR is related to https://github.com/meilisearch/meilisearch/issues/2352 and should fix the issue when Meilisearch is up-to-date with this PR.
Co-authored-by: Kerollmops <clement@meilisearch.com>
511: Update version in every workspace r=curquiza a=curquiza
Checked with `@Kerollmops`
- Update the version in every workspace (the current version is v0.27.0, but I forgot to update it for the previous release)
- Add `publish = false` except in the `milli` workspace.
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
514: Stop flattening every field r=Kerollmops a=irevoire
When we need to flatten a document:
* The primary key contains a `.`.
* Some fields need to be flattened
Instead of flattening the whole object and thus creating a lot of allocations with the `serde_json_flatten_crate`, we instead generate a minimal sub-object containing only the fields that need to be flattened.
That should create fewer allocations and thus index faster.
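A sketch of the "minimal sub-object" idea under simplified assumptions: only the fields that actually need flattening are copied into a temporary object, flattened, then merged back. The `flatten` helper below is a stand-in for the real flattening crate:

```rust
use serde_json::{Map, Value};

/// Hypothetical check: a field needs flattening when its value is nested.
fn needs_flattening(value: &Value) -> bool {
    matches!(value, Value::Object(_) | Value::Array(_))
}

fn flatten_minimally(document: &Map<String, Value>) -> Map<String, Value> {
    // Copy only the nested fields instead of cloning the whole document.
    let mut sub_object = Map::new();
    let mut result = Map::new();
    for (key, value) in document {
        if needs_flattening(value) {
            sub_object.insert(key.clone(), value.clone());
        } else {
            result.insert(key.clone(), value.clone());
        }
    }
    // Flatten the small sub-object and merge the flat keys back.
    result.extend(flatten(&sub_object));
    result
}

/// Stand-in flattener: turns `{"a": {"b": 1}}` into `{"a.b": 1}`.
fn flatten(object: &Map<String, Value>) -> Map<String, Value> {
    let mut flat = Map::new();
    for (key, value) in object {
        match value {
            Value::Object(inner) => {
                for (inner_key, inner_value) in flatten(inner) {
                    flat.insert(format!("{key}.{inner_key}"), inner_value);
                }
            }
            _ => {
                flat.insert(key.clone(), value.clone());
            }
        }
    }
    flat
}
```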
---------
```
group indexing_main_e1e362fa indexing_stop-flattening-every-field_40d1bd6b
----- ---------------------- ---------------------------------------------
indexing/Indexing geo_point 1.99 23.7±0.23s ? ?/sec 1.00 11.9±0.21s ? ?/sec
indexing/Indexing movies in three batches 1.00 18.2±0.24s ? ?/sec 1.01 18.3±0.29s ? ?/sec
indexing/Indexing movies with default settings 1.00 17.5±0.09s ? ?/sec 1.01 17.7±0.26s ? ?/sec
indexing/Indexing songs in three batches with default settings 1.00 64.8±0.47s ? ?/sec 1.00 65.1±0.49s ? ?/sec
indexing/Indexing songs with default settings 1.00 54.9±0.99s ? ?/sec 1.01 55.7±1.34s ? ?/sec
indexing/Indexing songs without any facets 1.00 50.6±0.62s ? ?/sec 1.01 50.9±1.05s ? ?/sec
indexing/Indexing songs without faceted numbers 1.00 54.0±1.14s ? ?/sec 1.01 54.7±1.13s ? ?/sec
indexing/Indexing wiki 1.00 996.2±8.54s ? ?/sec 1.02 1021.1±30.63s ? ?/sec
indexing/Indexing wiki in three batches 1.00 1136.8±9.72s ? ?/sec 1.00 1138.6±6.59s ? ?/sec
```
So basically everything slowed down a liiiiiittle bit, except the dataset with a nested field, which got twice as fast.
Co-authored-by: Tamo <tamo@meilisearch.com>