3913: Expose a Puffin server to profile the indexing process r=Kerollmops a=Kerollmops
This PR exposes a puffin HTTP server that reports the internal timings of indexing documents, deleting documents, and updating the settings of an index.
<img width="1752" alt="Capture d’écran 2023-07-10 à 18 44 58" src="https://github.com/meilisearch/meilisearch/assets/3610253/a3c7a6bf-db5b-42f4-8be1-c4e31c869843">
## To be done
- [x] Move the puffin HTTP server under a feature flag.
- [x] Use [the `puffin::set_scopes_on` function](https://docs.rs/puffin/latest/puffin/fn.set_scopes_on.html) to toggle it (by using the feature directly).
When this function is called with `false`, [a call to `profile_scope!` takes 1-2ns](https://docs.rs/puffin/latest/puffin/fn.set_scopes_on.html).
- [x] Create a _PROFILING.md_ file explaining how to use it.
- [x] Explain that merging scopes on the interface is not always useful.
- [x] Add more info on the number of batched tasks (using the `puffin::profile_scope!` macro data).
- I added more info, but that's ongoing work as we find we need more info here and there.
- [x] Clean up some scopes, and don't touch too much code to inject puffin.
- I am not sure that the _index_documents/mod.rs_ function is that complex with the addition of the scope.
- [x] Think about what we consider frames: one indexing operation or the whole program? When must we stop the frame, then?
- What we consider a frame is one single `IndexScheduler::tick` execution.
- We can change that later.
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
3866: Update charabia v0.8.0 r=dureuill a=ManyTheFish
# Pull Request
Update Charabia:
- enhance Japanese segmentation
- enhance Latin tokenization
- words containing `_` are now properly segmented into several words
- brackets `{([])}` are no longer considered context separators, so words separated by brackets are now considered close together for the proximity ranking rule
- fixes #3815
- fixes #3778
- fixes [product#151](https://github.com/meilisearch/product/discussions/151)
> Important note: float numbers are now segmented around the `.`, so `3.22` is segmented as [`3`, `.`, `22`]. The middle dot isn't considered a hard separator, though, which means that searching for `3.22` still finds documents containing `3.22`.
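A quick sketch of that behavior, assuming the `charabia` crate's `Tokenize` trait on `&str`:

```rust
use charabia::Tokenize;

fn main() {
    // With Charabia v0.8, "3.22" is segmented around the dot, but the dot
    // is a soft separator, so proximity between "3" and "22" is preserved.
    let lemmas: Vec<String> = "3.22".tokenize().map(|t| t.lemma().to_string()).collect();
    println!("{lemmas:?}"); // expected: ["3", ".", "22"]
}
```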
Co-authored-by: ManyTheFish <many@meilisearch.com>
3670: Fix addition deletion bug r=irevoire a=irevoire
The first commit of this PR is a revert of https://github.com/meilisearch/meilisearch/pull/3667. It re-enables the auto-batching of addition and deletion tasks. No new changes have been introduced outside of `milli`, so all the changes you see on the autobatcher have actually already been reviewed.
It fixes https://github.com/meilisearch/meilisearch/issues/3440.
### What was happening?
The issue was that the `external_documents_ids` generated in the `transform` were used in a very strange way that wasn’t compatible with the deletion of documents.
Instead of doing a clean merge between the external document IDs of the DB and the ones returned by the transform, then writing the result to disk, we were doing some weird tricks with the soft-deleted documents to avoid writing the fst to disk as much as possible.
The new algorithm may be a bit slower but is way more straightforward and doesn’t change depending on whether soft deletion was used. Here is a list of the changes introduced (a sketch of the merge follows the list):
1. We now make a clear distinction between the `new_external_documents_ids` coming from the transform and held only in RAM, and the `external_documents_ids` coming from the DB.
2. The `new_external_documents_ids` (coming out of the transform) are now represented as an `fst`. We no longer need to struggle with the hard/soft distinction and the soft-deleted documents, which is easier to understand.
3. When indexing documents, we merge the `external_documents_ids` coming from the DB and the `new_external_documents_ids` coming from the transform.
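A minimal sketch of step 3 using the `fst` crate's union operation (the function and variable names are illustrative; the actual milli code differs):

```rust
use fst::{Map, MapBuilder, Streamer};

// Merge the external ids already in the DB with the ones produced by the
// transform, letting the transform's ids win when a key exists in both.
fn merge_external_documents_ids(
    db_ids: &Map<Vec<u8>>,
    new_ids: &Map<Vec<u8>>,
) -> fst::Result<Map<Vec<u8>>> {
    let mut builder = MapBuilder::memory();
    let mut union = db_ids.op().add(new_ids).union();
    while let Some((external_id, indexed_values)) = union.next() {
        // `index == 1` identifies values coming from `new_ids`.
        let value = indexed_values
            .iter()
            .find(|iv| iv.index == 1)
            .or_else(|| indexed_values.first())
            .map(|iv| iv.value)
            .unwrap();
        builder.insert(external_id, value)?;
    }
    Map::new(builder.into_inner()?)
}
```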
### Other things introduced in this PR
Since we constantly have to write small, very specialized fuzzers for this kind of bug, we decided to push the one used to reproduce this bug.
It's not perfect, but it's easy to improve in the future.
It'll also run for as long as possible on every merge on the main branch.
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Loïc Lecrenier <loic.lecrenier@icloud.com>
Conflicts | Resolution
----------|-----------
Cargo.lock | added mimalloc
Cargo.toml | took origin/main version
milli/src/search/criteria/exactness.rs | deleted after checking it was only clippy changes
milli/src/search/query_tree.rs | deleted after checking it was only clippy changes
3688: Following release v1.1.1: bring back changes into `main` r=curquiza a=curquiza
`@meilisearch/engine-team` ensure the changes we bring to `main` are the ones you want
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: dureuill <dureuill@users.noreply.github.com>
3347: Enhance language detection r=irevoire a=ManyTheFish
## Summary
Some completely unrelated languages can share the same characters. In Meilisearch we detect languages using `whatlang`, which works well on large texts but fails on small search queries, leading to bad segmentation and normalization of the query.
This PR now stores the languages detected during indexing in order to reduce the list of languages that can be detected during search.
## Detail
- Create a 19th database mapping the scripts and languages detected to the documents in which they were detected
- Fill the newly created database during indexing
- Create an allow-list from this database and pass it to Charabia (see the sketch after this list)
- Add a test ensuring that a Japanese request containing kanjis only is detected as Japanese and not Chinese
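A rough sketch of the last two steps, assuming Charabia's `allow_list` builder method (the allow-list contents would come from the new database; the hard-coded entries here are illustrative):

```rust
use std::collections::HashMap;

use charabia::{Language, Script, TokenizerBuilder};

fn main() {
    // Restrict detection to the languages seen at indexing time.
    let mut allow_list: HashMap<Script, Vec<Language>> = HashMap::new();
    allow_list.insert(Script::Cj, vec![Language::Jpn]);

    let mut builder = TokenizerBuilder::default();
    builder.allow_list(&allow_list);
    let tokenizer = builder.build();

    // A kanji-only query is now detected as Japanese instead of Chinese.
    for token in tokenizer.tokenize("東京") {
        println!("{:?}", token.lemma());
    }
}
```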
## Related issues
Fixes #2403
Fixes #3513
Co-authored-by: f3r10 <frledesma@outlook.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Many the fish <many@meilisearch.com>
3505: Csv delimiter r=irevoire a=irevoire
Fixes https://github.com/meilisearch/meilisearch/issues/3442
Closes https://github.com/meilisearch/meilisearch/pull/2803
Specified in https://github.com/meilisearch/specifications/pull/221
This PR is a reimplementation of https://github.com/meilisearch/meilisearch/pull/2803 on the new engine. Thanks for your idea and initial PR `@MixusMinimax`; sorry I couldn’t update/merge your PR. Way too many changes happened on the engine in the meantime.
**Attention to reviewer:** I had to update deserr to implement support for deserializing `char`s.
-------
It introduces four new error messages:
- Invalid value in parameter csvDelimiter: expected a string of one character, but found an empty string
- Invalid value in parameter csvDelimiter: expected a string of one character, but found the following string of 5 characters: doggo
- csv delimiter must be an ascii character. Found: 🍰
- The Content-Type application/json does not support the use of a csv delimiter. The csv delimiter can only be used with the Content-Type text/csv.
And one new error code:
- `invalid_index_csv_delimiter`
The `invalid_content_type` error code is now also used when we encounter the `csvDelimiter` query parameter with a non-csv content type.
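For clarity, here is a plain-Rust sketch of the validation rules these messages imply (the real implementation goes through deserr and Meilisearch's error types):

```rust
// Illustrative only: maps each invalid input to the corresponding message.
fn validate_csv_delimiter(s: &str) -> Result<u8, String> {
    let mut chars = s.chars();
    match (chars.next(), chars.next()) {
        (None, _) => {
            Err("expected a string of one character, but found an empty string".to_string())
        }
        (Some(c), None) if c.is_ascii() => Ok(c as u8),
        (Some(c), None) => Err(format!("csv delimiter must be an ascii character. Found: {c}")),
        (Some(_), Some(_)) => Err(format!(
            "expected a string of one character, but found the following string of {} characters: {s}",
            s.chars().count()
        )),
    }
}
```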
Co-authored-by: Tamo <tamo@meilisearch.com>
3461: Bring v1 changes into main r=curquiza a=Kerollmops
Also brings back the changes made in milli (the remote repository) during the pre-release into `main`.
Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: curquiza <curquiza@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Philipp Ahlner <philipp@ahlner.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
3406: Master Key: Implements errors and warnings from the specification r=irevoire a=dureuill
<sub>Now in technicolor</sub>
# Pull Request
## What does this PR do?
- Uses `atty` and `termcolor` as dependencies
- Uses these dependencies to print a colored background for warning messages (sketched below)
- Updates the messages to match https://github.com/meilisearch/specifications/pull/209
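A small sketch of the coloring approach, assuming the `atty` and `termcolor` APIs (the actual messages and call sites in Meilisearch differ):

```rust
use std::io::Write;

use termcolor::{Color, ColorChoice, ColorSpec, StandardStream, WriteColor};

fn print_warning(msg: &str) -> std::io::Result<()> {
    // Only emit colors when stderr is an interactive terminal.
    let choice = if atty::is(atty::Stream::Stderr) {
        ColorChoice::Auto
    } else {
        ColorChoice::Never
    };
    let mut stderr = StandardStream::stderr(choice);
    stderr.set_color(ColorSpec::new().set_bg(Some(Color::Yellow)).set_fg(Some(Color::Black)))?;
    write!(stderr, "{msg}")?;
    stderr.reset()?;
    writeln!(stderr)
}
```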
## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [ ] Have you read the contributing guidelines?
- [ ] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
3128: Bumps cargo_toml version to most up to date r=curquiza a=colbsmcdolbs
# Pull Request
## Related issue
Fixes#3127
## What does this PR do?
- The README of this repository declares that one package is not up to date. In order to ensure due diligence, I have bumped the version number of the package. No test failures when running on Windows.
## PR checklist
Please check if your PR fulfills the following requirements:
- [X] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [X] Have you read the contributing guidelines?
- [X] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
Co-authored-by: Colby Allen <colbyjayallen@gmail.com>
* Fix error code of the "duplicate index found" error
* Use the content of the ProcessingTasks in the tasks cancelation system
* Change the missing_filters error code into missing_task_filters
* WIP Introduce the invalid_task_uid error code
* Use more precise error codes/message for the task routes
+ Allow star operator in delete/cancel tasks
+ rename originalQuery to originalFilters
+ Display error/canceled_by in task view even when they are = null
+ Rename task filter fields by using their plural forms
+ Prepare an error code for canceledBy filter
+ Only return global tasks if the API key action `index.*` is there
* Add canceledBy task filter
* Update tests following task API changes
* Rename original_query to original_filters everywhere
* Update more insta-snap tests
* Make clippy happy
They're a happy clip now.
* Make rustfmt happy
>:-(
* Fix Index name parsing error message to fit the specification
* Bump milli version to 0.35.1
* Fix the new error messages
* fix the error messages and add tests
* rename the error codes for the sake of consistency
* refactor the way we send the cli information + add the analytics for the config file and ssl usage
* Apply suggestions from code review
Co-authored-by: Clément Renault <clement@meilisearch.com>
* add a comment over the new infos structure
* reformat, sorry @kero
* Store analytics for the documents deletions
* Add analytics on all the settings
* Spawn threads with names
* Spawn rayon threads with names
* update the distinct attributes to the spec update
* update the analytics on the search route
* implements the analytics on the health and version routes
* Fix task details serialization
* Add the question mark to the task deletion query filter
* Add the question mark to the task cancelation query filter
* Fix tests
* add analytics on the task route
* Add all the missing fields of the new task query type
* Create a new analytics for the task deletion
* Create a new analytics for the task creation
* batch the tasks seen events
* Update the finite pagination analytics
* add the analytics of the swap-indexes route
* Stop removing the DB when failing to read it
* Rename originalQuery into originalFilters
* Rename matchedDocuments into providedIds
* Add `workflow_dispatch` to flaky.yml
* Bump grenad to 0.4.4
* Bump milli to version v0.37.0
* Don't multiply total memory returned by sysinfo anymore
sysinfo now returns bytes rather than KB
* Add a dispatch to the publish binaries workflow
* Fix publish release CI
* Don't use gold but the default linker
* Always display details for the indexDeletion task
* Fix the insta tests
* refactorize the whole test suite
1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It takes as many lines and ensures we never change something without noticing it in any test.
3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
4. Stop skipping breakpoints; it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
5. Add a timeout on the channel.recv; it eases the process of writing tests: now when something fails you get a failure instead of a deadlock (see the sketch after this list).
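A sketch of item 5 with `std::sync::mpsc` (the scheduler's test channel may be a different type):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (sender, receiver) = mpsc::channel::<&str>();
    thread::spawn(move || {
        // Simulate the scheduler signaling a breakpoint.
        sender.send("batch_created").unwrap();
    });
    // A bounded wait turns a hung scheduler into a test failure
    // instead of a deadlocked test run.
    match receiver.recv_timeout(Duration::from_secs(1)) {
        Ok(breakpoint) => println!("reached breakpoint: {breakpoint}"),
        Err(err) => panic!("timed out waiting for the scheduler: {err}"),
    }
}
```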
* rebase on release-v0.30
* makes clippy happy
* update the snapshots after a rebase
* try to remove the flakyness of the failing test
* Add more analytics on the ranking rules positions
* Update the dump test to check for the dumpUid dumpCreation task details
* send the ranking rules as a string because amplitude is too dumb to process an array as a single value
* Display a null dumpUid until we computed the dump itself on disk
* Update tests
* Check if the master key is missing before returning an error
Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2851: Upgrade clap to 4.0 r=loiclec a=choznerol
# Pull Request
## Related issue
Fixes#2846
This PR is a draft based on #2847 to avoid conflicts. I will rebase and mark it as 'Ready for review' after #2847 is merged.
## What does this PR do?
1. Upgrade clap to the latest version of 4.0 (4.0.9 as of today) by following the [migration instructions](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#migrating) from the [4.0 changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#migrating)
2. Fix an `ArgGroup` typo that can only be caught after upgrading to 4.0 in 20a715e29ed17c5a76229c98fb31504ada873597
## Notable changes
### The `--help` message
The format, ordering, and indentation of the `--help` message changed in 4.0. I recorded the output of `cargo run -- --help` before and after the upgrade to 4.0 for reference.
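For context, here is a minimal option in the clap 4 derive idiom, assuming the `derive` and `env` features (illustrative, not Meilisearch's actual `Opt` struct):

```rust
use clap::Parser;

#[derive(Parser, Debug)]
#[command(name = "meilisearch")]
struct Opt {
    /// Sets the HTTP address and port Meilisearch will use
    #[arg(long, env = "MEILI_HTTP_ADDR", default_value = "127.0.0.1:7700")]
    http_addr: String,
}

fn main() {
    let opt = Opt::parse();
    println!("{opt:?}");
}
```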
<details>
<summary>diff</summary>
Output of `diff --ignore-all-space --text --unified --new-file help-message-before.txt help-message-after.txt`:
```diff
--- help-message-before.txt 2022-10-14 16:45:36.000000000 +0800
+++ help-message-after.txt 2022-10-14 16:36:53.000000000 +0800
@@ -1,12 +1,8 @@
-meilisearch-http 0.29.1
+Usage: meilisearch [OPTIONS]
-USAGE:
- meilisearch [OPTIONS]
-
-OPTIONS:
+Options:
--config-file-path <CONFIG_FILE_PATH>
- Set the path to a configuration file that should be used to setup the engine. Format
- must be TOML
+ Set the path to a configuration file that should be used to setup the engine. Format must be TOML
--db-path <DB_PATH>
Designates the location where database files will be created and retrieved
@@ -26,15 +22,14 @@
[default: dumps/]
--env <ENV>
- Configures the instance's environment. Value must be either `production` or
- `development`
+ Configures the instance's environment. Value must be either `production` or `development`
[env: MEILI_ENV=]
[default: development]
[possible values: development, production]
-h, --help
- Print help information
+ Print help information (use `-h` for a summary)
--http-addr <HTTP_ADDR>
Sets the HTTP address and port Meilisearch will use
@@ -43,63 +38,53 @@
[default: 127.0.0.1:7700]
--http-payload-size-limit <HTTP_PAYLOAD_SIZE_LIMIT>
- Sets the maximum size of accepted payloads. Value must be given in bytes or explicitly
- stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
+ Sets the maximum size of accepted payloads. Value must be given in bytes or explicitly stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
[env: MEILI_HTTP_PAYLOAD_SIZE_LIMIT=]
[default: 100000000]
--ignore-dump-if-db-exists
- Prevents a Meilisearch instance with an existing database from throwing an error when
- using `--import-dump`. Instead, the dump will be ignored and Meilisearch will launch
- using the existing database.
+ Prevents a Meilisearch instance with an existing database from throwing an error when using `--import-dump`. Instead, the dump will be ignored and Meilisearch will launch using the existing database.
This option will trigger an error if `--import-dump` is not defined.
[env: MEILI_IGNORE_DUMP_IF_DB_EXISTS=]
--ignore-missing-dump
- Prevents Meilisearch from throwing an error when `--import-dump` does not point to a
- valid dump file. Instead, Meilisearch will start normally without importing any dump.
+ Prevents Meilisearch from throwing an error when `--import-dump` does not point to a valid dump file. Instead, Meilisearch will start normally without importing any dump.
This option will trigger an error if `--import-dump` is not defined.
[env: MEILI_IGNORE_MISSING_DUMP=]
--ignore-missing-snapshot
- Prevents a Meilisearch instance from throwing an error when `--import-snapshot` does not
- point to a valid snapshot file.
+ Prevents a Meilisearch instance from throwing an error when `--import-snapshot` does not point to a valid snapshot file.
This command will throw an error if `--import-snapshot` is not defined.
[env: MEILI_IGNORE_MISSING_SNAPSHOT=]
--ignore-snapshot-if-db-exists
- Prevents a Meilisearch instance with an existing database from throwing an error when
- using `--import-snapshot`. Instead, the snapshot will be ignored and Meilisearch will
- launch using the existing database.
+ Prevents a Meilisearch instance with an existing database from throwing an error when using `--import-snapshot`. Instead, the snapshot will be ignored and Meilisearch will launch using the existing database.
This command will throw an error if `--import-snapshot` is not defined.
[env: MEILI_IGNORE_SNAPSHOT_IF_DB_EXISTS=]
--import-dump <IMPORT_DUMP>
- Imports the dump file located at the specified path. Path must point to a `.dump` file.
- If a database already exists, Meilisearch will throw an error and abort launch
+ Imports the dump file located at the specified path. Path must point to a `.dump` file. If a database already exists, Meilisearch will throw an error and abort launch
[env: MEILI_IMPORT_DUMP=]
--import-snapshot <IMPORT_SNAPSHOT>
- Launches Meilisearch after importing a previously-generated snapshot at the given
- filepath
+ Launches Meilisearch after importing a previously-generated snapshot at the given filepath
[env: MEILI_IMPORT_SNAPSHOT=]
--log-level <LOG_LEVEL>
Defines how much detail should be present in Meilisearch's logs.
- Meilisearch currently supports five log levels, listed in order of increasing verbosity:
- ERROR, WARN, INFO, DEBUG, TRACE.
+ Meilisearch currently supports five log levels, listed in order of increasing verbosity: ERROR, WARN, INFO, DEBUG, TRACE.
[env: MEILI_LOG_LEVEL=]
[default: INFO]
@@ -110,31 +95,25 @@
[env: MEILI_MASTER_KEY=]
--max-index-size <MAX_INDEX_SIZE>
- Sets the maximum size of the index. Value must be given in bytes or explicitly stating a
- base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
+ Sets the maximum size of the index. Value must be given in bytes or explicitly stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
[env: MEILI_MAX_INDEX_SIZE=]
[default: 107374182400]
--max-indexing-memory <MAX_INDEXING_MEMORY>
- Sets the maximum amount of RAM Meilisearch can use when indexing. By default,
- Meilisearch uses no more than two thirds of available memory
+ Sets the maximum amount of RAM Meilisearch can use when indexing. By default, Meilisearch uses no more than two thirds of available memory
[env: MEILI_MAX_INDEXING_MEMORY=]
[default: "21.33 TiB"]
--max-indexing-threads <MAX_INDEXING_THREADS>
- Sets the maximum number of threads Meilisearch can use during indexation. By default,
- the indexer avoids using more than half of a machine's total processing units. This
- ensures Meilisearch is always ready to perform searches, even while you are updating an
- index
+ Sets the maximum number of threads Meilisearch can use during indexation. By default, the indexer avoids using more than half of a machine's total processing units. This ensures Meilisearch is always ready to perform searches, even while you are updating an index
[env: MEILI_MAX_INDEXING_THREADS=]
[default: 5]
--max-task-db-size <MAX_TASK_DB_SIZE>
- Sets the maximum size of the task database. Value must be given in bytes or explicitly
- stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
+ Sets the maximum size of the task database. Value must be given in bytes or explicitly stating a base unit (for instance: 107374182400, '107.7Gb', or '107374 Mb')
[env: MEILI_MAX_TASK_DB_SIZE=]
[default: 107374182400]
```
- ~[help-message-before.txt](https://github.com/meilisearch/meilisearch/files/9715683/help-message-before.txt)~ [help-message-before.txt](https://github.com/meilisearch/meilisearch/files/9784156/help-message-before-2.txt)
- ~[help-message-after.txt](https://github.com/meilisearch/meilisearch/files/9715682/help-message-after.txt)~ [help-message-after.txt](https://github.com/meilisearch/meilisearch/files/9784091/help-message-after.txt)
</details>
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
Co-authored-by: Lawrence Chou <choznerol@protonmail.com>
Refactored test code to allow specifying the compression (content-encoding) algorithm.
Added tests to verify that actix actually handles different content encodings properly.
2745: Config file support r=curquiza a=mlemesle
# Pull Request
## What does this PR do?
Fixes#2558
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
2789: Fix typos r=Kerollmops a=kianmeng
# Pull Request
## What does this PR do?
Found via `codespell -L crate,nam,hart`.
## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Thank you so much for contributing to Meilisearch!
2814: Skip dashboard test if mini-dashboard feature is disabled r=Kerollmops a=jirutka
Fixes#2813
Fixes the following error:
```
cargo test --no-default-features
...
error: couldn't read target/debug/build/meilisearch-http-ec029d8c902cf2cb/out/generated.rs: No such file or directory (os error 2)
 --> meilisearch-http/tests/dashboard/mod.rs:8:9
  |
8 |     include!(concat!(env!("OUT_DIR"), "/generated.rs"));
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |
  = note: this error originates in the macro `include` (in Nightly builds, run with -Z macro-backtrace for more info)
error: could not compile `meilisearch-http` due to previous error
```
2826: Rename receivedDocumentIds into matchedDocuments r=Kerollmops a=Ugzuzg
# Pull Request
## What does this PR do?
Fixes#2799
Changes DocumentDeletion task details response.
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Tested with curl:
```
curl \
-X POST 'http://localhost:7700/indexes/movies/documents/delete-batch' \
-H 'Content-Type: application/json' \
--data-binary '[
23488,
153738,
437035,
363869
]'
{"taskUid":1,"indexUid":"movies","status":"enqueued","type":"documentDeletion","enqueuedAt":"2022-10-01T20:06:37.105416054Z"}%
curl \
-X GET 'http://localhost:7700/tasks/1'
{"uid":1,"indexUid":"movies","status":"succeeded","type":"documentDeletion","details":{"matchedDocuments":4,"deletedDocuments":2},"duration":"PT0.005708322S","enqueuedAt":"2022-10-01T20:06:37.105416054Z","startedAt":"2022-10-01T20:06:37.115562733Z","finishedAt":"2022-10-01T20:06:37.121271055Z"}
```
Co-authored-by: mlemesle <lemesle.martin@hotmail.fr>
Co-authored-by: Kian-Meng Ang <kianmeng@cpan.org>
Co-authored-by: Jakub Jirutka <jakub@jirutka.cz>
Co-authored-by: Jarasłaŭ Viktorčyk <ugzuzg@gmail.com>
2689: Use mimalloc as the global allocator r=Kerollmops a=loiclec
milli has switched its global allocator to mimalloc already, and we have seen some performance gains as a result. Furthermore, we can use mimalloc as the global allocator on all platforms whereas jemalloc was only activated on Linux.
This PR brings mimalloc to Meilisearch as well.
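The switch itself is a two-line change, assuming the `mimalloc` crate as a dependency (this mirrors what milli already does):

```rust
use mimalloc::MiMalloc;

// Route every allocation in the binary through mimalloc.
#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    let v = vec![1, 2, 3]; // allocated by mimalloc
    println!("{v:?}");
}
```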
2690: Add LTO and codegen-units=1 to release compile options r=Kerollmops a=loiclec
This PR brings Meilisearch's release compile options in line with milli (see https://github.com/meilisearch/milli/pull/606 ).
Adding LTO and codegen-units=1 will make compile times longer, but they also speed up the final binary significantly.
Co-authored-by: Loïc Lecrenier <loic@meilisearch.com>
2523: Improve the tasks error reporting when processed in batches r=irevoire a=Kerollmops
This fixes #2478 by changing the behavior of the task handler when there is an error in a batch of document additions or updates.
What changes is that when a task in a batch hits a user error, we now report that task as failed with the right error message, but we continue to process the other tasks. A user error can be an invalid geo field, or an invalid or missing document id.
fixes #2582, #2478
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Move `meilisearch_error` to `meilisearch_types::error`
Move `meilisearch_lib::index_resolver::IndexUid` to `meilisearch_types::index_uid`
Add a new `InvalidIndexUid` error in `meilisearch_types::index_uid`
2494: Introduce the new faceting and pagination settings r=ManyTheFish a=Kerollmops
This PR introduces two new settings following the newly created spec https://github.com/meilisearch/specifications/pull/157:
- The `faceting.max_values_per_facet` one describes the maximum number of values (each with a count) associated with a facet in a facet distribution query.
- The `pagination.limited_to` one describes the maximum number of documents that a search query can ever return.
Co-authored-by: Kerollmops <clement@meilisearch.com>
2445: Seek-based tasks list r=Kerollmops a=Kerollmops
This PR implements the seek-based pagination for the tasks list following [the spec](https://github.com/meilisearch/specifications/pull/115).
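A sketch of the seek semantics over descending task uids (`from`, `limit`, and `next` follow the spec's parameter names; the storage layer is elided):

```rust
/// Returns one page of task uids plus the `next` cursor to resume from,
/// given uids sorted in descending order.
fn paginate(uids_desc: &[u32], from: Option<u32>, limit: usize) -> (Vec<u32>, Option<u32>) {
    let mut page: Vec<u32> = uids_desc
        .iter()
        .copied()
        .filter(|&uid| from.map_or(true, |f| uid <= f))
        .take(limit + 1) // fetch one extra element to detect a next page
        .collect();
    let next = (page.len() > limit).then(|| page[limit]);
    page.truncate(limit);
    (page, next)
}
```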
Co-authored-by: Kerollmops <clement@meilisearch.com>
2450: Bump the dependencies r=ManyTheFish a=Kerollmops
In order to use [the latest version of grenad](https://docs.rs/grenad), I bump the dependencies here. We also use the latest versions of all our other dependencies now.
Co-authored-by: Kerollmops <clement@meilisearch.com>