Commit Graph

9037 Commits

Author SHA1 Message Date
Louis Dureuil
21bcf32109
Add candle and hf_hub, updating a lot of deps in the process 2023-12-14 16:07:48 +01:00
ManyTheFish
35e1981488 Remove proximityPrecision from the experimental feature 2023-12-14 15:52:42 +01:00
meili-bors[bot]
e0f712b9d3
Merge #4254
4254: Bring back v1.5.1 changes into main r=ManyTheFish a=Kerollmops

This pull request brings back changes from the _release-v1.5.1_ branch into _main_.

Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: meili-bors[bot] <89034592+meili-bors[bot]@users.noreply.github.com>
Co-authored-by: curquiza <curquiza@users.noreply.github.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-12-14 09:41:57 +00:00
Clément Renault
56571f762a
Merge remote-tracking branch 'origin/main' into tmp-release-v1.5.1 2023-12-13 11:57:01 +01:00
Clément Renault
005800634d
Merge pull request #4249 from meilisearch/flag-limit-batch-size
Introduce parameters to limit the number of batched tasks
2023-12-13 10:32:14 +01:00
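
A minimal sketch of how such a limiting flag could be declared, assuming clap's derive API; the field name mirrors the `--max-number-of-batched-tasks` argument mentioned in the commits below, while the type and default are assumptions of this sketch, not taken from these commits:

```rust
use clap::Parser;

#[derive(Parser)]
struct Opt {
    /// Experimental: caps how many tasks the scheduler may group into a single batch.
    /// Defaulting to usize::MAX (no practical cap) is an assumption, not the real default.
    #[arg(long, default_value_t = usize::MAX)]
    max_number_of_batched_tasks: usize,
}

fn main() {
    let opt = Opt::parse();
    println!("batch cap: {}", opt.max_number_of_batched_tasks);
}
```
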
Clément Renault
976af4fa8f
Add the default commented experimental batched tasks limit parameter to the config file 2023-12-12 10:59:00 +01:00
Clément Renault
99fec27788
Make the --max-number-of-batched-tasks argument experimental 2023-12-12 10:55:39 +01:00
meili-bors[bot]
afa8f273a8
Merge #4250
4250: Update version for the next release (v1.5.1) in Cargo.toml r=dureuill a=meili-bot

⚠️ This PR is automatically generated. Check that the new version is the expected one and that Cargo.lock has been updated before merging.

Co-authored-by: curquiza <curquiza@users.noreply.github.com>
2023-12-12 08:26:06 +00:00
curquiza
4b644f6bc0 Update version for the next release (v1.5.1) in Cargo.toml 2023-12-11 17:15:11 +00:00
Clément Renault
7e259cb0d2
Expose the --max-number-of-batched-tasks argument 2023-12-11 16:08:39 +01:00
meili-bors[bot]
0fbc1511d7
Merge #4225
4225: [EXP] Let the user customize the proximity precision r=dureuill a=ManyTheFish

# Pull Request
This PR introduces a new setting `proximityPrecision` allowing the user to trade indexing time for search precision on proximity-based features:
- proximity ranking rules
- multi-word synonyms
- phrase search
- split-words

I put the API PRD below:
https://www.notion.so/meilisearch/3988b345b5b248948a4a0dc5932a18ce?v=45d79150adb84b0aa27826ff6da2e029&p=aa69c2bab2c3402bab9340ae4def4577&pm=s
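
A hedged sketch of what a settings payload for this setting could look like, built with serde_json; the two values shown (`byWord` for full precision, `byAttribute` for faster indexing) are assumptions based on the trade-off described above, not taken from this PR text:

```rust
use serde_json::json;

// Pick a precision level: word-level proximity is precise but slower to
// index; attribute-level proximity indexes faster but is coarser.
fn proximity_precision_payload(high_precision: bool) -> serde_json::Value {
    let precision = if high_precision { "byWord" } else { "byAttribute" };
    // Sent to the index settings route as part of a settings update.
    json!({ "proximityPrecision": precision })
}
```
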

## Related issue
Fixes #4187

Co-authored-by: ManyTheFish <many@meilisearch.com>
2023-12-06 17:21:43 +00:00
ManyTheFish
c9860c7913 Small test fixes 2023-12-06 15:49:05 +01:00
ManyTheFish
03ffabe889 Add a new dump test 2023-12-06 15:49:05 +01:00
ManyTheFish
1f4fc9c229 Make the feature experimental 2023-12-06 15:49:05 +01:00
ManyTheFish
8cc3c54117 Add proximityPrecision setting in settings route 2023-12-06 15:49:05 +01:00
ManyTheFish
467b49153d Implement proximityPrecision setting on milli side 2023-12-06 15:49:02 +01:00
ManyTheFish
0c3fa8cbc4 Add tests on proximityPrecision setting 2023-12-06 14:59:23 +01:00
ManyTheFish
bddc168d83 List TODOs 2023-12-06 14:59:23 +01:00
meili-bors[bot]
84a36002d7
Merge #4239
4239: Remove the actix-web dependency from milli r=dureuill a=Kerollmops

Just remove actix-web from milli.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-11-29 10:19:40 +00:00
meili-bors[bot]
c95d68e244
Merge #4233
4233: Add test reproducing #4232 r=dureuill a=ManyTheFish

- add a test reproducing the bug
- fix the bug by creating two different restricting lists of attributes, one for the exact attributes and the other for the tolerant attributes (see the sketch below)
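
A hypothetical sketch of the shape of the fix (the names are illustrative, not milli's actual types): one restricting list per matching strategy, so the exact attributes no longer restrict the tolerant search and vice versa:

```rust
type FieldId = u16;

// Two separate restricting lists, as described above; a field id may appear
// in one list, in both, or in neither, depending on the index settings.
struct AttributeRestrictions {
    exact: Vec<FieldId>,    // attributes matched without typo tolerance
    tolerant: Vec<FieldId>, // attributes matched with typo tolerance
}
```
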

## Related issue
Fixes #4232


Co-authored-by: ManyTheFish <many@meilisearch.com>
2023-11-29 08:47:17 +00:00
ManyTheFish
3b3fa38f27 Put the restrict list in a sub-struct 2023-11-28 18:37:57 +01:00
Clément Renault
170e063b80
Remove the actix-web dependency from milli 2023-11-28 17:19:57 +01:00
ManyTheFish
d6c2ee15a9 Filter on attributes before computing the docids when attribute restriction is on 2023-11-28 14:55:29 +01:00
meili-bors[bot]
6376c342c1
Merge #4223
4223: Update to heed 0.20 r=dureuill a=Kerollmops

This PR brings the v0.20-alpha.9 version of heed into Meilisearch 🎉 The main goal is to test it in a real environment to make the necessary changes if needed. We also want to merge it as soon as possible during the pre-release phase to ensure we catch bugs before the release.

Most of the calls to heed are the same as before, except (see the sketch after this list):
 - The `PolyDatabase` has been replaced with a `Database<Unspecified, Unspecified>`. We replaced the `get<T, U>()` calls with `remap<T, U>().get()` calls.
 - The `Database` `append(...)` method has been replaced with a `put_with_flags(PutFlags::APPEND, ...)`.
 - The `RwTxn<'e, 'p>` has been simplified into a `RwTxn<'e>`.
 - The `BytesEncode/Decode` traits return a `Result<_, BoxedError>` instead of an `Option<_>`.
 - We no longer need to wrap and unwrap the `BEU32` integer when storing/getting them from heed.
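
A minimal before/after sketch of the two most mechanical changes above, assuming the heed v0.20-alpha API (`remap_types` being the concrete name of the remap call; module paths may differ between alphas):

```rust
use heed::types::{Str, Unspecified};
use heed::{Database, PutFlags, Result, RoTxn, RwTxn};

// Before: main.get::<Str, Str>(&rtxn, key) on a PolyDatabase.
// After: an untyped database remapped to concrete codecs at each call site.
fn read_main<'t>(
    main: Database<Unspecified, Unspecified>,
    rtxn: &'t RoTxn,
    key: &str,
) -> Result<Option<&'t str>> {
    main.remap_types::<Str, Str>().get(rtxn, key)
}

// Before: db.append(&mut wtxn, key, value).
// After: appending is requested through an explicit put flag.
fn append_entry(db: Database<Str, Str>, wtxn: &mut RwTxn, key: &str, value: &str) -> Result<()> {
    db.put_with_flags(wtxn, PutFlags::APPEND, key, value)
}
```
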

### TODO
 - [x] Create actual, simple error types instead of using strings in the codecs.

### Follow-up work
 - Move the codecs into another member crate (we depend on the uuid one in the meilitool crate).
 - Display the internal decoding error in the `SerializationError` internal error variant.

Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-11-28 13:39:44 +00:00
Clément Renault
5b563f872b
Move the clippy attribute on the problematic part of the code 2023-11-28 14:37:58 +01:00
Clément Renault
ec9b52d608
Rename copy_to_path to copy_to_file 2023-11-28 14:32:30 +01:00
Clément Renault
34c67ac389
Remove the possibility to fail fetching the env info 2023-11-28 14:31:23 +01:00
Clément Renault
d050c9b4ae
Only remap the main database once 2023-11-28 14:27:30 +01:00
Clément Renault
7dd1226faf
Clarify an unreachable unwrap 2023-11-28 14:26:31 +01:00
Clément Renault
1575456594
Further reduce an async block 2023-11-28 14:23:32 +01:00
Clément Renault
add2ceef67
Introduce error types to avoid panics 2023-11-28 14:21:49 +01:00
Clément Renault
548c8247c2
Create and use real error types in the codecs 2023-11-28 10:11:17 +01:00
meili-bors[bot]
181ca48482
Merge #4234
4234: Fix puffin in the index scheduler r=dureuill a=irevoire

Currently, we can't compile the index scheduler without this feature.

It could be cool to specify the dependencies in the main workspace Cargo.toml, like quickwit does, to avoid this kind of error in the future: https://github.com/quickwit-oss/quickwit/blob/main/quickwit/Cargo.toml#L41
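
A hedged sketch of that suggestion in Cargo manifest form; the version and feature names here are illustrative:

```toml
# Root Cargo.toml: declare the dependency (and its features) once.
[workspace.dependencies]
puffin = { version = "0.16", features = ["serialization"] }

# In a member crate such as index-scheduler/Cargo.toml: inherit it, so every
# crate compiles against the same version and feature set.
# [dependencies]
# puffin = { workspace = true }
```
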

Co-authored-by: Tamo <tamo@meilisearch.com>
2023-11-28 08:23:48 +00:00
Tamo
5751f5c640 fix puffin in the index scheduler 2023-11-27 15:18:33 +01:00
Clément Renault
d32eb11329
Move to the v0.20.0-alpha.9 of heed 2023-11-27 11:52:22 +01:00
ManyTheFish
dc07790133 Add test reproducing #4232 2023-11-27 11:39:11 +01:00
meili-bors[bot]
3d23b388bc
Merge #4231
4231: Fixed payload limit setting being ignored for delete documents by batch r=Kerollmops a=Karribalu


# Pull Request

## Related issue
Fixes #4224

## What does this PR do?
- Added `http_payload_size_limit` to `JsonConfig` to allow deleting documents in batches with a payload size greater than 2MB, which is the default limit set by actix-web's `JsonConfig` (see the sketch below).
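
A minimal sketch, assuming actix-web 4: raising the `Json` extractor limit via `JsonConfig`, which is then attached to the app or route with `.app_data(...)`; the parameter name mirrors the option described above:

```rust
use actix_web::web;

// Replaces the default 2MB cap that actix-web applies to JSON payloads,
// so larger batched delete requests are accepted.
fn json_config(http_payload_size_limit: usize) -> web::JsonConfig {
    web::JsonConfig::default().limit(http_payload_size_limit)
}
```
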

## PR checklist
Please check if your PR fulfills the following requirements:
- [Y] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [Y] Have you read the contributing guidelines?
- [Y] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: karribalu <karri.balu123456@gmail.com>
2023-11-27 09:26:21 +00:00
karribalu
85626cff8e Fixed payload limit setting being ignored for delete documents by batch route 2023-11-25 18:41:16 +00:00
Clément Renault
58dac8af42
Remove the panics and unwraps 2023-11-23 15:00:48 +01:00
Clément Renault
0dbf1a16ff
Make clippy happy 2023-11-23 14:11:38 +01:00
Clément Renault
462b4c0080
Fix the tests 2023-11-23 12:07:35 +01:00
Clément Renault
0d4482625a
Make the changes to use heed v0.20-alpha.6 2023-11-23 11:43:58 +01:00
Clément Renault
56a0d91ecd
Update the heed dependency and lock file 2023-11-22 15:11:09 +01:00
meili-bors[bot]
b366acdae6
Merge #4220
4220: Bring back changes from v1.5.0 into main r=dureuill a=Kerollmops

This will bring the fixes from v1.5.0 into main. Following [this guide](https://github.com/meilisearch/engine-team/blob/main/resources/meilisearch-release.md#after-the-release), I decided to create a temporary branch to fix the git conflicts and merge into main afterward.

Co-authored-by: curquiza <curquiza@users.noreply.github.com>
Co-authored-by: Vivek Kumar <vivek.26@outlook.com>
Co-authored-by: Louis Dureuil <louis.dureuil@gmail.com>
Co-authored-by: meili-bors[bot] <89034592+meili-bors[bot]@users.noreply.github.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Louis Dureuil <louis.dureuil@xinra.net>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-11-22 07:46:22 +00:00
Clément Renault
7cb7e37ba8
Merge branch 'main' into tmp-release-v1.5.0 2023-11-21 16:30:46 +01:00
meili-bors[bot]
33b7c574ea
Merge #4090
4090: Diff indexing r=ManyTheFish a=ManyTheFish

This pull request aims to reduce the indexing time by computing a difference between the data added to the index and the data removed from the index before writing to LMDB.

## Why focus on reducing the writes to LMDB?

The indexing in Meilisearch is split into 3 main phases:
1) The computation, or extraction, of the data (Multi-threaded)
2) The writing of the data in LMDB (Mono-threaded)
3) The processing of the prefix databases (Mono-threaded)

see below:
![Screenshot 2023-09-28 at 20 01 45](https://github.com/meilisearch/meilisearch/assets/6482087/51513162-7c39-4244-978b-2c6b60c43a56)


Because the writing is mono-threaded, it represents a bottleneck in the indexing; reducing the number of writes to LMDB will reduce the pressure on the main thread and should reduce the overall time spent on the indexing.

## Give Feedback

We created [a dedicated discussion](https://github.com/meilisearch/meilisearch/discussions/4196) for users to try this new feature and to give feedback on bugs or performance issues.

## Technical approach
### Part 1: merge the addition and the deletion process
This part:
a) Aims to reduce the time spent on indexing updates that only touch the filterable/sortable fields of documents, for example:
  - Updating the number of "likes" or "stars" of a song or a movie
  - Updating the "stock count" or the "price" of a product

b) Aims to reduce the time spent on writing to LMDB, which should reduce the global indexing time on highly multi-threaded machines by easing the writing bottleneck.

c) Aims to reduce the average time spent deleting documents without having to keep the soft-deleted-documents implementation

- [x] Create a preprocessing function that creates the diff-based document chunks (`OBKV<fid, OBKV<AddDel, value>>`); a toy model of this layout follows the Part 1 list below
  - [x] and clearly separate the faceted fields and the searchable fields in two different chunks
- Change the parameters of the input extractors so they take an `OBKV<fid, OBKV<AddDel, value>>` instead of an `OBKV<fid, value>`.
  - [x] extract_docid_word_positions
  - [x] extract_geo_points
  - [x] extract_vector_points
  - [x] extract_fid_docid_facet_values
- Adapt the searchable extractors to the new diff-chunks
  - [x] extract_fid_word_count_docids
  - [x] extract_word_pair_proximity_docids
  - [x] extract_word_position_docids
  - [x] extract_word_docids
- Adapt the facet extractors to the new diff-chunks
  - [x] extract_facet_number_docids
  - [x] extract_facet_string_docids
  - [x] extract_fid_docid_facet_values
  - [x] FacetsUpdate
- [x] Adapt the prefix database extractors ⚠️ ⚠️ 
- [x] Make the LMDB writer remove the document_ids to delete at the same time the new document_ids are added
- [x] Remove document deletion pipeline
  - [x] remove `new_documents_ids` and `replaced_documents_ids` entirely
  - [x] reuse extracted external id from transform instead of re-extracting in `TypedChunks::Documents`
  - [x] Remove deletion pipeline after autobatcher
  - [x] remove autobatcher deletion pipeline
    - [x] everything uses `IndexOperation::DocumentOperation`
    - [x] repair deletion by internal id for filter by delete
    - [x] Improve the deletion via internal ids by avoiding iterating over the whole set of external document ids.  
- [x] Remove soft-deleted documents
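
A toy model of the diff-based layout from the list above (not milli's actual types, which use nested obkv buffers rather than maps; the `DelAdd` name follows the `into_del_add_obkv` commits below): unchanged fields produce no entry at all, which is exactly what removes the needless LMDB writes.

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum DelAdd {
    Deletion,
    Addition,
}

type FieldId = u16;
// Conceptually OBKV<fid, OBKV<DelAdd, value>>.
type DiffDocument = BTreeMap<FieldId, BTreeMap<DelAdd, Vec<u8>>>;

// Diff the stored version of a document against the incoming one: a replaced
// field carries both sides, a removed field only Deletion, a new field only
// Addition, and an unchanged field is skipped entirely.
fn diff(
    old: &BTreeMap<FieldId, Vec<u8>>,
    new: &BTreeMap<FieldId, Vec<u8>>,
) -> DiffDocument {
    let mut out = DiffDocument::new();
    for (fid, value) in old {
        if new.get(fid) != Some(value) {
            out.entry(*fid).or_default().insert(DelAdd::Deletion, value.clone());
        }
    }
    for (fid, value) in new {
        if old.get(fid) != Some(value) {
            out.entry(*fid).or_default().insert(DelAdd::Addition, value.clone());
        }
    }
    out
}
```
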

#### FIXME

- [x] field distribution is not correctly updated after deletion
- [x] missing documents in the tests of tokenizer_customization

### Part 2: Only compute the documents field by field
This part aims to reduce the global indexing time for any kind of partial document modification on any size of machine, from mono-threaded ones to highly multi-threaded ones.

- [ ] Make the preprocessing function only send the fields that changed to the extractors
- [ ] remove the `word_docids` and `exact_word_docids` databases and adapt the search (⚠️ could impact the search performance)
- [ ] replace the `word_pair_proximity_docids` database with a `word_pair_proximity_fid_docids` database and adapt the search (⚠️ could impact the search performance)
- [ ] Adapt the prefix database extractors ⚠️ ⚠️

## Technical Concerns
- The part 1 implementation could increase the indexing time on the smallest machines (with few threads) by increasing the extraction time (multi-threaded) more than it reduces the writing time (mono-threaded)
- The part 2 implementation needs to change the databases, which could have a significant impact on search performance
- The prefix databases are a bit special to process and may be a pain to adapt to the difference-based indexing

Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-11-21 09:44:38 +00:00
ManyTheFish
d3575fb028 Make into_del_add_obkv parameters more human readable 2023-11-20 16:10:39 +01:00
ManyTheFish
39cbb499c2 Small fixes 2023-11-20 10:20:39 +01:00
ManyTheFish
ebef6bc24d Simplify documents database writing 2023-11-20 10:14:57 +01:00
ManyTheFish
d59b7db8d0 remove unused code 2023-11-20 10:10:45 +01:00