Commit Graph

9834 Commits

Author SHA1 Message Date
meili-bors[bot]
885b9f07b1
Merge #4949
4949: Fix swedish language support v1.10 r=Kerollmops a=ManyTheFish

# Pull Request

Cherry-picked commits from https://github.com/meilisearch/meilisearch/pull/4945 for v1.10.2


Co-authored-by: ManyTheFish <many@meilisearch.com>
2024-09-23 16:19:53 +00:00
ManyTheFish
7d0f532dba fix tests 2024-09-23 17:13:27 +02:00
meili-bors[bot]
73c1ee65a2
Merge #4951
4951: Update version for the next release (v1.10.2) in Cargo.toml r=dureuill a=ManyTheFish

# Pull Request

Update Meilisearch v1.10.2

Co-authored-by: ManyTheFish <ManyTheFish@users.noreply.github.com>
2024-09-23 14:51:07 +00:00
ManyTheFish
10a150da93 Update Charabia v0.9.1 2024-09-23 07:55:44 +02:00
ManyTheFish
a2451b7683 Update version for the next release (v1.10.2) in Cargo.toml 2024-09-23 05:47:44 +00:00
ManyTheFish
cfa82ab1bb Add Swedish pipeline in all-tokenization feature 2024-09-23 07:30:03 +02:00
meili-bors[bot]
3f517dfae6
Merge #4905
4905: Do not fail the whole batch when a single document deletion by filter fails r=dureuill a=irevoire

# Pull Request

## Related issue
Fixes a small bug introduced by https://github.com/meilisearch/meilisearch/pull/4901 where a document deletion by filter could fail a whole batch of document deletion tasks.

## What does this PR do?
- When a document deletion by filter contains an invalid filter, fails only that task instead of the whole batch
- Adds a big test with multiple document deletions batched together, ensuring everything works well
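
As an illustration of the intended behaviour, here is a minimal sketch (not the actual index-scheduler code; `Task`, `TaskStatus`, and `parse_filter` are hypothetical stand-ins): each deletion-by-filter task validates its own filter, and an invalid filter fails only that task while the rest of the batch keeps going.

```rust
// Hypothetical, simplified model of a batch of document deletion tasks.
#[derive(Debug)]
enum TaskStatus {
    Succeeded { deleted: u64 },
    Failed { error: String },
}

struct Task {
    filter: String,
    status: Option<TaskStatus>,
}

// Stand-in for the real filter parser: rejects empty filters.
fn parse_filter(raw: &str) -> Result<String, String> {
    if raw.trim().is_empty() {
        Err("invalid filter: empty expression".to_string())
    } else {
        Ok(raw.to_string())
    }
}

fn process_batch(tasks: &mut [Task]) {
    for task in tasks.iter_mut() {
        // An invalid filter fails only this task, not the whole batch.
        match parse_filter(&task.filter) {
            Ok(_filter) => task.status = Some(TaskStatus::Succeeded { deleted: 0 }),
            Err(error) => task.status = Some(TaskStatus::Failed { error }),
        }
    }
}

fn main() {
    let mut tasks = vec![
        Task { filter: "id = 1".into(), status: None },
        Task { filter: "".into(), status: None }, // invalid: fails alone
        Task { filter: "id = 3".into(), status: None },
    ];
    process_batch(&mut tasks);
    for task in &tasks {
        println!("{:?}", task.status);
    }
}
```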


Co-authored-by: Tamo <tamo@meilisearch.com>
2024-09-02 09:00:11 +00:00
Tamo
11a8e537ed add the snapshots 2024-09-02 10:53:33 +02:00
Tamo
959aeb0df3 Do not fail the whole batch when a single document deletion by filter fails 2024-09-02 10:34:28 +02:00
meili-bors[bot]
4daa27a8e8
Merge #4901
4901: Autobatch document deletion by filter r=dureuill a=irevoire

# Pull Request

## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/4897

## What does this PR do?
- Enable autobatching of document deletion by filter with:
  - Document deletion by filter
  - Document deletion
  - Document clear
  - Index deletion
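
A minimal sketch of what such an autobatching rule can look like (`TaskKind` and `can_batch_together` are hypothetical; the real scheduler works on richer batch kinds): deletion-flavoured tasks may be grouped together, anything else breaks the batch.

```rust
// Hypothetical, simplified view of an autobatching rule.
#[derive(Clone, Copy, Debug, PartialEq)]
enum TaskKind {
    DocumentDeletion,
    DocumentDeletionByFilter,
    DocumentClear,
    IndexDeletion,
    DocumentAddition,
}

// Can `next` be appended to a batch that currently contains `current` kinds?
fn can_batch_together(current: TaskKind, next: TaskKind) -> bool {
    use TaskKind::*;
    matches!(
        (current, next),
        (
            DocumentDeletion | DocumentDeletionByFilter | DocumentClear | IndexDeletion,
            DocumentDeletion | DocumentDeletionByFilter | DocumentClear | IndexDeletion
        )
    )
}

fn main() {
    assert!(can_batch_together(
        TaskKind::DocumentDeletionByFilter,
        TaskKind::DocumentDeletion
    ));
    assert!(can_batch_together(
        TaskKind::DocumentClear,
        TaskKind::IndexDeletion
    ));
    assert!(!can_batch_together(
        TaskKind::DocumentDeletionByFilter,
        TaskKind::DocumentAddition
    ));
    println!("autobatching rules hold");
}
```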


Co-authored-by: Tamo <tamo@meilisearch.com>
2024-08-29 15:46:16 +00:00
Tamo
34ebcd378e autobatch document deletion by filter 2024-08-29 16:50:05 +02:00
meili-bors[bot]
5fed4d035a
Merge #4899
4899: stop trying to process searches after one minute r=ManyTheFish a=irevoire

# Pull Request

## Related issue
May be related to #4654 and https://github.com/meilisearch/meilisearch-support/issues/350

## What does this PR do?
- If we've been waiting for one whole minute for a search to process, we cancel it
- Ideally we should check whether the connection was closed instead, but that’s not currently possible: https://github.com/actix/actix-web/issues/3462
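
A minimal sketch of the idea using `tokio::time::timeout` (this is not the actual Meilisearch code; `process_search` is a stand-in and the deadline is shortened so the example finishes quickly; assumes the `tokio` crate with the `full` feature):

```rust
use std::time::Duration;
use tokio::time::timeout;

// Stand-in for the actual search work.
async fn process_search(query: &str) -> String {
    // Simulate a search that takes too long to be processed.
    tokio::time::sleep(Duration::from_secs(2)).await;
    format!("results for `{query}`")
}

#[tokio::main]
async fn main() {
    // Give up on the search if it has not been processed after the deadline
    // (one minute in the PR, one second here so the example terminates quickly).
    let deadline = Duration::from_secs(1);
    match timeout(deadline, process_search("hello")).await {
        Ok(results) => println!("{results}"),
        Err(_elapsed) => eprintln!("search cancelled: waited too long to be processed"),
    }
}
```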


Co-authored-by: Tamo <tamo@meilisearch.com>
2024-08-29 07:10:42 +00:00
Tamo
11bee34bf0 stop trying to process searches after one minute 2024-08-28 19:01:54 +02:00
meili-bors[bot]
541a23b17a
Merge #4898
4898: Explicitly drop the search permits r=ManyTheFish a=irevoire

# Pull Request

## Related issue
May be related to #4654 and https://github.com/meilisearch/meilisearch-support/issues/350

## What does this PR do?
- Stop spawning a tokio task that is not immediately scheduled and instead explicitly drop the search permit

This should make new search requests be scheduled more quickly than before and reduce the general load on tokio.
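
A minimal sketch of the change, using a `tokio::sync::Semaphore` permit as a stand-in for the real search permit (`handle_search` is hypothetical; assumes the `tokio` crate): instead of spawning a task whose only job is to release the permit later, the permit is dropped explicitly as soon as the search is done.

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

async fn handle_search(permits: Arc<Semaphore>) {
    // Acquire a search permit (stand-in for the real search-queue permit).
    let permit = permits.clone().acquire_owned().await.expect("semaphore closed");

    // ... run the search ...

    // Before: a tokio task was spawned just to release the permit later,
    // and under load it might not be scheduled immediately.
    // tokio::spawn(async move { drop(permit) });

    // After: drop the permit explicitly so the next search can start right away.
    drop(permit);
}

#[tokio::main]
async fn main() {
    let permits = Arc::new(Semaphore::new(2));
    handle_search(permits.clone()).await;
    println!("available permits: {}", permits.available_permits());
}
```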

Co-authored-by: Tamo <tamo@meilisearch.com>
2024-08-28 13:33:38 +00:00
Tamo
69ab09c149 ensure we never exit early when we have a permit and remove the warning when we implicitly drop a permit 2024-08-28 15:17:10 +02:00
meili-bors[bot]
25534cc794
Merge #4895
4895: Update version for the next release (v1.10.1) in Cargo.toml r=irevoire a=meili-bot

⚠️ This PR is automatically generated. Check the new version is the expected one and Cargo.lock has been updated before merging.

Co-authored-by: irevoire <irevoire@users.noreply.github.com>
2024-08-28 12:52:29 +00:00
irevoire
17d0b10765 Update version for the next release (v1.10.1) in Cargo.toml 2024-08-28 14:51:53 +02:00
Tamo
241746e7f4 add a warning to help us find when we forget to explicitly drop a permit 2024-08-28 14:37:55 +02:00
Tamo
1b90e6ce5f explicitly drop the search permit 2024-08-28 14:29:25 +02:00
meili-bors[bot]
c10d06febc
Merge #4896
4896: Make sure the index scheduler never stops running r=dureuill a=irevoire

# Pull Request

## Related issue
Fixes #4748 for the v1.10.1

I cherry-picked the commits from https://github.com/meilisearch/meilisearch/pull/4861

Co-authored-by: Tamo <tamo@meilisearch.com>
2024-08-28 07:46:05 +00:00
meili-bors[bot]
9eee467226
Merge #4893
4893: Only spawn one search queue in actix-web r=dureuill a=irevoire

# Pull Request

## Related issue
May be related to #4654 and https://github.com/meilisearch/meilisearch-support/issues/350

## What does this PR do?
- We noticed a bug where multiple search queues were spawned instead of one
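
A minimal sketch of the actix-web pattern involved (`SearchQueue` is a hypothetical stand-in, not the actual Meilisearch wiring): the app factory closure passed to `HttpServer::new` runs once per worker, so shared state such as the search queue must be built once, outside the closure, and only cloned inside it.

```rust
use actix_web::{web, App, HttpServer, Responder};

// Stand-in for the real search queue type.
struct SearchQueue {
    capacity: usize,
}

async fn health(queue: web::Data<SearchQueue>) -> impl Responder {
    format!("search queue capacity: {}", queue.capacity)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Build the search queue ONCE, outside the app factory: the factory closure
    // runs once per worker, so anything constructed inside it gets duplicated.
    let search_queue = web::Data::new(SearchQueue { capacity: 1000 });

    HttpServer::new(move || {
        App::new()
            // Only a cheap clone of the shared handle happens per worker.
            .app_data(search_queue.clone())
            .route("/health", web::get().to(health))
    })
    .bind(("127.0.0.1", 7700))?
    .run()
    .await
}
```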


Co-authored-by: Tamo <tamo@meilisearch.com>
2024-08-27 20:26:59 +00:00
Tamo
1cc6ac089b ensure the run function doesn't panic even if the tick function does 2024-08-27 18:26:13 +02:00
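
A minimal sketch of that idea (`Scheduler` is hypothetical; the real index scheduler is more involved): the run loop catches a panic coming from `tick` and keeps looping instead of dying.

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::time::Duration;

struct Scheduler {
    ticks: u32,
}

impl Scheduler {
    // One scheduling round; may panic on a bug.
    fn tick(&mut self) {
        self.ticks += 1;
        if self.ticks == 2 {
            panic!("bug in tick #2");
        }
        println!("tick #{} ok", self.ticks);
    }

    // The run loop must never stop, even when a tick panics.
    fn run(&mut self) {
        for _ in 0..4 {
            if let Err(panic) = catch_unwind(AssertUnwindSafe(|| self.tick())) {
                eprintln!("a tick panicked ({panic:?}); the scheduler keeps running");
            }
            std::thread::sleep(Duration::from_millis(10));
        }
    }
}

fn main() {
    Scheduler { ticks: 0 }.run();
}
```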
Tamo
f85e091cb4 make sure the index scheduler never stops running 2024-08-27 18:26:05 +02:00
Tamo
99fdccdc7e Only spawn one search queue in actix-web 2024-08-27 18:19:06 +02:00
meili-bors[bot]
36d8684dc8
Merge #4881
4881: Infer locales from index settings r=curquiza a=ManyTheFish

# Pull Request

## Related issue
Fixes #4828
Fixes #4816
## What does this PR do?
- Add some test using `AttributesToSearchOn`
- Make the search infer the language based on the index settings when the `locales` field is not specified
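
A minimal sketch of the fallback order (`Locale` and `effective_locales` are hypothetical, not the actual search code): locales from the query win, then the locales declared in the index settings, and only then language autodetection.

```rust
// Hypothetical, simplified fallback: query locales win, otherwise the locales
// declared in the index settings, otherwise language autodetection (None).
#[derive(Clone, Debug, PartialEq)]
enum Locale {
    Swe,
    Eng,
}

fn effective_locales(
    request_locales: Option<&[Locale]>,
    settings_locales: &[Locale],
) -> Option<Vec<Locale>> {
    match request_locales {
        // The query explicitly set `locales`: trust it.
        Some(locales) if !locales.is_empty() => Some(locales.to_vec()),
        // Nothing in the query: infer the locales from the index settings.
        _ if !settings_locales.is_empty() => Some(settings_locales.to_vec()),
        // No information at all: fall back to autodetection.
        _ => None,
    }
}

fn main() {
    let settings = vec![Locale::Swe];
    let request = vec![Locale::Eng];

    assert_eq!(
        effective_locales(Some(request.as_slice()), &settings),
        Some(vec![Locale::Eng])
    );
    assert_eq!(effective_locales(None, &settings), Some(vec![Locale::Swe]));
    assert_eq!(effective_locales(None, &[]), None);
    println!("locales inferred from the index settings when the query omits them");
}
```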


CI is now working:
https://github.com/meilisearch/meilisearch/actions/runs/10490050545/job/29055955667



Co-authored-by: ManyTheFish <many@meilisearch.com>
2024-08-21 14:18:16 +00:00
ManyTheFish
b12e997c8a Add pinyin flag 2024-08-21 14:38:04 +02:00
ManyTheFish
8bf89ec394 Infer locales from index settings 2024-08-21 10:47:40 +02:00
meili-bors[bot]
ee62d9ce30
Merge #4845
4845: Fix perf regression facet strings r=ManyTheFish a=dureuill

Benchmarks between v1.9 and v1.10 show a performance regression of about x2 (+3dB regression) for most indexing workloads (+44s for hackernews).

[Benchmark interpretation in the engine weekly meeting](https://www.notion.so/meilisearch/Engine-weekly-4d49560d374c4a87b4e3d126a261d4a0?pvs=4#98a709683276450295fcfe1f8ea5cef3).

- Initial investigation pointed to #4819 as the origin of the regression.
- Further investigation points towards the hypernormalization of each facet value in `extract_facet_string_docids`
- Most of the slowdown is in `normalize_facet_strings`, and precisely in `detection.language()`.

This PR improves the situation (-10s compared with `main` for hackernews, so only +34s regression compared with `v1.9`) by skipping normalization when it can be skipped.

I'm not sure how to fix the root cause though. Should we skip facet locale normalization for now? Cc `@ManyTheFish` 

---

Tentative resolution options:

1. remove locale normalization from facets. I'm not sure why it is required; I believe we weren't doing this before, so maybe we can stop doing it again.
2. don't do language detection when it can be avoided: this won't help with the regressions in the benchmark, but maybe we can skip language detection when the locales contain only one language?
3. use a faster language detection library: `@Kerollmops` told me about https://github.com/quickwit-oss/whichlang which boasts 10x to 100x the throughput of whatlang. Should we consider replacing whatlang with whichlang? Now I understand whichlang supports fewer languages than whatlang, so I also suggest:
4. use whichlang when the list of locales is empty (autodetection), or when it only contains locales that whichlang can detect. If the list of locales contains locales that whichlang *cannot* detect, **then** use whatlang instead.
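
For reference, a minimal sketch of option 2, which is what the follow-up commit implements ("Only detect language for a facet if several locales have been specified"): language detection is stubbed out here and every name is hypothetical; the point is only that detection runs when more than one configured locale could apply.

```rust
// Hypothetical sketch: normalization of a facet string, with language
// detection (the expensive step) only when it can actually change the result.
#[derive(Clone, Copy)]
enum Locale {
    Eng,
    Swe,
}

// Stand-in for whatlang/charabia language detection: expensive in the hot loop.
fn detect_language(_s: &str) -> Locale {
    Locale::Eng
}

fn normalize_facet_string(raw: &str, locales: &[Locale]) -> String {
    let _locale = match locales {
        // No locale configured: plain normalization, no detection needed.
        [] => None,
        // A single locale: use it directly, detection cannot change it.
        [only] => Some(*only),
        // Several locales: detection is the only way to pick one.
        _ => Some(detect_language(raw)),
    };
    // Locale-aware normalization elided; lowercasing as a placeholder.
    raw.to_lowercase()
}

fn main() {
    println!("{}", normalize_facet_string("Söndag", &[Locale::Swe]));
    println!("{}", normalize_facet_string("Sunday", &[Locale::Eng, Locale::Swe]));
}
```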

---

> [!CAUTION]
> this PR contains a commit that adds detailed spans, which were used to detect which part of `extract_facet_string_docids` was taking too much time. As this commit adds spans that are called too often and adds 7s of overhead, it should be removed before landing.

Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
2024-08-19 06:29:48 +00:00
ManyTheFish
0f965d3574 Remove hotloop's spans 2024-08-14 14:33:36 +02:00
ManyTheFish
ade54493ab Only detect language for a facet if several locales have been specified by the user in the settings 2024-08-14 12:03:52 +02:00
meili-bors[bot]
07c8ed0459
Merge #4864
4864: Don't remove facet value when multiple original values map to the same normalized value r=ManyTheFish a=dureuill

# Pull Request

## Related issue

Fixes #4860 

> [!WARNING]  
> This PR contains a fix to the immediate issue, but it looks like the underlying data model is faulty: there is only one possible "original" value for each normalized value in a facet of a document, whereas, because of array values (or manually written nested fields, if you're evil), it is technically possible to have multiple, distinct original values mapping to the same normalized value.
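
A minimal sketch of the grouping idea behind the fix (a hypothetical `HashMap`-based structure; the real facet databases are LMDB-backed): distinct original values are grouped under their normalized form, and the normalized entry is only removed once its last original value is gone.

```rust
use std::collections::HashMap;

// Hypothetical sketch: several distinct original values ("Go", "GO") can
// normalize to the same key ("go"); removing one of them must not delete the
// normalized entry while another original value still maps to it.
fn normalize(s: &str) -> String {
    s.to_lowercase()
}

fn remove_original(index: &mut HashMap<String, Vec<String>>, original: &str) {
    let key = normalize(original);
    if let Some(originals) = index.get_mut(&key) {
        originals.retain(|o| o != original);
        // Only drop the normalized facet value once no original remains.
        if originals.is_empty() {
            index.remove(&key);
        }
    }
}

fn main() {
    let mut index: HashMap<String, Vec<String>> = HashMap::new();
    for original in ["Go", "GO"] {
        index.entry(normalize(original)).or_default().push(original.to_string());
    }

    remove_original(&mut index, "GO");
    assert!(index.contains_key("go"), "\"Go\" still maps to \"go\"");

    remove_original(&mut index, "Go");
    assert!(!index.contains_key("go"));
    println!("normalized value removed only when its last original is gone");
}
```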

Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2024-08-13 14:04:17 +00:00
Louis Dureuil
c3cdc407ec
Avoid unnecessary clone() 2024-08-08 14:57:02 +02:00
Louis Dureuil
2f10273d14
Group by normalized values; make sure you don't remove a value while there remains at least one value that normalizes to it 2024-08-08 14:02:53 +02:00
meili-bors[bot]
b44e17c4c3
Merge #4858
4858: also intersect the universe for searchOnAttributes r=irevoire a=dureuill

# Pull Request

## Related issue
Fixes #4857 

## What does this PR do?
- intersect with the universe (which does not contain the filtered out ids) when looking up documents for words, even when using `searchOnAttributes`
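
A minimal sketch of that intersection with the `roaring` crate (`word_docids` is a hypothetical stand-in; the real resolver goes through the search pipeline): the docids returned by the attribute-restricted word lookup are intersected with the universe so filtered-out documents cannot reappear.

```rust
use roaring::RoaringBitmap;

// Hypothetical stand-in for the word -> docids lookup used with
// `attributesToSearchOn`: it returns every document containing the word.
fn word_docids(_word: &str) -> RoaringBitmap {
    (0..10u32).collect()
}

fn main() {
    // The universe already excludes documents removed by filters.
    let universe: RoaringBitmap = [0u32, 2, 4, 6, 8].into_iter().collect();

    // Candidates for a word, restricted to the universe: without this
    // intersection, filtered-out documents could reappear in the results.
    let candidates = word_docids("meili") & &universe;

    assert!(candidates.is_subset(&universe));
    println!("{} candidates inside the universe", candidates.len());
}
```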


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2024-08-07 13:15:26 +00:00
Louis Dureuil
e3ef0ae19e
also intersect the universe for searchOnAttributes 2024-08-06 14:06:56 +02:00
meili-bors[bot]
57f7af77c7
Merge #4846
4846: Add OpenAI tests r=dureuill a=dureuill

# Pull Request

## Related issue
Part of fixing #4757 

## What does this PR do?
- OpenAI embedder: don't pass apiKey when it is empty (slightly improves error messages)
- rest embedder and rest-based embedders: specialize the authorization denied error message depending on the configuration source
- fix existing tests
- Adds assets containing prerecorded texts to embed and the embeddings obtained from OpenAI
- Adds an asset containing a tokenized long document and the embedding obtained from OpenAI for this token
- Uses the wiremock crate to mock the OpenAI API: parse the OpenAI request, look up the response in assets, craft an OpenAI response
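
A minimal sketch of the wiremock setup (not the actual test code; the JSON shape is abbreviated and the plain HTTP client call stands in for the embedder; assumes the `wiremock`, `tokio`, `serde_json`, and `reqwest` crates, the latter with its `json` feature):

```rust
use serde_json::json;
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

#[tokio::main]
async fn main() {
    // Start a local mock server standing in for api.openai.com.
    let server = MockServer::start().await;

    // Any POST to /v1/embeddings gets a canned, prerecorded-style response.
    Mock::given(method("POST"))
        .and(path("/v1/embeddings"))
        .respond_with(ResponseTemplate::new(200).set_body_json(json!({
            "object": "list",
            "data": [{ "object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3] }],
            "model": "text-embedding-3-small"
        })))
        .mount(&server)
        .await;

    // The embedder under test would be pointed at `server.uri()` instead of OpenAI.
    let response = reqwest::Client::new()
        .post(format!("{}/v1/embeddings", server.uri()))
        .json(&json!({ "input": "kitchen", "model": "text-embedding-3-small" }))
        .send()
        .await
        .unwrap();

    assert_eq!(response.status(), 200);
    println!("mocked embedding: {}", response.text().await.unwrap());
}
```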


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2024-08-05 10:49:28 +00:00
meili-bors[bot]
c817718e07
Merge #4853
4853: Fix rhai deletion r=irevoire a=dureuill

# Pull Request

## Related issue
Fixes #4849 

## What does this PR do?
- insert into the bitmap instead of pushing into it.
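
For context, a minimal illustration with the `roaring` crate of why `insert` is needed here: `RoaringBitmap::push` only appends values strictly greater than the current maximum, so docids arriving out of order are rejected, while `insert` accepts them in any order.

```rust
use roaring::RoaringBitmap;

fn main() {
    // `push` only appends values strictly greater than the current maximum:
    // an out-of-order docid is rejected, and silently lost if the returned
    // bool is ignored.
    let mut pushed = RoaringBitmap::new();
    assert!(pushed.push(10));
    assert!(!pushed.push(3)); // 3 < 10: not added
    assert_eq!(pushed.len(), 1);

    // `insert` accepts docids in any order.
    let mut inserted = RoaringBitmap::new();
    inserted.insert(10);
    inserted.insert(3);
    assert_eq!(inserted.len(), 2);

    println!("push kept {} docids, insert kept {}", pushed.len(), inserted.len());
}
```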


Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2024-08-01 16:34:31 +00:00
Louis Dureuil
e64d0e0ca8
use insert instead of push for bitmaps 2024-08-01 18:32:45 +02:00
Louis Dureuil
21aa430b5e
Fix openai tests 2024-07-31 17:57:55 +02:00
Louis Dureuil
8535dc0be2
Fix existing tests 2024-07-31 17:57:32 +02:00
Louis Dureuil
72b9005344
Redact uid for Value 2024-07-31 17:57:13 +02:00
meili-bors[bot]
420c33132c
Merge #4850
4850: Use a fixed date format regardless of features r=irevoire a=dureuill

# Pull Request

## Related issue
Fixes #4844 

## What does this PR do?

Given the following script: 
```
cargo run -- --db-path meili.ms
sleep 3
curl -s -X POST http://127.0.0.1:7700/indexes -H 'Content-Type: application/json' --data-binary '{"uid": "movies", "primaryKey": "id"}'
sleep 3
cargo run -p meilisearch -- --db-path meili.ms
sleep 3
curl -s -X POST http://127.0.0.1:7700/indexes/movies/search -H 'Content-Type: application/json' --data-binary '{}'
```

- Before this PR, the final search returns a decoding error.
- After this PR, the search completes successfully

### Technical standpoint

This PR fixes two locations where the formatting of dates was dependent on the feature set of the `time` crate.

1. The `IndexStats` had two fields without the serialization format specified
2. More subtly, the index dates (`createdAt`, `updatedAt`) were using value remapping in the main DB to `SerdeJson<OffsetDateTime>`, which was using whatever default format was available. This was fixed by creating a local `OffsetDateTime` wrapper that specifies the serialization format.
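
A minimal sketch of pinning the format with the `time` crate's serde helpers (the `IndexDates` struct is hypothetical; assumes `time` with the `serde-well-known`, `formatting`, and `parsing` features, plus `serde` with derive and `serde_json`):

```rust
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;

// Hypothetical stand-in for the index dates: the serde format is pinned to
// RFC 3339 instead of whatever the enabled `time` features default to.
#[derive(Serialize, Deserialize, Debug)]
struct IndexDates {
    #[serde(with = "time::serde::rfc3339")]
    created_at: OffsetDateTime,
    #[serde(with = "time::serde::rfc3339")]
    updated_at: OffsetDateTime,
}

fn main() {
    let dates = IndexDates {
        created_at: OffsetDateTime::UNIX_EPOCH,
        updated_at: OffsetDateTime::UNIX_EPOCH,
    };

    // The stored representation is now stable across builds and feature sets.
    let stored = serde_json::to_string(&dates).unwrap();
    println!("{stored}");

    let decoded: IndexDates = serde_json::from_str(&stored).unwrap();
    println!("{decoded:?}");
}
```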

Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2024-07-31 15:32:26 +00:00
Louis Dureuil
9ef710cad4
Use wrapper that forces the desired date format 2024-07-31 17:12:19 +02:00
Louis Dureuil
48f7329a83
Specify index_mapper on IndexStats 2024-07-31 17:11:28 +02:00
Louis Dureuil
ab1ec9ca21
Add tokenized test 2024-07-31 15:03:45 +02:00
Louis Dureuil
9d6efd92d2
new assets for tokenized test 2024-07-31 15:03:45 +02:00
Louis Dureuil
abdb337fd6
Add openai tests 2024-07-31 15:03:45 +02:00
Louis Dureuil
1c755c8899
Add openai responses 2024-07-31 15:03:45 +02:00
Louis Dureuil
3a42c3134e
update tests after changing authorized error message 2024-07-31 15:03:45 +02:00
Louis Dureuil
5aa6cb3600
Specialize authorized error message depending on config source 2024-07-31 15:03:44 +02:00