Commit Graph

213 Commits

Author SHA1 Message Date
Louis Dureuil
4564a38ae7
Bail earlier when the experimental feature is not enabled 2024-04-04 15:58:19 +02:00
Louis Dureuil
6ebb6b55a6
Lazily embed, don't fail hybrid search on embedding failure 2024-04-04 15:58:17 +02:00
meili-bors[bot]
78668584cd
Merge #4533
4533: Hide api key in settings and task queue r=dureuill a=dureuill

# Pull Request

See [Usage page](https://meilisearch.notion.site/v1-8-AI-search-API-usage-135552d6e85a4a52bc7109be82aeca42#117f5ff7b19f4d95bb3ae0005f6c6633)

## Motivation

See [slack discussion (internal link)](https://meilisearch.slack.com/archives/C06GQP7FQ6P/p1709804022298749)


## Changes

- The value of the `apiKey` parameter is now hidden in the settings and the details of the task queue.
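As a rough illustration of the idea only (this is not the actual Meilisearch code, and the exact placeholder may differ), a minimal Rust sketch of redacting a secret before echoing it back to clients:

```rust
/// Minimal sketch: the secret is stored untouched, but any value echoed back
/// through the settings or tasks routes is replaced by a fixed placeholder.
fn hide_secret(api_key: &str) -> String {
    if api_key.is_empty() {
        String::new()
    } else {
        // A constant placeholder also avoids leaking the length of the key.
        "XXX...".to_string()
    }
}

fn main() {
    let stored = "a-very-secret-api-key";
    println!("value kept internally: {stored}");
    println!("value shown in settings/tasks: {}", hide_secret(stored));
}
```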

Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2024-03-28 16:02:53 +00:00
meili-bors[bot]
fa9748cc99
Merge #4536
4536: Limit concurrent search requests r=ManyTheFish a=irevoire

# Pull Request

## Related issue
Fixes https://github.com/meilisearch/meilisearch/issues/4489

## What does this PR do?
- Adds a « search queue » that limits the number of search requests we can process at the same time and stores the pending requests until they can be processed
- Processes only one search request per core/thread (we use `available_parallelism`)
- When the search queue is full, new search requests replace old ones **randomly** (a minimal sketch of this policy follows the list). The reasons are:
  - If we serve the oldest one first, like Typesense, we give the worst performance to everyone
  - If we serve the latest one, it becomes too easy to DoS us (you just need to fill the queue with as many search requests as we can process simultaneously to ensure no other request is ever processed)
  - By picking a search request randomly, recent search requests still get a chance to be processed, while ensuring we can't be taken down unless attackers fill our queue entirely, at which point we start returning 5xx errors
- Adds an experimental parameter to control the size of the queue
- Adds a bunch of tests to ensure the search queue works correctly
- Ensures, via the health route, that the loop consuming the search queue is running, and crashes if it isn't
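For illustration only, and not the code in this PR (the type and function names here are made up, and the real queue hands out permits asynchronously), a minimal Rust sketch of the random-replacement policy using the `rand` crate:

```rust
use std::collections::VecDeque;

use rand::Rng;

/// Minimal sketch of the policy: a bounded queue that replaces a random
/// pending request when it is full.
struct SearchQueue<T> {
    capacity: usize,
    pending: VecDeque<T>,
}

impl<T> SearchQueue<T> {
    fn new(capacity: usize) -> Self {
        Self { capacity, pending: VecDeque::with_capacity(capacity) }
    }

    /// Enqueues a request; when the queue is full, a random queued request
    /// is evicted (and would get a 5xx) rather than the oldest or the newest.
    fn push(&mut self, request: T) {
        if self.pending.len() < self.capacity {
            self.pending.push_back(request);
        } else {
            let victim = rand::thread_rng().gen_range(0..self.pending.len());
            self.pending[victim] = request;
        }
    }

    /// Workers (one per core, as with `std::thread::available_parallelism`)
    /// would pop requests from here.
    fn pop(&mut self) -> Option<T> {
        self.pending.pop_front()
    }
}

fn main() {
    // Hypothetical queue size; in the PR this is an experimental parameter.
    let mut queue = SearchQueue::new(4);
    for request_id in 0..10 {
        queue.push(request_id);
    }
    while let Some(request_id) = queue.pop() {
        println!("processing request {request_id}");
    }
}
```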

Co-authored-by: Tamo <tamo@meilisearch.com>
2024-03-28 15:01:52 +00:00
Tamo
b7c582e4f3 connect the search queue with the health route 2024-03-27 15:49:43 +01:00
Tamo
e433fd53e6 rename the method to get a permit and use it in all search requests 2024-03-26 17:28:03 +01:00
Louis Dureuil
f82d056072
Hide secrets in settings and task queue 2024-03-26 10:36:24 +01:00
Louis Dureuil
dfa5e41ea6
Check validity of the URL setting 2024-03-25 11:23:16 +01:00
Louis Dureuil
f649f58013
embed no longer async 2024-03-25 11:23:03 +01:00
Tamo
4369e9e97c add an error code test on the setting 2024-03-19 11:14:28 +01:00
Tamo
7bd881b9bc adds the degraded searches to the prometheus dashboard 2024-03-19 10:35:47 +01:00
Tamo
6a0c399c2f rename the search_cutoff parameter to search_cutoff_ms 2024-03-19 10:35:47 +01:00
Tamo
b72495eb58 fix the settings tests 2024-03-19 10:28:23 +01:00
Tamo
d1db495119 add a setting for the search cutoff 2024-03-19 10:28:23 +01:00
meili-bors[bot]
5ed7b6a0b2
Merge #4456
4456: Add Ollama as an embeddings provider r=dureuill a=jakobklemm

# Pull Request

## Related issue
[Related Discord Thread](https://discord.com/channels/1006923006964154428/1211977150316683305)

## What does this PR do?
- Adds Ollama as a provider of embeddings, besides HuggingFace and OpenAI, under the name `ollama`
- Adds the environment variable `MEILI_OLLAMA_URL` to set the embeddings URL of an Ollama instance, with a default value of `http://localhost:11434/api/embeddings` if the variable is not set
- Changes some of the structs and functions in `openai.rs` to be public so that they can be shared
- Adds more error variants for Ollama-specific errors
- Uses the model `nomic-embed-text` by default; any string value is allowed, but it won't automatically check that the model actually exists or is an embedding model

Tested against Ollama version `v0.1.27` and the `nomic-embed-text` model.
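Purely as an illustration (not Meilisearch code), and assuming the default endpoint above together with Ollama's `/api/embeddings` request shape using `model` and `prompt` fields, a standalone request for an embedding could look like this in Rust with `reqwest` and `serde_json`:

```rust
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Default taken from the PR description; override with MEILI_OLLAMA_URL.
    let url = std::env::var("MEILI_OLLAMA_URL")
        .unwrap_or_else(|_| "http://localhost:11434/api/embeddings".to_string());

    // `nomic-embed-text` is the default model mentioned above; any string is
    // accepted, but its existence is not checked.
    let body = json!({
        "model": "nomic-embed-text",
        "prompt": "a document to embed",
    });

    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post(&url)
        .json(&body)
        .send()?
        .error_for_status()?
        .json()?;

    // The embedding vector is expected under the "embedding" key.
    let dimensions = response["embedding"].as_array().map_or(0, |v| v.len());
    println!("received an embedding with {dimensions} dimensions");
    Ok(())
}
```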

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Co-authored-by: Jakob Klemm <jakob@jeykey.net>
Co-authored-by: Louis Dureuil <louis.dureuil@gmail.com>
2024-03-13 08:48:47 +00:00
Louis Dureuil
c608b3f9b5
Factor vergen stuff to a build-info crate 2024-03-05 10:11:43 +01:00
Jakob Klemm
d3004d8040
Implemented Ollama as an embeddings provider
Initial prototype of Ollama embeddings actually working; error handling / retries still missing.

Allow model to be any String and require dimensions parameter

Fixed rustfmt formatting issues

There were some formatting issues in the initial PR, and this should now make the changes comply with the Rust style guidelines

Because I accidentally didn't follow the style guide for commit messages, I squashed the commits into one to comply
2024-03-04 15:09:43 +01:00
Louis Dureuil
452a343a2b
Fix imports 2024-02-28 18:09:40 +01:00
Tamo
0562818c2a fix and remove the file-store hack of /dev/null 2024-02-26 13:59:41 +01:00
Tamo
bbf3fb88ca rename the cli parameter 2024-02-26 13:59:40 +01:00
Tamo
36c27a18a1 implement the dry run HA parameter 2024-02-26 13:58:04 +01:00
Tamo
507739bd98 add an experimental cli parameter to allow specifying your task id 2024-02-26 13:58:03 +01:00
Tamo
eb25b07390 let you specify your task id 2024-02-26 13:56:31 +01:00
Tamo
eb90f0b4fb fix and remove the file-store hack of /dev/null 2024-02-26 10:19:07 +01:00
Tamo
693ba8dd15 rename the cli parameter 2024-02-21 14:33:40 +01:00
Tamo
05ae291989 implement the dry run HA parameter 2024-02-21 11:21:26 +01:00
Tamo
01ae46dd80 add an experimental cli parameter to allow specifying your task id 2024-02-20 11:24:44 +01:00
Tamo
9ee4f55e6c let you specify your task id 2024-02-19 14:29:33 +01:00
Tamo
a081da0d90 add support for the json format in the stream route 2024-02-14 15:34:39 +01:00
Tamo
3b6544db6d Implement the experimental log mode cli flag 2024-02-13 18:09:15 +01:00
Tamo
285aa15d2f
make the mode camelCase instead of lowercase 2024-02-08 15:04:06 +01:00
Tamo
2c88131bb1
rename the fmt mode to human 2024-02-08 15:04:06 +01:00
Tamo
35aa9d5904
fix an error message 2024-02-08 15:04:06 +01:00
Tamo
1502382316
use debug instead of debug_span 2024-02-08 15:04:06 +01:00
Louis Dureuil
1b74010e9e
Remove "with_line_numbers" 2024-02-08 15:04:06 +01:00
Tamo
08af0e690c
Structures a bunch of logs 2024-02-08 15:04:06 +01:00
Louis Dureuil
91eb67e981
logs route: make memory profiling toggling usable 2024-02-08 15:04:05 +01:00
Tamo
7ff722b72e
get rid of the log dependencies everywhere 2024-02-08 15:04:05 +01:00
Tamo
bcf7909bba
add a profile_memory parameter disabled by default 2024-02-08 15:04:05 +01:00
Tamo
ceb211c515
move the /logs route to the /logs/stream route 2024-02-08 15:04:05 +01:00
Louis Dureuil
661baa716b
logs route profile mode: don't barf bytes if the buffer is not empty 2024-02-08 15:04:05 +01:00
Clément Renault
b393823f36
Replace stats_alloc with procfs 2024-02-08 15:04:05 +01:00
Tamo
f158e96fe7
fix the auth 2024-02-08 15:04:05 +01:00
Tamo
7793ba67a4
hide the route logs behind a feature flag 2024-02-08 15:03:33 +01:00
Tamo
80774148fd
handle and test errors 2024-02-08 15:03:33 +01:00
Louis Dureuil
38e1c40f38
meilisearch: logs route disconnects in profile mode 2024-02-08 15:03:33 +01:00
Louis Dureuil
91a8f74763
Add cancel log route 2024-02-08 15:03:32 +01:00
Tamo
abaa72e2bf
start handling reloads with profiling 2024-02-08 15:03:32 +01:00
Tamo
3c3a258a22
start exposing the profiling layer 2024-02-08 15:03:32 +01:00
Louis Dureuil
73e66d5a97
Add dummy log when calling tasks 2024-02-08 15:03:32 +01:00