455 Commits

Author SHA1 Message Date
Tamo
4391cba6ca
fix the addition + deletion bug 2023-05-17 18:28:57 +02:00
Tamo
d7ddf4925e
Revert "Disable autobatching of additions and deletions"
This reverts commit a94e78ffb051193ece752a9dd19858a05922f706.
2023-05-17 14:25:50 +02:00
Tamo
96da5130a4
fix the error code in case of non-filterable attributes on the get / delete documents by filter routes 2023-05-16 13:56:18 +02:00
Clément Renault
13f870e993
Fix typos and documentation issues 2023-05-15 15:11:45 +02:00
Kerollmops
f759ec7fad
Expose a flag to enable the MDB_WRITEMAP flag 2023-05-15 11:38:43 +02:00
Kerollmops
c4a40e7110
Use the writemap flag to reduce the memory usage 2023-05-15 10:15:33 +02:00
meili-bors[bot]
a95128df6b
Merge #3550
3550: Delete documents by filter r=irevoire a=dureuill

# Prototype `prototype-delete-by-filter-0`

Usage:
A new route is available under `POST /indexes/{index_uid}/documents/delete` that allows you to delete your documents by filter.
The expected payload looks like this:
```json
{
  "filter": "doggo = bernese",
}
```

It'll then enqueue a task in your task queue that'll delete all the documents matching this filter once it's processed.
Here is an example of the associated details:
```json
  "details": {
    "deletedDocuments": 53,
    "originalFilter": "\"doggo = bernese\""
  }
```
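
For reference, here is a minimal client-side sketch of calling this prototype route, assuming a local instance at `http://localhost:7700` with no API key, a hypothetical index named `dogs`, and the `reqwest` (blocking + json features) and `serde_json` crates; it is illustrative only and not part of this PR:

```rust
// Hypothetical client sketch: enqueue a delete-by-filter task.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let response = client
        // `dogs` is a placeholder index uid.
        .post("http://localhost:7700/indexes/dogs/documents/delete")
        .json(&json!({ "filter": "doggo = bernese" }))
        .send()?;

    // The route answers with a task summary (taskUid, status = "enqueued", ...).
    println!("{}", response.text()?);
    Ok(())
}
```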

----------


# Pull Request

## Related issue
Related to https://github.com/meilisearch/meilisearch/issues/3477

## What does this PR do?

### User standpoint

- Modifies the `/indexes/{:indexUid}/documents/delete-batch` route to accept either the existing array of document ids, or a JSON object with a `filter` field representing a filter to apply. If the latter variant is used, any document matching the filter will be deleted.

### Implementation standpoint

- (processing-time version) Adds a new BatchKind that is not autobatchable and that performs the delete by filter
- Reuses the `documentDeletion` task with a new `originalFilter` detail that replaces the `providedIds` detail.

## Example

<details>
<summary>Sample request, response and task result</summary>

Request:

```
curl \
  -X POST 'http://localhost:7700/indexes/index-10/documents/delete-batch' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "filter" : "mass = 600"}'
```

Response:

```
{
  "taskUid": 3902,
  "indexUid": "index-10",
  "status": "enqueued",
  "type": "documentDeletion",
  "enqueuedAt": "2023-02-28T20:50:31.667502Z"
}
```

Task log:

```json
    {
      "uid": 3906,
      "indexUid": "index-12",
      "status": "succeeded",
      "type": "documentDeletion",
      "canceledBy": null,
      "details": {
        "deletedDocuments": 3,
        "originalFilter": "\"mass = 600\""
      },
      "error": null,
      "duration": "PT0.001819S",
      "enqueuedAt": "2023-03-07T08:57:20.11387Z",
      "startedAt": "2023-03-07T08:57:20.115895Z",
      "finishedAt": "2023-03-07T08:57:20.117714Z"
    }
```

</details>

## Draft status

- [ ] Error handling
- [ ] Analytics
- [ ] Do we want to reuse the `delete-batch` route in this way, or create a new route instead?
- [ ] Should the filter be applied at request time or when the deletion task is processed? 
  - The first commit in this PR applies the filter at request time, meaning that even if a document is modified in a way that no longer matches the filter in a later update, it will be deleted as long as the deletion task is processed after that update. 
  - The other commits in this PR apply the filter only when the asynchronous deletion task is processed, meaning that documents that match the filter at processing time are deleted even if they didn't match the filter at request time.
- [ ] If keeping the filter at request time, find a more elegant way to recover the user document ids from the internal document ids. The current way implemented in the first commit of this PR involves getting all the documents matching the filter, looking for the value of their primary key, and turning it into a string by copy-pasting routines found in milli...
- [ ] Security consideration, if any
- [ ] Fix the tests (but waiting until product questions are resolved)
- [ ] Add delete by filter specific tests



Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
2023-05-04 10:44:41 +00:00
meili-bors[bot]
da220294f6
Merge #3639
3639: Add a dedicated error variant for planned failures in index scheduler tests r=Kerollmops a=Sufflope

# Pull Request

## Related issue
Fixes #3086

## What does this PR do?
- Add a dedicated test variant in test cfg to avoid reusing a misleading existing error

## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


Co-authored-by: Jean-Sébastien Bour <jean-sebastien@bour.name>
2023-05-04 09:33:57 +00:00
Louis Dureuil
d8381eb790
Fix originalFilter 2023-05-04 10:07:59 +02:00
Louis Dureuil
b212aef5db
add one nanosecond to the generated filter so as to generate a filter that would have matched the last task to delete 2023-05-04 09:56:48 +02:00
Louis Dureuil
52ab114f6c
Fix test on macOS: 50 tasks would result in the test consistently failing on a local macOS 2023-05-04 00:06:49 +02:00
Tamo
dcbfecf42c
make the generated filter valid 2023-05-04 00:06:49 +02:00
Tamo
9ca6f59546
Update index-scheduler/src/lib.rs
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-05-04 00:06:49 +02:00
Tamo
aa7537a11e
make the autodeletion work with a fixed number of tasks and update the tests 2023-05-04 00:06:49 +02:00
Tamo
972bb2831c
log when meilisearch needs to delete tasks 2023-05-04 00:06:49 +02:00
Tamo
f9ddd32545
implement the auto-deletion of tasks 2023-05-04 00:06:49 +02:00
Tamo
0f0cd2d929
handle the array of array form of filter in the dumps 2023-05-03 17:41:50 +02:00
Tamo
6df2ba93a9
remove one useless txn 2023-05-03 17:41:49 +02:00
Louis Dureuil
3680a6bf1e
extract impl to a function 2023-05-03 17:41:49 +02:00
Louis Dureuil
732c52093d
Processing time without autobatching implementation 2023-05-03 17:41:48 +02:00
Jean-Sébastien Bour
d09b771bce
Add a dedicated error variant for planned failures in index scheduler tests
Fixes #3086
2023-05-02 14:37:20 +02:00
Tamo
0b2200e6e7
remove the unused snapshot files 2023-04-25 17:55:27 +02:00
bors[bot]
654a3a9e19
Merge #3688
3688: Following release v1.1.1: bring back changes into `main` r=curquiza a=curquiza

`@meilisearch/engine-team` ensure the changes we bring to `main` are the ones you want

Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: dureuill <dureuill@users.noreply.github.com>
2023-04-24 11:38:23 +00:00
Louis Dureuil
fd583501d7
Use non_free_pages_size instead of real_disk_size to check task db space taken 2023-04-13 17:07:44 +02:00
bors[bot]
f9960be115
Merge #3659
3659: stops receiving tasks once the task queue is full r=Kerollmops a=irevoire

Give 20GiB to the task queue + once 50% of the task queue is used, it blocks itself and only receives task deletion requests to ensure we never get in a state where we can’t do anything.

Also, create a new error message when we reach this case:
```
Meilisearch cannot receive write operations because the size limit of the tasks database has been reached. Please delete tasks to continue performing write operations.
```
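
As a rough sketch of the guard described above (illustrative only, with made-up names, not the actual scheduler code), registration boils down to refusing any write task other than a task deletion once half of the 20GiB budget is used:

```rust
// Illustrative sketch of the guard; constants and names are not Meilisearch's.
const TASK_DB_SIZE: u64 = 20 * 1024 * 1024 * 1024; // 20 GiB budget

#[derive(PartialEq)]
enum TaskKind {
    TaskDeletion,
    OtherWrite,
}

fn accept_task(used_bytes: u64, kind: &TaskKind) -> Result<(), String> {
    // Once 50% of the budget is used, only task deletions (which free space)
    // are still accepted.
    if used_bytes * 2 >= TASK_DB_SIZE && *kind != TaskKind::TaskDeletion {
        return Err(
            "Meilisearch cannot receive write operations because the size limit of the tasks \
             database has been reached. Please delete tasks to continue performing write operations."
                .to_string(),
        );
    }
    Ok(())
}
```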

Co-authored-by: Tamo <tamo@meilisearch.com>
2023-04-13 09:11:12 +00:00
Tamo
b4fabce36d
update the error message + update the task db size to 20GiB with a limit at 50% 2023-04-12 18:54:11 +02:00
Tamo
be69ab320d
stops receiving tasks once the task queue is full 2023-04-12 18:54:11 +02:00
Louis Dureuil
a94e78ffb0
Disable autobatching of additions and deletions 2023-04-12 10:53:00 +02:00
Tamo
4d308d5237 Improve the health route by ensuring LMDB is not down
And slightly refactor the auth controller.
2023-04-06 15:31:42 +02:00
Tamo
597d57bf1d Merge branch 'main' into bring-back-changes-v1.1.0 2023-04-05 11:32:14 +02:00
Tamo
3fb67f94f7 Reduce the time to import a dump by caching some data
With this commit, for a dump containing 1M tasks we went from 1m02s to 6s
2023-03-29 14:44:15 +02:00
Tamo
cf5145b542 Reduce the time to import a dump
With this commit, for a dump containing 1M tasks, the time to import the task queue went from 3m36s down to 1m02s
2023-03-29 14:27:40 +02:00
Tamo
a2b151e877 ensure that the task queue is correctly imported
reduce the size of the snapshots file
2023-03-21 14:41:46 +01:00
bors[bot]
667bb87e35
Merge #3541
3541: Add cache on the indexes stats r=dureuill a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/3540

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-03-09 13:32:52 +00:00
Louis Dureuil
7faa9a22f6
Pass IndexStat by ref in store_stats_of 2023-03-07 14:00:54 +01:00
Louis Dureuil
76288fad72 Fix snapshots 2023-03-06 16:57:31 +01:00
Louis Dureuil
076a3d371c Eagerly compute stats as fallback to the cache.
- Refactor all around to avoid spawning indexes more times than necessary
2023-03-06 16:57:31 +01:00
Tamo
3bbf760542 update most snapshots 2023-03-06 16:57:31 +01:00
Tamo
fd5c48941a Add cache on the indexes stats 2023-03-06 16:57:31 +01:00
Tamo
e704728ee7 fix the snapshots permissions on unix systems 2023-03-06 16:28:40 +01:00
Louis Dureuil
0202ff8ab4
Attempt to use default budget for faster startup 2023-02-28 10:55:43 +01:00
Louis Dureuil
71e7900c67
move index_map to file 2023-02-23 11:29:11 +01:00
Louis Dureuil
431782f3ee
Move index_mapper to mod.rs 2023-02-23 11:29:11 +01:00
Louis Dureuil
3db613ff77
Don't iterate all indexes manually 2023-02-23 11:29:09 +01:00
Louis Dureuil
5822764be9
Skip computing index budget in tests 2023-02-23 11:23:39 +01:00
Louis Dureuil
a529bf160c
Compute budget 2023-02-23 11:23:39 +01:00
Louis Dureuil
f1119f2dc2
Add dichotomic search to utils 2023-02-23 11:23:39 +01:00
Louis Dureuil
1db7d5d851
Add basic tests for index eviction and resize 2023-02-23 11:23:39 +01:00
Louis Dureuil
80b060f920
Use LRU cache 2023-02-23 11:23:39 +01:00
Louis Dureuil
fdf043580c
Add LruMap 2023-02-23 11:23:38 +01:00
Louis Dureuil
42577403d8
Authentication: Directly pass the authfilter to the index scheduler 2023-02-22 16:35:52 +01:00
bors[bot]
b08a49a16e
Merge #3319 #3470
3319: Transparently resize indexes on MaxDatabaseSizeReached errors r=Kerollmops a=dureuill

# Pull Request

## Related issue
Related to https://github.com/meilisearch/meilisearch/discussions/3280, depends on https://github.com/meilisearch/milli/pull/760

## What does this PR do?

### User standpoint

- Meilisearch no longer fails tasks that encounter the `milli::UserError(MaxDatabaseSizeReached)` error.
- Instead, these tasks are retried after increasing the maximum size allocated to the index where the failure occurred.

### Implementation standpoint

- Add `Batch::index_uid` to get the `index_uid` of a batch of tasks if there is one
- `IndexMapper::create_or_open_index` now takes an additional `size` argument that allows (re)opening indexes with a size different from the base `IndexScheduler::index_size` field
- `IndexScheduler::tick` now returns a `Result<TickOutcome>` instead of a `Result<usize>`. This offers more explicit control over what the behavior should be wrt the next tick.
- Add `IndexStatus::BeingResized` that contains a handle that a thread can use to wait for the resize operation to complete and the index to be available again.
- Add `IndexMapper::resize_index` to increase the size of an index.
- In `IndexScheduler::tick`, intercept task batches that failed due to `MaxDatabaseSizeReached` and resize the index that caused the error, then request a new tick that will eventually handle the still enqueued task.
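
For readers unfamiliar with the scheduler internals, the `TickOutcome` mentioned above can be pictured roughly as follows; the variant names are an assumption for illustration and may not match the PR exactly:

```rust
// Rough, assumed shape of the TickOutcome idea described above.
enum TickOutcome {
    /// A batch was handled (`processed` tasks); tick again right away, e.g. to
    /// retry tasks re-enqueued after an index resize.
    TickAgain { processed: u64 },
    /// Nothing left to do; wait for a new task to be registered.
    WaitForSignal,
}
```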

## Testing the PR

The following diff can be applied to this branch to make testing the PR easier:

<details>


```diff
diff --git a/index-scheduler/src/index_mapper.rs b/index-scheduler/src/index_mapper.rs
index 553ab45a..022b2f00 100644
--- a/index-scheduler/src/index_mapper.rs
+++ b/index-scheduler/src/index_mapper.rs
@@ -228,13 +228,15 @@ impl IndexMapper {
 
         drop(lock);
 
+        std::thread::sleep_ms(2000);
+
         let current_size = index.map_size()?;
         let closing_event = index.prepare_for_closing();
-        log::info!("Resizing index {} from {} to {} bytes", name, current_size, current_size * 2);
+        log::error!("Resizing index {} from {} to {} bytes", name, current_size, current_size * 2);
 
         closing_event.wait();
 
-        log::info!("Resized index {} from {} to {} bytes", name, current_size, current_size * 2);
+        log::error!("Resized index {} from {} to {} bytes", name, current_size, current_size * 2);
 
         let index_path = self.base_path.join(uuid.to_string());
         let index = self.create_or_open_index(&index_path, None, 2 * current_size)?;
@@ -268,8 +270,10 @@ impl IndexMapper {
             match index {
                 Some(Available(index)) => break index,
                 Some(BeingResized(ref resize_operation)) => {
+                    log::error!("waiting for resize end");
                     // Deadlock: no lock taken while doing this operation.
                     resize_operation.wait();
+                    log::error!("trying our luck again!");
                     continue;
                 }
                 Some(BeingDeleted) => return Err(Error::IndexNotFound(name.to_string())),
diff --git a/index-scheduler/src/lib.rs b/index-scheduler/src/lib.rs
index 11b17d05..242dc095 100644
--- a/index-scheduler/src/lib.rs
+++ b/index-scheduler/src/lib.rs
@@ -908,6 +908,7 @@ impl IndexScheduler {
     ///
     /// Returns the number of processed tasks.
     fn tick(&self) -> Result<TickOutcome> {
+        log::error!("ticking!");
         #[cfg(test)]
         {
             *self.run_loop_iteration.write().unwrap() += 1;
diff --git a/meilisearch/src/main.rs b/meilisearch/src/main.rs
index 050c825a..63f312f6 100644
--- a/meilisearch/src/main.rs
+++ b/meilisearch/src/main.rs
@@ -25,7 +25,7 @@ fn setup(opt: &Opt) -> anyhow::Result<()> {
 
 #[actix_web::main]
 async fn main() -> anyhow::Result<()> {
-    let (opt, config_read_from) = Opt::try_build()?;
+    let (mut opt, config_read_from) = Opt::try_build()?;
 
     setup(&opt)?;
 
@@ -56,6 +56,8 @@ We generated a secure master key for you (you can safely copy this token):
         _ => (),
     }
 
+    opt.max_index_size = byte_unit::Byte::from_str("1MB").unwrap();
+
     let (index_scheduler, auth_controller) = setup_meilisearch(&opt)?;
 
     #[cfg(all(not(debug_assertions), feature = "analytics"))]
```
</details>

Mainly, these debug changes do the following:

- Set the default index size to 1MiB so that index resizes are initially frequent
- Turn some logs from info to error so that they can be displayed with `--log-level ERROR` (hiding the other infos)
- Add a long sleep between the beginning and the end of the resize so that we can observe the `BeingResized` index status (otherwise it would never come up in my tests)

## Open questions

- Is the growth factor of x2 the correct solution? For a `Vec` in memory it makes sense, but here we're manipulating quantities that are potentially on the order of 500GiB. For bigger indexes it may make more sense to add at most e.g. 100GiB on each resize operation, avoiding big steps like 500GiB -> 1TiB (see the sketch below).
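
A minimal sketch comparing the two growth policies weighed in this question (plain doubling versus capping each growth step at 100GiB); neither snippet is code from this PR:

```rust
// Illustrative comparison only, not the PR's implementation.
const GIB: u64 = 1024 * 1024 * 1024;
const MAX_STEP: u64 = 100 * GIB; // cap each resize step at 100 GiB

/// Plain doubling, as the PR currently does.
fn next_size_doubling(current: u64) -> u64 {
    current * 2
}

/// Doubling, but never growing by more than 100 GiB at once.
fn next_size_capped(current: u64) -> u64 {
    current + current.min(MAX_STEP)
}

fn main() {
    let index_size = 500 * GIB;
    assert_eq!(next_size_doubling(index_size), 1000 * GIB); // 500 GiB -> ~1 TiB
    assert_eq!(next_size_capped(index_size), 600 * GIB); // 500 GiB -> 600 GiB
}
```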

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [ ] Have you read the contributing guidelines?
- [ ] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


3470: Autobatch addition and deletion r=irevoire a=irevoire

This PR adds the capability to Meilisearch to batch document additions and deletions together.

Fix https://github.com/meilisearch/meilisearch/issues/3440

--------------

Things to check before merging:

- [x] What happens if we delete the same documents multiple times -> add a test
- [x] What happens if a documentDeletion gets batched with a documentAddition but the index doesn't exist yet? It should not work (see the toy sketch below)
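
As a toy model of that second question (purely illustrative, not the real autobatcher rules), the decision can be thought of as a small predicate over the index state and the addition's index-creation right:

```rust
// Toy illustration only: a documentDeletion may join a batch opened by a
// documentAddition only if the index already exists or the addition is
// allowed to create it.
enum Operation {
    DocumentAddition { allow_index_creation: bool },
    DocumentDeletion,
}

fn can_join_batch(first: &Operation, next: &Operation, index_exists: bool) -> bool {
    match (first, next) {
        (Operation::DocumentAddition { allow_index_creation }, Operation::DocumentDeletion) => {
            index_exists || *allow_index_creation
        }
        // Every other combination is out of scope for this toy model.
        _ => false,
    }
}
```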

Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
2023-02-20 15:00:19 +00:00
Louis Dureuil
35f6c624bc
Make sure we don't leave the in memory hashmap in an inconsistent state 2023-02-20 13:55:32 +01:00
Louis Dureuil
1116788475
Resize indexes when they're full 2023-02-20 13:55:32 +01:00
Louis Dureuil
951a5b5832
Add IndexMapper::resize_index fn 2023-02-20 13:55:32 +01:00
Louis Dureuil
1c670d7fa0
Add IndexStatus::BeingResized 2023-02-20 13:55:32 +01:00
Louis Dureuil
6cc3797aa1
IndexScheduler::tick returns a TickOutcome 2023-02-20 13:55:31 +01:00
Louis Dureuil
faf1e17a27
create_or_open_index takes a map_size argument 2023-02-20 13:55:31 +01:00
Louis Dureuil
4c519c2ab3
Add Batch::index_uid 2023-02-20 13:55:31 +01:00
Tamo
29d14bed90
get rid of the let/else syntax 2023-02-14 17:45:46 +01:00
Clément Renault
4570d5bf3a
Merge remote-tracking branch 'origin/main' into temp-wildcard 2023-02-09 13:14:05 +01:00
Tamo
eaad84bd1d fix the test to handle the document deletion correctly 2023-02-09 11:29:13 +01:00
Tamo
ea9ac46f28
stop autobatching a deletion that lacks the index-creation right together with an addition 2023-02-08 21:24:27 +01:00
Tamo
93f130a400
fix all warnings 2023-02-08 20:57:35 +01:00
Tamo
860c993ef7
Handle the autobatching of deletion and addition in the scheduler 2023-02-08 20:53:19 +01:00
Tamo
67dda0678f
cleanup the autobatcher a little bit 2023-02-08 18:10:59 +01:00
Tamo
2db6347686
update the autobatcher to batch the addition and deletion together 2023-02-08 18:07:59 +01:00
Kerollmops
a36b1dbd70
Fix the tasks with the new patterns 2023-02-01 18:21:45 +01:00
Louis Dureuil
924d5d4c11
clippy: remove needless lifetimes 2023-01-31 10:40:48 +01:00
Tamo
a858531574 apply review comments 2023-01-25 14:51:36 +01:00
Tamo
bf94f89035 Update index-scheduler/src/lib.rs
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-01-25 11:31:50 +01:00
Tamo
3bcff60d1c makes clippy happy 2023-01-25 11:31:48 +01:00
Tamo
c92948b143 Compute the size of the auth-controller, index-scheduler and all update files in the global stats 2023-01-25 11:25:02 +01:00
Tamo
c7b2e3be87
apply review comments 2023-01-24 17:54:43 +01:00
Tamo
ea3b269b77 reformat 2023-01-23 23:59:34 +01:00
Tamo
a4be4c49e8
Update index-scheduler/src/batch.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-01-23 23:58:03 +01:00
Tamo
7d1ebb7295
add test on the autobatcher layer 2023-01-23 20:56:12 +01:00
Tamo
767cb725a5
reimplement the batching of tasks with or without a primary key in the autobatcher 2023-01-23 20:18:22 +01:00
Tamo
5672118bfa
When adding documents, trying to update the primary-key now throws an error
While updating the test suite I also noticed an issue with the indexed_documents value of failed tasks and had to update it.
I also named a bunch of snapshots that had no name, sorry 😬
2023-01-23 17:32:13 +01:00
Louis Dureuil
72e2b220ed
Fix tests 2023-01-19 15:48:20 +01:00
Tamo
e8e7070cc6
improve the error message when no task filters are specified for the cancelation or deletion of tasks 2023-01-19 12:42:08 +01:00
bors[bot]
3e5b3df487
Merge #3370 #3373 #3375
3370: make the swap indexes not found errors return an IndexNotFound error-code r=irevoire a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/3368

3373: fix a wrong error code and add tests on the document resource r=irevoire a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/3371

3375: Avoid deleting all task invalid canceled by r=irevoire a=Kerollmops

Fixes #3369 by making sure that at least one `canceledBy` task filter parameter matches something.

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
2023-01-18 15:21:11 +00:00
Kerollmops
e89973f1bf
Do not delete all tasks when no canceled-by matches 2023-01-18 15:50:46 +01:00
Tamo
57da80900d
make the swap indexes not found errors return an IndexNotFound error code 2023-01-18 14:16:00 +01:00
Loïc Lecrenier
2bc2e99ff3
Simplify declaration of the error codes 2023-01-11 19:08:39 +01:00
Tamo
e706628bb1 fix the error code of the swap index route 2023-01-06 14:48:25 +01:00
Tamo
50ce0409bc
Integrate deserr on the most important routes 2023-01-05 20:48:29 +01:00
Loïc Lecrenier
2d74678b51 Replace underscores with hyphens in doc link to error code 2023-01-05 10:09:02 +01:00
Louis Dureuil
233372abea
Remove --max-index-size and --max-task-db-size 2023-01-04 17:20:01 +01:00
amab8901
9a39c4e40d Get date from IndexMetaData 2022-12-22 11:46:17 +01:00
amab8901
0893b175dc Merge branch 'main' into 2983-forward-date-to-milli 2022-12-21 14:31:19 +01:00
amab8901
d5978d11e1 Refactor 2022-12-21 14:28:00 +01:00
Tamo
d8fb506c92
handle most io errors instead of tagging everything as internal 2022-12-19 20:50:40 +01:00
amab8901
aa03e02fdc Apply Rustfmt 2022-12-19 19:24:56 +01:00
Louis Dureuil
869d331680
Clippy fixes after updating Rust to v1.66 2022-12-19 14:17:12 +01:00
amab8901
b4a73f2d74 Remove redundant date-setting 2022-12-16 08:32:44 +01:00
amab8901
4e175ae882 Replace Index::new_with_creation_dates(...) with Index::new(...) 2022-12-16 08:20:13 +01:00
amab8901
5a0a0468df Combine created and added into date 2022-12-16 08:11:12 +01:00
amab8901
d3eb8d2d5c Enable create_raw_index(...) to specify time 2022-12-14 10:44:25 +01:00
Kerollmops
7b2f2a4f9c
Do only one conversion to u64 2022-12-13 15:31:55 +01:00
jiangbo212
717dd36547 Merge branch 'fix-3037' of github.com:jiangbo212/meilisearch into fix-3037 2022-12-07 22:54:16 +08:00
Kerollmops
f1de3aa75a Make the tests use MB to trigger page size issues 2022-12-06 20:10:10 +01:00
Kerollmops
e4e4370a3c Clamp the databases size to the page size 2022-12-06 20:09:49 +01:00
jiangbo212
5a770ffe47 test fail fix 2022-12-03 22:48:38 +08:00
jiangbo212
bf96b6df93 clippy fix change 2022-11-30 17:59:06 +08:00
jiangbo212
9c28632498 Merge branch 'main' into fix-3037 2022-11-30 09:38:01 +08:00
jiangbo212
38982d13fe fix issue 3037 2022-11-30 00:03:22 +08:00
Clémentine Urquizar - curqui
457a473b72
Bring back release-v0.30.0 into release-v0.30.0-temp (final: into main) (#3145)
* Fix error code of the "duplicate index found" error

* Use the content of the ProcessingTasks in the tasks cancelation system

* Change the missing_filters error code into missing_task_filters

* WIP Introduce the invalid_task_uid error code

* Use more precise error codes/message for the task routes

+ Allow star operator in delete/cancel tasks
+ rename originalQuery to originalFilters
+ Display error/canceled_by in task view even when they are = null
+ Rename task filter fields by using their plural forms
+ Prepare an error code for canceledBy filter
+ Only return global tasks if the API key action `index.*` is there

* Add canceledBy task filter

* Update tests following task API changes

* Rename original_query to original_filters everywhere

* Update more insta-snap tests

* Make clippy happy

They're a happy clip now.

* Make rustfmt happy

>:-(

* Fix Index name parsing error message to fit the specification

* Bump milli version to 0.35.1

* Fix the new error messages

* fix the error messages and add tests

* rename the error codes for the sake of consistency

* refactor the way we send the cli information + add the analytics for the config file and ssl usage

* Apply suggestions from code review

Co-authored-by: Clément Renault <clement@meilisearch.com>

* add a comment over the new infos structure

* reformat, sorry @kero

* Store analytics for the documents deletions

* Add analytics on all the settings

* Spawn threads with names

* Spawn rayon threads with names

* update the distinct attributes to the spec update

* update the analytics on the search route

* implements the analytics on the health and version routes

* Fix task details serialization

* Add the question mark to the task deletion query filter

* Add the question mark to the task cancelation query filter

* Fix tests

* add analytics on the task route

* Add all the missing fields of the new task query type
* Create a new analytics for the task deletion
* Create a new analytics for the task creation

* batch the tasks seen events

* Update the finite pagination analytics

* add the analytics of the swap-indexes route

* Stop removing the DB when failing to read it

* Rename originalFilters into originalFilters

* Rename matchedDocuments into providedIds

* Add `workflow_dispatch` to flaky.yml

* Bump grenad to 0.4.4

* Bump milli to version v0.37.0

* Don't multiply total memory returned by sysinfo anymore

sysinfo now returns bytes rather than KB

* Add a dispatch to the publish binaries workflow

* Fix publish release CI

* Don't use gold but the default linker

* Always display details for the indexDeletion task

* Fix the insta tests

* refactor the whole test suite
1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It takes as many lines and ensures we never change something without noticing it in any test ever.
3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
4. Stop skipping breakpoints, it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
5. Add a timeout on the channel.recv: it eases the process of writing tests; now when something fails you get a failure instead of a deadlock.

* rebase on release-v0.30

* makes clippy happy

* update the snapshots after a rebase

* try to remove the flakiness of the failing test

* Add more analytics on the ranking rules positions

* Update the dump test to check for the dumpUid dumpCreation task details

* send the ranking rules as a string because amplitude is too dumb to process an array as a single value

* Display a null dumpUid until we computed the dump itself on disk

* Update tests

* Check if the master key is missing before returning an error

Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-28 16:27:41 +01:00
Elbert Ronnie
0219ef25fe Moved the struct UuidCodec to a new file 2022-10-31 12:25:19 +05:30
Elbert Ronnie
3911fd64b5 Implement Uuid codec for heed 2022-10-30 03:27:30 +05:30
bors[bot]
dd1011ba76
Merge #2995
2995: merge the settings and do one indexation at the end r=irevoire a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 21:24:21 +00:00
bors[bot]
20258461a8
Merge #2981 #2996
2981: Move index swap error handling from meilisearch-http to index-scheduler r=irevoire a=loiclec

And make index_not_found error asynchronous, since we can't know whether the index will exist by the time the index swap task is processed.

Improve the index-swap test to verify that future tasks are not swapped and to test the new error messages that were introduced.

## Related issue
https://github.com/meilisearch/meilisearch/issues/2973


2996: Get rid of the unnecessary tasks when an index_uid is specified r=Kerollmops a=irevoire



Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 19:11:23 +00:00
Tamo
87cac158c4
Update index-scheduler/src/batch.rs 2022-10-27 18:08:21 +02:00
Tamo
c9f89d38e3
Merge branch 'main' into index-swap-error-handling 2022-10-27 18:06:45 +02:00
Irevoire
01687c87a2
Get rid of the unnecessary tasks when an index_uid is specified 2022-10-27 18:00:04 +02:00
Irevoire
313f204f39
merge the settings and do one indexation at the end 2022-10-27 16:38:21 +02:00
Loïc Lecrenier
8152ab5dfc Revert change in initialisation of TempDir for index scheduler tests 2022-10-27 16:26:17 +02:00
Loïc Lecrenier
2c31d7c50a Apply review suggestions 2022-10-27 16:24:08 +02:00
Loïc Lecrenier
4f4fc20acf Make clippy happy 2022-10-27 13:00:30 +02:00
Loïc Lecrenier
78ffa00f98 Move index swap error handling from meilisearch-http to index-scheduler
And make index_not_found error asynchronous, since we can't know
whether the index will exist by the time the index swap task is
processed.

Improve the index-swap test to verify that future tasks are not swapped
and to test the new error messages that were introduced.
2022-10-27 11:45:38 +02:00
Loïc Lecrenier
7b93ba40bd Reimplement task queries to account for special index swap rules 2022-10-27 11:44:51 +02:00
Irevoire
7307c4dacd
fix clippy 2022-10-27 11:35:22 +02:00
Irevoire
33996071ea
fix clippy from the CI 2022-10-27 11:35:21 +02:00
Kerollmops
7c908fadcf
Remove a useless clippy silence 2022-10-27 11:35:21 +02:00
Irevoire
07d39776f9
fix clippy _once again_ 2022-10-27 11:35:21 +02:00
Irevoire
8ec3681cf8
fix clippy part1 2022-10-27 11:35:20 +02:00
Kerollmops
2ba5e3b519
Clean up some code 2022-10-27 11:35:20 +02:00
Clément Renault
4f955e68b3
Apply suggestions from code review 2022-10-27 11:35:19 +02:00
Irevoire
6c98752922
move the commit before the insertion in the map 2022-10-27 11:35:19 +02:00
Irevoire
4e1b6b514e
update reviewer change 2022-10-27 11:35:19 +02:00
Irevoire
64e55b4db9
fix the index creation. When an index is being created we insert it in the index_map straight away to prevent someone else from trying to re-open it. The definitive fix should be made on milli's side 2022-10-27 11:35:18 +02:00
Loïc Lecrenier
1f75caae88
Fix a few index swap bugs.
1. Details of the indexSwap task
2. Query tasks with type=indexUid
3. Synchronous error message for multiple index not found
2022-10-27 11:35:17 +02:00
Irevoire
29bdcb880c
update the snapshot 2022-10-27 11:35:17 +02:00
Irevoire
a3fc0d3bd9
Fix the last regression 2022-10-27 11:35:17 +02:00
Kerollmops
2de8a0711a
Cargo insta test/review 2022-10-27 11:35:16 +02:00
Kerollmops
2f577b6fcd
Patch the IndexScheduler in meilisearch-http to use the options struct 2022-10-27 11:35:16 +02:00
Kerollmops
71b50853dc
Introduce an options struct to create the IndexScheduler 2022-10-27 11:35:16 +02:00
Kerollmops
7074872a78
cargo insta accept 2022-10-27 11:35:15 +02:00
Kerollmops
035e8eeff5
Clean-up some TODOs 2022-10-27 11:35:15 +02:00
Kerollmops
e35fe33712
Fix some bugs with files 2022-10-27 11:35:15 +02:00
Kerollmops
942b7c338b
Compress the snapshot in a tarball 2022-10-27 11:35:15 +02:00
Kerollmops
4cafc63561
Reintroduce the versioning functions 2022-10-27 11:35:14 +02:00
Kerollmops
89e127e4f4
Declare the auth path in the index scheduler 2022-10-27 11:35:14 +02:00
Kerollmops
eec43ec953
Implement a first version of the snapshots 2022-10-27 11:35:14 +02:00
Kerollmops
c063f154fb
Add the snapshots directory path to the IndexScheduler 2022-10-27 11:35:14 +02:00
Kerollmops
e0548e42e7
Rename the Snapshot task into SnapshotCreation 2022-10-27 11:35:14 +02:00
Kerollmops
4d43a9f5b1
Rename the index-scheduler module into insta_snapshot 2022-10-27 11:35:14 +02:00
Kerollmops
901c405919
Fix the insta-snapshot typos in the tests 2022-10-27 11:35:13 +02:00
Loïc Lecrenier
6db90ba6cc
Make sure that we don't delete or cancel future tasks
This should already have been the case before, but there is no harm
in adding another check.
2022-10-27 11:35:13 +02:00
Irevoire
e0821ad4b0
remove a useless dbg 2022-10-27 11:35:13 +02:00