Commit Graph

211 Commits

Author SHA1 Message Date
Clément Renault
0dbf1a16ff
Make clippy happy 2023-11-23 14:11:38 +01:00
Clément Renault
0d4482625a
Make the changes to use heed v0.20-alpha.6 2023-11-23 11:43:58 +01:00
Clément Renault
7cb7e37ba8
Merge branch 'main' into tmp-release-v1.5.0 2023-11-21 16:30:46 +01:00
meili-bors[bot]
33b7c574ea
Merge #4090
4090: Diff indexing r=ManyTheFish a=ManyTheFish

This pull request aims to reduce the indexing time by computing a difference between the data added to the index and the data removed from the index before writing in LMDB.

## Why focus on reducing the writes in LMDB?

The indexing in Meilisearch is split into 3 main phases:
1) The computation or extraction of the data (multi-threaded)
2) The writing of the data in LMDB (mono-threaded)
3) The processing of the prefix databases (mono-threaded)

see below:
![Capture d’écran 2023-09-28 à 20 01 45](https://github.com/meilisearch/meilisearch/assets/6482087/51513162-7c39-4244-978b-2c6b60c43a56)


Because the writing is mono-threaded, it represents a bottleneck in the indexing; reducing the number of writes in LMDB will reduce the pressure on the main thread and should reduce the global time spent on indexing.
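
To make the approach concrete, here is a minimal, self-contained sketch of the per-document diff (illustrative only: the real pipeline works on `OBKV<fid, OBKV<AddDel, value>>` chunks produced by a preprocessing function, not on `BTreeMap`s of strings):

```rust
use std::collections::BTreeMap;

/// Whether a value is being removed from or added to the index.
#[derive(Debug, PartialEq)]
enum AddDel {
    Deletion,
    Addition,
}

/// Compare the old and new versions of a document (field id -> value) and
/// emit operations only for the fields that actually changed, so the
/// mono-threaded LMDB writer has fewer entries to touch.
fn diff_document(
    old: &BTreeMap<u16, String>,
    new: &BTreeMap<u16, String>,
) -> BTreeMap<u16, Vec<(AddDel, String)>> {
    let mut ops: BTreeMap<u16, Vec<(AddDel, String)>> = BTreeMap::new();
    for (fid, old_value) in old {
        match new.get(fid) {
            // Unchanged field: nothing reaches the writer.
            Some(new_value) if new_value == old_value => {}
            // Modified field: one deletion plus one addition.
            Some(new_value) => {
                let entry = ops.entry(*fid).or_default();
                entry.push((AddDel::Deletion, old_value.clone()));
                entry.push((AddDel::Addition, new_value.clone()));
            }
            // Removed field: a deletion only.
            None => ops.entry(*fid).or_default().push((AddDel::Deletion, old_value.clone())),
        }
    }
    for (fid, new_value) in new {
        // Added field: an addition only.
        if !old.contains_key(fid) {
            ops.entry(*fid).or_default().push((AddDel::Addition, new_value.clone()));
        }
    }
    ops
}
```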

## Give Feedback

We created [a dedicated discussion](https://github.com/meilisearch/meilisearch/discussions/4196) for users to try this new feature and to give feedback on bugs or performance issues.

## Technical approach
### Part 1: merge the addition and the deletion process
This part:
a) Aims to reduce the time spent indexing documents when only their filterable/sortable fields change, for example:
  - Updating the number of "likes" or "stars" of a song or a movie
  - Updating the "stock count" or the "price" of a product

b) Aims to reduce the time spent writing in LMDB, which should reduce the global indexing time on highly multi-threaded machines by reducing the writing bottleneck.

c) Aims to reduce the average time spent deleting documents without having to keep the soft-deleted documents implementation

- [x] Create a preprocessing function that creates the diff-based documents chunk (`OBKV<fid, OBKV<AddDel, value>>`)
  - [x] and clearly separate the faceted fields and the searchable fields in two different chunks
- Change the parameters of the input extractors to take an `OBKV<fid, OBKV<AddDel, value>>` instead of an `OBKV<fid, value>`.
  - [x] extract_docid_word_positions
  - [x] extract_geo_points
  - [x] extract_vector_points
  - [x] extract_fid_docid_facet_values
- Adapt the searchable extractors to the new diff-chunks
  - [x] extract_fid_word_count_docids
  - [x] extract_word_pair_proximity_docids
  - [x] extract_word_position_docids
  - [x] extract_word_docids
- Adapt the facet extractors to the new diff-chunks
  - [x] extract_facet_number_docids
  - [x] extract_facet_string_docids
  - [x] extract_fid_docid_facet_values
  - [x] FacetsUpdate
- [x] Adapt the prefix database extractors ⚠️ ⚠️ 
- [x] Make the LMDB writer remove the document_ids to delete at the same time the new document_ids are added
- [x] Remove document deletion pipeline
  - [x] remove `new_documents_ids` and `replaced_documents_ids` entirely
  - [x] reuse extracted external id from transform instead of re-extracting in `TypedChunks::Documents`
  - [x] Remove deletion pipeline after autobatcher
  - [x] remove autobatcher deletion pipeline
    - [x] everything uses `IndexOperation::DocumentOperation`
    - [x] repair deletion by internal id for filter by delete
    - [x] Improve the deletion via internal ids by avoiding iterating over the whole set of external document ids.  
- [x] Remove soft-deleted documents

#### FIXME

- [x] field distribution is not correctly updated after deletion
- [x] missing documents in the tests of tokenizer_customization

### Part 2: Only compute the documents field by field
This part aims to reduce the global indexing time for any kind of partial document modification, on any machine size, from mono-threaded to highly multi-threaded.

- [ ] Make the preprocessing function only send the fields that changed to the extractors
- [ ] remove the `word_docids` and `exact_word_docids` databases and adapt the search (⚠️ could impact the search performance)
- [ ] replace the `word_pair_proximity_docids` database with a `word_pair_proximity_fid_docids` database and adapt the search (⚠️ could impact the search performance)
- [ ] Adapt the prefix database extractors ⚠️ ⚠️

## Technical Concerns
- The part 1 implementation could increase the indexing time on the smallest machines (with few threads) by increasing the extraction time (multi-threaded) more than it reduces the writing time (mono-threaded)
- The part 2 implementation needs to change the databases, which could have a significant impact on the search performance
- The prefix databases are a bit special to process and may be a pain to adapt to the difference-based indexing

Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Clément Renault <clement@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-11-21 09:44:38 +00:00
Tamo
5b57fbab08 makes the dump cancellable 2023-11-14 11:23:13 +01:00
Louis Dureuil
a2d6dc8571
Fix typo, remove caching for the change of index 2023-11-13 10:44:36 +01:00
Louis Dureuil
492fc086f0
cargo fmt 2023-11-12 21:53:11 +01:00
Louis Dureuil
a2d0c73b41
Save the currently updating index so that the search can access it at all times 2023-11-10 10:52:03 +01:00
Louis Dureuil
f8289cd974
Use it from delete-by-filter 2023-11-09 14:23:15 +01:00
Louis Dureuil
ef6fa10f7a
Remove IndexOperation::DocumentDeletion 2023-11-06 12:16:15 +01:00
Louis Dureuil
cbaa54cafd
Fix clippy issues 2023-11-06 11:19:31 +01:00
Clément Renault
e507ef5932
Slow the logging down 2023-11-01 13:49:32 +01:00
Clément Renault
dfab6293c9
Use an LMDB database to store the external documents ids 2023-10-30 11:41:23 +01:00
Louis Dureuil
652ac3052d
use new iterator in batch 2023-10-30 11:41:22 +01:00
Louis Dureuil
c534a1b687
Stop using delete documents pipeline in batch runner 2023-10-30 11:41:22 +01:00
Louis Dureuil
cf8dad1ca0
index_scheduler.features() is no longer fallible 2023-10-23 10:38:56 +02:00
Clément Renault
055ca3935b
Update index-scheduler/src/batch.rs
Co-authored-by: Tamo <tamo@meilisearch.com>
2023-10-13 13:11:30 +02:00
Kerollmops
f2a9e1ebbb
Improve the debugging experience in the puffin reports 2023-10-13 13:11:30 +02:00
Tamo
34fac115d5 fix clippy 2023-09-11 17:15:57 +02:00
Tamo
9258e5b5bf Fix the stats of the documents deletion by filter
The issue was that the `DocumentDeletionByFilter` operation was not
declared as an index operation, which means the index stats were not
recomputed after the operation was applied.
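
A simplified, hypothetical sketch of what the fix amounts to (the real `Batch` enum has many more variants):

```rust
// The scheduler refreshes the stats of an index only for batches classified
// as index operations, so DocumentDeletionByFilter has to be one of them.
enum Batch {
    // DocumentDeletionByFilter now lives in this arm...
    IndexOperation { index_uid: String },
    // ...instead of a non-index arm like this one, where the stats of the
    // touched index were never recomputed.
    Other,
}

fn index_whose_stats_to_recompute(batch: &Batch) -> Option<&str> {
    match batch {
        Batch::IndexOperation { index_uid } => Some(index_uid.as_str()),
        Batch::Other => None,
    }
}
```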
2023-09-11 14:04:10 +02:00
Kerollmops
eef95de30e
First iteration on exposing puffin profiling 2023-07-18 17:38:13 +02:00
Louis Dureuil
13e9b4c2e5
Add dump support 2023-06-26 16:29:43 +02:00
cui fliter
530a3e2df3 fix some typos
Signed-off-by: cui fliter <imcusg@gmail.com>
2023-06-22 21:59:00 +08:00
Tamo
96da5130a4
fix the error code in case of non-filterable attributes on the get / delete documents by filter routes 2023-05-16 13:56:18 +02:00
Tamo
6df2ba93a9
remove one useless txn 2023-05-03 17:41:49 +02:00
Louis Dureuil
3680a6bf1e
extract impl to a function 2023-05-03 17:41:49 +02:00
Louis Dureuil
732c52093d
Processing time without autobatching implementation 2023-05-03 17:41:48 +02:00
bors[bot]
667bb87e35
Merge #3541
3541: Add cache on the indexes stats r=dureuill a=irevoire

Fix https://github.com/meilisearch/meilisearch/issues/3540

Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2023-03-09 13:32:52 +00:00
Louis Dureuil
7faa9a22f6
Pass IndexStat by ref in store_stats_of 2023-03-07 14:00:54 +01:00
Louis Dureuil
076a3d371c Eagerly compute stats as fallback to the cache.
- Refactor all around to avoid spawning indexes more times than necessary
2023-03-06 16:57:31 +01:00
Tamo
fd5c48941a Add cache on the indexes stats 2023-03-06 16:57:31 +01:00
Tamo
e704728ee7 fix the snapshot permissions on unix systems 2023-03-06 16:28:40 +01:00
Louis Dureuil
3db613ff77
Don't iterate all indexes manually 2023-02-23 11:29:09 +01:00
bors[bot]
b08a49a16e
Merge #3319 #3470
3319: Transparently resize indexes on MaxDatabaseSizeReached errors r=Kerollmops a=dureuill

# Pull Request

## Related issue
Related to https://github.com/meilisearch/meilisearch/discussions/3280, depends on https://github.com/meilisearch/milli/pull/760

## What does this PR do?

### User standpoint

- Meilisearch no longer fails tasks that encounter the `milli::UserError(MaxDatabaseSizeReached)` error.
- Instead, these tasks are retried after increasing the maximum size allocated to the index where the failure occurred.

### Implementation standpoint

- Add `Batch::index_uid` to get the `index_uid` of a batch of tasks, if there is one
- `IndexMapper::create_or_open_index` now takes an additional `size` argument that allows (re)opening indexes with a size different from the base `IndexScheduler::index_size` field
- `IndexScheduler::tick` now returns a `Result<TickOutcome>` instead of a `Result<usize>`. This offers more explicit control over what the behavior should be with regard to the next tick.
- Add `IndexStatus::BeingResized` that contains a handle that a thread can use to wait for the resize operation to complete and for the index to be available again.
- Add `IndexMapper::resize_index` to increase the size of an index.
- In `IndexScheduler::tick`, intercept task batches that failed due to `MaxDatabaseSizeReached`, resize the index that caused the error, then request a new tick that will eventually handle the still-enqueued tasks (sketched below).
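
A hedged sketch of the resulting control flow, with assumed variant names and a plain string standing in for `milli::UserError(MaxDatabaseSizeReached)`:

```rust
enum TickOutcome {
    // Retry immediately, e.g. after resizing the index that was full.
    TickAgain,
    // Nothing left to do until a new task arrives.
    WaitForSignal,
}

fn handle_batch_result(result: Result<usize, &'static str>) -> TickOutcome {
    match result {
        Ok(_processed_tasks) => TickOutcome::WaitForSignal,
        // Intercept the "database full" failure instead of failing the tasks:
        // IndexMapper::resize_index would run here, then we tick again so the
        // still-enqueued tasks get another chance.
        Err("MaxDatabaseSizeReached") => TickOutcome::TickAgain,
        Err(_other) => TickOutcome::WaitForSignal,
    }
}
```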

## Testing the PR

The following diff can be applied to this branch to make testing the PR easier:

<details>


```diff
diff --git a/index-scheduler/src/index_mapper.rs b/index-scheduler/src/index_mapper.rs
index 553ab45a..022b2f00 100644
--- a/index-scheduler/src/index_mapper.rs
+++ b/index-scheduler/src/index_mapper.rs
@@ -228,13 +228,15 @@ impl IndexMapper {
 
         drop(lock);
 
+        std::thread::sleep_ms(2000);
+
         let current_size = index.map_size()?;
         let closing_event = index.prepare_for_closing();
-        log::info!("Resizing index {} from {} to {} bytes", name, current_size, current_size * 2);
+        log::error!("Resizing index {} from {} to {} bytes", name, current_size, current_size * 2);
 
         closing_event.wait();
 
-        log::info!("Resized index {} from {} to {} bytes", name, current_size, current_size * 2);
+        log::error!("Resized index {} from {} to {} bytes", name, current_size, current_size * 2);
 
         let index_path = self.base_path.join(uuid.to_string());
         let index = self.create_or_open_index(&index_path, None, 2 * current_size)?;
@@ -268,8 +270,10 @@ impl IndexMapper {
             match index {
                 Some(Available(index)) => break index,
                 Some(BeingResized(ref resize_operation)) => {
+                    log::error!("waiting for resize end");
                     // Deadlock: no lock taken while doing this operation.
                     resize_operation.wait();
+                    log::error!("trying our luck again!");
                     continue;
                 }
                 Some(BeingDeleted) => return Err(Error::IndexNotFound(name.to_string())),
diff --git a/index-scheduler/src/lib.rs b/index-scheduler/src/lib.rs
index 11b17d05..242dc095 100644
--- a/index-scheduler/src/lib.rs
+++ b/index-scheduler/src/lib.rs
@@ -908,6 +908,7 @@ impl IndexScheduler {
     ///
     /// Returns the number of processed tasks.
     fn tick(&self) -> Result<TickOutcome> {
+        log::error!("ticking!");
         #[cfg(test)]
         {
             *self.run_loop_iteration.write().unwrap() += 1;
diff --git a/meilisearch/src/main.rs b/meilisearch/src/main.rs
index 050c825a..63f312f6 100644
--- a/meilisearch/src/main.rs
+++ b/meilisearch/src/main.rs
@@ -25,7 +25,7 @@ fn setup(opt: &Opt) -> anyhow::Result<()> {
 
 #[actix_web::main]
 async fn main() -> anyhow::Result<()> {
-    let (opt, config_read_from) = Opt::try_build()?;
+    let (mut opt, config_read_from) = Opt::try_build()?;
 
     setup(&opt)?;
 
@@ -56,6 +56,8 @@ We generated a secure master key for you (you can safely copy this token):
         _ => (),
     }
 
+    opt.max_index_size = byte_unit::Byte::from_str("1MB").unwrap();
+
     let (index_scheduler, auth_controller) = setup_meilisearch(&opt)?;
 
     #[cfg(all(not(debug_assertions), feature = "analytics"))]
```
</details>

Mainly, these debug changes do the following:

- Set the default index size to 1MiB so that index resizes are initially frequent
- Turn some logs from info to error so that they can be displayed with `--log-level ERROR` (hiding the other info logs)
- Add a long sleep between the beginning and the end of the resize so that we can observe the `BeingResized` index status (otherwise it would never come up in my tests)

## Open questions

- Is the growth factor of x2 the correct solution? For a `Vec` in memory it makes sense, but here we're manipulating quantities that are potentially on the order of 500 GiB. For bigger indexes it may make more sense to add at most e.g. 100 GiB on each resize operation, avoiding big steps like 500 GiB -> 1 TiB.
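
Purely as an illustration of that alternative, a hypothetical `next_map_size` helper (not part of this PR):

```rust
// Double small maps, but cap each resize step at 100 GiB so a 500 GiB
// index does not jump straight to 1 TiB.
const MAX_STEP_BYTES: u64 = 100 * 1024 * 1024 * 1024;

fn next_map_size(current: u64) -> u64 {
    current.saturating_add(current.min(MAX_STEP_BYTES))
}
```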

## PR checklist
Please check if your PR fulfills the following requirements:
- [ ] Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
- [ ] Have you read the contributing guidelines?
- [ ] Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!


3470: Autobatch addition and deletion r=irevoire a=irevoire

This PR adds the capability for Meilisearch to batch document additions and deletions together (sketched below).

Fix https://github.com/meilisearch/meilisearch/issues/3440

--------------

Things to check before merging:

- [x] What happens if we delete the same documents multiple times? -> add a test
- [x] What if a documentDeletion gets batched with a documentAddition but the index doesn't exist yet? It should not work
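
A simplified sketch, with hypothetical types, of what batching the two kinds of operation together means for the scheduler:

```rust
// Consecutive document additions and deletions targeting the same index now
// accumulate into one batch instead of forcing a batch boundary between them.
enum DocumentOperation {
    Addition(Vec<String>), // serialized documents to add
    Deletion(Vec<String>), // external document ids to remove
}

struct Batch {
    index_uid: String,
    operations: Vec<DocumentOperation>, // applied in order in one indexation
}

/// Push `op` into the open batch if it targets the same index; otherwise
/// close the open batch (returning it) and start a new one.
fn autobatch(open: &mut Option<Batch>, index_uid: &str, op: DocumentOperation) -> Option<Batch> {
    if let Some(batch) = open {
        if batch.index_uid == index_uid {
            batch.operations.push(op);
            return None;
        }
    }
    open.replace(Batch { index_uid: index_uid.to_string(), operations: vec![op] })
}
```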

Co-authored-by: Louis Dureuil <louis@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
2023-02-20 15:00:19 +00:00
Louis Dureuil
4c519c2ab3
Add Batch::index_uid 2023-02-20 13:55:31 +01:00
Tamo
29d14bed90
get rid of the let/else syntax 2023-02-14 17:45:46 +01:00
Tamo
93f130a400
fix all warnings 2023-02-08 20:57:35 +01:00
Tamo
860c993ef7
Handle the autobatching of deletion and addition in the scheduler 2023-02-08 20:53:19 +01:00
Tamo
2db6347686
update the autobatcher to batch the addition and deletion together 2023-02-08 18:07:59 +01:00
Louis Dureuil
924d5d4c11
clippy: remove needless lifetimes 2023-01-31 10:40:48 +01:00
Tamo
ea3b269b77 reformat 2023-01-23 23:59:34 +01:00
Tamo
a4be4c49e8
Update index-scheduler/src/batch.rs
Co-authored-by: Clément Renault <clement@meilisearch.com>
2023-01-23 23:58:03 +01:00
Tamo
767cb725a5
reimplement the batching of task with or without primary key in the autobatcher 2023-01-23 20:18:22 +01:00
Tamo
5672118bfa
When adding documents, trying to update the primary key now throws an error
While updating the test suite, I also noticed an issue with the indexed_documents value of failed tasks and had to update it.
I also named a bunch of snapshots that had no name, sorry 😬
2023-01-23 17:32:13 +01:00
Tamo
e706628bb1 fix the error code of the swap index route 2023-01-06 14:48:25 +01:00
amab8901
0893b175dc Merge branch 'main' into 2983-forward-date-to-milli 2022-12-21 14:31:19 +01:00
Louis Dureuil
869d331680
Clippy fixes after updating Rust to v1.66 2022-12-19 14:17:12 +01:00
amab8901
5a0a0468df Combine created and added into date 2022-12-16 08:11:12 +01:00
amab8901
d3eb8d2d5c Enable create_raw_index(...) to specify time 2022-12-14 10:44:25 +01:00
Clémentine Urquizar - curqui
457a473b72
Bring back release-v0.30.0 into release-v0.30.0-temp (final: into main) (#3145)
* Fix error code of the "duplicate index found" error

* Use the content of the ProcessingTasks in the tasks cancelation system

* Change the missing_filters error code into missing_task_filters

* WIP Introduce the invalid_task_uid error code

* Use more precise error codes/message for the task routes

+ Allow star operator in delete/cancel tasks
+ rename originalQuery to originalFilters
+ Display error/canceled_by in task view even when they are = null
+ Rename task filter fields by using their plural forms
+ Prepare an error code for canceledBy filter
+ Only return global tasks if the API key action `index.*` is there

* Add canceledBy task filter

* Update tests following task API changes

* Rename original_query to original_filters everywhere

* Update more insta-snap tests

* Make clippy happy

They're a happy clip now.

* Make rustfmt happy

>:-(

* Fix Index name parsing error message to fit the specification

* Bump milli version to 0.35.1

* Fix the new error messages

* fix the error messages and add tests

* rename the error codes for the sake of consistency

* refactor the way we send the CLI information + add the analytics for the config file and SSL usage

* Apply suggestions from code review

Co-authored-by: Clément Renault <clement@meilisearch.com>

* add a comment over the new infos structure

* reformat, sorry @kero

* Store analytics for the documents deletions

* Add analytics on all the settings

* Spawn threads with names

* Spawn rayon threads with names

* update the distinct attributes to the spec update

* update the analytics on the search route

* implements the analytics on the health and version routes

* Fix task details serialization

* Add the question mark to the task deletion query filter

* Add the question mark to the task cancelation query filter

* Fix tests

* add analytics on the task route

* Add all the missing fields of the new task query type
* Create a new analytics for the task deletion
* Create a new analytics for the task creation

* batch the tasks seen events

* Update the finite pagination analytics

* add the analytics of the swap-indexes route

* Stop removing the DB when failing to read it

* Rename originalQuery into originalFilters

* Rename matchedDocuments into providedIds

* Add `workflow_dispatch` to flaky.yml

* Bump grenad to 0.4.4

* Bump milli to version v0.37.0

* Don't multiply total memory returned by sysinfo anymore

sysinfo now returns bytes rather than KB

* Add a dispatch to the publish binaries workflow

* Fix publish release CI

* Don't use gold but the default linker

* Always display details for the indexDeletion task

* Fix the insta tests

* refactorize the whole test suite
1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It takes as many lines and ensures we never change something without noticing in any test ever.
3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
4. Stop skipping breakpoints, it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
5. Add a timeout on the channel.recv: it eases the process of writing tests; now when something fails you get a failure instead of a deadlock (see the example below).
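
An illustrative example of point 5, using only the standard library (not the actual test harness):

```rust
use std::sync::mpsc::channel;
use std::time::Duration;

fn main() {
    let (sender, receiver) = channel::<()>();
    // Simulate a scheduler that never reaches the expected breakpoint.
    drop(sender);
    // Without the timeout this `recv` would block the test forever;
    // with it, the test fails loudly instead of deadlocking.
    assert!(receiver.recv_timeout(Duration::from_secs(10)).is_err());
}
```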

* rebase on release-v0.30

* makes clippy happy

* update the snapshots after a rebase

* try to remove the flakiness of the failing test

* Add more analytics on the ranking rules positions

* Update the dump test to check for the dumpUid dumpCreation task details

* send the ranking rules as a string because amplitude is too dumb to process an array as a single value

* Display a null dumpUid until we computed the dump itself on disk

* Update tests

* Check if the master key is missing before returning an error

Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
2022-11-28 16:27:41 +01:00
bors[bot]
dd1011ba76
Merge #2995
2995: merge the settings and do one indexation at the end r=irevoire a=irevoire



Co-authored-by: Irevoire <tamo@meilisearch.com>
2022-10-27 21:24:21 +00:00
Tamo
87cac158c4
Update index-scheduler/src/batch.rs 2022-10-27 18:08:21 +02:00
Tamo
c9f89d38e3
Merge branch 'main' into index-swap-error-handling 2022-10-27 18:06:45 +02:00
Irevoire
313f204f39
merge the settings and do one indexation at the end 2022-10-27 16:38:21 +02:00
Loïc Lecrenier
4f4fc20acf Make clippy happy 2022-10-27 13:00:30 +02:00
Loïc Lecrenier
78ffa00f98 Move index swap error handling from meilisearch-http to index-scheduler
And make the index_not_found error asynchronous, since we can't know
whether the index will exist by the time the index swap task is
processed (illustrated below).

Improve the index-swap test to verify that future tasks are not swapped
and to test the new error messages that were introduced.
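
A hedged illustration of the new behavior (hypothetical helper, not the actual index-scheduler code): the existence check runs when the swap task is processed, since tasks queued before it may still create the indexes.

```rust
/// Validate a swap at *processing* time, reporting every missing index in a
/// single error message.
fn check_swap_indexes(existing: &[&str], swaps: &[(&str, &str)]) -> Result<(), String> {
    let missing: Vec<&str> = swaps
        .iter()
        .flat_map(|&(lhs, rhs)| [lhs, rhs])
        .filter(|name| !existing.contains(name))
        .collect();
    if missing.is_empty() {
        Ok(())
    } else {
        Err(format!("Indexes `{}` not found.", missing.join("`, `")))
    }
}
```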
2022-10-27 11:45:38 +02:00
Loïc Lecrenier
7b93ba40bd Reimplement task queries to account for special index swap rules 2022-10-27 11:44:51 +02:00
Irevoire
8ec3681cf8
fix clippy part1 2022-10-27 11:35:20 +02:00
Kerollmops
2ba5e3b519
Clean up some code 2022-10-27 11:35:20 +02:00
Irevoire
4e1b6b514e
update reviewer change 2022-10-27 11:35:19 +02:00
Loïc Lecrenier
1f75caae88
Fix a few index swap bugs.
1. Details of the indexSwap task
2. Query tasks with type=indexUid
3. Synchronous error message for multiple index not found
2022-10-27 11:35:17 +02:00
Kerollmops
035e8eeff5
Clean-up some TODOs 2022-10-27 11:35:15 +02:00
Kerollmops
e35fe33712
Fix some bugs with files 2022-10-27 11:35:15 +02:00
Kerollmops
942b7c338b
Compress the snapshot in a tarball 2022-10-27 11:35:15 +02:00
Kerollmops
4cafc63561
Reintroduce the versioning functions 2022-10-27 11:35:14 +02:00
Kerollmops
89e127e4f4
Declare the auth path in the index scheduler 2022-10-27 11:35:14 +02:00
Kerollmops
eec43ec953
Implement a first version of the snapshots 2022-10-27 11:35:14 +02:00
Kerollmops
e0548e42e7
Rename the Snapshot task into SnapshotCreation 2022-10-27 11:35:14 +02:00
Loïc Lecrenier
d92425658e
Add index scheduler tests for task cancelation 2022-10-27 11:35:12 +02:00
Loïc Lecrenier
16fac10074
Fix crash when batching an index swap task containing 0 swaps 2022-10-27 11:35:12 +02:00
Irevoire
0aca5e84b9
rename received_document_ids to matched_documents in the DocumentDeletion task type (reimplementation of #2826) 2022-10-27 11:35:12 +02:00
Loïc Lecrenier
4de445d386
Start testing unexpected errors and panics in index scheduler 2022-10-27 11:35:10 +02:00
Irevoire
ecf4e43b3d
rename the dumpExport to dumpCreation 2022-10-27 11:35:10 +02:00
Irevoire
e9055f5572
fix clippy 2022-10-27 11:35:08 +02:00
Irevoire
c8ee453b6c
fix the autobatched document deletion 2022-10-27 11:35:07 +02:00
Irevoire
a8de5368e5
fix the index creation in case an index already exists 2022-10-27 11:35:07 +02:00
Irevoire
9bb2e3c790
fix the failed document addition with a primary key 2022-10-27 11:35:07 +02:00
Irevoire
8d1408c65e
fix the import of the dumpv4&v5 when there is no instance-uid + rename the Kind+KindWithContent+Details variant for the DocumentImport and the Setting 2022-10-27 11:35:05 +02:00
Clément Renault
80b2e70ee7
Introduce a rustfmt file 2022-10-27 11:35:05 +02:00
Clément Renault
72ec4ce96b
Fix allow_index_creation useless field 2022-10-27 11:34:17 +02:00
Irevoire
b6a0abea9f
fix the index deletion when the index doesn’t exist but would be created by one of the autobatched tasks 2022-10-27 11:34:16 +02:00
Irevoire
d9218578e3
it probably works but it's also horrendous 2022-10-27 11:34:16 +02:00
Loïc Lecrenier
11fee30f47
Apply review suggestions and stop using rtxn.commit 2022-10-27 11:34:15 +02:00
Loïc Lecrenier
17cd2a4aa0
Implement POST /indexes-swap 2022-10-27 11:34:15 +02:00
Loïc Lecrenier
169f386418
Add some documentation to the index scheduler 2022-10-27 11:34:15 +02:00
Loïc Lecrenier
22cf0559fe
Implement task date filters
before/after enqueued/started/finished at
2022-10-27 11:34:14 +02:00
Irevoire
5765883600
fix the auto-generated details 2022-10-27 11:34:14 +02:00
Tamo
cff003c928
remove the unused variants from the autobatcher 2022-10-27 11:34:14 +02:00
Kerollmops
50b8b9df6a
Delete the tasks content file once the transaction has been successfully committed 2022-10-27 11:34:13 +02:00
Kerollmops
b373d19831
Extract the must_stop flag out of the RwLock 2022-10-27 11:34:12 +02:00
Kerollmops
3cbfacb616
Prefer using an u64 instead of a usize in some places 2022-10-27 11:34:12 +02:00
Kerollmops
79c4275bfc
Delete the persisted data when we cancel a task 2022-10-27 11:34:12 +02:00
Kerollmops
c2ec4a089b
Put the original URL query in the tasks details 2022-10-27 11:34:12 +02:00
Kerollmops
290945e258
Update the canceledBy and finishedAt fields 2022-10-27 11:34:11 +02:00
Kerollmops
725158b454
Introduce the core algorithm of task cancelation 2022-10-27 11:34:11 +02:00
Kerollmops
1ca9a67c49
Introduce the task cancelation task type 2022-10-27 11:34:11 +02:00
Kerollmops
703ba7a1fb
Introduce the ProcessingTasks struct 2022-10-27 11:34:10 +02:00
Loïc Lecrenier
ea60d35c71
Delete a task's persisted data when appropriate 2022-10-27 11:34:10 +02:00
Tamo
2f748480a1
share the rtxn between the access to the tasks and to the indexes 2022-10-27 11:34:09 +02:00
Tamo
83f3c5ec57
flush the dump-writer only once everything has been inserted 2022-10-27 11:34:08 +02:00