2204: Fix blocking auth r=Kerollmops a=MarinPostma
Fix the auth code blocking the runtime
I have decided to remove async code from `meilisearch-auth` and let `meilisearch-http` handle that.
Because Actix polls the extractor futures concurrently, I have made a wrapper extractor that forces the errors from the futures to be returned sequentially (though it still polls them concurrently).
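A minimal sketch of that error-ordering idea, assuming the `futures` crate; the names are illustrative and this is not the actual wrapper extractor from `meilisearch-http`:

```rust
// Sketch only: poll two extractor-like futures concurrently, but surface
// their errors in declaration order, so the first extractor's error wins
// even if the second one fails faster.
use futures::future::join;
use std::future::Future;

async fn extract_in_order<A, B, E>(
    fut_a: impl Future<Output = Result<A, E>>,
    fut_b: impl Future<Output = Result<B, E>>,
) -> Result<(A, B), E> {
    // Both futures are polled concurrently...
    let (a, b) = join(fut_a, fut_b).await;
    // ...but the errors are checked sequentially: `a?` is evaluated before `b?`.
    Ok((a?, b?))
}
```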
closes #2201
Co-authored-by: ad hoc <postma.marin@protonmail.com>
2197: Additions to 0.26 (Update actix-web dependency to 4.0) r=curquiza a=MarinPostma
From `main` to `release-v0.26.0`:
- [update actix-web dependency to 4.0](3b2e467ca6) (`@robjtede`, `@MarinPostma`)
Co-authored-by: Rob Ede <robjtede@icloud.com>
2173: chore(all): replace chrono with time r=irevoire a=irevoire
Chrono has been unmaintained for a few months now and there is a CVE on it.
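For illustration only (not taken from the actual diff), the kind of call-site change the chrono-to-time swap implies:

```rust
fn main() {
    // chrono (before): let now = chrono::Utc::now();
    // time (after):
    let now = time::OffsetDateTime::now_utc();
    println!("{now}");
}
```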
I also updated all the error messages related to the API key, as you can see here: https://github.com/meilisearch/specifications/pull/114

fixes #2172
Co-authored-by: Irevoire <tamo@meilisearch.com>
2122: fix: docker image failed to boot on arm64 node r=curquiza a=Thearas
# Pull Request
## What does this PR do?
Fixes #2115.
## PR checklist
Please check if your PR fulfills the following requirements:
- [x] Does this PR fix an existing issue?
- [x] Have you read the contributing guidelines?
- [x] Have you made sure that the title is accurate and descriptive of the changes?
Co-authored-by: Thearas <thearas850@gmail.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>
2157: fix(auth): fix env being closed when dumping r=Kerollmops a=MarinPostma
When creating a dump, the auth store environment would be closed on drop, so subsequent dumps couldn't reopen it. I have added a flag to the environment that prevents it from being closed on drop while dumping.
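A hedged sketch of the flag idea described above; `AuthEnv` and its fields are hypothetical names, not the real `meilisearch-auth` types:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical wrapper around the auth store environment.
struct AuthEnv {
    // The real type would hold the heed/LMDB environment here.
    close_on_drop: AtomicBool,
}

impl AuthEnv {
    // Called before handing the environment to the dump task so that
    // dropping that handle does not tear the environment down.
    fn keep_open_on_drop(&self) {
        self.close_on_drop.store(false, Ordering::Relaxed);
    }
}

impl Drop for AuthEnv {
    fn drop(&mut self) {
        if self.close_on_drop.load(Ordering::Relaxed) {
            // ...actually close the underlying environment here...
        }
    }
}
```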
Co-authored-by: ad hoc <postma.marin@protonmail.com>
2136: Refactoring CI regarding ARM binary publish r=curquiza a=curquiza
Fixes https://github.com/meilisearch/meilisearch/issues/1909
- Remove the CI file that published the aarch64 binary and move its logic into `publish-binary.yml`
- Remove the job publishing the armv8 binary
- Fix the download-latest script accordingly
- Adapt download-latest to the specific case of the macOS M1
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
Co-authored-by: meili-bot <74670311+meili-bot@users.noreply.github.com>
2005: auto batching r=MarinPostma a=MarinPostma
This PR implements auto batching. The basic principle is that, while the previous batch is being processed, all the pending updates that can be batched together are grouped into the next batch.
For now, the only updates that can be batched together are the document addition updates (both update and replace), for a single index.
The batching is disabled by default for multiple reasons:
- We need more experimentation with the scheduling techniques
- Right now, if one task fails in a batch, the whole batch fails. We need more permissive error handling when processing document indexation.
There are, for now, four CLI options to control how batches are scheduled (see the sketch after this list):
- `enable-autobatching`: enable the autobatching feature.
- `debounce-duration-sec`: When an update is received, wait that number of seconds before batching and performing the updates. Defaults to 0s.
- `max-batch-size`: the maximum number of tasks per batch, defaults to unlimited.
- `max-documents-per-batch`: the maximum number of documents in a batch, defaults to unlimited. A batch will always contain at least one task, no matter the number of documents in that task.
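A hedged sketch of how these options might be declared using `clap`'s derive API; this is not the actual `meilisearch-http` options struct:

```rust
use clap::Parser;

#[derive(Debug, Parser)]
struct SchedulerOpts {
    /// Enable the autobatching feature (disabled by default).
    #[clap(long = "enable-autobatching")]
    enable_autobatching: bool,

    /// Seconds to wait after receiving an update before building and
    /// processing a batch.
    #[clap(long = "debounce-duration-sec", default_value = "0")]
    debounce_duration_sec: u64,

    /// Maximum number of tasks per batch (unlimited when not set).
    #[clap(long = "max-batch-size")]
    max_batch_size: Option<usize>,

    /// Maximum number of documents per batch (unlimited when not set);
    /// a batch always contains at least one task.
    #[clap(long = "max-documents-per-batch")]
    max_documents_per_batch: Option<usize>,
}

fn main() {
    let opts = SchedulerOpts::parse();
    println!("{opts:?}");
}
```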
# Implementation
The current implementation is made of 3 major components:
## TaskStore
The `TaskStore` contains all the tasks. When a task is pushed, it is directly registered to the task store.
## Scheduler
The scheduler is in charge of making the batches. At its core, there is a `TaskQueue` and a job queue. `Job`s are always processed first. They are *volatile* tasks, that is, they don't have a TaskId and are not persisted to disk. Snapshots and dumps are examples of Jobs.
If no `Job` is available for processing, the scheduler attempts to make a `Task` batch from the `TaskQueue`. The first step is to gather new tasks from the `TaskStore` to populate the `TaskQueue`. Once this is done, we can prepare our batch. The `TaskQueue` is itself a `BinaryHeap` of `TaskList`s. Each `index_uid` is associated with a `TaskList` that contains all the updates associated with that index uid. Each `TaskList` in the `TaskQueue` is ordered by the id of its first task.
When preparing a batch, the `TaskList` at the top of the `TaskQueue` is popped, and its tasks are popped from the list to make the next batch. If there are remaining tasks in the list, the list is inserted back into the `TaskQueue`.
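A simplified sketch of that queueing and batching logic; the types are illustrative and carry far less information than the real ones:

```rust
use std::cmp::{Ordering, Reverse};
use std::collections::{BinaryHeap, VecDeque};

type TaskId = u64;

// Illustrative task type.
struct Task {
    id: TaskId,
}

// All the pending tasks of a single index, ordered by ascending TaskId.
struct TaskList {
    index_uid: String,
    tasks: VecDeque<Task>,
}

impl TaskList {
    fn first_id(&self) -> Option<TaskId> {
        self.tasks.front().map(|task| task.id)
    }
}

// Order lists by the id of their first task, as described above.
impl PartialEq for TaskList {
    fn eq(&self, other: &Self) -> bool {
        self.first_id() == other.first_id()
    }
}
impl Eq for TaskList {}
impl PartialOrd for TaskList {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for TaskList {
    fn cmp(&self, other: &Self) -> Ordering {
        self.first_id().cmp(&other.first_id())
    }
}

struct TaskQueue {
    // `Reverse` turns the max-heap into a min-heap, so the list holding the
    // oldest pending task sits at the top.
    lists: BinaryHeap<Reverse<TaskList>>,
}

impl TaskQueue {
    /// Pop the most urgent list, drain up to `max_batch_size` tasks from it,
    /// and push the list back if it still has tasks left.
    fn make_batch(&mut self, max_batch_size: usize) -> Vec<Task> {
        let mut batch = Vec::new();
        if let Some(Reverse(mut list)) = self.lists.pop() {
            while batch.len() < max_batch_size {
                match list.tasks.pop_front() {
                    Some(task) => batch.push(task),
                    None => break,
                }
            }
            if !list.tasks.is_empty() {
                self.lists.push(Reverse(list));
            }
        }
        batch
    }
}
```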
## UpdateLoop
The `UpdateLoop`'s role is to process batches sequentially. Each time updates are pushed to the update store, the scheduler is notified and in turn notifies the update loop that work can be performed. When notified, the update loop waits a short time for more incoming updates, then asks the scheduler for the next batch and performs it. When it is done, the task statuses are written back to the store, and the next batch is processed.
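A minimal sketch of that loop, assuming tokio and hypothetical `scheduler`/`task_store` handles (not the real `meilisearch-lib` types):

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Notify;

// Sketch of the notify -> debounce -> batch -> process cycle.
async fn update_loop(notifier: Arc<Notify>, debounce: Duration) {
    loop {
        // Wait until the scheduler signals that new tasks were registered.
        notifier.notified().await;

        // Debounce: leave a small window for more updates to arrive so they
        // can be batched together (`debounce-duration-sec`).
        tokio::time::sleep(debounce).await;

        // Hypothetical calls, named after the components described above:
        // let batch = scheduler.next_batch().await;
        // let statuses = process(batch).await;
        // task_store.update_statuses(statuses).await;
    }
}
```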
Co-authored-by: mpostma <postma.marin@protonmail.com>
2120: Bring `stable` into `main` r=curquiza a=curquiza
I forgot to do it, tell me `@Kerollmops` or `@irevoire` if it's useful or not. I would say yes, otherwise I will have conflicts when I try to bring `main` into `stable` for the next release. Maybe I'm wrong.
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Clémentine Urquizar - curqui <clementine@meilisearch.com>