mirror of
https://github.com/meilisearch/MeiliSearch
synced 2024-12-29 16:01:40 +01:00
457a473b72
* Fix error code of the "duplicate index found" error
* Use the content of the ProcessingTasks in the tasks cancelation system
* Change the missing_filters error code into missing_task_filters
* WIP Introduce the invalid_task_uid error code
* Use more precise error codes/messages for the task routes
  + Allow star operator in delete/cancel tasks
  + Rename originalQuery to originalFilters
  + Display error/canceled_by in task view even when they are = null
  + Rename task filter fields by using their plural forms
  + Prepare an error code for the canceledBy filter
  + Only return global tasks if the API key action `index.*` is there
* Add canceledBy task filter
* Update tests following task API changes
* Rename original_query to original_filters everywhere
* Update more insta-snap tests
* Make clippy happy. They're a happy clip now.
* Make rustfmt happy >:-(
* Fix the index name parsing error message to fit the specification
* Bump milli version to 0.35.1
* Fix the new error messages
* Fix the error messages and add tests
* Rename the error codes for the sake of consistency
* Refactor the way we send the CLI information + add the analytics for the config file and SSL usage
* Apply suggestions from code review
  Co-authored-by: Clément Renault <clement@meilisearch.com>
* Add a comment over the new infos structure
* Reformat, sorry @kero
* Store analytics for the document deletions
* Add analytics on all the settings
* Spawn threads with names
* Spawn rayon threads with names
* Update the distinct attributes to the spec update
* Update the analytics on the search route
* Implement the analytics on the health and version routes
* Fix task details serialization
* Add the question mark to the task deletion query filter
* Add the question mark to the task cancelation query filter
* Fix tests
* Add analytics on the task route
* Add all the missing fields of the new task query type
* Create a new analytics for the task deletion
* Create a new analytics for the task creation
* Batch the tasks seen events
* Update the finite pagination analytics
* Add the analytics of the swap-indexes route
* Stop removing the DB when failing to read it
* Rename originalQuery into originalFilters
* Rename matchedDocuments into providedIds
* Add `workflow_dispatch` to flaky.yml
* Bump grenad to 0.4.4
* Bump milli to version v0.37.0
* Don't multiply the total memory returned by sysinfo anymore: sysinfo now returns bytes rather than KB
* Add a dispatch to the publish binaries workflow
* Fix publish release CI
* Don't use gold but the default linker
* Always display details for the indexDeletion task
* Fix the insta tests
* Refactor the whole test suite:
  1. Make a call to assert_internally_consistent automatically when snapshotting the scheduler. There is no point in snapshotting something broken and expecting the dumb humans to notice.
  2. Replace every possible call to assert_internally_consistent with a snapshot of the scheduler. It takes as many lines and ensures we never change something without noticing in any test ever.
  3. Name every snapshot: it's easier to debug when something goes wrong and easier to review in general.
  4. Stop skipping breakpoints; it's too easy to miss something. Now you must explicitly show which path the scheduler is supposed to use.
  5. Add a timeout on the channel.recv: it eases the process of writing tests; now when something fails you get a failure instead of a deadlock.
* Rebase on release-v0.30
* Make clippy happy
* Update the snapshots after a rebase
* Try to remove the flakiness of the failing test
* Add more analytics on the ranking rules positions
* Update the dump test to check for the dumpUid in the dumpCreation task details
* Send the ranking rules as a string because Amplitude is too dumb to process an array as a single value
* Display a null dumpUid until we have computed the dump itself on disk
* Update tests
* Check if the master key is missing before returning an error

Co-authored-by: Loïc Lecrenier <loic.lecrenier@me.com>
Co-authored-by: bors[bot] <26634292+bors[bot]@users.noreply.github.com>
Co-authored-by: Kerollmops <clement@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
Co-authored-by: Louis Dureuil <louis@meilisearch.com>
234 lines
8.4 KiB
Rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock};
use std::{fs, thread};

use log::error;
use meilisearch_types::heed::types::Str;
use meilisearch_types::heed::{Database, Env, EnvOpenOptions, RoTxn, RwTxn};
use meilisearch_types::milli::update::IndexerConfig;
use meilisearch_types::milli::Index;
use uuid::Uuid;

use self::IndexStatus::{Available, BeingDeleted};
use crate::uuid_codec::UuidCodec;
use crate::{Error, Result};

const INDEX_MAPPING: &str = "index-mapping";

/// Structure managing meilisearch's indexes.
///
/// It is responsible for:
/// 1. Creating new indexes
/// 2. Opening indexes and storing references to these opened indexes
/// 3. Accessing indexes through their uuid
/// 4. Mapping a user-defined name to each index uuid.
#[derive(Clone)]
pub struct IndexMapper {
    /// Keeps track of the opened indexes. Used mainly by the index resolver.
    index_map: Arc<RwLock<HashMap<Uuid, IndexStatus>>>,

    /// Maps an index name to an index uuid currently available on disk.
    pub(crate) index_mapping: Database<Str, UuidCodec>,

    /// Path to the folder where the LMDB environment of each index lives.
    base_path: PathBuf,
    index_size: usize,
    pub indexer_config: Arc<IndexerConfig>,
}

/// Whether the index is available for use or is forbidden to be inserted back in the index map.
#[allow(clippy::large_enum_variant)]
#[derive(Clone)]
pub enum IndexStatus {
    /// Do not insert it back in the index map as it is currently being deleted.
    BeingDeleted,
    /// You can use the index without worrying about anything.
    Available(Index),
}

impl IndexMapper {
    pub fn new(
        env: &Env,
        base_path: PathBuf,
        index_size: usize,
        indexer_config: IndexerConfig,
    ) -> Result<Self> {
        Ok(Self {
            index_map: Arc::default(),
            index_mapping: env.create_database(Some(INDEX_MAPPING))?,
            base_path,
            index_size,
            indexer_config: Arc::new(indexer_config),
        })
    }

    /// Create or open an index in the specified path.
    /// The path *must* exist or an error will be thrown.
    fn create_or_open_index(&self, path: &Path) -> Result<Index> {
        let mut options = EnvOpenOptions::new();
        options.map_size(self.index_size);
        options.max_readers(1024);
        Ok(Index::new(options, path)?)
    }

    /// Get or create the index.
    pub fn create_index(&self, mut wtxn: RwTxn, name: &str) -> Result<Index> {
        match self.index(&wtxn, name) {
            Ok(index) => {
                wtxn.commit()?;
                Ok(index)
            }
            Err(Error::IndexNotFound(_)) => {
                let uuid = Uuid::new_v4();
                self.index_mapping.put(&mut wtxn, name, &uuid)?;

                let index_path = self.base_path.join(uuid.to_string());
                fs::create_dir_all(&index_path)?;
                let index = self.create_or_open_index(&index_path)?;

                wtxn.commit()?;
                // TODO: it would be better to lazily create the index. But we need an Index::open function for milli.
                if let Some(BeingDeleted) =
                    self.index_map.write().unwrap().insert(uuid, Available(index.clone()))
                {
                    panic!("Uuid v4 conflict.");
                }

                Ok(index)
            }
            error => error,
        }
    }

    /// Removes the index from the mapping table and the in-memory index map
    /// but keeps the associated tasks.
    pub fn delete_index(&self, mut wtxn: RwTxn, name: &str) -> Result<()> {
        let uuid = self
            .index_mapping
            .get(&wtxn, name)?
            .ok_or_else(|| Error::IndexNotFound(name.to_string()))?;

        // Once we have retrieved the UUID of the index we remove it from the mapping table.
        assert!(self.index_mapping.delete(&mut wtxn, name)?);

        wtxn.commit()?;
        // We remove the index from the in-memory index map.
        let mut lock = self.index_map.write().unwrap();
        let closing_event = match lock.insert(uuid, BeingDeleted) {
            Some(Available(index)) => Some(index.prepare_for_closing()),
            _ => None,
        };

        drop(lock);

        let index_map = self.index_map.clone();
        let index_path = self.base_path.join(uuid.to_string());
        let index_name = name.to_string();
        thread::Builder::new()
            .name(String::from("index_deleter"))
            .spawn(move || {
                // We first wait to be sure that the previously opened index is effectively closed.
                // This can take a lot of time, which is why we do it in a separate thread.
                if let Some(closing_event) = closing_event {
                    closing_event.wait();
                }

                // Then we remove the content from disk.
                if let Err(e) = fs::remove_dir_all(&index_path) {
                    error!(
                        "An error happened when deleting the index {} ({}): {}",
                        index_name, uuid, e
                    );
                }

                // Finally we remove the entry from the index map.
                assert!(matches!(index_map.write().unwrap().remove(&uuid), Some(BeingDeleted)));
            })
            .unwrap();

        Ok(())
    }

    pub fn exists(&self, rtxn: &RoTxn, name: &str) -> Result<bool> {
        Ok(self.index_mapping.get(rtxn, name)?.is_some())
    }

    /// Return an index, opening it if it wasn't already opened.
    pub fn index(&self, rtxn: &RoTxn, name: &str) -> Result<Index> {
        let uuid = self
            .index_mapping
            .get(rtxn, name)?
            .ok_or_else(|| Error::IndexNotFound(name.to_string()))?;

        // We clone here to drop the lock before entering the match.
        let index = self.index_map.read().unwrap().get(&uuid).cloned();
        let index = match index {
            Some(Available(index)) => index,
            Some(BeingDeleted) => return Err(Error::IndexNotFound(name.to_string())),
            // Since we're lazy, it's possible that the index has not been opened yet.
            None => {
                let mut index_map = self.index_map.write().unwrap();
                // Between dropping the read lock and taking the write lock it's possible
                // that someone already opened the index (e.g. if two searches happen
                // at the same time), so before opening it we check a second time
                // whether it's already there.
                // Since there is a good chance it's not already there, we can use
                // the entry method.
                match index_map.entry(uuid) {
                    Entry::Vacant(entry) => {
                        let index_path = self.base_path.join(uuid.to_string());
                        let index = self.create_or_open_index(&index_path)?;
                        entry.insert(Available(index.clone()));
                        index
                    }
                    Entry::Occupied(entry) => match entry.get() {
                        Available(index) => index.clone(),
                        BeingDeleted => return Err(Error::IndexNotFound(name.to_string())),
                    },
                }
            }
        };

        Ok(index)
    }

    /// Return all indexes, opening them if they weren't already opened.
    pub fn indexes(&self, rtxn: &RoTxn) -> Result<Vec<(String, Index)>> {
        self.index_mapping
            .iter(rtxn)?
            .map(|ret| {
                ret.map_err(Error::from).and_then(|(name, _)| {
                    self.index(rtxn, name).map(|index| (name.to_string(), index))
                })
            })
            .collect()
    }

    /// Swap two index names.
    pub fn swap(&self, wtxn: &mut RwTxn, lhs: &str, rhs: &str) -> Result<()> {
        let lhs_uuid = self
            .index_mapping
            .get(wtxn, lhs)?
            .ok_or_else(|| Error::IndexNotFound(lhs.to_string()))?;
        let rhs_uuid = self
            .index_mapping
            .get(wtxn, rhs)?
            .ok_or_else(|| Error::IndexNotFound(rhs.to_string()))?;

        self.index_mapping.put(wtxn, lhs, &rhs_uuid)?;
        self.index_mapping.put(wtxn, rhs, &lhs_uuid)?;

        Ok(())
    }

    pub fn index_exists(&self, rtxn: &RoTxn, name: &str) -> Result<bool> {
        Ok(self.index_mapping.get(rtxn, name)?.is_some())
    }

    pub fn indexer_config(&self) -> &IndexerConfig {
        &self.indexer_config
    }
}