382: Refactor attribute criterion r=Kerollmops a=ManyTheFish

### Re-implement set-based algorithm for the attribute criterion
#### Levels
Instead of iterating over levels and digging into the interesting one, we now only iterate over the lowest level.

#### Cross-word iteration VS minimal-position iteration
Instead of crossing word positions in order to iterate strictly, and in the right order, over the positions that give the best rank, we iterate word by word, starting with the word that increases the rank as little as possible.
This new method is a bit less precise but way simpler.
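To make the new iteration concrete, here is a minimal sketch of the idea rather than the actual implementation: each query word keeps a cursor over its sorted positions, the rank is derived from the sum of the current positions, and each step advances the word whose next position would increase that sum the least. The `WordCursor` type and the helpers below are illustrative only; the real criterion streams positions from the word-position-docids database and intersects RoaringBitmaps of candidate documents along the way.

```rust
/// Minimal sketch of the word-by-word iteration, assuming every query word's
/// positions are already collected in a non-empty, sorted Vec (the real code
/// streams them from LMDB and tracks candidate bitmaps at each step).
const LCM_10_FIRST_NUMBERS: u32 = 2520;

struct WordCursor {
    positions: Vec<u32>, // sorted positions where this query word occurs
    index: usize,        // cursor into `positions`
}

impl WordCursor {
    fn current(&self) -> u32 {
        self.positions[self.index]
    }

    /// Cost of advancing this word: how much its current position would grow.
    fn next_increase(&self) -> u32 {
        match self.positions.get(self.index + 1) {
            Some(next) => next - self.current(),
            None => u32::MAX, // exhausted, advancing it is never interesting
        }
    }
}

/// Rank of the current combination of positions: lower is better.
fn compute_rank(words: &[WordCursor]) -> u32 {
    let branch_size = words.len() as u32;
    let pos_sum: u32 = words.iter().map(WordCursor::current).sum();
    // Subtract the best possible sum (0 + 1 + ... + branch_size - 1) so that
    // a query matching at the very start of an attribute gets rank 0.
    pos_sum.saturating_sub((0..branch_size).sum()) * LCM_10_FIRST_NUMBERS / branch_size
}

/// Advance the word whose next position increases the rank the least.
/// Returns false once every word is exhausted.
fn advance_cheapest(words: &mut [WordCursor]) -> bool {
    let cheapest = words
        .iter()
        .enumerate()
        .map(|(i, w)| (i, w.next_increase()))
        .min_by_key(|&(_, increase)| increase);
    match cheapest {
        Some((index, increase)) if increase != u32::MAX => {
            words[index].index += 1;
            true
        }
        _ => false,
    }
}
```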

### Simplify word-level-position database
We no longer use levels in the attribute criterion, so we removed the level complexity from the database, turning it into a word-position-docids database.
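For context, the new key in that database is simply the word bytes followed by the big-endian `u32` position, which is what the `StrBEU32Codec` added in the diff below encodes, and enumerating every position of a word becomes a single range scan. A condensed sketch (the helper names are illustrative):

```rust
// Sketch of the new word-position-docids key layout: the word bytes followed
// by the big-endian u32 position, replacing the old (word, level, left, right).
fn encode_key(word: &str, pos: u32) -> Vec<u8> {
    let mut bytes = Vec::with_capacity(word.len() + 4);
    bytes.extend_from_slice(word.as_bytes());
    bytes.extend_from_slice(&pos.to_be_bytes());
    bytes
}

// Every position of a word fits in one inclusive range, which is how the
// refactored word_position_iterator scans the database.
fn all_positions_range(word: &str) -> std::ops::RangeInclusive<(&str, u32)> {
    (word, u32::MIN)..=(word, u32::MAX)
}
```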

### Benchmarks on search on big datasets

#### songs main VS refactor-attribute-criterion
```diff
  group                                                   search_songsmain_31c18f09               search_songsrefactor-attribute-criterion_1bd15d84
  -----                                                   -------------------------               -------------------------------------------------
- smol-songs.csv: basic filter: <=/Notstandskomitee       1.00     84.8±0.58µs        ? ?/sec     1.09     92.2±8.98µs        ? ?/sec
+ smol-songs.csv: basic filter: TO/Notstandskomitee       1.18     98.0±6.30µs        ? ?/sec     1.00     83.2±0.97µs        ? ?/sec
+ smol-songs.csv: basic with quote/"david" "bowie"        114.68    76.0±0.20ms        ? ?/sec    1.00    662.5±5.03µs        ? ?/sec
- smol-songs.csv: basic with quote/"john"                 1.00    197.4±1.06µs        ? ?/sec     1.05    208.1±1.53µs        ? ?/sec
+ smol-songs.csv: basic with quote/"michael" "jackson"    2.75      2.0±0.01ms        ? ?/sec     1.00    738.9±3.91µs        ? ?/sec
+ smol-songs.csv: basic without quote/david bowie         297.42  1499.3±0.86ms        ? ?/sec    1.00      5.0±0.02ms        ? ?/sec
+ smol-songs.csv: basic without quote/michael jackson     2.55      8.9±0.02ms        ? ?/sec     1.00      3.5±0.01ms        ? ?/sec
+ smol-songs.csv: big filter/john                         1.08    473.6±2.25µs        ? ?/sec     1.00    438.1±2.59µs        ? ?/sec
- smol-songs.csv: prefix search/a                         1.00    446.9±1.81µs        ? ?/sec     1.79    800.5±4.45µs        ? ?/sec
- smol-songs.csv: prefix search/b                         1.00    398.5±2.74µs        ? ?/sec     1.81    723.1±5.46µs        ? ?/sec
- smol-songs.csv: prefix search/i                         1.00    486.3±1.99µs        ? ?/sec     1.69    823.6±9.42µs        ? ?/sec
- smol-songs.csv: prefix search/s                         1.00    229.6±3.29µs        ? ?/sec     2.59    594.4±2.22µs        ? ?/sec
- smol-songs.csv: prefix search/x                         1.00    150.2±0.76µs        ? ?/sec     1.11    166.0±0.87µs        ? ?/sec
```

On songs, the new algorithm gives a big improvement on slow queries and is slower on one-char prefix searches (fast queries, <1ms).

#### wiki main VS refactor-attribute-criterion
```diff
  group                                                           search_wikimain_31c18f09               search_wikirefactor-attribute-criterion_1bd15d84
  -----                                                           ------------------------               ------------------------------------------------
- smol-wiki-articles.csv: basic with quote/"rock" "and" "roll"    1.00      3.2±0.01ms        ? ?/sec    1.15      3.7±0.01ms        ? ?/sec
- smol-wiki-articles.csv: basic without quote/film                1.00    351.5±2.47µs        ? ?/sec    1.13    396.8±1.63µs        ? ?/sec
+ smol-wiki-articles.csv: basic without quote/rock and roll       1.10      9.4±0.02ms        ? ?/sec    1.00      8.6±0.04ms        ? ?/sec
- smol-wiki-articles.csv: basic without quote/spain               1.00    446.0±3.23µs        ? ?/sec    1.11    496.6±7.75µs        ? ?/sec
- smol-wiki-articles.csv: prefix search/c                         1.00    115.6±0.61µs        ? ?/sec    2.22    256.7±1.24µs        ? ?/sec
- smol-wiki-articles.csv: prefix search/g                         1.00    189.7±2.03µs        ? ?/sec    1.57    297.0±1.35µs        ? ?/sec
- smol-wiki-articles.csv: prefix search/j                         1.00    209.2±1.11µs        ? ?/sec    1.40    293.0±2.09µs        ? ?/sec
- smol-wiki-articles.csv: prefix search/q                         1.00     79.0±0.44µs        ? ?/sec    1.10     87.2±0.69µs        ? ?/sec
- smol-wiki-articles.csv: prefix search/t                         1.00    270.1±1.15µs        ? ?/sec    1.55    419.9±5.16µs        ? ?/sec
- smol-wiki-articles.csv: prefix search/x                         1.00    244.9±1.33µs        ? ?/sec    1.07    260.9±1.95µs        ? ?/sec
- smol-wiki-articles.csv: words/Abraham machin                    1.00      8.1±0.03ms        ? ?/sec    1.17      9.4±0.02ms        ? ?/sec
- smol-wiki-articles.csv: words/Idaho Bellevue pizza              1.00     19.3±0.07ms        ? ?/sec    1.07     20.6±0.05ms        ? ?/sec
```
On wiki, we have some regressions of `+17%` and `+15%` on requests `>1ms`.

Co-authored-by: many <maxime@meilisearch.com>
bors[bot] 2021-10-06 09:19:33 +00:00, committed by GitHub (commit dde1da1c0e)
20 changed files with 436 additions and 963 deletions


@ -361,6 +361,7 @@ async fn main() -> anyhow::Result<()> {
// We must use the write transaction of the update here. // We must use the write transaction of the update here.
let mut wtxn = index_cloned.write_txn()?; let mut wtxn = index_cloned.write_txn()?;
let mut builder = update_builder.index_documents(&mut wtxn, &index_cloned); let mut builder = update_builder.index_documents(&mut wtxn, &index_cloned);
builder.enable_autogenerate_docids();
match method.as_str() { match method.as_str() {
"replace" => builder "replace" => builder


@ -7,7 +7,7 @@ use byte_unit::Byte;
use heed::EnvOpenOptions; use heed::EnvOpenOptions;
use milli::facet::FacetType; use milli::facet::FacetType;
use milli::index::db_name::*; use milli::index::db_name::*;
use milli::{FieldId, Index, TreeLevel}; use milli::{FieldId, Index};
use structopt::StructOpt; use structopt::StructOpt;
use Command::*; use Command::*;
@ -22,8 +22,8 @@ const ALL_DATABASE_NAMES: &[&str] = &[
DOCID_WORD_POSITIONS, DOCID_WORD_POSITIONS,
WORD_PAIR_PROXIMITY_DOCIDS, WORD_PAIR_PROXIMITY_DOCIDS,
WORD_PREFIX_PAIR_PROXIMITY_DOCIDS, WORD_PREFIX_PAIR_PROXIMITY_DOCIDS,
WORD_LEVEL_POSITION_DOCIDS, WORD_POSITION_DOCIDS,
WORD_PREFIX_LEVEL_POSITION_DOCIDS, WORD_PREFIX_POSITION_DOCIDS,
FIELD_ID_WORD_COUNT_DOCIDS, FIELD_ID_WORD_COUNT_DOCIDS,
FACET_ID_F64_DOCIDS, FACET_ID_F64_DOCIDS,
FACET_ID_STRING_DOCIDS, FACET_ID_STRING_DOCIDS,
@ -281,10 +281,10 @@ fn main() -> anyhow::Result<()> {
facet_values_docids(&index, &rtxn, !full_display, FacetType::String, field_name) facet_values_docids(&index, &rtxn, !full_display, FacetType::String, field_name)
} }
WordsLevelPositionsDocids { full_display, words } => { WordsLevelPositionsDocids { full_display, words } => {
words_level_positions_docids(&index, &rtxn, !full_display, words) words_positions_docids(&index, &rtxn, !full_display, words)
} }
WordPrefixesLevelPositionsDocids { full_display, prefixes } => { WordPrefixesLevelPositionsDocids { full_display, prefixes } => {
word_prefixes_level_positions_docids(&index, &rtxn, !full_display, prefixes) word_prefixes_positions_docids(&index, &rtxn, !full_display, prefixes)
} }
FieldIdWordCountDocids { full_display, field_name } => { FieldIdWordCountDocids { full_display, field_name } => {
field_id_word_count_docids(&index, &rtxn, !full_display, field_name) field_id_word_count_docids(&index, &rtxn, !full_display, field_name)
@ -379,8 +379,8 @@ fn biggest_value_sizes(index: &Index, rtxn: &heed::RoTxn, limit: usize) -> anyho
docid_word_positions, docid_word_positions,
word_pair_proximity_docids, word_pair_proximity_docids,
word_prefix_pair_proximity_docids, word_prefix_pair_proximity_docids,
word_level_position_docids, word_position_docids,
word_prefix_level_position_docids, word_prefix_position_docids,
field_id_word_count_docids, field_id_word_count_docids,
facet_id_f64_docids, facet_id_f64_docids,
facet_id_string_docids, facet_id_string_docids,
@ -395,8 +395,8 @@ fn biggest_value_sizes(index: &Index, rtxn: &heed::RoTxn, limit: usize) -> anyho
let docid_word_positions_name = "docid_word_positions"; let docid_word_positions_name = "docid_word_positions";
let word_prefix_pair_proximity_docids_name = "word_prefix_pair_proximity_docids"; let word_prefix_pair_proximity_docids_name = "word_prefix_pair_proximity_docids";
let word_pair_proximity_docids_name = "word_pair_proximity_docids"; let word_pair_proximity_docids_name = "word_pair_proximity_docids";
let word_level_position_docids_name = "word_level_position_docids"; let word_position_docids_name = "word_position_docids";
let word_prefix_level_position_docids_name = "word_prefix_level_position_docids"; let word_prefix_position_docids_name = "word_prefix_position_docids";
let field_id_word_count_docids_name = "field_id_word_count_docids"; let field_id_word_count_docids_name = "field_id_word_count_docids";
let facet_id_f64_docids_name = "facet_id_f64_docids"; let facet_id_f64_docids_name = "facet_id_f64_docids";
let facet_id_string_docids_name = "facet_id_string_docids"; let facet_id_string_docids_name = "facet_id_string_docids";
@ -471,19 +471,19 @@ fn biggest_value_sizes(index: &Index, rtxn: &heed::RoTxn, limit: usize) -> anyho
} }
} }
for result in word_level_position_docids.remap_data_type::<ByteSlice>().iter(rtxn)? { for result in word_position_docids.remap_data_type::<ByteSlice>().iter(rtxn)? {
let ((word, level, left, right), value) = result?; let ((word, pos), value) = result?;
let key = format!("{} {} {:?}", word, level, left..=right); let key = format!("{} {}", word, pos);
heap.push(Reverse((value.len(), key, word_level_position_docids_name))); heap.push(Reverse((value.len(), key, word_position_docids_name)));
if heap.len() > limit { if heap.len() > limit {
heap.pop(); heap.pop();
} }
} }
for result in word_prefix_level_position_docids.remap_data_type::<ByteSlice>().iter(rtxn)? { for result in word_prefix_position_docids.remap_data_type::<ByteSlice>().iter(rtxn)? {
let ((word, level, left, right), value) = result?; let ((word, pos), value) = result?;
let key = format!("{} {} {:?}", word, level, left..=right); let key = format!("{} {}", word, pos);
heap.push(Reverse((value.len(), key, word_prefix_level_position_docids_name))); heap.push(Reverse((value.len(), key, word_prefix_position_docids_name)));
if heap.len() > limit { if heap.len() > limit {
heap.pop(); heap.pop();
} }
@ -663,7 +663,7 @@ fn facet_values_docids(
Ok(wtr.flush()?) Ok(wtr.flush()?)
} }
fn words_level_positions_docids( fn words_positions_docids(
index: &Index, index: &Index,
rtxn: &heed::RoTxn, rtxn: &heed::RoTxn,
debug: bool, debug: bool,
@ -671,16 +671,16 @@ fn words_level_positions_docids(
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
let stdout = io::stdout(); let stdout = io::stdout();
let mut wtr = csv::Writer::from_writer(stdout.lock()); let mut wtr = csv::Writer::from_writer(stdout.lock());
wtr.write_record(&["word", "level", "positions", "documents_count", "documents_ids"])?; wtr.write_record(&["word", "position", "documents_count", "documents_ids"])?;
for word in words.iter().map(AsRef::as_ref) { for word in words.iter().map(AsRef::as_ref) {
let range = { let range = {
let left = (word, TreeLevel::min_value(), u32::min_value(), u32::min_value()); let left = (word, u32::min_value());
let right = (word, TreeLevel::max_value(), u32::max_value(), u32::max_value()); let right = (word, u32::max_value());
left..=right left..=right
}; };
for result in index.word_level_position_docids.range(rtxn, &range)? { for result in index.word_position_docids.range(rtxn, &range)? {
let ((w, level, left, right), docids) = result?; let ((w, pos), docids) = result?;
let count = docids.len().to_string(); let count = docids.len().to_string();
let docids = if debug { let docids = if debug {
@ -688,20 +688,15 @@ fn words_level_positions_docids(
} else { } else {
format!("{:?}", docids.iter().collect::<Vec<_>>()) format!("{:?}", docids.iter().collect::<Vec<_>>())
}; };
let position_range = if level == TreeLevel::min_value() { let position = format!("{:?}", pos);
format!("{:?}", left) wtr.write_record(&[w, &position, &count, &docids])?;
} else {
format!("{:?}", left..=right)
};
let level = level.to_string();
wtr.write_record(&[w, &level, &position_range, &count, &docids])?;
} }
} }
Ok(wtr.flush()?) Ok(wtr.flush()?)
} }
fn word_prefixes_level_positions_docids( fn word_prefixes_positions_docids(
index: &Index, index: &Index,
rtxn: &heed::RoTxn, rtxn: &heed::RoTxn,
debug: bool, debug: bool,
@ -709,16 +704,16 @@ fn word_prefixes_level_positions_docids(
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
let stdout = io::stdout(); let stdout = io::stdout();
let mut wtr = csv::Writer::from_writer(stdout.lock()); let mut wtr = csv::Writer::from_writer(stdout.lock());
wtr.write_record(&["prefix", "level", "positions", "documents_count", "documents_ids"])?; wtr.write_record(&["prefix", "position", "documents_count", "documents_ids"])?;
for word in prefixes.iter().map(AsRef::as_ref) { for word in prefixes.iter().map(AsRef::as_ref) {
let range = { let range = {
let left = (word, TreeLevel::min_value(), u32::min_value(), u32::min_value()); let left = (word, u32::min_value());
let right = (word, TreeLevel::max_value(), u32::max_value(), u32::max_value()); let right = (word, u32::max_value());
left..=right left..=right
}; };
for result in index.word_prefix_level_position_docids.range(rtxn, &range)? { for result in index.word_prefix_position_docids.range(rtxn, &range)? {
let ((w, level, left, right), docids) = result?; let ((w, pos), docids) = result?;
let count = docids.len().to_string(); let count = docids.len().to_string();
let docids = if debug { let docids = if debug {
@ -726,13 +721,8 @@ fn word_prefixes_level_positions_docids(
} else { } else {
format!("{:?}", docids.iter().collect::<Vec<_>>()) format!("{:?}", docids.iter().collect::<Vec<_>>())
}; };
let position_range = if level == TreeLevel::min_value() { let position = format!("{:?}", pos);
format!("{:?}", left) wtr.write_record(&[w, &position, &count, &docids])?;
} else {
format!("{:?}", left..=right)
};
let level = level.to_string();
wtr.write_record(&[w, &level, &position_range, &count, &docids])?;
} }
} }
@ -970,8 +960,8 @@ fn size_of_databases(index: &Index, rtxn: &heed::RoTxn, names: Vec<String>) -> a
docid_word_positions, docid_word_positions,
word_pair_proximity_docids, word_pair_proximity_docids,
word_prefix_pair_proximity_docids, word_prefix_pair_proximity_docids,
word_level_position_docids, word_position_docids,
word_prefix_level_position_docids, word_prefix_position_docids,
field_id_word_count_docids, field_id_word_count_docids,
facet_id_f64_docids, facet_id_f64_docids,
facet_id_string_docids, facet_id_string_docids,
@ -994,8 +984,8 @@ fn size_of_databases(index: &Index, rtxn: &heed::RoTxn, names: Vec<String>) -> a
DOCID_WORD_POSITIONS => docid_word_positions.as_polymorph(), DOCID_WORD_POSITIONS => docid_word_positions.as_polymorph(),
WORD_PAIR_PROXIMITY_DOCIDS => word_pair_proximity_docids.as_polymorph(), WORD_PAIR_PROXIMITY_DOCIDS => word_pair_proximity_docids.as_polymorph(),
WORD_PREFIX_PAIR_PROXIMITY_DOCIDS => word_prefix_pair_proximity_docids.as_polymorph(), WORD_PREFIX_PAIR_PROXIMITY_DOCIDS => word_prefix_pair_proximity_docids.as_polymorph(),
WORD_LEVEL_POSITION_DOCIDS => word_level_position_docids.as_polymorph(), WORD_POSITION_DOCIDS => word_position_docids.as_polymorph(),
WORD_PREFIX_LEVEL_POSITION_DOCIDS => word_prefix_level_position_docids.as_polymorph(), WORD_PREFIX_POSITION_DOCIDS => word_prefix_position_docids.as_polymorph(),
FIELD_ID_WORD_COUNT_DOCIDS => field_id_word_count_docids.as_polymorph(), FIELD_ID_WORD_COUNT_DOCIDS => field_id_word_count_docids.as_polymorph(),
FACET_ID_F64_DOCIDS => facet_id_f64_docids.as_polymorph(), FACET_ID_F64_DOCIDS => facet_id_f64_docids.as_polymorph(),
FACET_ID_STRING_DOCIDS => facet_id_string_docids.as_polymorph(), FACET_ID_STRING_DOCIDS => facet_id_string_docids.as_polymorph(),


@ -4,7 +4,7 @@ mod field_id_word_count_codec;
mod obkv_codec; mod obkv_codec;
mod roaring_bitmap; mod roaring_bitmap;
mod roaring_bitmap_length; mod roaring_bitmap_length;
mod str_level_position_codec; mod str_beu32_codec;
mod str_str_u8_codec; mod str_str_u8_codec;
pub use self::beu32_str_codec::BEU32StrCodec; pub use self::beu32_str_codec::BEU32StrCodec;
@ -14,5 +14,5 @@ pub use self::roaring_bitmap::{BoRoaringBitmapCodec, CboRoaringBitmapCodec, Roar
pub use self::roaring_bitmap_length::{ pub use self::roaring_bitmap_length::{
BoRoaringBitmapLenCodec, CboRoaringBitmapLenCodec, RoaringBitmapLenCodec, BoRoaringBitmapLenCodec, CboRoaringBitmapLenCodec, RoaringBitmapLenCodec,
}; };
pub use self::str_level_position_codec::StrLevelPositionCodec; pub use self::str_beu32_codec::StrBEU32Codec;
pub use self::str_str_u8_codec::StrStrU8Codec; pub use self::str_str_u8_codec::StrStrU8Codec;


@ -0,0 +1,38 @@
use std::borrow::Cow;
use std::convert::TryInto;
use std::mem::size_of;
use std::str;
pub struct StrBEU32Codec;
impl<'a> heed::BytesDecode<'a> for StrBEU32Codec {
type DItem = (&'a str, u32);
fn bytes_decode(bytes: &'a [u8]) -> Option<Self::DItem> {
let footer_len = size_of::<u32>();
if bytes.len() < footer_len {
return None;
}
let (word, bytes) = bytes.split_at(bytes.len() - footer_len);
let word = str::from_utf8(word).ok()?;
let pos = bytes.try_into().map(u32::from_be_bytes).ok()?;
Some((word, pos))
}
}
impl<'a> heed::BytesEncode<'a> for StrBEU32Codec {
type EItem = (&'a str, u32);
fn bytes_encode((word, pos): &Self::EItem) -> Option<Cow<[u8]>> {
let pos = pos.to_be_bytes();
let mut bytes = Vec::with_capacity(word.len() + pos.len());
bytes.extend_from_slice(word.as_bytes());
bytes.extend_from_slice(&pos[..]);
Some(Cow::Owned(bytes))
}
}


@ -1,47 +0,0 @@
use std::borrow::Cow;
use std::convert::{TryFrom, TryInto};
use std::mem::size_of;
use std::str;
use crate::TreeLevel;
pub struct StrLevelPositionCodec;
impl<'a> heed::BytesDecode<'a> for StrLevelPositionCodec {
type DItem = (&'a str, TreeLevel, u32, u32);
fn bytes_decode(bytes: &'a [u8]) -> Option<Self::DItem> {
let footer_len = size_of::<u8>() + size_of::<u32>() * 2;
if bytes.len() < footer_len {
return None;
}
let (word, bytes) = bytes.split_at(bytes.len() - footer_len);
let word = str::from_utf8(word).ok()?;
let (level, bytes) = bytes.split_first()?;
let left = bytes[..4].try_into().map(u32::from_be_bytes).ok()?;
let right = bytes[4..].try_into().map(u32::from_be_bytes).ok()?;
let level = TreeLevel::try_from(*level).ok()?;
Some((word, level, left, right))
}
}
impl<'a> heed::BytesEncode<'a> for StrLevelPositionCodec {
type EItem = (&'a str, TreeLevel, u32, u32);
fn bytes_encode((word, level, left, right): &Self::EItem) -> Option<Cow<[u8]>> {
let left = left.to_be_bytes();
let right = right.to_be_bytes();
let mut bytes = Vec::with_capacity(word.len() + 1 + left.len() + right.len());
bytes.extend_from_slice(word.as_bytes());
bytes.push((*level).into());
bytes.extend_from_slice(&left[..]);
bytes.extend_from_slice(&right[..]);
Some(Cow::Owned(bytes))
}
}


@ -20,7 +20,7 @@ use crate::{
default_criteria, BEU32StrCodec, BoRoaringBitmapCodec, CboRoaringBitmapCodec, Criterion, default_criteria, BEU32StrCodec, BoRoaringBitmapCodec, CboRoaringBitmapCodec, Criterion,
DocumentId, ExternalDocumentsIds, FacetDistribution, FieldDistribution, FieldId, DocumentId, ExternalDocumentsIds, FacetDistribution, FieldDistribution, FieldId,
FieldIdWordCountCodec, GeoPoint, ObkvCodec, Result, RoaringBitmapCodec, RoaringBitmapLenCodec, FieldIdWordCountCodec, GeoPoint, ObkvCodec, Result, RoaringBitmapCodec, RoaringBitmapLenCodec,
Search, StrLevelPositionCodec, StrStrU8Codec, BEU32, Search, StrBEU32Codec, StrStrU8Codec, BEU32,
}; };
pub mod main_key { pub mod main_key {
@ -55,8 +55,8 @@ pub mod db_name {
pub const DOCID_WORD_POSITIONS: &str = "docid-word-positions"; pub const DOCID_WORD_POSITIONS: &str = "docid-word-positions";
pub const WORD_PAIR_PROXIMITY_DOCIDS: &str = "word-pair-proximity-docids"; pub const WORD_PAIR_PROXIMITY_DOCIDS: &str = "word-pair-proximity-docids";
pub const WORD_PREFIX_PAIR_PROXIMITY_DOCIDS: &str = "word-prefix-pair-proximity-docids"; pub const WORD_PREFIX_PAIR_PROXIMITY_DOCIDS: &str = "word-prefix-pair-proximity-docids";
pub const WORD_LEVEL_POSITION_DOCIDS: &str = "word-level-position-docids"; pub const WORD_POSITION_DOCIDS: &str = "word-position-docids";
pub const WORD_PREFIX_LEVEL_POSITION_DOCIDS: &str = "word-prefix-level-position-docids"; pub const WORD_PREFIX_POSITION_DOCIDS: &str = "word-prefix-position-docids";
pub const FIELD_ID_WORD_COUNT_DOCIDS: &str = "field-id-word-count-docids"; pub const FIELD_ID_WORD_COUNT_DOCIDS: &str = "field-id-word-count-docids";
pub const FACET_ID_F64_DOCIDS: &str = "facet-id-f64-docids"; pub const FACET_ID_F64_DOCIDS: &str = "facet-id-f64-docids";
pub const FACET_ID_STRING_DOCIDS: &str = "facet-id-string-docids"; pub const FACET_ID_STRING_DOCIDS: &str = "facet-id-string-docids";
@ -86,12 +86,12 @@ pub struct Index {
/// Maps the proximity between a pair of word and prefix with all the docids where this relation appears. /// Maps the proximity between a pair of word and prefix with all the docids where this relation appears.
pub word_prefix_pair_proximity_docids: Database<StrStrU8Codec, CboRoaringBitmapCodec>, pub word_prefix_pair_proximity_docids: Database<StrStrU8Codec, CboRoaringBitmapCodec>,
/// Maps the word, level and position range with the docids that corresponds to it. /// Maps the word and the position with the docids that corresponds to it.
pub word_level_position_docids: Database<StrLevelPositionCodec, CboRoaringBitmapCodec>, pub word_position_docids: Database<StrBEU32Codec, CboRoaringBitmapCodec>,
/// Maps the field id and the word count with the docids that corresponds to it. /// Maps the field id and the word count with the docids that corresponds to it.
pub field_id_word_count_docids: Database<FieldIdWordCountCodec, CboRoaringBitmapCodec>, pub field_id_word_count_docids: Database<FieldIdWordCountCodec, CboRoaringBitmapCodec>,
/// Maps the level positions of a word prefix with all the docids where this prefix appears. /// Maps the position of a word prefix with all the docids where this prefix appears.
pub word_prefix_level_position_docids: Database<StrLevelPositionCodec, CboRoaringBitmapCodec>, pub word_prefix_position_docids: Database<StrBEU32Codec, CboRoaringBitmapCodec>,
/// Maps the facet field id, level and the number with the docids that corresponds to it. /// Maps the facet field id, level and the number with the docids that corresponds to it.
pub facet_id_f64_docids: Database<FacetLevelValueF64Codec, CboRoaringBitmapCodec>, pub facet_id_f64_docids: Database<FacetLevelValueF64Codec, CboRoaringBitmapCodec>,
@ -122,10 +122,9 @@ impl Index {
let word_pair_proximity_docids = env.create_database(Some(WORD_PAIR_PROXIMITY_DOCIDS))?; let word_pair_proximity_docids = env.create_database(Some(WORD_PAIR_PROXIMITY_DOCIDS))?;
let word_prefix_pair_proximity_docids = let word_prefix_pair_proximity_docids =
env.create_database(Some(WORD_PREFIX_PAIR_PROXIMITY_DOCIDS))?; env.create_database(Some(WORD_PREFIX_PAIR_PROXIMITY_DOCIDS))?;
let word_level_position_docids = env.create_database(Some(WORD_LEVEL_POSITION_DOCIDS))?; let word_position_docids = env.create_database(Some(WORD_POSITION_DOCIDS))?;
let field_id_word_count_docids = env.create_database(Some(FIELD_ID_WORD_COUNT_DOCIDS))?; let field_id_word_count_docids = env.create_database(Some(FIELD_ID_WORD_COUNT_DOCIDS))?;
let word_prefix_level_position_docids = let word_prefix_position_docids = env.create_database(Some(WORD_PREFIX_POSITION_DOCIDS))?;
env.create_database(Some(WORD_PREFIX_LEVEL_POSITION_DOCIDS))?;
let facet_id_f64_docids = env.create_database(Some(FACET_ID_F64_DOCIDS))?; let facet_id_f64_docids = env.create_database(Some(FACET_ID_F64_DOCIDS))?;
let facet_id_string_docids = env.create_database(Some(FACET_ID_STRING_DOCIDS))?; let facet_id_string_docids = env.create_database(Some(FACET_ID_STRING_DOCIDS))?;
let field_id_docid_facet_f64s = env.create_database(Some(FIELD_ID_DOCID_FACET_F64S))?; let field_id_docid_facet_f64s = env.create_database(Some(FIELD_ID_DOCID_FACET_F64S))?;
@ -143,8 +142,8 @@ impl Index {
docid_word_positions, docid_word_positions,
word_pair_proximity_docids, word_pair_proximity_docids,
word_prefix_pair_proximity_docids, word_prefix_pair_proximity_docids,
word_level_position_docids, word_position_docids,
word_prefix_level_position_docids, word_prefix_position_docids,
field_id_word_count_docids, field_id_word_count_docids,
facet_id_f64_docids, facet_id_f64_docids,
facet_id_string_docids, facet_id_string_docids,


@ -14,7 +14,6 @@ pub mod heed_codec;
pub mod index; pub mod index;
pub mod proximity; pub mod proximity;
mod search; mod search;
pub mod tree_level;
pub mod update; pub mod update;
use std::collections::{BTreeMap, HashMap}; use std::collections::{BTreeMap, HashMap};
@ -35,11 +34,10 @@ pub use self::fields_ids_map::FieldsIdsMap;
pub use self::heed_codec::{ pub use self::heed_codec::{
BEU32StrCodec, BoRoaringBitmapCodec, BoRoaringBitmapLenCodec, CboRoaringBitmapCodec, BEU32StrCodec, BoRoaringBitmapCodec, BoRoaringBitmapLenCodec, CboRoaringBitmapCodec,
CboRoaringBitmapLenCodec, FieldIdWordCountCodec, ObkvCodec, RoaringBitmapCodec, CboRoaringBitmapLenCodec, FieldIdWordCountCodec, ObkvCodec, RoaringBitmapCodec,
RoaringBitmapLenCodec, StrLevelPositionCodec, StrStrU8Codec, RoaringBitmapLenCodec, StrBEU32Codec, StrStrU8Codec,
}; };
pub use self::index::Index; pub use self::index::Index;
pub use self::search::{FacetDistribution, FilterCondition, MatchingWords, Search, SearchResult}; pub use self::search::{FacetDistribution, FilterCondition, MatchingWords, Search, SearchResult};
pub use self::tree_level::TreeLevel;
pub type Result<T> = std::result::Result<T, error::Error>; pub type Result<T> = std::result::Result<T, error::Error>;


@ -1,7 +1,7 @@
use std::borrow::Cow;
use std::cmp::{self, Ordering}; use std::cmp::{self, Ordering};
use std::collections::binary_heap::PeekMut; use std::collections::binary_heap::PeekMut;
use std::collections::{btree_map, BTreeMap, BinaryHeap, HashMap}; use std::collections::{btree_map, BTreeMap, BinaryHeap, HashMap};
use std::iter::Peekable;
use std::mem::take; use std::mem::take;
use roaring::RoaringBitmap; use roaring::RoaringBitmap;
@ -10,20 +10,16 @@ use super::{resolve_query_tree, Context, Criterion, CriterionParameters, Criteri
use crate::search::criteria::Query; use crate::search::criteria::Query;
use crate::search::query_tree::{Operation, QueryKind}; use crate::search::query_tree::{Operation, QueryKind};
use crate::search::{build_dfa, word_derivations, WordDerivationsCache}; use crate::search::{build_dfa, word_derivations, WordDerivationsCache};
use crate::{Result, TreeLevel}; use crate::Result;
/// To be able to divide integers by the number of words in the query /// To be able to divide integers by the number of words in the query
/// we want to find a multiplier that allow us to divide by any number between 1 and 10. /// we want to find a multiplier that allow us to divide by any number between 1 and 10.
/// We chose the LCM of all numbers between 1 and 10 as the multiplier (https://en.wikipedia.org/wiki/Least_common_multiple). /// We chose the LCM of all numbers between 1 and 10 as the multiplier (https://en.wikipedia.org/wiki/Least_common_multiple).
const LCM_10_FIRST_NUMBERS: u32 = 2520; const LCM_10_FIRST_NUMBERS: u32 = 2520;
/// To compute the interval size of a level,
/// we use 4 as the exponentiation base and the level as the exponent.
const LEVEL_EXPONENTIATION_BASE: u32 = 4;
/// Threshold on the number of candidates that will make /// Threshold on the number of candidates that will make
/// the system to choose between one algorithm or another. /// the system to choose between one algorithm or another.
const CANDIDATES_THRESHOLD: u64 = 1000; const CANDIDATES_THRESHOLD: u64 = 500;
type FlattenedQueryTree = Vec<Vec<Vec<Query>>>; type FlattenedQueryTree = Vec<Vec<Vec<Query>>>;
@ -32,7 +28,8 @@ pub struct Attribute<'t> {
state: Option<(Operation, FlattenedQueryTree, RoaringBitmap)>, state: Option<(Operation, FlattenedQueryTree, RoaringBitmap)>,
bucket_candidates: RoaringBitmap, bucket_candidates: RoaringBitmap,
parent: Box<dyn Criterion + 't>, parent: Box<dyn Criterion + 't>,
current_buckets: Option<btree_map::IntoIter<u64, RoaringBitmap>>, linear_buckets: Option<btree_map::IntoIter<u64, RoaringBitmap>>,
set_buckets: Option<BinaryHeap<Branch<'t>>>,
} }
impl<'t> Attribute<'t> { impl<'t> Attribute<'t> {
@ -42,7 +39,8 @@ impl<'t> Attribute<'t> {
state: None, state: None,
bucket_candidates: RoaringBitmap::new(), bucket_candidates: RoaringBitmap::new(),
parent, parent,
current_buckets: None, linear_buckets: None,
set_buckets: None,
} }
} }
} }
@ -67,19 +65,19 @@ impl<'t> Criterion for Attribute<'t> {
} }
Some((query_tree, flattened_query_tree, mut allowed_candidates)) => { Some((query_tree, flattened_query_tree, mut allowed_candidates)) => {
let found_candidates = if allowed_candidates.len() < CANDIDATES_THRESHOLD { let found_candidates = if allowed_candidates.len() < CANDIDATES_THRESHOLD {
let current_buckets = match self.current_buckets.as_mut() { let linear_buckets = match self.linear_buckets.as_mut() {
Some(current_buckets) => current_buckets, Some(linear_buckets) => linear_buckets,
None => { None => {
let new_buckets = linear_compute_candidates( let new_buckets = initialize_linear_buckets(
self.ctx, self.ctx,
&flattened_query_tree, &flattened_query_tree,
&allowed_candidates, &allowed_candidates,
)?; )?;
self.current_buckets.get_or_insert(new_buckets.into_iter()) self.linear_buckets.get_or_insert(new_buckets.into_iter())
} }
}; };
match current_buckets.next() { match linear_buckets.next() {
Some((_score, candidates)) => candidates, Some((_score, candidates)) => candidates,
None => { None => {
return Ok(Some(CriterionResult { return Ok(Some(CriterionResult {
@ -91,13 +89,21 @@ impl<'t> Criterion for Attribute<'t> {
} }
} }
} else { } else {
match set_compute_candidates( let mut set_buckets = match self.set_buckets.as_mut() {
Some(set_buckets) => set_buckets,
None => {
let new_buckets = initialize_set_buckets(
self.ctx, self.ctx,
&flattened_query_tree, &flattened_query_tree,
&allowed_candidates, &allowed_candidates,
params.wdcache, params.wdcache,
)? { )?;
Some(candidates) => candidates, self.set_buckets.get_or_insert(new_buckets)
}
};
match set_compute_candidates(&mut set_buckets, &allowed_candidates)? {
Some((_score, candidates)) => candidates,
None => { None => {
return Ok(Some(CriterionResult { return Ok(Some(CriterionResult {
query_tree: Some(query_tree), query_tree: Some(query_tree),
@ -148,7 +154,7 @@ impl<'t> Criterion for Attribute<'t> {
} }
self.state = Some((query_tree, flattened_query_tree, candidates)); self.state = Some((query_tree, flattened_query_tree, candidates));
self.current_buckets = None; self.linear_buckets = None;
} }
Some(CriterionResult { Some(CriterionResult {
query_tree: None, query_tree: None,
@ -170,142 +176,33 @@ impl<'t> Criterion for Attribute<'t> {
} }
} }
/// WordLevelIterator is an pseudo-Iterator over intervals of word-position for one word, /// QueryPositionIterator is an Iterator over positions of a Query,
/// it will begin at the first non-empty interval and will return every interval without /// It contains iterators over words positions.
/// jumping over empty intervals. struct QueryPositionIterator<'t> {
struct WordLevelIterator<'t, 'q> { inner:
inner: Box< Vec<Peekable<Box<dyn Iterator<Item = heed::Result<((&'t str, u32), RoaringBitmap)>> + 't>>>,
dyn Iterator<Item = heed::Result<((&'t str, TreeLevel, u32, u32), RoaringBitmap)>> + 't,
>,
level: TreeLevel,
interval_size: u32,
word: Cow<'q, str>,
in_prefix_cache: bool,
inner_next: Option<(u32, u32, RoaringBitmap)>,
current_interval: Option<(u32, u32)>,
} }
impl<'t, 'q> WordLevelIterator<'t, 'q> { impl<'t> QueryPositionIterator<'t> {
fn new( fn new(
ctx: &'t dyn Context<'t>, ctx: &'t dyn Context<'t>,
word: Cow<'q, str>, queries: &[Query],
in_prefix_cache: bool,
) -> heed::Result<Option<Self>> {
match ctx.word_position_last_level(&word, in_prefix_cache)? {
Some(_) => {
// HOTFIX Meilisearch#1707: it is better to only iterate over level 0 for performances reasons.
let level = TreeLevel::min_value();
let interval_size = LEVEL_EXPONENTIATION_BASE.pow(Into::<u8>::into(level) as u32);
let inner =
ctx.word_position_iterator(&word, level, in_prefix_cache, None, None)?;
Ok(Some(Self {
inner,
level,
interval_size,
word,
in_prefix_cache,
inner_next: None,
current_interval: None,
}))
}
None => Ok(None),
}
}
fn dig(
&self,
ctx: &'t dyn Context<'t>,
level: &TreeLevel,
left_interval: Option<u32>,
) -> heed::Result<Self> {
let level = *level.min(&self.level);
let interval_size = LEVEL_EXPONENTIATION_BASE.pow(Into::<u8>::into(level) as u32);
let word = self.word.clone();
let in_prefix_cache = self.in_prefix_cache;
let inner =
ctx.word_position_iterator(&word, level, in_prefix_cache, left_interval, None)?;
Ok(Self {
inner,
level,
interval_size,
word,
in_prefix_cache,
inner_next: None,
current_interval: None,
})
}
fn next(&mut self) -> heed::Result<Option<(u32, u32, RoaringBitmap)>> {
fn is_next_interval(last_right: u32, next_left: u32) -> bool {
last_right + 1 == next_left
}
let inner_next = match self.inner_next.take() {
Some(inner_next) => Some(inner_next),
None => self
.inner
.next()
.transpose()?
.map(|((_, _, left, right), docids)| (left, right, docids)),
};
match inner_next {
Some((left, right, docids)) => match self.current_interval {
Some((last_left, last_right)) if !is_next_interval(last_right, left) => {
let blank_left = last_left + self.interval_size;
let blank_right = last_right + self.interval_size;
self.current_interval = Some((blank_left, blank_right));
self.inner_next = Some((left, right, docids));
Ok(Some((blank_left, blank_right, RoaringBitmap::new())))
}
_ => {
self.current_interval = Some((left, right));
Ok(Some((left, right, docids)))
}
},
None => Ok(None),
}
}
}
/// QueryLevelIterator is an pseudo-Iterator for a Query,
/// It contains WordLevelIterators and is chainned with other QueryLevelIterator.
struct QueryLevelIterator<'t, 'q> {
parent: Option<Box<QueryLevelIterator<'t, 'q>>>,
inner: Vec<WordLevelIterator<'t, 'q>>,
level: TreeLevel,
accumulator: Vec<Option<(u32, u32, RoaringBitmap)>>,
parent_accumulator: Vec<Option<(u32, u32, RoaringBitmap)>>,
interval_to_skip: usize,
}
impl<'t, 'q> QueryLevelIterator<'t, 'q> {
fn new(
ctx: &'t dyn Context<'t>,
queries: &'q [Query],
wdcache: &mut WordDerivationsCache, wdcache: &mut WordDerivationsCache,
) -> Result<Option<Self>> { ) -> Result<Self> {
let mut inner = Vec::with_capacity(queries.len()); let mut inner = Vec::with_capacity(queries.len());
for query in queries { for query in queries {
let in_prefix_cache = query.prefix && ctx.in_prefix_cache(query.kind.word());
match &query.kind { match &query.kind {
QueryKind::Exact { word, .. } => { QueryKind::Exact { word, .. } => {
if !query.prefix || ctx.in_prefix_cache(&word) { if !query.prefix || in_prefix_cache {
let word = Cow::Borrowed(query.kind.word()); let word = query.kind.word();
if let Some(word_level_iterator) = let iter = ctx.word_position_iterator(word, in_prefix_cache)?;
WordLevelIterator::new(ctx, word, query.prefix)? inner.push(iter.peekable());
{
inner.push(word_level_iterator);
}
} else { } else {
for (word, _) in word_derivations(&word, true, 0, ctx.words_fst(), wdcache)? for (word, _) in word_derivations(&word, true, 0, ctx.words_fst(), wdcache)?
{ {
let word = Cow::Owned(word.to_owned()); let iter = ctx.word_position_iterator(&word, in_prefix_cache)?;
if let Some(word_level_iterator) = inner.push(iter.peekable());
WordLevelIterator::new(ctx, word, false)?
{
inner.push(word_level_iterator);
}
} }
} }
} }
@ -313,360 +210,247 @@ impl<'t, 'q> QueryLevelIterator<'t, 'q> {
for (word, _) in for (word, _) in
word_derivations(&word, query.prefix, *typo, ctx.words_fst(), wdcache)? word_derivations(&word, query.prefix, *typo, ctx.words_fst(), wdcache)?
{ {
let word = Cow::Owned(word.to_owned()); let iter = ctx.word_position_iterator(&word, in_prefix_cache)?;
if let Some(word_level_iterator) = WordLevelIterator::new(ctx, word, false)? inner.push(iter.peekable());
{
inner.push(word_level_iterator);
} }
} }
}
}
}
let highest = inner.iter().max_by_key(|wli| wli.level).map(|wli| wli.level);
match highest {
Some(level) => Ok(Some(Self {
parent: None,
inner,
level,
accumulator: vec![],
parent_accumulator: vec![],
interval_to_skip: 0,
})),
None => Ok(None),
}
}
fn parent(&mut self, parent: QueryLevelIterator<'t, 'q>) -> &Self {
self.parent = Some(Box::new(parent));
self
}
/// create a new QueryLevelIterator with a lower level than the current one.
fn dig(&self, ctx: &'t dyn Context<'t>) -> heed::Result<Self> {
let (level, parent) = match &self.parent {
Some(parent) => {
let parent = parent.dig(ctx)?;
(parent.level.min(self.level), Some(Box::new(parent)))
}
None => (self.level.saturating_sub(1), None),
}; };
let left_interval = self
.accumulator
.get(self.interval_to_skip)
.map(|opt| opt.as_ref().map(|(left, _, _)| *left))
.flatten();
let mut inner = Vec::with_capacity(self.inner.len());
for word_level_iterator in self.inner.iter() {
inner.push(word_level_iterator.dig(ctx, &level, left_interval)?);
} }
Ok(Self { Ok(Self { inner })
parent, }
inner, }
level,
accumulator: vec![], impl<'t> Iterator for QueryPositionIterator<'t> {
parent_accumulator: vec![], type Item = heed::Result<(u32, RoaringBitmap)>;
interval_to_skip: 0,
fn next(&mut self) -> Option<Self::Item> {
// sort inner words from the closest next position to the farthest next position.
let expected_pos = self
.inner
.iter_mut()
.filter_map(|wli| match wli.peek() {
Some(Ok(((_, pos), _))) => Some(*pos),
_ => None,
}) })
} .min()?;
fn inner_next(&mut self, level: TreeLevel) -> heed::Result<Option<(u32, u32, RoaringBitmap)>> { let mut candidates = None;
let mut accumulated: Option<(u32, u32, RoaringBitmap)> = None;
let u8_level = Into::<u8>::into(level);
let interval_size = LEVEL_EXPONENTIATION_BASE.pow(u8_level as u32);
for wli in self.inner.iter_mut() { for wli in self.inner.iter_mut() {
let wli_u8_level = Into::<u8>::into(wli.level); if let Some(Ok(((_, pos), _))) = wli.peek() {
let accumulated_count = LEVEL_EXPONENTIATION_BASE.pow((u8_level - wli_u8_level) as u32); if *pos > expected_pos {
for _ in 0..accumulated_count { continue;
if let Some((next_left, _, next_docids)) = wli.next()? {
accumulated = match accumulated.take() {
Some((acc_left, acc_right, mut acc_docids)) => {
acc_docids |= next_docids;
Some((acc_left, acc_right, acc_docids))
}
None => Some((next_left, next_left + interval_size, next_docids)),
};
}
}
}
Ok(accumulated)
}
/// return the next meta-interval created from inner WordLevelIterators,
/// and from eventual chainned QueryLevelIterator.
fn next(
&mut self,
allowed_candidates: &RoaringBitmap,
tree_level: TreeLevel,
) -> heed::Result<Option<(u32, u32, RoaringBitmap)>> {
let parent_result = match self.parent.as_mut() {
Some(parent) => Some(parent.next(allowed_candidates, tree_level)?),
None => None,
};
match parent_result {
Some(parent_next) => {
let inner_next = self.inner_next(tree_level)?;
self.interval_to_skip += interval_to_skip(
&self.parent_accumulator,
&self.accumulator,
self.interval_to_skip,
allowed_candidates,
);
self.accumulator.push(inner_next);
self.parent_accumulator.push(parent_next);
let mut merged_interval: Option<(u32, u32, RoaringBitmap)> = None;
for current in self
.accumulator
.iter()
.rev()
.zip(self.parent_accumulator.iter())
.skip(self.interval_to_skip)
{
if let (Some((left_a, right_a, a)), Some((left_b, right_b, b))) = current {
match merged_interval.as_mut() {
Some((_, _, merged_docids)) => *merged_docids |= a & b,
None => {
merged_interval = Some((left_a + left_b, right_a + right_b, a & b))
}
}
}
}
Ok(merged_interval)
}
None => {
let level = self.level;
match self.inner_next(level)? {
Some((left, right, mut candidates)) => {
self.accumulator = vec![Some((left, right, RoaringBitmap::new()))];
candidates &= allowed_candidates;
Ok(Some((left, right, candidates)))
}
None => {
self.accumulator = vec![None];
Ok(None)
}
}
}
}
} }
} }
/// Count the number of interval that can be skiped when we make the cross-intersections match wli.next() {
/// in order to compute the next meta-interval. Some(Ok((_, docids))) => {
/// A pair of intervals is skiped when both intervals doesn't contain any allowed docids. candidates = match candidates.take() {
fn interval_to_skip( Some(candidates) => Some(candidates | docids),
parent_accumulator: &[Option<(u32, u32, RoaringBitmap)>], None => Some(docids),
current_accumulator: &[Option<(u32, u32, RoaringBitmap)>], }
already_skiped: usize, }
allowed_candidates: &RoaringBitmap, Some(Err(e)) => return Some(Err(e)),
) -> usize { None => continue,
parent_accumulator }
.iter() }
.zip(current_accumulator.iter())
.skip(already_skiped) candidates.map(|candidates| Ok((expected_pos, candidates)))
.take_while(|(parent, current)| { }
let skip_parent = parent.as_ref().map_or(true, |(_, _, docids)| docids.is_empty());
let skip_current = current
.as_ref()
.map_or(true, |(_, _, docids)| docids.is_disjoint(allowed_candidates));
skip_parent && skip_current
})
.count()
} }
/// A Branch is represent a possible alternative of the original query and is build with the Query Tree, /// A Branch is represent a possible alternative of the original query and is build with the Query Tree,
/// This branch allows us to iterate over meta-interval of position and to dig in it if it contains interesting candidates. /// This branch allows us to iterate over meta-interval of positions.
struct Branch<'t, 'q> { struct Branch<'t> {
query_level_iterator: QueryLevelIterator<'t, 'q>, query_level_iterator: Vec<(u32, RoaringBitmap, Peekable<QueryPositionIterator<'t>>)>,
last_result: (u32, u32, RoaringBitmap), last_result: (u32, RoaringBitmap),
tree_level: TreeLevel,
branch_size: u32, branch_size: u32,
} }
impl<'t, 'q> Branch<'t, 'q> { impl<'t> Branch<'t> {
fn new(
ctx: &'t dyn Context<'t>,
flatten_branch: &[Vec<Query>],
wdcache: &mut WordDerivationsCache,
allowed_candidates: &RoaringBitmap,
) -> Result<Self> {
let mut query_level_iterator = Vec::new();
for queries in flatten_branch {
let mut qli = QueryPositionIterator::new(ctx, queries, wdcache)?.peekable();
let (pos, docids) = qli.next().transpose()?.unwrap_or((0, RoaringBitmap::new()));
query_level_iterator.push((pos, docids & allowed_candidates, qli));
}
let mut branch = Self {
query_level_iterator,
last_result: (0, RoaringBitmap::new()),
branch_size: flatten_branch.len() as u32,
};
branch.update_last_result();
Ok(branch)
}
/// return the next meta-interval of the branch, /// return the next meta-interval of the branch,
/// and update inner interval in order to be ranked by the BinaryHeap. /// and update inner interval in order to be ranked by the BinaryHeap.
fn next(&mut self, allowed_candidates: &RoaringBitmap) -> heed::Result<bool> { fn next(&mut self, allowed_candidates: &RoaringBitmap) -> heed::Result<bool> {
let tree_level = self.query_level_iterator.level; // update the first query.
match self.query_level_iterator.next(allowed_candidates, tree_level)? { let index = self.lowest_iterator_index();
Some(last_result) => { match self.query_level_iterator.get_mut(index) {
self.last_result = last_result; Some((cur_pos, cur_docids, qli)) => match qli.next().transpose()? {
self.tree_level = tree_level; Some((next_pos, next_docids)) => {
*cur_pos = next_pos;
*cur_docids |= next_docids & allowed_candidates;
self.update_last_result();
Ok(true) Ok(true)
} }
None => Ok(false), None => Ok(false),
},
None => Ok(false),
} }
} }
/// make the current Branch iterate over smaller intervals. fn lowest_iterator_index(&mut self) -> usize {
fn dig(&mut self, ctx: &'t dyn Context<'t>) -> heed::Result<()> { let (index, _) = self
self.query_level_iterator = self.query_level_iterator.dig(ctx)?; .query_level_iterator
Ok(()) .iter_mut()
.map(|(pos, docids, qli)| {
if docids.is_empty() {
0
} else {
match qli.peek() {
Some(result) => {
result.as_ref().map(|(next_pos, _)| *next_pos - *pos).unwrap_or(0)
}
None => u32::MAX,
}
}
})
.enumerate()
.min_by_key(|(_, diff)| *diff)
.unwrap_or((0, 0));
index
} }
/// because next() method could be time consuming, fn update_last_result(&mut self) {
/// update inner interval in order to be ranked by the binary_heap without computing it, let mut result_pos = 0;
/// the next() method should be called when the real interval is needed. let mut result_docids = None;
fn lazy_next(&mut self) {
let u8_level = Into::<u8>::into(self.tree_level);
let interval_size = LEVEL_EXPONENTIATION_BASE.pow(u8_level as u32);
let (left, right, _) = self.last_result;
self.last_result = (left + interval_size, right + interval_size, RoaringBitmap::new()); for (pos, docids, _qli) in self.query_level_iterator.iter() {
result_pos += pos;
result_docids = result_docids
.take()
.map_or_else(|| Some(docids.clone()), |candidates| Some(candidates & docids));
}
// remove last result docids from inner iterators
if let Some(docids) = result_docids.as_ref() {
for (_, query_docids, _) in self.query_level_iterator.iter_mut() {
*query_docids -= docids;
}
}
self.last_result = (result_pos, result_docids.unwrap_or_default());
} }
/// return the score of the current inner interval. /// return the score of the current inner interval.
fn compute_rank(&self) -> u32 { fn compute_rank(&self) -> u32 {
// we compute a rank from the left interval. // we compute a rank from the position.
let (left, _, _) = self.last_result; let (pos, _) = self.last_result;
left.saturating_sub((0..self.branch_size).sum()) * LCM_10_FIRST_NUMBERS / self.branch_size pos.saturating_sub((0..self.branch_size).sum()) * LCM_10_FIRST_NUMBERS / self.branch_size
} }
fn cmp(&self, other: &Self) -> Ordering { fn cmp(&self, other: &Self) -> Ordering {
let self_rank = self.compute_rank(); let self_rank = self.compute_rank();
let other_rank = other.compute_rank(); let other_rank = other.compute_rank();
let left_cmp = self_rank.cmp(&other_rank);
// on level: lower is better,
// we want to dig faster into levels on interesting branches.
let level_cmp = self.tree_level.cmp(&other.tree_level);
left_cmp.then(level_cmp).then(self.last_result.2.len().cmp(&other.last_result.2.len())) // lower rank is better, and because BinaryHeap give the higher ranked branch, we reverse it.
self_rank.cmp(&other_rank).reverse()
} }
} }
impl<'t, 'q> Ord for Branch<'t, 'q> { impl<'t> Ord for Branch<'t> {
fn cmp(&self, other: &Self) -> Ordering { fn cmp(&self, other: &Self) -> Ordering {
self.cmp(other) self.cmp(other)
} }
} }
impl<'t, 'q> PartialOrd for Branch<'t, 'q> { impl<'t> PartialOrd for Branch<'t> {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> { fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other)) Some(self.cmp(other))
} }
} }
impl<'t, 'q> PartialEq for Branch<'t, 'q> { impl<'t> PartialEq for Branch<'t> {
fn eq(&self, other: &Self) -> bool { fn eq(&self, other: &Self) -> bool {
self.cmp(other) == Ordering::Equal self.cmp(other) == Ordering::Equal
} }
} }
impl<'t, 'q> Eq for Branch<'t, 'q> {} impl<'t> Eq for Branch<'t> {}
fn initialize_query_level_iterators<'t, 'q>( fn initialize_set_buckets<'t>(
ctx: &'t dyn Context<'t>,
branches: &'q FlattenedQueryTree,
allowed_candidates: &RoaringBitmap,
wdcache: &mut WordDerivationsCache,
) -> Result<BinaryHeap<Branch<'t, 'q>>> {
let mut positions = BinaryHeap::with_capacity(branches.len());
for branch in branches {
let mut branch_positions = Vec::with_capacity(branch.len());
for queries in branch {
match QueryLevelIterator::new(ctx, queries, wdcache)? {
Some(qli) => branch_positions.push(qli),
None => {
// the branch seems to be invalid, so we skip it.
branch_positions.clear();
break;
}
}
}
// QueryLevelIterator need to be sorted by level and folded in descending order.
branch_positions.sort_unstable_by_key(|qli| qli.level);
let folded_query_level_iterators =
branch_positions.into_iter().fold(None, |fold: Option<QueryLevelIterator>, mut qli| {
match fold {
Some(fold) => {
qli.parent(fold);
Some(qli)
}
None => Some(qli),
}
});
if let Some(mut folded_query_level_iterators) = folded_query_level_iterators {
let tree_level = folded_query_level_iterators.level;
let last_result = folded_query_level_iterators.next(allowed_candidates, tree_level)?;
if let Some(last_result) = last_result {
let branch = Branch {
last_result,
tree_level,
query_level_iterator: folded_query_level_iterators,
branch_size: branch.len() as u32,
};
positions.push(branch);
}
}
}
Ok(positions)
}
fn set_compute_candidates<'t>(
ctx: &'t dyn Context<'t>, ctx: &'t dyn Context<'t>,
branches: &FlattenedQueryTree, branches: &FlattenedQueryTree,
allowed_candidates: &RoaringBitmap, allowed_candidates: &RoaringBitmap,
wdcache: &mut WordDerivationsCache, wdcache: &mut WordDerivationsCache,
) -> Result<Option<RoaringBitmap>> { ) -> Result<BinaryHeap<Branch<'t>>> {
let mut branches_heap = let mut heap = BinaryHeap::new();
initialize_query_level_iterators(ctx, branches, allowed_candidates, wdcache)?; for flatten_branch in branches {
let lowest_level = TreeLevel::min_value(); let branch = Branch::new(ctx, flatten_branch, wdcache, allowed_candidates)?;
heap.push(branch);
}
Ok(heap)
}
fn set_compute_candidates(
branches_heap: &mut BinaryHeap<Branch>,
allowed_candidates: &RoaringBitmap,
) -> Result<Option<(u32, RoaringBitmap)>> {
let mut final_candidates: Option<(u32, RoaringBitmap)> = None; let mut final_candidates: Option<(u32, RoaringBitmap)> = None;
let mut allowed_candidates = allowed_candidates.clone(); let mut allowed_candidates = allowed_candidates.clone();
while let Some(mut branch) = branches_heap.peek_mut() { while let Some(mut branch) = branches_heap.peek_mut() {
let is_lowest_level = branch.tree_level == lowest_level;
let branch_rank = branch.compute_rank();
// if current is worst than best we break to return // if current is worst than best we break to return
// candidates that correspond to the best rank // candidates that correspond to the best rank
let branch_rank = branch.compute_rank();
if let Some((best_rank, _)) = final_candidates { if let Some((best_rank, _)) = final_candidates {
if branch_rank > best_rank { if branch_rank > best_rank {
break; break;
} }
} }
let _left = branch.last_result.0;
let candidates = take(&mut branch.last_result.2); let candidates = take(&mut branch.last_result.1);
if candidates.is_empty() { if candidates.is_empty() {
// we don't have candidates, get next interval. // we don't have candidates, get next interval.
if !branch.next(&allowed_candidates)? { if !branch.next(&allowed_candidates)? {
PeekMut::pop(branch); PeekMut::pop(branch);
} }
} else if is_lowest_level { } else {
// we have candidates, but we can't dig deeper.
allowed_candidates -= &candidates; allowed_candidates -= &candidates;
final_candidates = match final_candidates.take() { final_candidates = match final_candidates.take() {
// we add current candidates to best candidates // we add current candidates to best candidates
Some((best_rank, mut best_candidates)) => { Some((best_rank, mut best_candidates)) => {
best_candidates |= candidates; best_candidates |= candidates;
branch.lazy_next(); branch.next(&allowed_candidates)?;
Some((best_rank, best_candidates)) Some((best_rank, best_candidates))
} }
// we take current candidates as best candidates // we take current candidates as best candidates
None => { None => {
branch.lazy_next(); branch.next(&allowed_candidates)?;
Some((branch_rank, candidates)) Some((branch_rank, candidates))
} }
}; };
} else {
// we have candidates, lets dig deeper in levels.
branch.dig(ctx)?;
if !branch.next(&allowed_candidates)? {
PeekMut::pop(branch);
}
} }
} }
Ok(final_candidates.map(|(_rank, candidates)| candidates)) Ok(final_candidates)
} }
fn linear_compute_candidates( fn initialize_linear_buckets(
ctx: &dyn Context, ctx: &dyn Context,
branches: &FlattenedQueryTree, branches: &FlattenedQueryTree,
allowed_candidates: &RoaringBitmap, allowed_candidates: &RoaringBitmap,


@ -10,7 +10,7 @@ use crate::search::criteria::{
resolve_query_tree, Context, Criterion, CriterionParameters, CriterionResult, resolve_query_tree, Context, Criterion, CriterionParameters, CriterionResult,
}; };
use crate::search::query_tree::{Operation, PrimitiveQueryPart}; use crate::search::query_tree::{Operation, PrimitiveQueryPart};
use crate::{Result, TreeLevel}; use crate::Result;
pub struct Exactness<'t> { pub struct Exactness<'t> {
ctx: &'t dyn Context<'t>, ctx: &'t dyn Context<'t>,
@ -293,7 +293,6 @@ fn attribute_start_with_docids(
attribute_id: u32, attribute_id: u32,
query: &[ExactQueryPart], query: &[ExactQueryPart],
) -> heed::Result<Vec<RoaringBitmap>> { ) -> heed::Result<Vec<RoaringBitmap>> {
let lowest_level = TreeLevel::min_value();
let mut attribute_candidates_array = Vec::new(); let mut attribute_candidates_array = Vec::new();
// start from attribute first position // start from attribute first position
let mut pos = attribute_id * 1000; let mut pos = attribute_id * 1000;
@ -303,7 +302,7 @@ fn attribute_start_with_docids(
Synonyms(synonyms) => { Synonyms(synonyms) => {
let mut synonyms_candidates = RoaringBitmap::new(); let mut synonyms_candidates = RoaringBitmap::new();
for word in synonyms { for word in synonyms {
let wc = ctx.word_level_position_docids(word, lowest_level, pos, pos)?; let wc = ctx.word_position_docids(word, pos)?;
if let Some(word_candidates) = wc { if let Some(word_candidates) = wc {
synonyms_candidates |= word_candidates; synonyms_candidates |= word_candidates;
} }
@ -313,7 +312,7 @@ fn attribute_start_with_docids(
} }
Phrase(phrase) => { Phrase(phrase) => {
for word in phrase { for word in phrase {
let wc = ctx.word_level_position_docids(word, lowest_level, pos, pos)?; let wc = ctx.word_position_docids(word, pos)?;
if let Some(word_candidates) = wc { if let Some(word_candidates) = wc {
attribute_candidates_array.push(word_candidates); attribute_candidates_array.push(word_candidates);
} }


```diff
@@ -14,7 +14,7 @@ use self::words::Words;
 use super::query_tree::{Operation, PrimitiveQueryPart, Query, QueryKind};
 use crate::search::criteria::geo::Geo;
 use crate::search::{word_derivations, WordDerivationsCache};
-use crate::{AscDesc as AscDescName, DocumentId, FieldId, Index, Member, Result, TreeLevel};
+use crate::{AscDesc as AscDescName, DocumentId, FieldId, Index, Member, Result};

 mod asc_desc;
 mod attribute;
@@ -90,20 +90,8 @@ pub trait Context<'c> {
     fn word_position_iterator(
         &self,
         word: &str,
-        level: TreeLevel,
         in_prefix_cache: bool,
-        left: Option<u32>,
-        right: Option<u32>,
-    ) -> heed::Result<
-        Box<
-            dyn Iterator<Item = heed::Result<((&'c str, TreeLevel, u32, u32), RoaringBitmap)>> + 'c,
-        >,
-    >;
-    fn word_position_last_level(
-        &self,
-        word: &str,
-        in_prefix_cache: bool,
-    ) -> heed::Result<Option<TreeLevel>>;
+    ) -> heed::Result<Box<dyn Iterator<Item = heed::Result<((&'c str, u32), RoaringBitmap)>> + 'c>>;
     fn synonyms(&self, word: &str) -> heed::Result<Option<Vec<Vec<String>>>>;
     fn searchable_fields_ids(&self) -> Result<Vec<FieldId>>;
     fn field_id_word_count_docids(
@@ -111,13 +99,7 @@ pub trait Context<'c> {
         field_id: FieldId,
         word_count: u8,
     ) -> heed::Result<Option<RoaringBitmap>>;
-    fn word_level_position_docids(
-        &self,
-        word: &str,
-        level: TreeLevel,
-        left: u32,
-        right: u32,
-    ) -> heed::Result<Option<RoaringBitmap>>;
+    fn word_position_docids(&self, word: &str, pos: u32) -> heed::Result<Option<RoaringBitmap>>;
 }

 pub struct CriteriaBuilder<'t> {
@@ -183,54 +165,24 @@ impl<'c> Context<'c> for CriteriaBuilder<'c> {
     fn word_position_iterator(
         &self,
         word: &str,
-        level: TreeLevel,
         in_prefix_cache: bool,
-        left: Option<u32>,
-        right: Option<u32>,
-    ) -> heed::Result<
-        Box<
-            dyn Iterator<Item = heed::Result<((&'c str, TreeLevel, u32, u32), RoaringBitmap)>> + 'c,
-        >,
-    > {
+    ) -> heed::Result<Box<dyn Iterator<Item = heed::Result<((&'c str, u32), RoaringBitmap)>> + 'c>>
+    {
         let range = {
-            let left = left.unwrap_or(u32::min_value());
-            let right = right.unwrap_or(u32::max_value());
-            let left = (word, level, left, left);
-            let right = (word, level, right, right);
+            let left = u32::min_value();
+            let right = u32::max_value();
+            let left = (word, left);
+            let right = (word, right);
             left..=right
         };
         let db = match in_prefix_cache {
-            true => self.index.word_prefix_level_position_docids,
-            false => self.index.word_level_position_docids,
+            true => self.index.word_prefix_position_docids,
+            false => self.index.word_position_docids,
         };
         Ok(Box::new(db.range(self.rtxn, &range)?))
     }
-    fn word_position_last_level(
-        &self,
-        word: &str,
-        in_prefix_cache: bool,
-    ) -> heed::Result<Option<TreeLevel>> {
-        let range = {
-            let left = (word, TreeLevel::min_value(), u32::min_value(), u32::min_value());
-            let right = (word, TreeLevel::max_value(), u32::max_value(), u32::max_value());
-            left..=right
-        };
-        let db = match in_prefix_cache {
-            true => self.index.word_prefix_level_position_docids,
-            false => self.index.word_level_position_docids,
-        };
-        let last_level = db
-            .remap_data_type::<heed::types::DecodeIgnore>()
-            .range(self.rtxn, &range)?
-            .last()
-            .transpose()?
-            .map(|((_, level, _, _), _)| level);
-        Ok(last_level)
-    }
     fn synonyms(&self, word: &str) -> heed::Result<Option<Vec<Vec<String>>>> {
         self.index.words_synonyms(self.rtxn, &[word])
     }
@@ -251,15 +203,9 @@ impl<'c> Context<'c> for CriteriaBuilder<'c> {
         self.index.field_id_word_count_docids.get(self.rtxn, &key)
     }
-    fn word_level_position_docids(
-        &self,
-        word: &str,
-        level: TreeLevel,
-        left: u32,
-        right: u32,
-    ) -> heed::Result<Option<RoaringBitmap>> {
-        let key = (word, level, left, right);
-        self.index.word_level_position_docids.get(self.rtxn, &key)
+    fn word_position_docids(&self, word: &str, pos: u32) -> heed::Result<Option<RoaringBitmap>> {
+        let key = (word, pos);
+        self.index.word_position_docids.get(self.rtxn, &key)
     }
 }
@@ -616,27 +562,13 @@ pub mod test {
     fn word_position_iterator(
         &self,
         _word: &str,
-        _level: TreeLevel,
         _in_prefix_cache: bool,
-        _left: Option<u32>,
-        _right: Option<u32>,
     ) -> heed::Result<
-        Box<
-            dyn Iterator<Item = heed::Result<((&'c str, TreeLevel, u32, u32), RoaringBitmap)>>
-                + 'c,
-        >,
+        Box<dyn Iterator<Item = heed::Result<((&'c str, u32), RoaringBitmap)>> + 'c>,
     > {
         todo!()
     }
-    fn word_position_last_level(
-        &self,
-        _word: &str,
-        _in_prefix_cache: bool,
-    ) -> heed::Result<Option<TreeLevel>> {
-        todo!()
-    }
     fn synonyms(&self, _word: &str) -> heed::Result<Option<Vec<Vec<String>>>> {
         todo!()
     }
@@ -645,12 +577,10 @@ pub mod test {
         todo!()
     }
-    fn word_level_position_docids(
+    fn word_position_docids(
         &self,
         _word: &str,
-        _level: TreeLevel,
-        _left: u32,
-        _right: u32,
+        _pos: u32,
     ) -> heed::Result<Option<RoaringBitmap>> {
         todo!()
     }
```
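
The key-shape change that runs through the whole trait is easier to see side by side: the four-part level keys collapse into plain word/position pairs. A schematic sketch (these aliases are illustrative; the actual database keys go through codecs such as `StrBEU32Codec`):

```rust
// Before: one entry per (word, level, left_bound, right_bound) group,
// with higher levels aggregating ranges of positions.
type OldWordLevelPositionKey<'a> = (&'a str, u8 /* TreeLevel */, u32, u32);

// After: one entry per (word, absolute_position); only "level 0" remains,
// so the iterator and point-query methods lose their level arguments.
type NewWordPositionKey<'a> = (&'a str, u32);
```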


The `TreeLevel` module is deleted entirely (51 lines removed):

```rust
use std::convert::TryFrom;
use std::fmt;

/// This is just before the lowest printable character (space, sp, 32)
const MAX_VALUE: u8 = 31;

#[derive(Debug, Copy, Clone)]
pub enum Error {
    LevelTooHigh(u8),
}

#[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[repr(transparent)]
pub struct TreeLevel(u8);

impl TreeLevel {
    pub const fn max_value() -> TreeLevel {
        TreeLevel(MAX_VALUE)
    }

    pub const fn min_value() -> TreeLevel {
        TreeLevel(0)
    }

    pub fn saturating_sub(&self, lhs: u8) -> TreeLevel {
        TreeLevel(self.0.saturating_sub(lhs))
    }
}

impl Into<u8> for TreeLevel {
    fn into(self) -> u8 {
        self.0
    }
}

impl TryFrom<u8> for TreeLevel {
    type Error = Error;

    fn try_from(value: u8) -> Result<TreeLevel, Error> {
        match value {
            0..=MAX_VALUE => Ok(TreeLevel(value)),
            _ => Err(Error::LevelTooHigh(value)),
        }
    }
}

impl fmt::Display for TreeLevel {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}
```


```diff
@@ -28,9 +28,9 @@ impl<'t, 'u, 'i> ClearDocuments<'t, 'u, 'i> {
             docid_word_positions,
             word_pair_proximity_docids,
             word_prefix_pair_proximity_docids,
-            word_level_position_docids,
+            word_position_docids,
             field_id_word_count_docids,
-            word_prefix_level_position_docids,
+            word_prefix_position_docids,
             facet_id_f64_docids,
             facet_id_string_docids,
             field_id_docid_facet_f64s,
@@ -64,9 +64,9 @@ impl<'t, 'u, 'i> ClearDocuments<'t, 'u, 'i> {
         docid_word_positions.clear(self.wtxn)?;
         word_pair_proximity_docids.clear(self.wtxn)?;
         word_prefix_pair_proximity_docids.clear(self.wtxn)?;
-        word_level_position_docids.clear(self.wtxn)?;
+        word_position_docids.clear(self.wtxn)?;
         field_id_word_count_docids.clear(self.wtxn)?;
-        word_prefix_level_position_docids.clear(self.wtxn)?;
+        word_prefix_position_docids.clear(self.wtxn)?;
         facet_id_f64_docids.clear(self.wtxn)?;
         facet_id_string_docids.clear(self.wtxn)?;
         field_id_docid_facet_f64s.clear(self.wtxn)?;
```


```diff
@@ -102,8 +102,8 @@ impl<'t, 'u, 'i> DeleteDocuments<'t, 'u, 'i> {
             word_pair_proximity_docids,
             field_id_word_count_docids,
             word_prefix_pair_proximity_docids,
-            word_level_position_docids,
-            word_prefix_level_position_docids,
+            word_position_docids,
+            word_prefix_position_docids,
             facet_id_f64_docids,
             facet_id_string_docids,
             field_id_docid_facet_f64s,
@@ -326,8 +326,7 @@ impl<'t, 'u, 'i> DeleteDocuments<'t, 'u, 'i> {
         drop(iter);

         // We delete the documents ids that are under the word level position docids.
-        let mut iter =
-            word_level_position_docids.iter_mut(self.wtxn)?.remap_key_type::<ByteSlice>();
+        let mut iter = word_position_docids.iter_mut(self.wtxn)?.remap_key_type::<ByteSlice>();
         while let Some(result) = iter.next() {
             let (bytes, mut docids) = result?;
             let previous_len = docids.len();
@@ -346,7 +345,7 @@ impl<'t, 'u, 'i> DeleteDocuments<'t, 'u, 'i> {
         // We delete the documents ids that are under the word prefix level position docids.
         let mut iter =
-            word_prefix_level_position_docids.iter_mut(self.wtxn)?.remap_key_type::<ByteSlice>();
+            word_prefix_position_docids.iter_mut(self.wtxn)?.remap_key_type::<ByteSlice>();
         while let Some(result) = iter.next() {
             let (bytes, mut docids) = result?;
             let previous_len = docids.len();
```


```diff
@@ -14,13 +14,13 @@ use crate::{DocumentId, Result};
 /// Returns a grenad reader with the list of extracted words at positions and
 /// documents ids from the given chunk of docid word positions.
 #[logging_timer::time]
-pub fn extract_word_level_position_docids<R: io::Read>(
+pub fn extract_word_position_docids<R: io::Read>(
     mut docid_word_positions: grenad::Reader<R>,
     indexer: GrenadParameters,
 ) -> Result<grenad::Reader<File>> {
     let max_memory = indexer.max_memory_by_thread();

-    let mut word_level_position_docids_sorter = create_sorter(
+    let mut word_position_docids_sorter = create_sorter(
         merge_cbo_roaring_bitmaps,
         indexer.chunk_compression_type,
         indexer.chunk_compression_level,
@@ -37,15 +37,11 @@ pub fn extract_word_level_position_docids<R: io::Read>(
         for position in read_u32_ne_bytes(value) {
             key_buffer.clear();
             key_buffer.extend_from_slice(word_bytes);
-            key_buffer.push(0); // tree level
-            // Levels are composed of left and right bounds.
-            key_buffer.extend_from_slice(&position.to_be_bytes());
             key_buffer.extend_from_slice(&position.to_be_bytes());
-            word_level_position_docids_sorter.insert(&key_buffer, &document_id.to_ne_bytes())?;
+            word_position_docids_sorter.insert(&key_buffer, &document_id.to_ne_bytes())?;
         }
     }

-    sorter_into_reader(word_level_position_docids_sorter, indexer)
+    sorter_into_reader(word_position_docids_sorter, indexer)
 }
```
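
In other words, the extracted key is now just the word bytes followed by the big-endian position, with no level byte or duplicated bound. A minimal sketch of that layout (the helper name is made up for illustration, not code from this PR):

```rust
/// Illustrative only: build a word-position-docids key the way the extractor
/// above does, i.e. the UTF-8 word bytes followed by the 4-byte big-endian position.
fn word_position_key(word: &str, position: u32) -> Vec<u8> {
    let mut key = Vec::with_capacity(word.len() + 4);
    key.extend_from_slice(word.as_bytes());
    key.extend_from_slice(&position.to_be_bytes());
    key
}

// Big-endian positions keep all the positions of a word contiguous and ordered
// in LMDB, which is what the range and prefix iterations rely on.
```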


```diff
@@ -5,8 +5,8 @@ mod extract_fid_docid_facet_values;
 mod extract_fid_word_count_docids;
 mod extract_geo_points;
 mod extract_word_docids;
-mod extract_word_level_position_docids;
 mod extract_word_pair_proximity_docids;
+mod extract_word_position_docids;

 use std::collections::HashSet;
 use std::fs::File;
@@ -22,8 +22,8 @@ use self::extract_fid_docid_facet_values::extract_fid_docid_facet_values;
 use self::extract_fid_word_count_docids::extract_fid_word_count_docids;
 use self::extract_geo_points::extract_geo_points;
 use self::extract_word_docids::extract_word_docids;
-use self::extract_word_level_position_docids::extract_word_level_position_docids;
 use self::extract_word_pair_proximity_docids::extract_word_pair_proximity_docids;
+use self::extract_word_position_docids::extract_word_position_docids;
 use super::helpers::{
     into_clonable_grenad, keep_first_prefix_value_merge_roaring_bitmaps, merge_cbo_roaring_bitmaps,
     merge_readers, merge_roaring_bitmaps, CursorClonableMmap, GrenadParameters, MergeFn,
@@ -98,10 +98,10 @@ pub(crate) fn data_from_obkv_documents(
         docid_word_positions_chunks.clone(),
         indexer.clone(),
         lmdb_writer_sx.clone(),
-        extract_word_level_position_docids,
+        extract_word_position_docids,
         merge_cbo_roaring_bitmaps,
-        TypedChunk::WordLevelPositionDocids,
-        "word-level-position-docids",
+        TypedChunk::WordPositionDocids,
+        "word-position-docids",
     );
     spawn_extraction_task(
```


```diff
@@ -27,7 +27,7 @@ pub use self::transform::{Transform, TransformOutput};
 use crate::documents::DocumentBatchReader;
 use crate::update::{
     Facets, UpdateBuilder, UpdateIndexingStep, WordPrefixDocids, WordPrefixPairProximityDocids,
-    WordsLevelPositions, WordsPrefixesFst,
+    WordPrefixPositionDocids, WordsPrefixesFst,
 };
 use crate::{Index, Result};
@@ -412,8 +412,8 @@ impl<'t, 'u, 'i, 'a> IndexDocuments<'t, 'u, 'i, 'a> {
             total_databases: TOTAL_POSTING_DATABASE_COUNT,
         });

-        // Run the words level positions update operation.
-        let mut builder = WordsLevelPositions::new(self.wtxn, self.index);
+        // Run the words prefix position docids update operation.
+        let mut builder = WordPrefixPositionDocids::new(self.wtxn, self.index);
         builder.chunk_compression_type = self.chunk_compression_type;
         builder.chunk_compression_level = self.chunk_compression_level;
         builder.max_nb_chunks = self.max_nb_chunks;
```


```diff
@@ -22,7 +22,7 @@ pub(crate) enum TypedChunk {
     FieldIdWordcountDocids(grenad::Reader<File>),
     NewDocumentsIds(RoaringBitmap),
     WordDocids(grenad::Reader<File>),
-    WordLevelPositionDocids(grenad::Reader<File>),
+    WordPositionDocids(grenad::Reader<File>),
     WordPairProximityDocids(grenad::Reader<File>),
     FieldIdFacetStringDocids(grenad::Reader<File>),
     FieldIdFacetNumberDocids(grenad::Reader<File>),
@@ -110,10 +110,10 @@ pub(crate) fn write_typed_chunk_into_index(
             index.put_words_fst(wtxn, &fst)?;
             is_merged_database = true;
         }
-        TypedChunk::WordLevelPositionDocids(word_level_position_docids_iter) => {
+        TypedChunk::WordPositionDocids(word_position_docids_iter) => {
             append_entries_into_database(
-                word_level_position_docids_iter,
-                &index.word_level_position_docids,
+                word_position_docids_iter,
+                &index.word_position_docids,
                 wtxn,
                 index_is_empty,
                 |value, _buffer| Ok(value),
```


```diff
@@ -8,7 +8,7 @@ pub use self::update_builder::UpdateBuilder;
 pub use self::update_step::UpdateIndexingStep;
 pub use self::word_prefix_docids::WordPrefixDocids;
 pub use self::word_prefix_pair_proximity_docids::WordPrefixPairProximityDocids;
-pub use self::words_level_positions::WordsLevelPositions;
+pub use self::words_prefix_position_docids::WordPrefixPositionDocids;
 pub use self::words_prefixes_fst::WordsPrefixesFst;

 mod available_documents_ids;
@@ -21,5 +21,5 @@ mod update_builder;
 mod update_step;
 mod word_prefix_docids;
 mod word_prefix_pair_proximity_docids;
-mod words_level_positions;
+mod words_prefix_position_docids;
 mod words_prefixes_fst;
```


The level-based `WordsLevelPositions` update is deleted entirely (268 lines removed):

```rust
use std::convert::TryFrom;
use std::fs::File;
use std::num::NonZeroU32;
use std::{cmp, str};

use fst::Streamer;
use grenad::{CompressionType, Reader, Writer};
use heed::types::{ByteSlice, DecodeIgnore, Str};
use heed::{BytesEncode, Error};
use log::debug;
use roaring::RoaringBitmap;

use crate::error::{InternalError, SerializationError};
use crate::heed_codec::{CboRoaringBitmapCodec, StrLevelPositionCodec};
use crate::index::main_key::WORDS_PREFIXES_FST_KEY;
use crate::update::index_documents::{
    create_sorter, create_writer, merge_cbo_roaring_bitmaps, sorter_into_lmdb_database,
    write_into_lmdb_database, writer_into_reader, WriteMethod,
};
use crate::{Index, Result, TreeLevel};

pub struct WordsLevelPositions<'t, 'u, 'i> {
    wtxn: &'t mut heed::RwTxn<'i, 'u>,
    index: &'i Index,
    pub(crate) chunk_compression_type: CompressionType,
    pub(crate) chunk_compression_level: Option<u32>,
    pub(crate) max_nb_chunks: Option<usize>,
    pub(crate) max_memory: Option<usize>,
    level_group_size: NonZeroU32,
    min_level_size: NonZeroU32,
}

impl<'t, 'u, 'i> WordsLevelPositions<'t, 'u, 'i> {
    pub fn new(
        wtxn: &'t mut heed::RwTxn<'i, 'u>,
        index: &'i Index,
    ) -> WordsLevelPositions<'t, 'u, 'i> {
        WordsLevelPositions {
            wtxn,
            index,
            chunk_compression_type: CompressionType::None,
            chunk_compression_level: None,
            max_nb_chunks: None,
            max_memory: None,
            level_group_size: NonZeroU32::new(4).unwrap(),
            min_level_size: NonZeroU32::new(5).unwrap(),
        }
    }

    pub fn level_group_size(&mut self, value: NonZeroU32) -> &mut Self {
        self.level_group_size = NonZeroU32::new(cmp::max(value.get(), 2)).unwrap();
        self
    }

    pub fn min_level_size(&mut self, value: NonZeroU32) -> &mut Self {
        self.min_level_size = value;
        self
    }

    #[logging_timer::time("WordsLevelPositions::{}")]
    pub fn execute(self) -> Result<()> {
        debug!("Computing and writing the word levels positions docids into LMDB on disk...");

        let entries = compute_positions_levels(
            self.wtxn,
            self.index.word_docids.remap_data_type::<DecodeIgnore>(),
            self.index.word_level_position_docids,
            self.chunk_compression_type,
            self.chunk_compression_level,
            self.level_group_size,
            self.min_level_size,
        )?;

        // The previously computed entries also defines the level 0 entries
        // so we can clear the database and append all of these entries.
        self.index.word_level_position_docids.clear(self.wtxn)?;

        write_into_lmdb_database(
            self.wtxn,
            *self.index.word_level_position_docids.as_polymorph(),
            entries,
            |_, _| Err(InternalError::IndexingMergingKeys { process: "word level position" })?,
            WriteMethod::Append,
        )?;

        // We compute the word prefix level positions database.
        self.index.word_prefix_level_position_docids.clear(self.wtxn)?;

        let mut word_prefix_level_positions_docids_sorter = create_sorter(
            merge_cbo_roaring_bitmaps,
            self.chunk_compression_type,
            self.chunk_compression_level,
            self.max_nb_chunks,
            self.max_memory,
        );

        // We insert the word prefix level positions where the level is equal to 0 and
        // corresponds to the word-prefix level positions where the prefixes appears
        // in the prefix FST previously constructed.
        let prefix_fst = self.index.words_prefixes_fst(self.wtxn)?;
        let db = self.index.word_level_position_docids.remap_data_type::<ByteSlice>();
        // iter over all prefixes in the prefix fst.
        let mut word_stream = prefix_fst.stream();
        while let Some(prefix_bytes) = word_stream.next() {
            let prefix = str::from_utf8(prefix_bytes).map_err(|_| {
                SerializationError::Decoding { db_name: Some(WORDS_PREFIXES_FST_KEY) }
            })?;

            // iter over all lines of the DB where the key is prefixed by the current prefix.
            let mut iter = db
                .remap_key_type::<ByteSlice>()
                .prefix_iter(self.wtxn, &prefix_bytes)?
                .remap_key_type::<StrLevelPositionCodec>();
            while let Some(((_word, level, left, right), data)) = iter.next().transpose()? {
                // if level is 0, we push the line in the sorter
                // replacing the complete word by the prefix.
                if level == TreeLevel::min_value() {
                    let key = (prefix, level, left, right);
                    let bytes = StrLevelPositionCodec::bytes_encode(&key).unwrap();
                    word_prefix_level_positions_docids_sorter.insert(bytes, data)?;
                }
            }
        }

        // We finally write all the word prefix level positions docids with
        // a level equal to 0 into the LMDB database.
        sorter_into_lmdb_database(
            self.wtxn,
            *self.index.word_prefix_level_position_docids.as_polymorph(),
            word_prefix_level_positions_docids_sorter,
            merge_cbo_roaring_bitmaps,
            WriteMethod::Append,
        )?;

        let entries = compute_positions_levels(
            self.wtxn,
            self.index.word_prefix_docids.remap_data_type::<DecodeIgnore>(),
            self.index.word_prefix_level_position_docids,
            self.chunk_compression_type,
            self.chunk_compression_level,
            self.level_group_size,
            self.min_level_size,
        )?;

        // The previously computed entries also defines the level 0 entries
        // so we can clear the database and append all of these entries.
        self.index.word_prefix_level_position_docids.clear(self.wtxn)?;

        write_into_lmdb_database(
            self.wtxn,
            *self.index.word_prefix_level_position_docids.as_polymorph(),
            entries,
            |_, _| {
                Err(InternalError::IndexingMergingKeys { process: "word prefix level position" })?
            },
            WriteMethod::Append,
        )?;

        Ok(())
    }
}

/// Returns the next number after or equal to `x` that is divisible by `d`.
fn next_divisible(x: u32, d: u32) -> u32 {
    (x.saturating_sub(1) | (d - 1)) + 1
}

/// Returns the previous number after or equal to `x` that is divisible by `d`,
/// saturates on zero.
fn previous_divisible(x: u32, d: u32) -> u32 {
    match x.checked_sub(d - 1) {
        Some(0) | None => 0,
        Some(x) => next_divisible(x, d),
    }
}

/// Generates all the words positions levels based on the levels zero (including the level zero).
fn compute_positions_levels(
    rtxn: &heed::RoTxn,
    words_db: heed::Database<Str, DecodeIgnore>,
    words_positions_db: heed::Database<StrLevelPositionCodec, CboRoaringBitmapCodec>,
    compression_type: CompressionType,
    compression_level: Option<u32>,
    level_group_size: NonZeroU32,
    min_level_size: NonZeroU32,
) -> Result<Reader<File>> {
    // It is forbidden to keep a cursor and write in a database at the same time with LMDB
    // therefore we write the facet levels entries into a grenad file before transfering them.
    let mut writer = tempfile::tempfile()
        .and_then(|file| create_writer(compression_type, compression_level, file))?;

    for result in words_db.iter(rtxn)? {
        let (word, ()) = result?;

        let level_0_range = {
            let left = (word, TreeLevel::min_value(), u32::min_value(), u32::min_value());
            let right = (word, TreeLevel::min_value(), u32::max_value(), u32::max_value());
            left..=right
        };

        let first_level_size = words_positions_db
            .remap_data_type::<DecodeIgnore>()
            .range(rtxn, &level_0_range)?
            .fold(Ok(0u32), |count, result| result.and(count).map(|c| c + 1))?;

        // Groups sizes are always a power of the original level_group_size and therefore a group
        // always maps groups of the previous level and never splits previous levels groups in half.
        let group_size_iter = (1u8..)
            .map(|l| (TreeLevel::try_from(l).unwrap(), level_group_size.get().pow(l as u32)))
            .take_while(|(_, s)| first_level_size / *s >= min_level_size.get());

        // As specified in the documentation, we also write the level 0 entries.
        for result in words_positions_db.range(rtxn, &level_0_range)? {
            let ((word, level, left, right), docids) = result?;
            write_level_entry(&mut writer, word, level, left, right, &docids)?;
        }

        for (level, group_size) in group_size_iter {
            let mut left = 0;
            let mut right = 0;
            let mut group_docids = RoaringBitmap::new();

            for (i, result) in words_positions_db.range(rtxn, &level_0_range)?.enumerate() {
                let ((_word, _level, value, _right), docids) = result?;

                if i == 0 {
                    left = previous_divisible(value, group_size);
                    right = left + (group_size - 1);
                }

                if value > right {
                    // we found the first bound of the next group, we must store the left
                    // and right bounds associated with the docids.
                    write_level_entry(&mut writer, word, level, left, right, &group_docids)?;

                    // We save the left bound for the new group and also reset the docids.
                    group_docids = RoaringBitmap::new();
                    left = previous_divisible(value, group_size);
                    right = left + (group_size - 1);
                }

                // The right bound is always the bound we run through.
                group_docids |= docids;
            }

            if !group_docids.is_empty() {
                write_level_entry(&mut writer, word, level, left, right, &group_docids)?;
            }
        }
    }

    writer_into_reader(writer)
}

fn write_level_entry(
    writer: &mut Writer<File>,
    word: &str,
    level: TreeLevel,
    left: u32,
    right: u32,
    ids: &RoaringBitmap,
) -> Result<()> {
    let key = (word, level, left, right);
    let key = StrLevelPositionCodec::bytes_encode(&key).ok_or(Error::Encoding)?;
    let data = CboRoaringBitmapCodec::bytes_encode(&ids).ok_or(Error::Encoding)?;
    writer.insert(&key, &data)?;
    Ok(())
}
```


It is replaced by a much smaller `WordPrefixPositionDocids` update (105 lines added):

```rust
use std::num::NonZeroU32;
use std::{cmp, str};

use fst::Streamer;
use grenad::CompressionType;
use heed::types::ByteSlice;
use heed::BytesEncode;
use log::debug;

use crate::error::SerializationError;
use crate::heed_codec::StrBEU32Codec;
use crate::index::main_key::WORDS_PREFIXES_FST_KEY;
use crate::update::index_documents::{
    create_sorter, merge_cbo_roaring_bitmaps, sorter_into_lmdb_database, WriteMethod,
};
use crate::{Index, Result};

pub struct WordPrefixPositionDocids<'t, 'u, 'i> {
    wtxn: &'t mut heed::RwTxn<'i, 'u>,
    index: &'i Index,
    pub(crate) chunk_compression_type: CompressionType,
    pub(crate) chunk_compression_level: Option<u32>,
    pub(crate) max_nb_chunks: Option<usize>,
    pub(crate) max_memory: Option<usize>,
    level_group_size: NonZeroU32,
    min_level_size: NonZeroU32,
}

impl<'t, 'u, 'i> WordPrefixPositionDocids<'t, 'u, 'i> {
    pub fn new(
        wtxn: &'t mut heed::RwTxn<'i, 'u>,
        index: &'i Index,
    ) -> WordPrefixPositionDocids<'t, 'u, 'i> {
        WordPrefixPositionDocids {
            wtxn,
            index,
            chunk_compression_type: CompressionType::None,
            chunk_compression_level: None,
            max_nb_chunks: None,
            max_memory: None,
            level_group_size: NonZeroU32::new(4).unwrap(),
            min_level_size: NonZeroU32::new(5).unwrap(),
        }
    }

    pub fn level_group_size(&mut self, value: NonZeroU32) -> &mut Self {
        self.level_group_size = NonZeroU32::new(cmp::max(value.get(), 2)).unwrap();
        self
    }

    pub fn min_level_size(&mut self, value: NonZeroU32) -> &mut Self {
        self.min_level_size = value;
        self
    }

    #[logging_timer::time("WordPrefixPositionDocids::{}")]
    pub fn execute(self) -> Result<()> {
        debug!("Computing and writing the word levels positions docids into LMDB on disk...");

        self.index.word_prefix_position_docids.clear(self.wtxn)?;

        let mut word_prefix_positions_docids_sorter = create_sorter(
            merge_cbo_roaring_bitmaps,
            self.chunk_compression_type,
            self.chunk_compression_level,
            self.max_nb_chunks,
            self.max_memory,
        );

        // We insert the word prefix position and
        // corresponds to the word-prefix position where the prefixes appears
        // in the prefix FST previously constructed.
        let prefix_fst = self.index.words_prefixes_fst(self.wtxn)?;
        let db = self.index.word_position_docids.remap_data_type::<ByteSlice>();
        // iter over all prefixes in the prefix fst.
        let mut word_stream = prefix_fst.stream();
        while let Some(prefix_bytes) = word_stream.next() {
            let prefix = str::from_utf8(prefix_bytes).map_err(|_| {
                SerializationError::Decoding { db_name: Some(WORDS_PREFIXES_FST_KEY) }
            })?;

            // iter over all lines of the DB where the key is prefixed by the current prefix.
            let mut iter = db
                .remap_key_type::<ByteSlice>()
                .prefix_iter(self.wtxn, &prefix_bytes)?
                .remap_key_type::<StrBEU32Codec>();
            while let Some(((_word, pos), data)) = iter.next().transpose()? {
                let key = (prefix, pos);
                let bytes = StrBEU32Codec::bytes_encode(&key).unwrap();
                word_prefix_positions_docids_sorter.insert(bytes, data)?;
            }
        }

        // We finally write all the word prefix position docids into the LMDB database.
        sorter_into_lmdb_database(
            self.wtxn,
            *self.index.word_prefix_position_docids.as_polymorph(),
            word_prefix_positions_docids_sorter,
            merge_cbo_roaring_bitmaps,
            WriteMethod::Append,
        )?;

        Ok(())
    }
}
```