Mirror of https://github.com/meilisearch/MeiliSearch (synced 2024-11-29 16:24:26 +01:00)
3568: CI: Fix `publish-aarch64` job that still uses ubuntu-18.04 r=Kerollmops a=curquiza

Fixes #3563

Main change:
- use an `ubuntu:18.04` container instead of GitHub Actions' native `ubuntu-18.04` runner: I had to install Docker inside the container.

Small additional changes:
- remove the useless `fail-fast` and the unused/irrelevant matrix inputs (`build`, `linker`, `os`, `use-cross`...)
- remove a useless step in the job

Proof of work with this CI triggered on this branch: https://github.com/meilisearch/meilisearch/actions/runs/4366233882

3569: Enhance Japanese language detection r=dureuill a=ManyTheFish

# Pull Request

This PR is a prototype and can be tested by downloading [the dedicated Docker image](https://hub.docker.com/layers/getmeili/meilisearch/prototype-better-language-detection-0/images/sha256-a12847de00e21a71ab797879fd09777dadcb0881f65b5f810e7d1ed434d116ef?context=explore):

```bash
$ docker pull getmeili/meilisearch:prototype-better-language-detection-0
```

## Context

Some languages are harder to detect than others, and this mis-detection leads to bad tokenization, making some words or even entire documents unsearchable. Japanese is the main language affected: it can be detected as Chinese, which is tokenized in a completely different way.

A [first iteration was implemented for v1.1.0](https://github.com/meilisearch/meilisearch/pull/3347), but it is not a sufficient enhancement to make Japanese work. That first implementation detected the language during indexing in order to avoid bad detections during search. Unfortunately, some (shorter) documents can still be wrongly detected as Chinese, which produces bad tokenization for those documents and also makes it possible for Chinese to be detected at search time, because it was detected during indexing.

For instance, a Japanese document `{"id": 1, "name": "東京スカパラダイスオーケストラ"}` is detected as Japanese during indexing, and at search time the query `東京` is detected as Japanese because only Japanese documents were detected during indexing, even though v1.0.2 would detect it as Chinese. However, if the dataset contains at least one document with a field made only of Kanji, such as:

_A document with only 1 field containing only Kanji:_

```json
{
  "id": 4,
  "name": "東京特許許可局"
}
```

_A document with 1 field containing only Kanji and 1 field containing several Japanese characters:_

```json
{
  "id": 105,
  "name": "東京特許許可局",
  "desc": "日経平均株価は26日 に約8カ月ぶりに2万4000円の心理的な節目を上回った。株高を支える材料のひとつは、自民党総裁選で3選を決めた安倍晋三首相の経済政策への期待だ。恩恵が見込まれるとされる人材サービスや建設株の一角が買われている。ただ思惑が先行して資金が集まっている面 は否めない。実際に政策効果を取り込む企業はどこか、なお未知数だ。"
}
```

then, in both cases, the field `name` is detected as Chinese during indexing, which allows Chinese to be detected in search queries. As a result, the query `東京` is detected as Chinese and only the two last documents are retrieved by Meilisearch.

## Technical approach

This PR partially fixes these issues by:

1) Adding a check for potential mis-detections and rerunning the extraction of the document while forcing tokenization over the main languages detected in it:
   > 1) run a first extraction, allowing the tokenizer to detect any language in any script
   > 2) generate a distribution of tokens by script and language (`script_language`)
   > 3) if, for a script, the token distribution of one of the languages is under the threshold, rerun the extraction, forbidding the tokenizer from detecting the marginal languages
   > 4) the tokenizer then falls back on the other available languages to tokenize the text. For example, if Chinese is only marginally detected compared to Japanese on the CJ script, the second extraction forces Japanese tokenization for CJ text in the document; text in another script, such as Latin, is not impacted by this restriction.
2) Adding a filtering threshold during the search over languages that have been marginally detected in documents.

## Limits

This PR introduces 2 arbitrary thresholds (a small standalone sketch of both follows this description):
1) during indexing, a language is considered mis-detected if the number of tokens detected for this language is under 10% of the tokens detected for the same script (Japanese and Chinese are 2 different languages sharing the "same" script, "CJK").
2) during search, a language is considered marginal if less than 5% of documents are detected as this language.

This PR only partially fixes these issues:
- ✅ the query `東京` now finds Japanese documents if less than 5% of documents are detected as Chinese.
- ✅ the document with the id `105`, containing the Japanese field `desc` but the mis-detected field `name`, is now completely detected and tokenized as Japanese and is found with the query `東京`.
- ❌ the document with the id `4` no longer breaks the search language detection, but it is still detected as a Chinese document and can't be found during search.

## Related issue

Fixes #3565

## Possible future enhancements

- Change or contribute to the library used to detect the language
  - the related issue on Whatlang: https://github.com/greyblake/whatlang-rs/issues/122

Co-authored-by: curquiza <clementine@meilisearch.com>
Co-authored-by: ManyTheFish <many@meilisearch.com>
Co-authored-by: Many the fish <many@meilisearch.com>
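To make the two thresholds above concrete, here is a minimal, self-contained Rust sketch. It is not the code of this PR: the function names (`frequent_languages_for_script`, `search_allow_list`) and the `"Jpn"`/`"Cmn"` labels are purely illustrative. The real logic lives in the diff below, in `compute_language_frequency_threshold` and `most_frequent_languages` on the indexing side and in `Index::script_language` on the search side.

```rust
// Illustrative sketch of the two arbitrary thresholds described above,
// not the actual Meilisearch implementation.

/// Indexing side: keep only languages holding more than 10% of the tokens
/// detected for a given script; the rest are treated as mis-detections.
fn frequent_languages_for_script(token_counts: &[(String, usize)]) -> Vec<String> {
    let total: usize = token_counts.iter().map(|(_, c)| c).sum();
    let threshold = total / 10; // 10%, arbitrary
    token_counts
        .iter()
        .filter(|(_, c)| *c > threshold)
        .map(|(lang, _)| lang.clone())
        .collect()
}

/// Search side: keep a language in the allow list only if more than 5% of
/// the documents were detected as that language.
fn search_allow_list(doc_counts: &[(String, u64)]) -> Vec<String> {
    let total: u64 = doc_counts.iter().map(|(_, c)| c).sum();
    let threshold = total / 20; // 5%, arbitrary
    doc_counts
        .iter()
        .filter(|(_, c)| *c > threshold)
        .map(|(lang, _)| lang.clone())
        .collect()
}

fn main() {
    // Hypothetical CJ-script token distribution: Japanese dominates, Chinese is marginal.
    let tokens = vec![("Jpn".to_string(), 95), ("Cmn".to_string(), 5)];
    assert_eq!(frequent_languages_for_script(&tokens), vec!["Jpn".to_string()]);

    // Hypothetical per-language document counts across the whole index.
    let docs = vec![("Jpn".to_string(), 98), ("Cmn".to_string(), 2)];
    assert_eq!(search_allow_list(&docs), vec!["Jpn".to_string()]);
}
```

With 95 Japanese tokens against 5 Chinese tokens, Chinese falls below the 10% bar and is dropped from the allow list; likewise, a language detected in only 2% of documents is filtered out at search time.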
This commit is contained in: commit fb1260ee88

.github/workflows/publish-binaries.yml (vendored, 40 changed lines):
```diff
@@ -96,14 +96,12 @@ jobs:
 
   publish-macos-apple-silicon:
     name: Publish binary for macOS silicon
-    runs-on: ${{ matrix.os }}
+    runs-on: macos-12
     needs: check-version
     strategy:
-      fail-fast: false
       matrix:
         include:
-          - os: macos-12
-            target: aarch64-apple-darwin
+          - target: aarch64-apple-darwin
             asset_name: meilisearch-macos-apple-silicon
     steps:
       - name: Checkout repository
@@ -132,21 +130,29 @@ jobs:
 
   publish-aarch64:
     name: Publish binary for aarch64
-    runs-on: ${{ matrix.os }}
+    runs-on: ubuntu-latest
     needs: check-version
+    container:
+      # Use ubuntu-18.04 to compile with glibc 2.27
+      image: ubuntu:18.04
     strategy:
-      fail-fast: false
       matrix:
         include:
-          - build: aarch64
-            os: ubuntu-18.04
-            target: aarch64-unknown-linux-gnu
-            linker: gcc-aarch64-linux-gnu
-            use-cross: true
+          - target: aarch64-unknown-linux-gnu
             asset_name: meilisearch-linux-aarch64
     steps:
       - name: Checkout repository
         uses: actions/checkout@v3
+      - name: Install needed dependencies
+        run: |
+          apt-get update -y && apt upgrade -y
+          apt-get install -y curl build-essential gcc-aarch64-linux-gnu
+      - name: Set up Docker for cross compilation
+        run: |
+          apt-get install -y curl apt-transport-https ca-certificates software-properties-common
+          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
+          add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+          apt-get update -y && apt-get install -y docker-ce
       - name: Installing Rust toolchain
         uses: actions-rs/toolchain@v1
         with:
@@ -154,15 +160,7 @@ jobs:
           profile: minimal
           target: ${{ matrix.target }}
           override: true
-      - name: APT update
-        run: |
-          sudo apt update
-      - name: Install target specific tools
-        if: matrix.use-cross
-        run: |
-          sudo apt-get install -y ${{ matrix.linker }}
       - name: Configure target aarch64 GNU
-        if: matrix.target == 'aarch64-unknown-linux-gnu'
         ## Environment variable is not passed using env:
         ## LD gold won't work with MUSL
         # env:
@@ -176,8 +174,10 @@ jobs:
         uses: actions-rs/cargo@v1
         with:
           command: build
-          use-cross: ${{ matrix.use-cross }}
+          use-cross: true
           args: --release --target ${{ matrix.target }}
+        env:
+          CROSS_DOCKER_IN_DOCKER: true
       - name: List target output files
         run: ls -lR ./target
       - name: Upload the binary to release
```
```diff
@@ -375,10 +375,15 @@ pub fn perform_search(
         &displayed_ids,
     );
 
-    let mut tokenizer_buidler = TokenizerBuilder::default();
-    tokenizer_buidler.create_char_map(true);
+    let mut tokenizer_builder = TokenizerBuilder::default();
+    tokenizer_builder.create_char_map(true);
 
-    let mut formatter_builder = MatcherBuilder::new(matching_words, tokenizer_buidler.build());
+    let script_lang_map = index.script_language(&rtxn)?;
+    if !script_lang_map.is_empty() {
+        tokenizer_builder.allow_list(&script_lang_map);
+    }
+
+    let mut formatter_builder = MatcherBuilder::new(matching_words, tokenizer_builder.build());
     formatter_builder.crop_marker(query.crop_marker);
     formatter_builder.highlight_prefix(query.highlight_pre_tag);
     formatter_builder.highlight_suffix(query.highlight_post_tag);
```
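For readers unfamiliar with the allow-list mechanism used above, here is a small usage sketch that is not taken from the PR: it restricts a charabia tokenizer so that CJ-script text is always segmented as Japanese. The `allow_list`, `tokenize`, and `lemma` calls mirror the ones visible in the diff; the exact variant names `Script::Cj` and `Language::Jpn` are an assumption about the charabia version in use and may differ.

```rust
use std::collections::HashMap;

use charabia::{Language, Script, TokenizerBuilder};

fn main() {
    // Assumed variant names (Script::Cj, Language::Jpn); they may differ
    // depending on the charabia version.
    let mut allow_list: HashMap<Script, Vec<Language>> = HashMap::new();
    allow_list.insert(Script::Cj, vec![Language::Jpn]);

    // Build a tokenizer that is only allowed to pick Japanese for CJ text,
    // similar to the search-side allow list built in `perform_search` above.
    let mut builder = TokenizerBuilder::default();
    builder.allow_list(&allow_list);
    let tokenizer = builder.build();

    // CJ text is now segmented by the Japanese tokenizer regardless of what
    // the language detector would have guessed on its own.
    for token in tokenizer.tokenize("東京特許許可局") {
        println!("{}", token.lemma());
    }
}
```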
```diff
@@ -1211,11 +1211,22 @@ impl Index {
         let soft_deleted_documents = self.soft_deleted_documents_ids(rtxn)?;
 
         let mut script_language: HashMap<Script, Vec<Language>> = HashMap::new();
+        let mut script_language_doc_count: Vec<(Script, Language, u64)> = Vec::new();
+        let mut total = 0;
         for sl in self.script_language_docids.iter(rtxn)? {
             let ((script, language), docids) = sl?;
 
             // keep only Languages that contains at least 1 document.
-            if !soft_deleted_documents.is_superset(&docids) {
+            let remaining_documents_count = (docids - &soft_deleted_documents).len();
+            total += remaining_documents_count;
+            if remaining_documents_count > 0 {
+                script_language_doc_count.push((script, language, remaining_documents_count));
+            }
+        }
+
+        let threshold = total / 20; // 5% (arbitrary)
+        for (script, language, count) in script_language_doc_count {
+            if count > threshold {
                 if let Some(languages) = script_language.get_mut(&script) {
                     (*languages).push(language);
                 } else {
```
```diff
@@ -3,12 +3,14 @@ use std::convert::TryInto;
 use std::fs::File;
 use std::{io, mem, str};
 
-use charabia::{Language, Script, SeparatorKind, Token, TokenKind, TokenizerBuilder};
+use charabia::{Language, Script, SeparatorKind, Token, TokenKind, Tokenizer, TokenizerBuilder};
+use obkv::KvReader;
 use roaring::RoaringBitmap;
 use serde_json::Value;
 
 use super::helpers::{concat_u32s_array, create_sorter, sorter_into_reader, GrenadParameters};
 use crate::error::{InternalError, SerializationError};
+use crate::update::index_documents::MergeFn;
 use crate::{
     absolute_from_relative_position, FieldId, Result, MAX_POSITION_PER_ATTRIBUTE, MAX_WORD_LENGTH,
 };
@@ -33,7 +35,7 @@ pub fn extract_docid_word_positions<R: io::Read + io::Seek>(
     let max_memory = indexer.max_memory_by_thread();
 
     let mut documents_ids = RoaringBitmap::new();
-    let mut script_language_pair = HashMap::new();
+    let mut script_language_docids = HashMap::new();
     let mut docid_word_positions_sorter = create_sorter(
         grenad::SortAlgorithm::Stable,
         concat_u32s_array,
@@ -43,13 +45,12 @@ pub fn extract_docid_word_positions<R: io::Read + io::Seek>(
         max_memory,
     );
 
-    let mut key_buffer = Vec::new();
-    let mut field_buffer = String::new();
-    let mut builder = TokenizerBuilder::new();
+    let mut buffers = Buffers::default();
+    let mut tokenizer_builder = TokenizerBuilder::new();
     if let Some(stop_words) = stop_words {
-        builder.stop_words(stop_words);
+        tokenizer_builder.stop_words(stop_words);
     }
-    let tokenizer = builder.build();
+    let tokenizer = tokenizer_builder.build();
 
     let mut cursor = obkv_documents.into_cursor()?;
     while let Some((key, value)) = cursor.move_on_next()? {
```
@ -57,49 +58,122 @@ pub fn extract_docid_word_positions<R: io::Read + io::Seek>(
|
|||||||
.try_into()
|
.try_into()
|
||||||
.map(u32::from_be_bytes)
|
.map(u32::from_be_bytes)
|
||||||
.map_err(|_| SerializationError::InvalidNumberSerialization)?;
|
.map_err(|_| SerializationError::InvalidNumberSerialization)?;
|
||||||
let obkv = obkv::KvReader::<FieldId>::new(value);
|
let obkv = KvReader::<FieldId>::new(value);
|
||||||
|
|
||||||
documents_ids.push(document_id);
|
documents_ids.push(document_id);
|
||||||
key_buffer.clear();
|
buffers.key_buffer.clear();
|
||||||
key_buffer.extend_from_slice(&document_id.to_be_bytes());
|
buffers.key_buffer.extend_from_slice(&document_id.to_be_bytes());
|
||||||
|
|
||||||
for (field_id, field_bytes) in obkv.iter() {
|
let mut script_language_word_count = HashMap::new();
|
||||||
if searchable_fields.as_ref().map_or(true, |sf| sf.contains(&field_id)) {
|
|
||||||
let value =
|
|
||||||
serde_json::from_slice(field_bytes).map_err(InternalError::SerdeJson)?;
|
|
||||||
field_buffer.clear();
|
|
||||||
if let Some(field) = json_to_string(&value, &mut field_buffer) {
|
|
||||||
let tokens = process_tokens(tokenizer.tokenize(field))
|
|
||||||
.take_while(|(p, _)| (*p as u32) < max_positions_per_attributes);
|
|
||||||
|
|
||||||
for (index, token) in tokens {
|
extract_tokens_from_document(
|
||||||
if let Some(language) = token.language {
|
&obkv,
|
||||||
let script = token.script;
|
searchable_fields,
|
||||||
let entry = script_language_pair
|
&tokenizer,
|
||||||
|
max_positions_per_attributes,
|
||||||
|
&mut buffers,
|
||||||
|
&mut script_language_word_count,
|
||||||
|
&mut docid_word_positions_sorter,
|
||||||
|
)?;
|
||||||
|
|
||||||
|
// if we detect a potetial mistake in the language detection,
|
||||||
|
// we rerun the extraction forcing the tokenizer to detect the most frequently detected Languages.
|
||||||
|
// context: https://github.com/meilisearch/meilisearch/issues/3565
|
||||||
|
if script_language_word_count
|
||||||
|
.values()
|
||||||
|
.map(Vec::as_slice)
|
||||||
|
.any(potential_language_detection_error)
|
||||||
|
{
|
||||||
|
// build an allow list with the most frequent detected languages in the document.
|
||||||
|
let script_language: HashMap<_, _> =
|
||||||
|
script_language_word_count.iter().filter_map(most_frequent_languages).collect();
|
||||||
|
|
||||||
|
// if the allow list is empty, meaning that no Language is considered frequent,
|
||||||
|
// then we don't rerun the extraction.
|
||||||
|
if !script_language.is_empty() {
|
||||||
|
// build a new temporary tokenizer including the allow list.
|
||||||
|
let mut tokenizer_builder = TokenizerBuilder::new();
|
||||||
|
if let Some(stop_words) = stop_words {
|
||||||
|
tokenizer_builder.stop_words(stop_words);
|
||||||
|
}
|
||||||
|
tokenizer_builder.allow_list(&script_language);
|
||||||
|
let tokenizer = tokenizer_builder.build();
|
||||||
|
|
||||||
|
script_language_word_count.clear();
|
||||||
|
|
||||||
|
// rerun the extraction.
|
||||||
|
extract_tokens_from_document(
|
||||||
|
&obkv,
|
||||||
|
searchable_fields,
|
||||||
|
&tokenizer,
|
||||||
|
max_positions_per_attributes,
|
||||||
|
&mut buffers,
|
||||||
|
&mut script_language_word_count,
|
||||||
|
&mut docid_word_positions_sorter,
|
||||||
|
)?;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
for (script, languages_frequency) in script_language_word_count {
|
||||||
|
for (language, _) in languages_frequency {
|
||||||
|
let entry = script_language_docids
|
||||||
.entry((script, language))
|
.entry((script, language))
|
||||||
.or_insert_with(RoaringBitmap::new);
|
.or_insert_with(RoaringBitmap::new);
|
||||||
entry.push(document_id);
|
entry.push(document_id);
|
||||||
}
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
sorter_into_reader(docid_word_positions_sorter, indexer)
|
||||||
|
.map(|reader| (documents_ids, reader, script_language_docids))
|
||||||
|
}
|
||||||
|
|
||||||
|
fn extract_tokens_from_document<T: AsRef<[u8]>>(
|
||||||
|
obkv: &KvReader<FieldId>,
|
||||||
|
searchable_fields: &Option<HashSet<FieldId>>,
|
||||||
|
tokenizer: &Tokenizer<T>,
|
||||||
|
max_positions_per_attributes: u32,
|
||||||
|
buffers: &mut Buffers,
|
||||||
|
script_language_word_count: &mut HashMap<Script, Vec<(Language, usize)>>,
|
||||||
|
docid_word_positions_sorter: &mut grenad::Sorter<MergeFn>,
|
||||||
|
) -> Result<()> {
|
||||||
|
for (field_id, field_bytes) in obkv.iter() {
|
||||||
|
if searchable_fields.as_ref().map_or(true, |sf| sf.contains(&field_id)) {
|
||||||
|
let value = serde_json::from_slice(field_bytes).map_err(InternalError::SerdeJson)?;
|
||||||
|
buffers.field_buffer.clear();
|
||||||
|
if let Some(field) = json_to_string(&value, &mut buffers.field_buffer) {
|
||||||
|
let tokens = process_tokens(tokenizer.tokenize(field))
|
||||||
|
.take_while(|(p, _)| (*p as u32) < max_positions_per_attributes);
|
||||||
|
|
||||||
|
for (index, token) in tokens {
|
||||||
|
// if a language has been detected for the token, we update the counter.
|
||||||
|
if let Some(language) = token.language {
|
||||||
|
let script = token.script;
|
||||||
|
let entry =
|
||||||
|
script_language_word_count.entry(script).or_insert_with(Vec::new);
|
||||||
|
match entry.iter_mut().find(|(l, _)| *l == language) {
|
||||||
|
Some((_, n)) => *n += 1,
|
||||||
|
None => entry.push((language, 1)),
|
||||||
|
}
|
||||||
|
}
|
||||||
let token = token.lemma().trim();
|
let token = token.lemma().trim();
|
||||||
if !token.is_empty() && token.len() <= MAX_WORD_LENGTH {
|
if !token.is_empty() && token.len() <= MAX_WORD_LENGTH {
|
||||||
key_buffer.truncate(mem::size_of::<u32>());
|
buffers.key_buffer.truncate(mem::size_of::<u32>());
|
||||||
key_buffer.extend_from_slice(token.as_bytes());
|
buffers.key_buffer.extend_from_slice(token.as_bytes());
|
||||||
|
|
||||||
let position: u16 = index
|
let position: u16 = index
|
||||||
.try_into()
|
.try_into()
|
||||||
.map_err(|_| SerializationError::InvalidNumberSerialization)?;
|
.map_err(|_| SerializationError::InvalidNumberSerialization)?;
|
||||||
let position = absolute_from_relative_position(field_id, position);
|
let position = absolute_from_relative_position(field_id, position);
|
||||||
docid_word_positions_sorter
|
docid_word_positions_sorter
|
||||||
.insert(&key_buffer, position.to_ne_bytes())?;
|
.insert(&buffers.key_buffer, position.to_ne_bytes())?;
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
sorter_into_reader(docid_word_positions_sorter, indexer)
|
Ok(())
|
||||||
.map(|reader| (documents_ids, reader, script_language_pair))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Transform a JSON value into a string that can be indexed.
|
/// Transform a JSON value into a string that can be indexed.
|
||||||
```diff
@@ -183,3 +257,46 @@ fn process_tokens<'a>(
         })
         .filter(|(_, t)| t.is_word())
 }
+
+fn potential_language_detection_error(languages_frequency: &[(Language, usize)]) -> bool {
+    if languages_frequency.len() > 1 {
+        let threshold = compute_language_frequency_threshold(languages_frequency);
+        languages_frequency.iter().any(|(_, c)| *c <= threshold)
+    } else {
+        false
+    }
+}
+
+fn most_frequent_languages(
+    (script, languages_frequency): (&Script, &Vec<(Language, usize)>),
+) -> Option<(Script, Vec<Language>)> {
+    if languages_frequency.len() > 1 {
+        let threshold = compute_language_frequency_threshold(languages_frequency);
+
+        let languages: Vec<_> =
+            languages_frequency.iter().filter(|(_, c)| *c > threshold).map(|(l, _)| *l).collect();
+
+        if languages.is_empty() {
+            None
+        } else {
+            Some((*script, languages))
+        }
+    } else {
+        None
+    }
+}
+
+fn compute_language_frequency_threshold(languages_frequency: &[(Language, usize)]) -> usize {
+    let total: usize = languages_frequency.iter().map(|(_, c)| c).sum();
+    total / 10 // 10% is a completely arbitrary value.
+}
+
+#[derive(Default)]
+struct Buffers {
+    // the key buffer is the concatenation of the internal document id with the field id.
+    // The buffer has to be completely cleared between documents,
+    // and the field id part must be cleared between each field.
+    key_buffer: Vec<u8>,
+    // the field buffer for each field deserialization; it must be cleared between each field.
+    field_buffer: String,
+}
```