491: remove the unused key warning r=curquiza a=irevoire
When I copy-pasted my flatten crate, I forgot to remove the key used to publish the package, and that threw a warning.
Co-authored-by: Tamo <tamo@meilisearch.com>
490: Enforce labelling for the PRs r=curquiza a=curquiza
- Enforce one of the following labels to make the CI pass: `no breaking`, `DB breaking`, `API breaking` (milli API, not the Meilisearch API of course), or `skip changelog`. This new CI is now `Required` in the GitHub settings for merging a PR.
- Adapt the release drafter to these new labels
- Rename `skip-changelog` to `skip changelog` to match the new label name
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
458: Nested fields r=Kerollmops a=irevoire
For the following document:
```json
{
  "id": 1,
  "person": {
    "name": "tamo",
    "age": 25
  }
}
```
Suppose the user sets `person` as a filterable attribute. We obviously need to store `person` in the filterable attributes, but we also need to keep track of `person.name` and `person.age` somewhere.
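For illustration, here is a minimal sketch of the dotted-name flattening this relies on, using plain `serde_json`; the `flatten` helper below is hypothetical and ignores details like arrays and intermediate objects, which the engine's actual flattening handles differently:
```rust
use serde_json::{json, Map, Value};

/// Recursively flattens nested objects into dotted keys
/// (e.g. `person.name`, `person.age`).
fn flatten(prefix: Option<&str>, value: &Value, out: &mut Map<String, Value>) {
    match value {
        Value::Object(obj) => {
            for (key, sub) in obj {
                let name = match prefix {
                    Some(p) => format!("{p}.{key}"),
                    None => key.clone(),
                };
                flatten(Some(&name), sub, out);
            }
        }
        other => {
            if let Some(p) = prefix {
                out.insert(p.to_string(), other.clone());
            }
        }
    }
}

fn main() {
    let doc = json!({ "id": 1, "person": { "name": "tamo", "age": 25 } });
    let mut flat = Map::new();
    flatten(None, &doc, &mut flat);
    // flat now contains: "id" => 1, "person.name" => "tamo", "person.age" => 25
    println!("{}", Value::Object(flat));
}
```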
That's where I changed the logic of the engine a little bit.
Currently, we have a function called `faceted_field` that returns the union of the filterable and sortable attributes.
I renamed this function to `user_defined_faceted_field`. Now, when we finish indexing documents, we look at every field and check whether it "matches" a `user_defined_faceted_field`.
So in our case:
- does `id` match `person`: 🔴
- does `person.name` match `person`: 🟢
- does `person.age` match `person`: 🟢
And thus, we insert in the database the following faceted fields: `person, person.name, person.age`.
The good thing about that solution is that we generate everything during the indexing phase; then, during the search, we can access our fields without recomputing too much glob matching.
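A minimal sketch of that matching rule, assuming a simple prefix check on the dotted names (the function and variable names here are illustrative, not the engine's actual code):
```rust
use std::collections::HashSet;

/// Returns true if `field` is exactly a user-defined faceted field
/// or nested under one (e.g. `person.name` matches `person`).
fn matches_faceted(field: &str, user_defined: &HashSet<String>) -> bool {
    user_defined
        .iter()
        .any(|facet| field == facet.as_str() || field.starts_with(&format!("{facet}.")))
}

fn main() {
    let user_defined: HashSet<String> = ["person".to_string()].into_iter().collect();

    assert!(!matches_faceted("id", &user_defined));          // 🔴
    assert!(matches_faceted("person.name", &user_defined));  // 🟢
    assert!(matches_faceted("person.age", &user_defined));   // 🟢
}
```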
-----
Now the bad thing is that I had to create a new db.
And if it were only one db, that would be OK, but actually I need to do the same for the following:
- Displayed attributes
- Attributes to retrieve
- Attributes to highlight
- Attributes to crop
`@Kerollmops`
Do you think there is a better way to do it?
Apart from all the code, could having this many dbs cause a problem?
Co-authored-by: Irevoire <tamo@meilisearch.com>
Co-authored-by: Tamo <tamo@meilisearch.com>
486: Update version (v0.25.0) r=curquiza a=curquiza
v0.25.0 will be released once #478 is merged
Co-authored-by: Clémentine Urquizar <clementine@meilisearch.com>
474: Disable typos on exact word r=MarinPostma a=MarinPostma
This PR introduces the `exact_words` setting to disable typo tolerance on custom words.
If a user query contains a word from `exact_words`, no typo derivation will be made for that particular word.
I have chosen to store the words in an FST to save on deserialization and allow for fast lookups.
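A minimal sketch of the lookup, assuming the words are kept in an `fst::Set` (the word list and surrounding plumbing here are illustrative, not the actual milli code):
```rust
use fst::Set;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build the exact-words set once; fst requires the keys to be
    // inserted in lexicographic order.
    let exact_words = Set::from_iter(vec!["kubernetes", "postgresql"])?;

    // At query time: if the word is in the set, skip typo derivation
    // and keep it as an exact match only.
    let word = "kubernetes";
    let allow_typos = !exact_words.contains(word);
    assert!(!allow_typos);

    Ok(())
}
```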
I had some trouble with the `serde` module, and had to rename it `serde_impl`.
## steps:
- [x] introduce new settings to register words to disable typos on
- [x] in `typos`, return an exact match if the current word is part of the words to disable typos for.
- [x] update `Context` to return the exact words dictionary.
- [x] merge #473
Co-authored-by: ad hoc <postma.marin@protonmail.com>