1172: Fix atomic snapshot creation r=MarinPostma a=raszi
Compress the snapshot into a temporary gzip file first, then do an atomic rename.
In our setup we have an indexer which creates snapshots for the instances serving the requests. Since the current snapshotting mechanism replaces the file in place, the indexer could not share the snapshot with a live instance.
With this small patch we first create a new temporary file in the same directory as the snapshot and then do an atomic rename, so the snapshot path always contains a valid snapshot.
After applying this change it is enough to simply restart the serving instances to pick up the new snapshot from shared storage, without worrying that they will die because of an incomplete snapshot.
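A minimal sketch of the pattern in Rust (the helper below is illustrative, not the actual MeiliSearch code): the compressed data is written to a temporary file next to the final snapshot and then renamed over it, so readers always see a complete file.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Hypothetical helper showing the write-to-temp-then-rename pattern:
// because the temporary file lives in the same directory (and therefore
// on the same filesystem), the final `fs::rename` is atomic and the
// snapshot path never points at a half-written file.
fn write_snapshot_atomically(snapshot_path: &Path, compressed: &[u8]) -> std::io::Result<()> {
    let dir = snapshot_path.parent().unwrap_or_else(|| Path::new("."));
    let tmp_path = dir.join(".snapshot.tmp");

    {
        let mut tmp = fs::File::create(&tmp_path)?;
        tmp.write_all(compressed)?;
        tmp.sync_all()?; // flush the data to disk before renaming
    }

    // On POSIX systems, a rename within the same filesystem is atomic.
    fs::rename(&tmp_path, snapshot_path)
}
```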
Co-authored-by: KARASZI István <ikaraszi@gmail.com>
1176: fix race condition in document addition r=Kerollmops a=MarinPostma
As described in #1160, there was a race condition when updating settings and adding documents simultaneously. This was due to the schema update and the document addition being processed in two different transactions. This PR moves the schema update logic for the primary key into the same transaction as the document addition, while keeping the input checks for the validity of the primary key in the HTTP route, so as not to break the error reporting for the document addition route.
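A self-contained illustration of the idea (all types below are hypothetical stand-ins, not MeiliSearch's real internals): the primary key update and the document indexing now happen inside one write transaction, so a concurrent settings update can no longer slip in between them.

```rust
// Hypothetical stand-in for a single write transaction; the point is only
// that the schema change and the indexing commit (or abort) together.
struct WriteTxn {
    primary_key: Option<String>,
    documents: Vec<String>,
}

impl WriteTxn {
    fn new() -> Self {
        WriteTxn { primary_key: None, documents: Vec::new() }
    }

    fn update_primary_key(&mut self, key: &str) {
        self.primary_key = Some(key.to_string());
    }

    fn index_documents(&mut self, docs: &[String]) {
        self.documents.extend_from_slice(docs);
    }

    fn commit(self) {
        // In the real engine this is where the storage transaction commits.
    }
}

fn add_documents(docs: &[String], primary_key: &str) {
    let mut txn = WriteTxn::new();        // one write transaction...
    txn.update_primary_key(primary_key);  // ...covers the schema update
    txn.index_documents(docs);            // ...and the document addition
    txn.commit();                         // both become visible atomically
}
```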
close #1160.
Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: marin <postma.marin@protonmail.com>
1184: normalize synonyms during indexation r=MarinPostma a=LegendreM
fix #1135 #964
Normalizes the synonyms before indexing them, so they are no longer case sensitive. The normalization also involves deunicoding in some cases, such as accents, so `été` and `ete` are considered equivalent in a search for synonyms.
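A hedged sketch of what such a normalization can look like (the actual implementation goes through the MeiliSearch tokenizer; the function below only illustrates lowercasing plus diacritic removal, using the unicode-normalization crate as an assumed dependency):

```rust
use unicode_normalization::char::is_combining_mark;
use unicode_normalization::UnicodeNormalization;

// Illustrative normalization: lowercase, decompose (NFD) and drop combining
// marks so that `été` and `ete` normalize to the same string.
fn normalize_synonym(synonym: &str) -> String {
    synonym
        .to_lowercase()
        .nfd()
        .filter(|c| !is_combining_mark(*c))
        .collect()
}

fn main() {
    assert_eq!(normalize_synonym("Été"), normalize_synonym("ete"));
}
```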
Co-authored-by: many <maxime@meilisearch.com>
Co-authored-by: Many <legendre.maxime.isn@gmail.com>
1174: Limit query words number r=MarinPostma a=MarinPostma
This PR adds a limit to the number of words taken into account in a search query. Query strings that are too long lead to huge performance hits and resource consumption, and occasionally crash the machine. The limit has been hard set to 10, and tests have been added to make sure that it is taken into account.
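A minimal sketch of the behavior (the constant and function names below are illustrative assumptions, not necessarily those used in the code): only the first 10 words of the query are kept.

```rust
// Illustrative only: truncate the query to its first 10 words before any
// further processing, so arbitrarily long inputs stay cheap to handle.
const MAX_QUERY_WORDS: usize = 10;

fn truncate_query(query: &str) -> String {
    query
        .split_whitespace()
        .take(MAX_QUERY_WORDS)
        .collect::<Vec<_>>()
        .join(" ")
}
```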
close #941
Co-authored-by: mpostma <postma.marin@protonmail.com>
1207: fix homebrew name r=MarinPostma a=fharper
`brew` is the command; the package manager's name is Homebrew.
Co-authored-by: Frédéric Harper <hi@fred.dev>
1185: fix cors issue r=MarinPostma a=MarinPostma
This PR fixes a bug where foreign origins were not accepted.
This was due to an update to actix-cors.
It also fixes the CORS bug when authentication fails, with the caveat that requests denied for permission reasons are not logged.
It introduces a bug described in #1186.
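For context, a permissive configuration with the updated actix-cors builder looks roughly like the sketch below (written under the assumption of actix-web 3 with actix-cors 0.5, not necessarily MeiliSearch's exact setup):

```rust
use actix_cors::Cors;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        // Accept any origin, method and header so foreign origins are not
        // rejected; the real configuration may be more restrictive.
        let cors = Cors::default()
            .allow_any_origin()
            .allow_any_method()
            .allow_any_header();
        App::new().wrap(cors)
    })
    .bind("127.0.0.1:7700")?
    .run()
    .await
}
```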
Co-authored-by: mpostma <postma.marin@protonmail.com>
1167: Update dumps ci r=LegendreM a=MarinPostma
Now that the dump tests are re-entrant, they can be run from a multithreaded context, whereas they used to be run from a single-threaded context in a separate CI task.
Co-authored-by: mpostma <postma.marin@protonmail.com>
1091: New tokenizer r=LegendreM a=MarinPostma
Integration of the new tokenizer into MeiliSearch.
- Tokenizes and normalizes the query string for better search results
- Language-sensitive tokenization and normalization during indexation
- Better support for Chinese thanks to jieba (when Chinese characters are detected); see the sketch below
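An illustrative sketch of the language-sensitive dispatch (MeiliSearch's actual pipeline lives in the meilisearch-tokenizer crate; the jieba-rs usage below is only an assumed, simplified example):

```rust
use jieba_rs::Jieba;

// Illustrative only: segment with jieba when Chinese characters are
// detected, otherwise fall back to a plain whitespace split.
fn tokenize(text: &str) -> Vec<String> {
    let has_chinese = text
        .chars()
        .any(|c| ('\u{4E00}'..='\u{9FFF}').contains(&c));

    if has_chinese {
        // jieba segments Chinese text into words instead of single characters.
        Jieba::new()
            .cut(text, false)
            .into_iter()
            .map(String::from)
            .collect()
    } else {
        text.split_whitespace()
            .map(str::to_lowercase)
            .collect()
    }
}
```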
To do in a later PR:
- Use a common tokenization instance
- Use tokenization for synonyms
close #624
Co-authored-by: mpostma <postma.marin@protonmail.com>
Co-authored-by: many <maxime@meilisearch.com>