the milli logo

a concurrent indexer combined with fast and relevant search algorithms

Introduction

This engine is a prototype, do not use it in production. This is one of the most advanced search engines I have worked on. It currently only supports the proximity criterion.

Compile and Run the server

You can specify the number of threads to use when indexing documents, along with many other settings.

cd http-ui
cargo run --release -- --db my-database.mdb -vvv --indexing-jobs 8
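In this example, --db sets the database file to use, -vvv raises the log verbosity, and --indexing-jobs limits indexing to 8 threads (a sketch of the intent behind each flag; run cargo run --release -- --help for the authoritative list).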

Index your documents

It can index a massive amount of documents in a short time. I have already managed to index:

  • 115m songs (song and artist name) in ~1h, taking 107 GB on disk.
  • 12m cities (name, timezone and country ID) in 15min, taking 10 GB on disk.

All of that on a $39/month machine with 4 cores.

You can feed the engine with your CSV (comma-separated, yes) data like this:

printf "name,age\nhello,32\nkiki,24\n" | http POST 127.0.0.1:9700/documents content-type:text/csv

Here, ids will be automatically generated as UUID v4 if they don't exist in some or all of the documents.

Note that it also supports JSON and JSON streaming: you can send them to the engine by using the content-type:application/json and content-type:application/x-ndjson headers respectively.
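For example, here is a minimal sketch sending the same two documents as a JSON array and as a JSON stream (one document per line), reusing the endpoint from the CSV example above:

printf '[{"name": "hello", "age": 32}, {"name": "kiki", "age": 24}]' | http POST 127.0.0.1:9700/documents content-type:application/json
printf '{"name": "hello", "age": 32}\n{"name": "kiki", "age": 24}\n' | http POST 127.0.0.1:9700/documents content-type:application/x-ndjson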

Querying the engine via the website

You can query the engine by opening the HTML page it serves (at the same address as above, e.g. http://127.0.0.1:9700) in your browser.
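If you prefer the command line, here is a hedged sketch of the same query over HTTP; it assumes the page is backed by a /query route that accepts a JSON body with a query field, so adjust the route name if your build differs:

printf '{"query": "hello"}' | http POST 127.0.0.1:9700/query content-type:application/json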