This reverts commit 12fb509d8470e6d0c3a424756c9838a1efe306d2. We revert this commit because it is causing bug #150.

The initial algorithm we implemented for the stop_words was:
1. remove the stop_words from the dataset
2. keep the stop_words in the query to see if we can generate new words by integrating typos or if the word was a prefix

=> This was causing the bug since, in the case of “The hobbit”, we were **always** looking for something starting with “t he” or “th e” instead of ignoring the word completely.

For now we are going to fix the bug by completely ignoring the stop_words in the query. This could cause another problem where someone mistypes a normal word and ends up typing a stop_word. For example, imagine someone searching for the song “Won't he do it”. If that person misplaces one space and writes “Won' the do it”, then we will lose part of the query.

One fix would be to update our query tree to something like this:

```
OR
  OR
    TOLERANT hobbit   # the first option is to ignore the stop_word
    AND
      CONSECUTIVE     # the second option is to do as we are doing
        EXACT t       # currently
        EXACT he
      TOLERANT hobbit
```

This would drastically increase the size of our query tree on requests with a lot of stop_words; for example, think of “The Lord Of The Rings”.

For now, however, we decided to ignore this problem and consider that ignoring stop_words in the query doesn't reduce the relevancy of the search too much, while it improves performance.
a concurrent indexer combined with fast and relevant search algorithms
### Introduction
This engine is a prototype, do not use it in production. This is one of the most advanced search engines I have worked on. It currently only supports the proximity criterion.
### Compile and Run the server
You can specify the number of threads to use to index documents, along with many other settings:
```bash
cd http-ui
cargo run --release -- serve --db my-database.mdb -vvv --indexing-jobs 8
```
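The other settings aren't listed here; assuming the binary follows the usual conventions of Rust command-line argument parsers (an assumption, not something documented above), it should be able to print them itself:

```bash
# Print every flag accepted by the `serve` subcommand.
cargo run --release -- serve --help
```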
### Index your documents
It can index a massive amount of documents in not much time; I have already managed to index:
- 115m songs (song and artist name) in ~1h, taking 107 GB on disk.
- 12m cities (name, timezone and country ID) in 15min, taking 10 GB on disk.
All of that on a $39/month machine with 4 cores.
You can feed the engine with your CSV (comma-separated, yes) data like this:
echo "name,age\nhello,32\nkiki,24\n" | http POST 127.0.0.1:9700/documents content-type:text/csv
Here, ids will be automatically generated as UUID v4 if they don't exist in some or all documents.
Note that it also supports JSON and JSON streaming: you can send them to the engine by using the `content-type:application/json` and `content-type:application/x-ndjson` headers respectively.
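For example, the JSON-streaming equivalent of the CSV command above could look like the sketch below, with one JSON object per line sent to the same `/documents` route (the payload mirrors the `name`/`age` fields of the CSV example; any other shape of document is an assumption):

```bash
# One JSON document per line (newline-delimited JSON).
printf '{"name":"hello","age":32}\n{"name":"kiki","age":24}\n' \
  | http POST 127.0.0.1:9700/documents content-type:application/x-ndjson
```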
### Querying the engine via the website
You can query the engine by opening the HTML page it serves in your browser.
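Assuming the server listens on the same address used in the indexing examples above and serves the page at its root (127.0.0.1:9700; both the bind address and the root path are assumptions here), opening it is just:

```bash
# Open the search page in your default browser (Linux; use `open` on macOS).
xdg-open http://127.0.0.1:9700
```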