Is there any way to control or influence Algolia’s exhaustive behaviour, so that facet item counts are not approximated?
It says in the Algolia documentation:
“If a query returns a huge number of results, the engine will approximate the hits count to avoid having to scan the full results set. It has been put in place to protect other search and indexing operations.”
We are working with a dataset of 140k items, but are still experiencing approximated counts. We assumed this would not be regarded as “huge”.
Does anyone know what the threshold is before counts get approximated?
I get that this is done for performance reasons, but can this threshold be changed, or can we configure our index in such a way that this is less likely to happen?
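For context, this is how we are detecting the approximation on our side. It is a minimal sketch assuming the standard Algolia search response shape, where the `exhaustiveNbHits` and `exhaustiveFacetsCount` flags report whether the counts are exact; the `sample` payload below is made up for illustration.

```python
def counts_are_exhaustive(response: dict) -> bool:
    """Return True only if both the total hit count and the facet
    counts in an Algolia search response are exact (not approximated)."""
    return bool(response.get("exhaustiveNbHits")) and bool(
        response.get("exhaustiveFacetsCount")
    )


# Trimmed-down, hypothetical response payload for illustration:
sample = {
    "nbHits": 140000,
    "exhaustiveNbHits": True,
    "exhaustiveFacetsCount": False,  # facet counts were approximated
    "facets": {"brand": {"acme": 4200}},
}

print(counts_are_exhaustive(sample))  # → False
```

With our 140k-item index, `exhaustiveFacetsCount` regularly comes back `False` on broad queries, which is exactly the behaviour we are trying to avoid or configure around.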