Handling "exhaustiveFacetsCount": false


I have a page with >100k records. Each record has an ID, type, title, text, and a couple of optional attributes depending on the type of record. One of the search frontends lets the user search all records with a text search through title, text, etc., with a sidebar showing the faceted types for quick filtering.

But the sidebar with the types isn’t always showing all existing types, because we hit timeoutCounts and timeoutHits with exhaustiveFacetsCount set to false. Of course that’s bad for our users, because they might be missing existing record types. And since we generate the first result page on the server, it’s not just a single user getting bad results for as long as an incomplete response stays cached.

At first I thought it might happen after an index update, or only on one of several servers. But sampling over some time shows the same server always responding, and an even distribution, with about every third result being incomplete.

An example of a complete vs. an incomplete/timeout response can be seen at https://www.diffchecker.com/bgdPBDl3. The record values have been simplified, but all counts are the original data. You can see two of the existing types missing in the second result.

Is there a workaround or am I doing something wrong? Should I just retry the request and hope exhaustiveFacetsCount becomes true? Or is there a parameter signaling I’d prefer complete, but maybe slower, responses?
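For reference, the blind-retry workaround I have in mind would look roughly like this (just a sketch; `do_search` is a hypothetical stand-in for whatever issues the Algolia query and returns the parsed JSON response):

```python
def search_with_retry(do_search, params, max_retries=3):
    """Retry a search until exhaustiveFacetsCount is true (or we give up).

    do_search is a hypothetical stand-in for the actual Algolia query call;
    it takes the search params and returns the parsed JSON response dict.
    """
    response = do_search(params)
    retries = 0
    while not response.get("exhaustiveFacetsCount", False) and retries < max_retries:
        response = do_search(params)  # blind retry, hoping for a faster run
        retries += 1
    return response
```

But that multiplies the load on the engine for every cached first page, so I’d rather avoid it if there is a proper parameter.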


Nicolas here, I’m a Solutions Engineer at Algolia, happy to help you on this one :wink:
The behavior that you see is expected (but we might be able to do something about it!).

Counting the facets is a very expensive operation (we basically need to browse every matching result one by one). So we browse the beginning of the list, and if the list is very big, we use a rule of three to estimate the counts for the whole result set.
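To illustrate, a rule-of-three estimate scales the counts found in the browsed prefix up to the full result set, roughly like this (a simplified sketch, not the engine’s actual code):

```python
def estimate_facet_counts(partial_counts, browsed, total):
    """Extrapolate facet counts from the first `browsed` of `total` results.

    Each count is scaled by total / browsed (the "rule of three").
    Any facet value absent from the browsed prefix stays missing,
    which is why whole record types can disappear from the sidebar.
    """
    return {value: round(count * total / browsed)
            for value, count in partial_counts.items()}

# Browsing 1,000 of 100,000 matches and seeing 40 "article" records
# yields an estimate of about 4,000 articles overall.
```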

One solution is to reduce the complexity of the queries, to give the engine time to process more results from the list. Since your number of records is just at the limit where we begin to approximate, that should be enough in your case.

I identified two potential configuration improvements on your index that should make the searches much faster (and should give us enough time to browse the full list of results):

1. Move the ranking attributes from the Ranking Formula to the Custom Ranking
Your Ranking Formula contains custom ranking attributes at the bottom:
ranking = [typo, geo, words ... exact, A, B, C]
You can move these attributes directly into the Custom Ranking for better performance (while keeping the same relevance):
custom = [A, B, C, ...]

2. Simplify the searchableAttributes
If possible, removing some items from the searchableAttributes list will speed up the searches. Can you check that all the attributes there are required?
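Expressed as index settings, the two changes would look something like this (a sketch: A, B, C stand for your actual custom ranking attributes, and the searchableAttributes list is just an example based on your record fields):

```python
# Before: custom ranking attributes sit at the bottom of the ranking formula.
settings_before = {
    "ranking": ["typo", "geo", "words", "filters", "proximity",
                "attribute", "exact", "A", "B", "C"],
}

# After: the formula keeps only the engine criteria, ending in "custom",
# and A, B, C move to customRanking with an explicit sort direction.
settings_after = {
    "ranking": ["typo", "geo", "words", "filters", "proximity",
                "attribute", "exact", "custom"],
    "customRanking": ["desc(A)", "desc(B)", "desc(C)"],
    # 2. Trim searchableAttributes to the attributes that are really needed.
    "searchableAttributes": ["title", "text"],
}
```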

Can you try these and tell me if it improves the chances of getting an exhaustive count?


I’ve moved the attributes to the custom ranking, but we are still too slow. According to my monitoring, about 16% of the requests get timeouts.

The items in searchableAttributes are all required. Or at least the words in these attributes.

What I could do is create an artificial attribute with the searchable words (i.e. the values of the current attributes concatenated) and use only this one for indexing - like our current words attribute, which contains only the distinct words of the item text, without stop words. As long as the words stay in the same order as the current attributes, that shouldn’t change the result order.
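The artificial attribute would be built at indexing time, roughly like this (a sketch; the attribute names here are just examples, not our real schema):

```python
def build_search_words(record, attrs=("title", "text")):
    """Concatenate the values of the searchable attributes, in the same
    order as the current searchableAttributes, into one indexable string."""
    return " ".join(str(record[a]) for a in attrs if record.get(a))
```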

Would take me some time to implement, but would that yield an improvement?

A tool like EXPLAIN in SQL would be awesome to find problems in the index.


Ok, I can see on my side that moving the ranking attributes to the custom ranking had a positive impact.

For the searchableAttributes, what actually matters is the total number of words searched, so there’s no need to combine them into a single attribute; you can keep them as they are now. Thanks for checking that all of them are required.

I tried reproducing the queries that trigger the timeouts, but I always got exhaustiveFacetsCount=true. I’m not sure I’m using the right index, though. Can you tell me which index you see this issue on?

Also, in your production searches, do you usually use the same parameters as in your test?
"params": "getRankingInfo=true&hitsPerPage=0&query=&facets=recordType&facetFilters=%5B%22valid_for%3Aat%22%5D&page=0",
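For readability, those URL-encoded params decode like this (a quick sketch using Python’s standard library):

```python
from urllib.parse import parse_qs

params = ("getRankingInfo=true&hitsPerPage=0&query=&facets=recordType"
          "&facetFilters=%5B%22valid_for%3Aat%22%5D&page=0")
decoded = parse_qs(params, keep_blank_values=True)
# facetFilters decodes to the JSON array ["valid_for:at"]: an empty query
# over all records, faceted on recordType, filtered to valid_for:at.
```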

The additional info should help me troubleshoot this one.
We’ll get to the bottom of this. Thanks!

PS: We’re actually thinking about building tools to analyze the queries, thanks for the good idea :wink:

I did some tests, and indeed I can get a search with “exhaustiveFacetsCount”: true, but only if I remove one of two important parameters.

We are using facetFilters=valid_for:at to filter content that’s valid for the current domain. We could replace that with tags, if that would improve things.

We are also using optionalFacetFilters=["region.country_short:AUT","premium:1"] to push local entities for the current domain. premium is a duplicate of an attribute in the normal ranking formula, because otherwise the country would be more important than the premium flag. I don’t see how we could improve that without losing the ranking or duplicating the indices for each domain.

Thanks for your support.


Ah, that makes sense. I understand what’s happening now!
Optional filters are a very CPU-intensive feature, which explains why you reach the timeout more often.

I unfortunately don’t have a solution at this point. During the beta, the optional filters feature was open to everyone, but when we officially released it, we decided to restrict it to Enterprise accounts, given the high CPU costs (on Enterprise accounts we have access to more CPU optimizations). We kept access to the feature for all the users who had used it during the beta.

Even without the optional filters, always having an exhaustive facet count would require a lot of CPU resources and could make the queries very slow (on some queries it would take seconds to compute the counts).

As a side note, removing the optional filter on premium would help. To achieve the same results, you can simply add the attribute directly in the Ranking Formula, just above “filters”:
ranking = [typo, geo, words, desc(premium), filters, proximity, attribute, exact, custom].
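Concretely, that would mean dropping premium from the query-time optional filters and encoding it in the index settings instead (a sketch built from the parameters in your messages):

```python
# Query time: keep only the country push as an optional filter.
search_params = {
    "facetFilters": ["valid_for:at"],
    "optionalFacetFilters": ["region.country_short:AUT"],  # premium removed
}

# Index settings: promote premium in the ranking formula, just above filters,
# so premium records still outrank non-premium ones before filter scoring.
settings = {
    "ranking": ["typo", "geo", "words", "desc(premium)", "filters",
                "proximity", "attribute", "exact", "custom"],
}
```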

What do you think?

I’ll try to remove premium from the optionalFacetFilters. If I remember correctly, we had to add it because otherwise all country records were pushed to the top and the ranking was only applied afterwards (i.e. AT_premium, AT, premium, others instead of AT_premium, premium, AT, others).

I’ll also talk to our client about whether we can replace optionalFacetFilters. It was added because creating and maintaining many slave indices, and keeping their settings in sync, isn’t easy. Now that we are more stable, that might be an option.