I just want to add a bit more information to my latest answer. You're right that the more refinements you add, the slower the query will get. It's hard to estimate where the threshold lies, as it really depends on how many records you have and some other factors, but it's safe to say that with ~10 refinements you will see no noticeable performance decrease. Around 100 you might notice some delay, and around 500 you might get truncated results.
There is another way to achieve what you're trying to do, though. It is much more scalable, but it will require more code and more operations on your plan. Algolia indices are schemaless, and there is no built-in way to model relationships between items like you would in a relational database. That means if we want a "get all the posts that belong to my friends" kind of query, we need to be clever.
What I would suggest is to add a `viewable_by` attribute to each of your posts. This attribute would contain an array of all the `user_id`s that can see the post, i.e. all the people who are friends with the poster.
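For example, a post record could be built like this. This is only a sketch: the attribute names other than `viewable_by` (`objectID`, `title`, `author_id`) are illustrative, not required by Algolia (except `objectID`, which Algolia uses as the record identifier):

```javascript
// Hypothetical shape of a post record in the Algolia index.
// `viewable_by` lists every user_id allowed to see this post.
function buildPostRecord(post, friendIds) {
  return {
    objectID: post.id,
    title: post.title,
    author_id: post.authorId,
    // the author plus all of their friends can see the post
    viewable_by: [post.authorId, ...friendIds],
  };
}
```

You would then push these records to your index with your usual indexing code (e.g. `saveObjects`).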
Now your front-end code is much simpler: you just have to add a facet filter on `viewable_by` with a value equal to the current user's `user_id`.
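The query side can be sketched like this, assuming the Algolia JavaScript client (the network call itself is left as a comment so the snippet stays self-contained):

```javascript
// Build the search parameters that restrict results to posts
// the current user is allowed to see.
function buildSearchParams(currentUserId) {
  return {
    // only match records whose viewable_by array contains this user_id
    facetFilters: [`viewable_by:${currentUserId}`],
  };
}

// With the real client you would then do something like:
// const results = await index.search(query, buildSearchParams(currentUserId));
```

Remember to declare `viewable_by` as an attribute for faceting in your index settings, otherwise the facet filter will have no effect.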
On the other hand, it also means that whenever someone adds or removes a friend, you'll have to update all of that person's post records to add or remove the friend's `user_id` from `viewable_by`. This can generate many indexing operations and might not fit in your current plan.
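One way to keep those updates cheap is Algolia's partial-update built-in operations (`AddUnique` / `Remove`), which let you modify an array attribute without resending the whole record. A sketch of building that payload, assuming you already know the `objectID`s of the poster's records:

```javascript
// Build the partial-update payload sent when a friendship changes.
// `added` is true when a friend was added, false when removed.
function buildFriendshipUpdates(postIds, friendId, added) {
  return postIds.map((objectID) => ({
    objectID,
    viewable_by: {
      // built-in operation applied server-side to the array attribute
      _operation: added ? 'AddUnique' : 'Remove',
      value: friendId,
    },
  }));
}

// With the real client:
// await index.partialUpdateObjects(buildFriendshipUpdates(postIds, friendId, true));
```

Each record update still counts as an operation on your plan, but the payloads stay small and you don't need to re-read the records first.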
It also means that anyone could tinker with the JS code, replace their `user_id` with someone else's, and see other people's publications. This can be fixed by using our secured API keys, which are API keys with a set of filters already baked in. You would then create a new secured API key for each user, which only allows them to search within the records their own `user_id` is allowed to view.
Those are the two possible approaches: a quick one that works for small datasets, and a more powerful one that will scale to bigger datasets but requires more code and a bigger plan. Your choice!