Record eventual consistency guidelines

Hi, are there any guidelines or best practices for ensuring records in Algolia are eventually consistent?

For example, my service constantly creates and changes its searchable data and sends these changes as records to Algolia.
If there is a network partition, my service fails, or Algolia is down, I may not know whether a record has made it to Algolia. What is the best way to detect this and resend the record?

One option is to keep a record data store in my service: when a record change response is received from Algolia, the local record is marked consistent. At a later time, any still-inconsistent local records can be resent to Algolia.
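In rough pseudocode, something like this (a sketch only; `store` and its `markPending` / `markConsistent` / `listPending` methods are placeholders for my own data store, not a real API):

```javascript
// Sketch of the local "outbox" idea: records are tracked locally as pending,
// marked consistent once Algolia acknowledges them, and a periodic job
// resends anything still pending. `store` is a hypothetical local data store.
async function pushRecord(index, store, record) {
  await store.markPending(record.objectID);          // remember we owe Algolia this record
  const { taskID } = await index.saveObject(record); // throws on network/partition errors
  await store.markConsistent(record.objectID, taskID);
}

async function resendPending(index, store) {
  // Run periodically: retry every record that never got an acknowledgement.
  for (const record of await store.listPending()) {
    await pushRecord(index, store, record);
  }
}
```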


Hello there, and welcome to the Algolia Community!

The first thing you can do to make your Algolia indexing pipeline more resilient is to use one of our API clients instead of querying the REST API directly. The back-end API clients come in 11 different languages, and we’ve designed them to handle most of the Algolia complexity for you. For example, when performing a write operation (indexing, updating, changing the settings, etc.) against your Algolia cluster, if the node reached is down, all API clients transparently retry with a different node instead of returning a network error.

Depending on the language (thus, the API client) you use, the scenario will be slightly different. For example, if you’re using the JavaScript client, the write methods are all asynchronous and return promises. You can leverage them to catch errors and write your own retry logic.

const objects = [ /* ... */ ];

index
  .saveObjects(objects)
  .then(content => {
    // your job was successfully queued
  })
  .catch(err => {
    // your retry logic
  });
In the above example, if there was any error (network, malformed objects, etc.), you would enter the catch clause. This is where you want to perform your own retry logic.
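As an illustration, that retry logic could look something like the following sketch. The `saveWithRetry` helper is hypothetical (not part of the API client): it re-attempts the save a few times, doubling the delay between attempts, before giving up and surfacing the error:

```javascript
// Hypothetical retry-with-backoff helper, assuming `index` is an initialized
// Algolia index. Re-attempts the save with exponentially growing delays.
const wait = ms => new Promise(resolve => setTimeout(resolve, ms));

async function saveWithRetry(index, objects, attempts = 3, delay = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await index.saveObjects(objects); // resolves once the job is queued
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries: surface the error
      await wait(delay * 2 ** i);        // back off: 1s, 2s, 4s, ...
    }
  }
}
```

For persistent failures you would still want to fall back to durable storage (as in the local record store idea above) rather than retrying forever in memory.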

Every API client handles this idiomatically for its respective language. You can refer to the documentation to learn more.

One final note: from an engine perspective, most of the write methods are asynchronous. When you call these methods, you are actually adding a new job (or task) to a queue: it is this job, not the method, that performs the desired action. In most cases, the job executes within seconds, if not milliseconds. But it all depends on what is in the queue: if the queue has many pending tasks, the new job will need to wait its turn.

Each method returns a unique task ID which you can use with the waitTask method. Using the waitTask method guarantees that the job has finished before proceeding with your new requests. We don’t recommend abusing waitTask, nor using it in lieu of proper error management. This is a costly method that will slow down your indexing script and drain your cluster, especially if you run the script often and with many records. The waitTask method is used most often in debugging scenarios, where you are testing a search immediately after updating an index, or to manage dependencies, for example, when deleting an index before creating a new index with the same name, or clearing an index before adding new objects.
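For instance, the dependency-management case could be sketched like this (a hypothetical `rebuildIndex` helper; exact method names vary between client versions, so treat this as an illustration rather than a copy-paste recipe):

```javascript
// Hypothetical helper: clear an index, wait for that job to finish, then add
// fresh objects. `index` is assumed to be an initialized Algolia index.
async function rebuildIndex(index, objects) {
  const { taskID } = await index.clearIndex(); // queue the clear job
  await index.waitTask(taskID);                // block until that job has run
  return index.saveObjects(objects);           // now safe to add the new objects
}
```

Without the `waitTask` call, the `saveObjects` job could be queued before the clear job has run, which is exactly the kind of ordering dependency `waitTask` exists to manage.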

Hope this helps!