My application has two main object types (experts and projects). Each has between 2,000 and 3,000 records, and including a few test indices, I currently have a total of 7,195 records in my account. The account limit is 10,000.
My current (clunky) implementation of Algolia is simple. Each Monday morning, the server makes two API calls for each index. The first call is to clear the index, and the second is a batch upload (to update it with all the latest information).
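For reference, the Monday job is essentially the following (the app ID, API key, index name, and `$records` array are placeholders; this uses the v1 PHP client’s `clearIndex`/`addObjects` methods):

```php
<?php
require __DIR__ . '/vendor/autoload.php';

$client = new \AlgoliaSearch\Client('MY_APP_ID', 'MY_ADMIN_API_KEY');
$index  = $client->initIndex('experts');

// Call 1: wipe the index.
$index->clearIndex();

// Call 2: batch-upload the latest records. If this call fails,
// the index is left empty until someone reruns it by hand.
$index->addObjects($records);
```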
Usually this works fine. However, sometimes the second call fails, with dire consequences: the entire index stays empty until I start work later in the morning, notice the problem, and rerun the upload task manually.
Looking at the Logs on the Algolia dashboard this morning, I saw that, for one of the two indices, the upload job failed with a 403 response code and the message “Record quota exceeded, change plan or delete records.” Doubling the number of objects in the experts index would indeed push the account over the 10,000-record limit.
However, this call is logged as taking place at 2017-10-30T07:00:04Z, which is two seconds after the `clear` call (at 2017-10-30T07:00:02Z), and the `clear` call is logged as having succeeded and taken 1ms to process.
Is there a delay between the `clear` call being processed and the result being propagated to whatever part of the system is responsible for rejecting calls that would result in too many records? If so, what is the best way of handling this from within a PHP script? Should I just introduce a few seconds’ delay? Or should I call the `listIndexes` PHP method and check that the number of objects in the index is indeed 0 before making the upload call? The `waitTask` method in the PHP SDK seems promising, but the docs suggest it only applies to tasks that involve adding/updating objects (not clearing them).
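What I am considering, assuming `clearIndex` also returns a `taskID` that `waitTask` will accept (the docs only show `waitTask` after write operations, so this is an assumption), is something like:

```php
<?php
// Sketch only: $index is an initialized Algolia index object,
// $records is the fresh data to upload.

// clearIndex returns a response array; I am assuming it includes
// a 'taskID' like the write operations do.
$res = $index->clearIndex();

// Block until the clear task has actually been applied,
// so the subsequent upload is not counted against the old records.
$index->waitTask($res['taskID']);

$index->addObjects($records);
```

Would that be a reliable fix, or is the quota check propagated separately from the task queue that `waitTask` polls?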