AlgoliaException: Duplicate key

Hi everyone,

I'm not sure if this is the right place for this, but here goes.

We've been running on Algolia for over 4 months now, with 8 indices, one of them holding over 2M records. Today we encountered an error that ultimately led to an application exception and an HTTP 500. More info follows.


We're still using the Algolia Java client v1:

compile "com.algolia:algoliasearch:1.9.2"

The call itself (the one that fails with the AlgoliaException) is:

val algoliaResults = AlgoliaSearch.client.multipleQueries(
    AlgoliaSearch.getIndexQueries(query, pagination, context, redis, index)
)

where AlgoliaSearch.getIndexQueries returns a List<IndexQuery>.


fun saveObject(index: Index, obj: JSONObject, objectId: String) {
    index.saveObject(obj, objectId)
}

fun partialUpdateObjectNoCreate(index: Index, obj: JSONObject, objectId: String) {
    index.partialUpdateObjectNoCreate(obj, objectId)
}

where obj is a JSONObject created from a strongly typed class representation (meaning no property can be duplicated on our side).
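To illustrate what we mean by "no property can be duplicated": our JSONObjects are built from typed classes, roughly as in the simplified, hypothetical sketch below (the Message class and toMap helper are illustrative, not our production code). The compiler rejects two fields with the same name, and a Map can hold only one value per key, so duplicate keys cannot originate on our side.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

public class TypedToJson {
    // Hypothetical typed message; field names mirror the index structure.
    static class Message {
        String destination = "CONV";
        long inserted = 1490713110L;
        boolean isEdited = false;
    }

    // Convert a typed object to a key -> value map via reflection.
    // Duplicate keys are impossible by construction: the compiler rejects
    // duplicate fields, and Map.put keeps a single value per key.
    static Map<String, Object> toMap(Object obj) {
        Map<String, Object> map = new LinkedHashMap<>();
        for (Field f : obj.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            try {
                map.put(f.getName(), f.get(obj));
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(toMap(new Message()));
    }
}
```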

We tracked the issue down to a single index: MessagesIndex.

MessagesIndex example json structure:

"hits": [{
    "directedToId": "someId",
    "score": 0,
    "hasFile": false,
    "hasLink": false,
    "inserted": 1490713110,
    "teamId": "someId",
    "destination": "CONV",
    "isEdited": false,
    "updated": 1490713110,
    "content": "someValue",
    "createdById": "someId",
    "objectID": "objectIdUnique",
    "_highlightResult": {
        "content": {
            "value": "someValue",
            "matchLevel": "none",
            "matchedWords": []
        }
    },
    "_rankingInfo": {
        "nbTypos": 0,
        "firstMatchedWord": 0,
        "proximityDistance": 0,
        "userScore": 20,
        "geoDistance": 0,
        "geoPrecision": 1,
        "nbExactWords": 0,
        "words": 0,
        "filters": 1
    }, …}]

We "fixed" the issue by enabling distinct with objectID as the distinct attribute. However, we would like to know what the real issue here is: what does that exception actually mean, and given that objectID is always unique, how can there be any duplicates at all (this happened now for the first time)? We would also like to understand how distinct works, in particular the difference between setting a) only distinct=true and b) distinct=true together with the distinct attribute set to objectID, since option a) alone didn't work (what is the default value of distinct when none is set?). Finally, what are the implications of applying b) to an index via the Algolia dashboard (are any records deleted?).
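For completeness, this is roughly the settings body we ended up applying. This is a sketch, not our actual call; as far as we can tell from the settings docs, the attribute setting is named attributeForDistinct (singular). In the v1 Java client it would be pushed with the index's setSettings call, or via the "Set settings" REST endpoint.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DistinctSettings {
    // Settings body we would push to the index (e.g. through the v1 client's
    // setSettings, or the REST "Set settings" endpoint).
    static Map<String, Object> settings() {
        Map<String, Object> s = new LinkedHashMap<>();
        s.put("distinct", true);                   // enable query-time de-duplication
        s.put("attributeForDistinct", "objectID"); // attribute records are grouped by
        return s;
    }

    public static void main(String[] args) {
        System.out.println(settings());
    }
}
```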

Thanks in advance, everyone. Although search seems to be working now, we would love to know what's going on and how it all works.

Tomas from TeamZeusApp

Core Exception: JSON decode error: Duplicate key "CONV"


Mar 28 14:37:26 com.teamzeusapp: API server unhandled exception, statusCode: -1
[ERROR] 2017-03-28 12:37:25.984 Api - API server unhandled exception, statusCode: -1
JSON decode error: Duplicate key "CONV"
    at com.teamzeusapp.api.handlers.SearchApi.all(SearchApi.kt:87)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$105.handle(ApiWebServerVerticle.kt:284)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$105.handle(ApiWebServerVerticle.kt:25)
    at io.vertx.ext.web.impl.RouteImpl.handleContext(…)
    at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(…)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticleKt.authenticateInContext(ApiWebServerVerticle.kt:423)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticleKt.authenticateInContext$default(ApiWebServerVerticle.kt:418)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle.handleAuthenticationToken(ApiWebServerVerticle.kt:346)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle.access$handleAuthenticationToken(ApiWebServerVerticle.kt:25)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$11.handle(ApiWebServerVerticle.kt:151)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$11.handle(ApiWebServerVerticle.kt:25)
    at io.vertx.ext.web.impl.RouteImpl.handleContext(…)
    at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(…)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle.addCorsAjaxHeaders(ApiWebServerVerticle.kt:414)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle.access$addCorsAjaxHeaders(ApiWebServerVerticle.kt:25)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$3.handle(ApiWebServerVerticle.kt:139)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$3.handle(ApiWebServerVerticle.kt:25)
    at io.vertx.ext.web.impl.RouteImpl.handleContext(…)
    at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(…)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle.addSecureHeaders(ApiWebServerVerticle.kt:404)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle.access$addSecureHeaders(ApiWebServerVerticle.kt:25)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$2.handle(ApiWebServerVerticle.kt:136)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$2.handle(ApiWebServerVerticle.kt:25)
    at io.vertx.ext.web.impl.RouteImpl.handleContext(…)
    at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(…)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$1.handle(ApiWebServerVerticle.kt:132)
    at com.teamzeusapp.api.verticles.ApiWebServerVerticle$apiRouter$1.handle(ApiWebServerVerticle.kt:25)
    … (remaining io.vertx.ext.web, io.vertx.core.http, and io.netty framework frames elided)

Hi @servers, thanks for posting. One favor to ask: can you put code and stack traces in blocks for better readability? Just highlight the text and choose the </> icon. In the meantime we’ll have a look.

Hi @dzello, thanks for the reply, I've moved the code segments into blocks. The duplicated-key problem still persists; we encountered the same problem again today. We're not sure why it's occurring again or how exactly to fix it.

More on that: the problem seems to be recurring -> one query works fine, then a call with the same query returns the duplicated-key error, then it works fine again.

Thanks for formatting.

What version of the Java client are you using?

Hi @servers,

I saw that you have a lot of HTTP 400 errors on your account, like this one:

"message": "Record at the position 0 is too big size=107004 bytes. Contact us if you need an extended quota"

Could you also give us the full object you are sending? The error seems pretty obvious, but I do not see it in the logs.


Hi guys,

Jozef from Team Zeus here, stepping in in the hope of a quick resolution of this issue.
I have to say I’m quite disappointed by how this issue has been handled so far:

  • it’s been almost 2 days with no progress

  • You are requesting the Java library version from us, but
    a) that was posted in the initial bug report: 'Setup: We're still using Algolia v1, compile "com.algolia:algoliasearch:1.9.2"'
    b) the Java library version is something you already know, as I can clearly see it on the Algolia dashboard after logging in.
    So we are exchanging obvious info that you already have.
    Anyway, upgrading to the newest Java library is not an option, as the APIs changed and it would be a huge rewrite effort on our side, comparable to moving to another search solution.

  • the HTTP 400 errors have nothing to do with our issue; they are quite common. As our first post clearly described,
    our problem is with duplicates, causing an AlgoliaException and therefore crashing most of our searches.

Original post:

Core Exception: JSON decode error: Duplicate key "CONV"
…stacktrace follows…

As my coworker pointed out:

More on that: the problem seems to be recurring -> one query works fine, then a call with the same query returns the duplicated-key error, then it works fine again.

To address:

Could you also give us the full object you are sending?

  • you have access to all our index data and executed queries, so why don’t you inspect them?
  • write requests are not something we can easily recreate; it’s user-generated content
  • again: our issue is that searches are crashing with an AlgoliaException because of duplicates, which is unrelated to the HTTP 400s returned from Algolia.
    We’ve never faced this error before, and our application hasn’t changed for months, so I doubt it can be caused by changes on our side.

What we did in order to decrease search failures in our product:

  • enabled distinct for the indices, as my coworker described in the first post. This did not fix the issue permanently.
  • added a retry mechanism (5 attempts maximum) in our app: if an Algolia search fails, we retry with the same parameters, because the errors do not occur all the time:

one query works fine, then a call with the same query returns the duplicated-key error

This retry is not a nice technique to use and may affect our billing, but it was the only viable short-term option to avoid production issues in our app
affecting our customers.
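The retry wrapper is essentially the following (a simplified, self-contained sketch of our workaround, not the actual production code; the helper and its names are ours, not Algolia's):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Retry a call up to maxAttempts times (assumed >= 1),
    // rethrowing the last failure if every attempt fails.
    static <T> T withRetry(int maxAttempts, Callable<T> call) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // e.g. the intermittent duplicated-key AlgoliaException
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated flaky search: fails twice, then succeeds.
        String result = withRetry(5, () -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("JSON decode error: Duplicate key");
            return "ok";
        });
        System.out.println(result + " after " + calls[0] + " attempts"); // prints "ok after 3 attempts"
    }
}
```

Because the error is intermittent, the retry with identical parameters usually succeeds on the second or third attempt, which is why we capped it at 5.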

Can you please focus on the original bug report we provided and start from there?
We believe that working out how duplicates can appear in responses is the key to the problem.

Also, is there any option to discuss this issue without sharing content publicly?
This community forum is visible to anyone, including search engines, and I’m not going to share additional stack traces or our customers’ index data/queries in public.
A private issue tracker/support tool is the way to go; we are a paying customer, and dealing with support on a publicly available community forum is awkward and insecure.


Hi Jozef,

We have a support e-mail:
I do not have access to your e-mail address, so could you please send a message there so we can continue with this issue privately.

As for your other points: I’m sorry about the Java version question, you are right, you mentioned it from the beginning.

We do not have access to your executed queries; at least, we do not have the full body of the request/response. That is why we are asking for the full body and response: providing them will help us debug the issue. The same goes for the queries: you are doing 9,000 queries per day, so it would be very difficult for us to find the one that has issues. Providing us with the query will speed up the resolution of your bug.

As for the 400s, I’m sorry, but that was not clear to me from your first message; that is why I asked.

Feel free to contact us with whatever information you have on the query/body/response of the problematic queries.