Multiplexer consumer and producer in Kafka

In my Kafka consumer threads (high-level consumer), after I consume a message I apply some business logic to it and forward it to a web service. But this web service may be down at times, and since I have already consumed the message from Kafka and the offset has moved forward, I would lose that message.

One way to get around this problem is to disable auto-commit (the offsets live in ZooKeeper) and commit the offset programmatically, but I expect that to be a very costly operation. I will be producing to Kafka at about 2000 TPS, and that rate may increase later.
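For reference, manual committing looks roughly like the sketch below when using the newer KafkaConsumer API (the question is about the old ZooKeeper-based high-level consumer, so the bootstrap address, topic name, group id and the sendToWebService() helper are placeholders, not the asker's actual setup). Committing once per poll() batch rather than per message keeps the overhead modest even at a few thousand messages per second:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // placeholder broker address
        props.put("group.id", "ws-forwarder");                     // placeholder group id
        props.put("enable.auto.commit", "false");                  // commit only after the WS call succeeds
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));   // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    sendToWebService(record.value());                   // may throw if the WS is down
                }
                consumer.commitSync();   // offsets move forward only once the whole batch succeeded
            }
        }
    }

    // Hypothetical stand-in for the business logic + web service call from the question.
    private static void sendToWebService(String message) {
        // apply business logic, then call the web service
    }
}
```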

Another way – which I am not sure is a good idea – is to produce the consumed message back to Kafka whenever I hit a problem, but I haven't seen any post about this in all my googling. Is this approach not even worth considering?

Can you please give me some insight into handling this situation?


You can post the failed message back to the same topic, or to another topic of your choice.

If you use the same topic, you will push the messages to the end of the topic and they will be picked up after the others (so if order matters to you, don't do this). Also, if the action you perform before sending the message is not idempotent, you will have to add something to identify these records so that the action isn't performed twice, as in the sketch below.
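As an illustration only: re-queuing to the same topic might look like this. The broker address, serializer choices and the "retry-count" header name are assumptions (record headers require Kafka 0.11+; on older brokers the marker would have to live in the key or payload instead):

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RequeueToSameTopic {
    private final KafkaProducer<String, String> producer;

    public RequeueToSameTopic() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Re-append the message to the end of the same topic, tagging it so the
    // consumer can tell a retried record from a fresh one (idempotency guard).
    public void requeue(String topic, String key, String value, int retryCount) {
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, key, value);
        record.headers().add("retry-count",
                Integer.toString(retryCount).getBytes(StandardCharsets.UTF_8));
        producer.send(record);
    }
}
```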

If you use a failed_topic, you can push the messages that you can't deliver to that topic, and when the WS is healthy again you create a consumer that reads all the messages there and sends them to the WS. A rough sketch of the routing side follows.
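This sketch assumes a plain KafkaProducer and a hypothetical sendToWebService() call; the retry consumer that drains failed_topic would look like the manual-commit loop shown earlier, just subscribed to failed_topic:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FailedTopicRouter {
    private final KafkaProducer<String, String> producer;

    public FailedTopicRouter() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Called from the main consumer loop for each record.
    public void process(ConsumerRecord<String, String> record) {
        try {
            sendToWebService(record.value());
        } catch (Exception e) {
            // WS is down or rejected the call: park the record in failed_topic;
            // a separate retry consumer subscribed to failed_topic re-sends it later.
            producer.send(new ProducerRecord<>("failed_topic", record.key(), record.value()));
        }
    }

    // Hypothetical stand-in for the downstream web service call.
    private void sendToWebService(String message) {
        // call the web service; throw on failure
    }
}
```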

Hope it helps!
