Asynchronous design patterns – Callback topics

November 29, 2020

An explanation of when to use callback topics

Callback topics are a handy tool in an architect’s tool belt. Why? Well, let us discuss. In an event-driven architecture, one of the main drivers is to allow plug-and-play. What is plug-and-play? It means you don’t need to keep changing existing applications when you add a new process, or when additional processes/systems need to be notified. If you have a function which processes items, what you can do is have the processor publish its state changes to a topic.

A topic is Kafka’s term for a log of messages: an event stream, a queue of events/messages.

Let’s say you are designing a solution where multiple publishers trigger asynchronous processing of a message, and you require the processor (the consumer of the triggering events) to publish an event with the results of the processing to a queue from which the publishing system can handle the result.

You could publish all the results regardless of which originating system or process requires them; this forces every subscriber (consumer) of the results to read and filter out messages, which can be costly. As shown in the image below.
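To illustrate the filtering cost of a single shared results topic, here is a minimal Python sketch (the topic contents and origin names are purely illustrative; a real consumer would read from Kafka rather than a list):

```python
# A shared results topic: every processing outcome lands here,
# regardless of which system asked for it.
results_topic = [
    {"origin": "billing", "payload": "invoice-42 processed"},
    {"origin": "shipping", "payload": "parcel-7 processed"},
    {"origin": "billing", "payload": "invoice-43 processed"},
]

def consume_for(origin, topic):
    """Each subscriber must read the whole stream and discard
    messages meant for other systems."""
    return [msg for msg in topic if msg["origin"] == origin]

billing_results = consume_for("billing", results_topic)
# The billing consumer had to inspect all three messages
# even though only two were relevant to it.
```

The waste grows with every new subscriber: each one pays the full read-and-filter cost over the entire stream.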

One result topic (event log/stream) for all processes

Or you could publish each message to a specific topic for each process which requires the results; this, in my view, is the preferred solution.

Each process has its own callback topic which a result handler will consume

Alternatively, you could add logic and hard-code the processor to publish to the originating publisher’s preferred topic. However, this implementation is one which I believe could have a long-term negative impact on the development process:

  1. The processor would have to know the broader context of the message it is processing, which in my view puts too much responsibility onto the function which is processing the message.
  2. How would a developer decide which topic to publish the results to: by implementing an if statement, a switch statement, or delegates?
  3. Do you keep changing the service whenever a new process needs to know about the results? That involves full development cycles to modify, test, and redeploy a stable service.

This is where the callback topic comes into play.

Implementing callback topics

How do you do it?

First, you need to define a schema for your event messages. In that schema, you will need to define a few “header” properties which tell the processing service where to publish the result events. The two which I add are:

  • callback topic name
  • callback external reference Id

The first property is self-explanatory: it tells the processor which topic to publish the results to.
The second property is the identifier which the consumer of the outcome event uses to reconcile the result with the item it was processing when it triggered the processor to perform its actions.
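These two header properties could be modelled like this (a minimal Python sketch; the helper function and the example topic name are illustrative, not part of the original design):

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallbackHeader:
    """Routing metadata carried on every triggering event."""
    # Where the processor should publish the result; None means
    # the publisher does not want a result back.
    callback_topic: Optional[str] = None
    # The key the publisher later uses to reconcile the result
    # with the item that triggered the processing.
    callback_external_reference_id: Optional[str] = None

def new_header(topic: str) -> CallbackHeader:
    """Publisher-side helper: mint a fresh reference id per message."""
    return CallbackHeader(topic, str(uuid.uuid4()))

header = new_header("billing.results")  # "billing.results" is illustrative
```

A UUID is one reasonable choice for the reference id; any value unique within the publisher’s in-flight work would do.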

The second thing you do is implement the logic which, after processing, checks whether the callback topic properties are populated and, if so, wraps the processing results in a message along with a “header” containing the external reference id. I keep the naming convention the same:

{
  "header": {
    "callbackExternalReferenceId": "",
    "callbackTopic": ""
  },
  "body": {...}
}
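That check-and-wrap step could be sketched as follows in Python (the function name is mine; a real implementation would hand the returned topic and payload to a Kafka producer):

```python
import json

def build_result_message(incoming: dict, result: dict):
    """After processing, inspect the callback headers of the incoming
    event; if both are populated, wrap the result in an envelope
    addressed to the caller's callback topic.
    Returns (topic, serialized_message) or None when no callback
    was requested."""
    header = incoming.get("header", {})
    topic = header.get("callbackTopic")
    ref_id = header.get("callbackExternalReferenceId")
    if not topic or not ref_id:
        return None  # fire-and-forget: no result publication required
    envelope = {
        "header": {"callbackExternalReferenceId": ref_id},
        "body": result,
    }
    return topic, json.dumps(envelope)
```

Note that the processor never decides the destination itself; it only echoes back what the triggering event asked for.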

Thirdly, if you require the process which publishes the triggering event to handle the results, then when you publish the message, you ensure the callback topic and the external reference id are populated.
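On the publishing side, that might look like this minimal sketch (function name, topic name, and body fields are all illustrative):

```python
import uuid

def build_trigger_event(body, callback_topic=None, reference_id=None):
    """Publisher side: populate the callback headers only when this
    publisher wants to handle the processing result; otherwise the
    header stays empty and the processor treats the event as
    fire-and-forget."""
    header = {}
    if callback_topic is not None:
        header["callbackTopic"] = callback_topic
        header["callbackExternalReferenceId"] = reference_id or str(uuid.uuid4())
    return {"header": header, "body": body}

# This publisher wants the result back on its own callback topic.
event = build_trigger_event({"orderId": 42}, callback_topic="orders.callback")
```

A publisher that does not care about the outcome simply omits the callback arguments.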

And lastly, you have the consumer of the result events subscribing to the callback topic.
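That result handler could be sketched as follows; here `pending` stands in for whatever store the publisher keeps its in-flight work items in (everything in this sketch is illustrative):

```python
# In-flight work items keyed by the external reference id they were
# published with; a real system might keep these in a database.
pending = {"abc-123": {"orderId": 42}}

def on_result(envelope):
    """Consume one event from the callback topic and reconcile it,
    via the external reference id, with the item that originally
    triggered the processing."""
    ref = envelope["header"]["callbackExternalReferenceId"]
    item = pending.pop(ref, None)
    if item is None:
        return None  # unknown or already-handled reference id
    return {"item": item, "result": envelope["body"]}

outcome = on_result({"header": {"callbackExternalReferenceId": "abc-123"},
                     "body": {"status": "processed"}})
```

Popping the reference id as it is handled also gives you cheap duplicate detection, since a second delivery of the same result finds nothing to reconcile.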

Of course, you must ensure that the callback topic is created before publishing or subscribing to it. And if you have implemented security where you must explicitly define, at the (Kafka/application) user level, which users can publish to and consume from which topics, you will of course need to ensure those permissions are granted.

Having designed solutions which have been running in production for some time and have had new processes plugged in, the only thing we have had to do is grant the processor’s Kafka user another topic it can publish to, therefore never having to touch a stable service.
