Avoid Common Pitfalls: Expert Tips for Effective Event-Driven Architecture

June 19, 2024


In this blog post, I want to share the hard-earned lessons I’ve learned from my own experiences with Event-Driven Architecture (EDA). We’ll delve into key considerations, best practices, and real-world scenarios to help you design robust and scalable event-driven systems. Whether you’re a seasoned developer or just starting out, this post is packed with valuable insights. The topics discussed here are explored in more detail in my other posts and on my YouTube EDA playlist, and I’ll expand on these points in future posts.

1. Data Loss, Data Liberation, and Ensuring Data Integrity

Data Loss

First and foremost, consider if your system can afford to lose data. If not, what mechanisms will you employ to ensure data integrity and durability? Using database transactions, distributed logs like Kafka, or idempotent consumers can help maintain data integrity and durability. Ensuring that your data remains intact and recoverable is critical for maintaining the reliability of your system.

Data Liberation and Publishing Events Reliably

How will you liberate data into events without losing it? Efficiently converting and handling data as events is essential to avoid any loss during the process. Understanding side publishing, the outbox pattern, or change data capture (CDC) can help achieve reliable event publishing. Ensuring data integrity during this process guarantees that your system remains consistent and trustworthy.
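The outbox pattern mentioned above can be sketched in a few lines. This is a minimal illustration, using SQLite as a stand-in for the service's database; the table layout and event shape are my own, illustrative choices, not a prescribed implementation:

```python
import json
import sqlite3
import uuid

# Outbox pattern sketch: the business write and the event write share one
# database transaction, so an order can never exist without its event.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")
conn.execute(
    "CREATE TABLE outbox (id TEXT PRIMARY KEY, topic TEXT, "
    "payload TEXT, published INTEGER DEFAULT 0)"
)

def place_order(order_id, total):
    # "with conn" opens a transaction that commits both inserts atomically.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        event = json.dumps({"type": "OrderPlaced", "order_id": order_id, "total": total})
        conn.execute(
            "INSERT INTO outbox (id, topic, payload) VALUES (?, ?, ?)",
            (str(uuid.uuid4()), "orders", event),
        )

def relay_outbox(publish):
    # A separate relay polls unpublished rows and pushes them to the broker,
    # marking each row published only after the publish call succeeds.
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox WHERE published = 0"
    ).fetchall()
    for row_id, topic, payload in rows:
        publish(topic, payload)
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()

published = []
place_order("o-1", 42.50)
relay_outbox(lambda topic, payload: published.append((topic, payload)))
```

The key property is that a crash between the two writes is impossible: either both the order and its outbox row exist, or neither does, and the relay can safely retry publishing.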

2. Schema Enforcement and Interoperability

Schema Enforcement

How will you enforce schemas to maintain data consistency across your system? Defining and adhering to strict schemas is essential for ensuring that the data exchanged between services remains consistent and understandable. Using tools like Avro, Protobuf, or JSON Schema can help define strict schemas.
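As a concrete example of schema enforcement at the publisher boundary, here is a small sketch using the widely used jsonschema package; the event shape is illustrative, not from this post:

```python
from jsonschema import ValidationError, validate

# A hypothetical event schema: order_id and total are required fields.
ORDER_PLACED_SCHEMA = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string"},
        "total": {"type": "number"},
    },
    "required": ["order_id", "total"],
}

def publish_if_valid(event, publish):
    # Reject malformed events at the publisher, instead of letting
    # consumers discover the problem downstream.
    validate(instance=event, schema=ORDER_PLACED_SCHEMA)
    publish(event)

sent = []
publish_if_valid({"order_id": "o-1", "total": 9.99}, sent.append)
try:
    publish_if_valid({"order_id": "o-2"}, sent.append)  # missing "total"
except ValidationError:
    pass  # the invalid event never reaches the stream
```

Avro and Protobuf give you the same guarantee with the added benefit of compact binary encoding and schema-registry integration.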


Interoperability

What strategies will you use to ensure interoperability between different services and platforms? Ensuring that different components of your system can effectively communicate and work together is essential for a seamless event-driven architecture. This includes following standardized schemas, using common protocols, and implementing robust API contracts. Tools like Pact for contract testing can ensure that services adhere to agreed-upon schemas, preventing issues when new fields are added.

Interoperability also relates to how consumers or subscribers handle events that include additional fields which they may not be coded for. Consumers must be future-proof and should not break when processing events that have new fields. Publishers should avoid introducing breaking changes. Managing schema evolution carefully ensures that new versions of services can still interact with older ones without issues, maintaining a flexible, scalable, and resilient event-driven system.
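This "tolerant reader" behaviour is easy to sketch: the consumer reads only the fields it knows about and ignores everything else, so adding fields upstream is not a breaking change. The field names here are illustrative:

```python
# Tolerant reader sketch: pull out only the fields this consumer relies
# on; any unknown keys in the event are simply ignored.
def handle_order_placed(event: dict) -> str:
    order_id = event["order_id"]       # field this consumer requires
    total = event.get("total", 0.0)    # optional field with a safe default
    return f"order {order_id}: {total:.2f}"

v1_event = {"order_id": "o-1", "total": 10.0}
# A later publisher version adds "currency"; the consumer is unaffected.
v2_event = {"order_id": "o-1", "total": 10.0, "currency": "EUR"}

assert handle_order_placed(v1_event) == handle_order_placed(v2_event)
```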

Best Practices

  • Follow specific schemas.
  • Implement versioning to avoid breaking changes.
  • Allow for enhancements and future-proofing.
  • Ensure backward compatibility if needed.

3. Flexibility, Diverse Tools, and Choosing the Right Tool for the Job

When implementing Event-Driven Architecture, it’s essential to maintain flexibility, use a diverse set of tools, and choose the right tool for specific parts of the system or particular use cases. Instead of sticking to a single service like Kafka or Kinesis, consider a flexible approach that combines queuing, streaming, and pub-sub services.

For example, using SQS and Kinesis within a bounded context and Kafka externally can provide the necessary flexibility and scalability. Choosing the right tool for the job is crucial.

Example Scenario

Imagine you have a front-end experience API and need to get events into your system. You could:

  • Publish an SQS message from your API.
  • Have a subscribing service pick up the message and acknowledge it, removing it from the queue.
  • If multiple consumers are needed, use SNS to fan the message out into each consumer's own SQS queue, or use Amazon EventBridge, or a Kafka topic consumed by multiple consumers.
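The fan-out step above can be modelled in a few lines. This is an in-memory sketch with no AWS calls; the class name mirrors SNS topics and the deques stand in for per-subscriber SQS queues, purely for illustration:

```python
from collections import deque

# SNS-style fan-out sketch: each subscriber gets its own queue, and
# every published message is copied into all of them.
class Topic:
    def __init__(self):
        self.queues = []

    def subscribe(self):
        queue = deque()
        self.queues.append(queue)
        return queue

    def publish(self, message):
        for queue in self.queues:  # every subscriber receives its own copy
            queue.append(message)

topic = Topic()
billing_queue = topic.subscribe()
shipping_queue = topic.subscribe()

topic.publish({"type": "OrderPlaced", "order_id": "o-1"})

# Each consumer receives, processes, then "deletes" (pops) its own copy
# without affecting any other consumer's queue.
billing_msg = billing_queue.popleft()
```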

4. Reducing Coupling

A common misconception about Event-Driven Architecture is that using events and messages gives you a decoupled system by default. EDA makes it much easier to build decoupled systems, but the degree of decoupling depends on the system's design. You can still design a tightly coupled system even when using event-driven architecture.

Reducing Coupling

Consider the degree of coupling between publishers and consumers. One common approach is to have the publishing system publish a final-state event straight to an internally public event stream as soon as it's liberated from the processing function. This method is more tightly coupled because if you need to change the event version, you'll have to introduce a breaking change either on the outbound event or in the chain of functions.

To avoid this, you can implement a two-step publishing approach. Internally, you publish the final state event, but it’s not directly published to an external stream consumed by external systems. Instead, a consumer converts it to a version of the public event. If in the future you need to upgrade or change that event, you can implement another module to convert it to a different version of the same event.

This approach provides several benefits, such as separating internal and public schemas, ensuring that changes can be made internally without affecting external consumers, offering greater flexibility in managing versioning, and enabling more efficient handling of different versions of the same message.
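The two-step approach can be sketched as a pair of converters: the internal final-state event is never published directly, and each public version is produced by its own translation module. The event shapes and version scheme here are illustrative assumptions:

```python
# Two-step publishing sketch: internal events are translated into
# versioned public events by dedicated converters, so internal changes
# never leak to external consumers.
def internal_to_public_v1(internal: dict) -> dict:
    return {
        "version": 1,
        "order_id": internal["order_id"],
        "amount": internal["total_cents"] / 100,  # v1 exposed a float
    }

def internal_to_public_v2(internal: dict) -> dict:
    # A later version reshapes the same internal event without touching
    # the processing chain that produced it.
    return {
        "version": 2,
        "order_id": internal["order_id"],
        "amount": {"value": internal["total_cents"], "unit": "cents"},
    }

CONVERTERS = {1: internal_to_public_v1, 2: internal_to_public_v2}

internal_event = {"order_id": "o-1", "total_cents": 1050}
public_v1 = CONVERTERS[1](internal_event)
public_v2 = CONVERTERS[2](internal_event)
```

Supporting a new public version becomes a matter of registering another converter, while the internal schema evolves independently.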

5. Idempotency and Processing Approaches

Idempotency ensures an event can be processed multiple times without changing the result beyond the initial application. This is crucial in EDA to handle retries and ensure consistency. Implementing idempotent consumers helps in making your system resilient to duplicates and ensures that side effects are not repeated.
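A minimal idempotent consumer can be sketched by recording processed event IDs and skipping duplicates. In production the seen-set would live in a durable store (a database table or key-value cache), not in process memory; the event shape is illustrative:

```python
# Idempotent consumer sketch: remember processed event IDs so a
# redelivered event does not repeat its side effect.
seen_event_ids = set()
balance = 0

def handle_payment(event: dict):
    global balance
    if event["event_id"] in seen_event_ids:
        return  # duplicate delivery: the side effect was already applied
    balance += event["amount"]
    seen_event_ids.add(event["event_id"])

event = {"event_id": "evt-1", "amount": 25}
handle_payment(event)
handle_payment(event)  # at-least-once delivery may replay the same event
```

Note the ordering: checking before applying and recording after is what makes the second delivery a no-op rather than a double charge.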

Processing Approaches

  • Exactly-Once Processing: Ensures an event is processed only once, no matter how many times it is delivered. This is the ideal scenario but can be complex to implement and often involves additional overhead.
  • At-Least-Once Processing: Ensures an event is processed at least once. This is easier to achieve but can result in duplicate processing, which is why idempotency is crucial.
  • At-Most-Once Processing: Ensures an event is processed at most once, which can lead to some events being missed if there is a failure.

In event-driven systems, especially with log-based services like Kafka, you have the ability to replay events. If you implement replayability, consider how you will administer the consumers and processes that handle triggered event replays. For services using exactly-once processing, think about how you will manage retries.

6. Scalability in Event-Driven Architecture

Scalability is critical in EDA to handle varying loads and ensure system responsiveness. Understanding how to scale consumers by partition is essential for managing your system effectively.

Partitioning and Load Balancing

When considering scalability, especially if event ordering is required, you need to choose an appropriate strategy:

  • Kafka Partitions and Kinesis Shards: Both Kafka and Kinesis allow you to partition or shard your data streams. Partitioning helps distribute the load and maintain order within each partition. If you need to maintain the order of events, partition based on specific keys such as customer group, country, or currency. For example, by partitioning events by customer ID, you can ensure that all events related to a specific customer are processed in order while allowing multiple consumers to handle different partitions.
  • Understanding Partitioning: It’s crucial to understand how partitioning works for your events. Proper partitioning ensures that events are distributed efficiently and that consumers can process them effectively. Each partition can be consumed by a single consumer, and the number of partitions can determine the level of parallelism in your system.
  • Load Balancing with Ordering: If your system requires event ordering, you need to plan how to load balance your events accordingly. This involves assigning events to partitions in a way that maintains the required order. For instance, partitioning by a unique key ensures that all related events are processed in sequence, while allowing other partitions to handle unrelated events concurrently.
  • Consumer Scaling: As you scale your system, consider how consumers will be assigned to partitions. In Kafka, for example, a consumer group gains nothing from running more consumers than there are partitions, so the partition count effectively caps your parallelism.
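The key-based partitioning described above boils down to a deterministic hash of the key. Here is a small sketch; the four-partition count, the MD5 choice, and the event shapes are illustrative assumptions (Kafka's default partitioner uses murmur2, but any stable hash demonstrates the idea):

```python
import hashlib

# Key-based partitioning sketch: hashing the customer ID picks a
# partition deterministically, so all of one customer's events land on
# the same partition and therefore stay ordered relative to each other.
NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

events = [
    ("cust-7", "created"),
    ("cust-9", "created"),
    ("cust-7", "paid"),
]

# Route each event to its partition; order is preserved per key.
partitions = {p: [] for p in range(NUM_PARTITIONS)}
for customer_id, event_type in events:
    partitions[partition_for(customer_id)].append((customer_id, event_type))
```

Because `partition_for` always maps "cust-7" to the same partition, that customer's "created" event is guaranteed to be consumed before their "paid" event, while other customers' events can be processed on other partitions in parallel.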

By carefully planning and implementing these strategies, you can build a scalable, responsive, and robust event-driven system. Understanding partitioning and effectively managing consumer load balancing are key to maintaining event order and ensuring efficient processing.


Implementing an event-driven architecture requires careful consideration of factors like data loss prevention, schema enforcement, interoperability, idempotency, scalability, and choosing the right tools and processing approaches. By maintaining flexibility and reducing coupling, you can build a robust, scalable, and future-proof system.

Final Thoughts

Remember, having a diverse set of tools and understanding how to use them effectively is key to your EDA success. For more in-depth discussions, check out my YouTube EDA playlist and future posts.

Thanks for reading, and I’ll see you in the next post!
