Event-Driven Architecture with Spring Boot and Kafka

Introduction
Event-driven architecture (EDA) is a powerful approach to building scalable, decoupled, and reactive applications. Unlike traditional request-response models, EDA revolves around events that propagate through the system asynchronously, enabling real-time processing and high availability.
In this article, we'll explore EDA using Spring Boot and Apache Kafka through a real-world example: an Order and Commerce API. We'll also compare it with traditional architectures, discuss benefits like event replayability, and highlight key challenges.
Understanding Event-Driven Architecture
EDA consists of three core components:
- **Event Producers:** Generate events (e.g., order placed, payment processed).
- **Event Brokers:** Store and distribute events (e.g., Apache Kafka).
- **Event Consumers:** Process events asynchronously (e.g., notification service, inventory update service).
Instead of synchronous communication (e.g., REST APIs), events are published and processed independently by different microservices.
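Before bringing Kafka into the picture, the producer/broker/consumer triangle can be sketched in plain Java. Here the "broker" is just an in-memory `BlockingQueue` (a stand-in for illustration, not a real broker): the producer publishes and moves on, while the consumer processes events on its own thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EdaSketch {
    // Runs one producer and one consumer; returns what the consumer processed.
    static List<String> run() throws InterruptedException {
        BlockingQueue<String> broker = new LinkedBlockingQueue<>(); // the "broker"
        List<String> processed = new ArrayList<>();

        // Consumer: takes events asynchronously on its own thread.
        Thread consumer = new Thread(() -> {
            try {
                String event;
                while (!(event = broker.take()).equals("STOP")) {
                    processed.add(event);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: publishes events without waiting for the consumer.
        broker.put("ORDER_PLACED:42");
        broker.put("PAYMENT_SUCCESS:42");
        broker.put("STOP"); // sentinel to end the sketch
        consumer.join();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

The key point: the producer never calls the consumer directly; both only know about the broker, which is what makes the coupling loose.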
Traditional vs. Event-Driven Approach

| Feature | Traditional Approach | Event-Driven Approach |
| --- | --- | --- |
| Communication | Synchronous (REST, RPC) | Asynchronous (events) |
| Coupling | Tightly coupled | Loosely coupled |
| Scalability | Limited | Highly scalable |
| Data Flow | Request-response | Event propagation |
| Resilience | Single point of failure | Fault-tolerant |
| Replayability | No event history | Events can be replayed |
Building an Event-Driven Order and Commerce API
We'll build a system where users place orders, and multiple services (e.g., payment, inventory, notification) react asynchronously.
Step 1: Set Up Kafka Using Docker
To avoid a manual installation, we'll run Kafka and ZooKeeper with Docker Compose.
Docker Compose File
Create a docker-compose.yml file:
```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"   # expose the broker so the app on the host can reach it
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```
Start Kafka and Zookeeper:
```shell
docker-compose up -d
```
Step 2: Define an Order Event
Create a Java class representing an order event (using Lombok to generate the getters, setters, and constructors):
```java
import java.time.Instant;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class OrderEvent {
    private String orderId;
    private String status; // CREATED, PAYMENT_SUCCESS, PAYMENT_FAILED
    private Instant timestamp;
}
```
Step 3: Implement the Order Producer
Use KafkaTemplate to publish order events.
```java
import java.time.Instant;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/orders")
public class OrderController {

    @Autowired
    private KafkaTemplate<String, OrderEvent> kafkaTemplate;

    @PostMapping
    public ResponseEntity<String> placeOrder(@RequestBody Order order) {
        OrderEvent event = new OrderEvent(order.getId(), "CREATED", Instant.now());
        kafkaTemplate.send("order-events", event);
        return ResponseEntity.ok("Order placed!");
    }
}
```
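Kafka guarantees ordering only within a partition, so in practice you would send with the order ID as the message key (`kafkaTemplate.send("order-events", order.getId(), event)`), ensuring all events for one order land on the same partition. The idea behind key-based partitioning can be sketched in plain Java; note Kafka's default partitioner actually uses a murmur2 hash, so the `hashCode`-based mapping below is an illustrative stand-in, not Kafka's algorithm:

```java
public class PartitionSketch {
    // Same key -> same partition: that's the property keyed sends rely on.
    // (Kafka's default partitioner uses murmur2, not hashCode; this is a
    // simplified stand-in for illustration.)
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-123", 3);
        int p2 = partitionFor("order-123", 3);
        System.out.println("Same key, same partition: " + (p1 == p2));
    }
}
```

Because the mapping is deterministic, every event keyed by `order-123` is appended to the same partition and is therefore consumed in the order it was produced.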
Step 4: Implement the Order Consumer
Consume order events asynchronously.
```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderEventConsumer {

    @KafkaListener(topics = "order-events", groupId = "order-group")
    public void consume(OrderEvent event) {
        System.out.println("Received Order Event: " + event);
    }
}
```
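For the `KafkaTemplate` and `@KafkaListener` above to move `OrderEvent` objects over the wire, Spring Kafka needs to know how to (de)serialize them. A minimal `application.yml` using Spring Kafka's JSON serializers might look like this (a sketch under the `spring.kafka.*` property namespace; adjust `bootstrap-servers` to your environment):

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    consumer:
      group-id: order-group
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.trusted.packages: "*"
```

Trusting all packages (`"*"`) is convenient for a demo; in production you would restrict it to the package containing your event classes.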
Step 5: Running the Application in Docker
To containerize our Spring Boot services, create a Dockerfile:
```dockerfile
FROM openjdk:17-jdk-slim
VOLUME /tmp
COPY target/order-service.jar order-service.jar
ENTRYPOINT ["java", "-jar", "/order-service.jar"]
```
Build and run the container:
```shell
mvn clean package -DskipTests
docker build -t order-service .
docker run -p 8080:8080 order-service
```
Benefits of Event Replayability
One major advantage of Kafka-based EDA is the ability to replay events. This is crucial for:
- **Data recovery:** If a service fails, it can reprocess past events.
- **Audit logging:** Maintaining a history of system actions.
- **Machine learning:** Training models based on past events.
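Replayability works because Kafka stores events in an append-only log and each consumer merely tracks an offset into it; with the real `kafka-clients` API you would rewind via `consumer.seekToBeginning(...)`. The mechanism can be sketched with an in-memory log (an illustrative model, not Kafka itself):

```java
import java.util.List;

public class ReplaySketch {
    private final List<String> log; // append-only event log (the "topic")
    private int offset = 0;         // this consumer's position in the log

    ReplaySketch(List<String> log) { this.log = log; }

    // Read the next unprocessed event, advancing the offset.
    String poll() {
        return offset < log.size() ? log.get(offset++) : null;
    }

    // Replay: only the offset is rewound; the events are still in the log.
    void seekToBeginning() { offset = 0; }

    public static void main(String[] args) {
        ReplaySketch c = new ReplaySketch(List.of("CREATED", "PAYMENT_SUCCESS"));
        System.out.println(c.poll()); // CREATED
        System.out.println(c.poll()); // PAYMENT_SUCCESS
        c.seekToBeginning();          // e.g. after a crash, or to rebuild state
        System.out.println(c.poll()); // CREATED again
    }
}
```

This is the crucial difference from a message queue that deletes on acknowledgment: consuming an event does not destroy it, so any service can re-read history within the topic's retention window.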
Challenges of Event-Driven Systems

While EDA is powerful, it comes with challenges:
- **Event Ordering:** Kafka guarantees ordering per partition, but cross-partition ordering needs extra handling.
- **Idempotency:** Consumers must handle duplicate events gracefully.
- **Schema Evolution:** Changing event structures requires backward compatibility.
- **Debugging Complexity:** Tracing issues in an asynchronous system is harder than in monolithic architectures.
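The idempotency challenge deserves a concrete shape: Kafka's default delivery guarantee is at-least-once, so a consumer may see the same event twice after a rebalance or retry. A common pattern is to deduplicate on a stable event ID before performing side effects. A minimal sketch, assuming an in-memory set where production code would use a persistent store:

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentConsumer {
    // IDs of events already processed. In production this would be a
    // persistent store (a DB table, Redis), not an in-memory set.
    private final Set<String> processed = new HashSet<>();

    // Returns true only the first time a given event ID is handled.
    boolean handle(String eventId) {
        if (!processed.add(eventId)) {
            return false; // duplicate delivery: skip the side effect
        }
        // ... perform the actual side effect (charge payment, etc.) ...
        return true;
    }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        System.out.println(c.handle("order-123#CREATED")); // first delivery
        System.out.println(c.handle("order-123#CREATED")); // duplicate, ignored
    }
}
```

The event ID must be stable across redeliveries (e.g., derived from the order ID and status, or carried in the event itself), otherwise duplicates cannot be detected.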
Conclusion
Event-driven architecture, powered by Spring Boot and Kafka, enables highly scalable, decoupled applications. By using event replayability, services can recover from failures and derive insights from historical data.
Curious about other use cases, such as real-time ride-sharing or stock-market trade processing? Let me know in the comments!





