
Modular Monoliths: The Best of Both Worlds

Nakul Shukla

This article was originally published on Medium.

Modern application development is often described as a balancing act — balancing speed, scalability, and simplicity. Over the years, we’ve transitioned from monoliths to microservices, often in pursuit of agility and faster releases. However, the journey is not without its pitfalls. I’ve seen teams (and, let’s be honest, I’ve been part of some) dive headfirst into microservices, only to find themselves entangled in distributed chaos.

Microservices promise a lot: independent deployments, scalability, and faster time to market. But they also bring challenges:

  • Operational complexity: Managing inter-service communication and distributed systems can quickly escalate.
  • Premature granularity: Splitting services before fully understanding domain boundaries often leads to rework.
  • Cost: Both in infrastructure and developer time.

Having worked on both sides of the architectural spectrum, I’ve realized that the key decision isn’t whether to start with microservices or a monolith — it’s to think modular from the very beginning. This mindset lays the groundwork for flexibility and scalability. By respecting core principles of module division — clear boundaries, independence, and loose coupling — you create a system that can grow and adapt naturally. It’s a more fundamental approach that simplifies architecture without overcommitting to either extreme.

Mistakes Developers Make with Microservices

Let me be frank — diving into microservices without a plan is like trying to assemble IKEA furniture without the manual: you’ll probably end up with something functional, but not without a lot of frustration and extra screws. Or maybe it’s like deciding to juggle flaming torches before mastering a tennis ball — impressive if it works, but more likely to end in disaster. Okay, enough with the analogies; I think you get the point. Here are some of the most common mistakes I’ve seen teams make when rushing into microservices.

  • Too granular, too soon: Splitting services before fully understanding domain boundaries. Result? Distributed chaos.
  • Over-engineering the infrastructure: Spending months setting up Kafka, Redis, and Kubernetes for an app that barely has users.
  • Ignoring team and process readiness: Microservices work best when teams are independent and mature. But if you still need a meeting to decide who writes a unit test, well…

The modular monolith lets you avoid all this while keeping an eye on the future. Think of it as the pragmatic architect’s safety net.

The Modular Monolith

The term modular monolith is not new. It has been widely discussed in the software community and supported by frameworks such as Service Weaver (by Google) and Spring Modulith (an extension of Spring Boot for modular architecture). These frameworks provide tools and best practices for designing modular systems that can evolve into distributed architectures if needed.

While frameworks can be helpful for supporting modular monoliths, I believe we don’t always need them to achieve this architectural style. Instead, it’s about following a few core principles. Now, I know terms like loose coupling, dependency injection, and replaceable dependencies may sound like architectural buzzwords, but trust me — they have solid reasoning behind them. Let me explain why they are crucial for making a modular monolith work.

  1. Loose Coupling: Each module should know as little as possible about other modules. Why? Because tightly coupled modules become a maintenance nightmare. If one module changes, others can easily break. Loose coupling means that modules interact through well-defined interfaces and contracts, making changes isolated and safer. Think of modules as neighbours. They shouldn’t rely on each other’s furniture arrangements to live peacefully!
  2. Dependency Injection (DI): Swap out implementations without changing the module’s core logic. Why? Because modularity thrives when you can replace components easily. In monolith mode, your modules might use direct Java calls. When you deploy as microservices, those same calls need to turn into HTTP requests. DI makes this possible by injecting the correct implementation based on the environment.
  3. Replaceable Dependencies: Design modules to work with different implementations of services. Why? Because this future-proofs your system. When you eventually split modules into microservices, you’ll want them to communicate over HTTP or messaging. If modules are too dependent on internal calls, you’ll face rewrites. By keeping dependencies modular and replaceable, your system adapts without needing a total overhaul.
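The three principles can be sketched in a few lines of plain Java. The names here (`PayrollApi`, `InProcessPayrollApi`, `ProfileService`) are illustrative, not taken from the article’s codebase: the Profile module depends only on an interface, receives its implementation through the constructor, and never touches a concrete Payroll class.

```java
import java.util.HashSet;
import java.util.Set;

// The contract the Profile module depends on. It never sees a concrete
// Payroll class, only this interface (loose coupling).
interface PayrollApi {
    void initializePayroll(String employeeId);
}

// In-process implementation, as used in monolith mode.
class InProcessPayrollApi implements PayrollApi {
    private final Set<String> initialized = new HashSet<>();

    @Override
    public void initializePayroll(String employeeId) {
        initialized.add(employeeId);
    }

    public boolean isInitialized(String employeeId) {
        return initialized.contains(employeeId);
    }
}

// The Profile module receives its PayrollApi via constructor injection,
// so the implementation is replaceable without changing this class
// (dependency injection + replaceable dependencies).
class ProfileService {
    private final PayrollApi payrollApi;

    ProfileService(PayrollApi payrollApi) {
        this.payrollApi = payrollApi;
    }

    void createProfile(String employeeId) {
        // ... persist profile details ...
        payrollApi.initializePayroll(employeeId);
    }
}
```

Swapping `InProcessPayrollApi` for an HTTP-backed implementation later requires no change to `ProfileService` — that is the whole point.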

Understanding the Modular Monolith with an Example

To better understand this architecture, let’s take the example of an Employee Management System. The system has two core functional modules:

  • Payroll: Handles salary calculations, deductions, and payments.
  • Profile: Manages employee personal details, departments, and roles.

Initially, the system can be built as a monolithic application, where both modules coexist in a single deployment. However, the design must consider the possibility of splitting the modules into separate services in the future.

Monolithic Setup

  • The Payroll and Profile modules will share the same runtime but have clear boundaries in terms of responsibilities.
  • Each module will have its own web layer, business logic, and database schema.
  • Communication between modules will happen via direct in-process method calls.

Modules are packaged together and deployed as a single runtime, interacting via direct in-process calls

Scaling Teams and Services: The Case for Splitting

Let’s paint a scenario. Imagine this employee management application starts gaining a growing number of users, and with that, the demand for new features also rises. To keep up, management decides to expand the engineering team. To handle both the workload and organizational complexity more effectively, they choose to split the team into two — one focused on employee profile management and the other on payroll management. This setup aligns with the natural division of responsibilities, allowing teams to work independently and deliver features faster. At this point, splitting the application into separate services for each module becomes a logical next step to support team autonomy, scalability, and maintainability.

Future Microservices Setup

  • The Payroll and Profile modules can be deployed as independent services.
  • Module dependencies will be replaced with HTTP calls or messaging systems like Kafka for inter-service communication.
  • Developers have the flexibility to choose between maintaining consistency through distributed transactions over network calls or adopting an eventual consistency approach, based on the requirements and trade-offs of their use case.
  • While many monoliths might use a shared database, I propose a database per module approach. This doesn’t necessarily mean separate database instances — it can be achieved by using separate schemas within the same database. This promotes clear module boundaries and minimizes cross-module coupling at the data level.

Each module is deployed as a separate runtime; the modules interact with each other over the network (HTTP)
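In microservices mode, the in-process `PayrollApi` implementation is replaced by one that makes a network call. The article’s repository wires this with Spring’s `WebClient`; as a dependency-free illustration, here is the same idea using the JDK’s built-in `java.net.http.HttpClient` (class and endpoint names are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical HTTP-backed counterpart to the in-process PayrollApi.
// The calling module's code is unchanged: it still sees only the interface.
class HttpPayrollApi {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String payrollServiceUrl;

    HttpPayrollApi(String payrollServiceUrl) {
        this.payrollServiceUrl = payrollServiceUrl;
    }

    // Endpoint construction kept separate so it is easy to verify.
    URI initializeUri(String employeeId) {
        return URI.create(payrollServiceUrl + "/payroll/" + employeeId + "/initialize");
    }

    void initializePayroll(String employeeId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(initializeUri(employeeId))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        client.send(request, HttpResponse.BodyHandlers.discarding());
    }
}
```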

By starting with a modular monolith, we can deliver features quickly while preparing the architecture for scalability and separation.

Key Principles

  • Split the application into clear domains, each encapsulated as a module.
  • Ensure these modules are independent — Profile shouldn’t peek into Payroll’s database, no matter how tempting it might be.
  • Each module has its own:
      ◦ Web layer: Handles incoming requests through REST controllers or endpoints, serving as the entry point for external interactions.
      ◦ API layer: Defines public interfaces that expose the module’s operations, promoting loose coupling between modules.
      ◦ Implementation layer: Contains business logic, API implementations, and database interactions, encapsulating the module’s internal workings.
  • Separation of Concerns — modules should interact via APIs or interfaces, not by peeking under the hood.
  • Database Independence — Separate schemas for each module ensure minimal coupling. Avoid cross-module database dependencies, like foreign keys between tables in different schemas. (Trust me, future-you will thank you when it’s time to split the modules into services.)
  • Non-Functional Conventions — Centralised logging, security, and monitoring to reduce duplication across modules.
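Database independence can be made concrete by giving each module its own `DataSource`, each pointing at its own schema of the same database instance. A minimal sketch with Spring Boot’s `DataSourceBuilder` — the JDBC URLs, schema names, and credentials are illustrative, not from the article’s repository:

```java
import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// One DataSource per module, pointing at separate schemas of the same
// database instance. No foreign keys cross the schema boundary.
@Configuration
public class ModuleDataSourceConfig {

    // Profile module owns the "profile" schema.
    @Bean
    public DataSource profileDataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://localhost:5432/employees?currentSchema=profile")
                .username("profile_app")
                .password("secret")
                .build();
    }

    // Payroll module owns the "payroll" schema.
    @Bean
    public DataSource payrollDataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://localhost:5432/employees?currentSchema=payroll")
                .username("payroll_app")
                .password("secret")
                .build();
    }
}
```

Using separate credentials per schema (as above) additionally lets the database itself enforce the boundary: the Profile module physically cannot read Payroll’s tables.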

Managing Transactions in a Modular Monolith

Transactions are simple in a traditional monolith — everything happens in one runtime, with a single database. But a modular monolith introduces a twist: each module has its own database.

Alternatively, you could choose to keep a single schema for all modules while being careful not to create dependencies between entities belonging to different modules. This approach avoids the complexity of multiple schemas but introduces a different challenge: it puts the burden on development teams to manually ensure that cross-module relationships are avoided. This adds cognitive load and increases the risk of accidental coupling, especially as the application evolves.

A cleaner and more scalable approach is to create separate schemas for each module and ensure strict isolation at the database level. Each schema is associated with its respective module, which helps enforce domain boundaries. To achieve this, we configure separate data sources for each schema, making each module independently responsible for its persistence.

While this separation simplifies domain modelling, it introduces a new challenge: managing transactions across multiple data sources. Distributed transactions require coordination to ensure that operations involving multiple data sources succeed or fail as a single unit. This means handling the lifecycle of transactions explicitly, which can complicate the application logic.

Fortunately, tools like Atomikos can help manage this complexity. Atomikos supports distributed transactions by coordinating separate transaction managers for each data source. Each module’s data source is assigned its own transaction manager, and Atomikos ensures consistency across them by providing a unified transaction lifecycle.

This approach strikes a balance between maintaining clear separation of module responsibilities at the database level and ensuring robust transactional integrity across the application. In practice, it enables modules like Profile and Payroll to operate independently while still supporting atomic operations. For instance, when creating a new profile and initializing payroll as part of a single process, Atomikos ensures that both actions are either committed together or rolled back entirely, preserving consistency.

For those interested in the details about how datasources are configured and how we are using distributed transaction management with Atomikos, see the section below. Otherwise, feel free to skip ahead.

Code listing: Configuring the Employee Profile datasource and transaction manager (full source in the GitHub repository linked below).
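To give a flavour of what that configuration looks like, here is a hedged sketch of an Atomikos setup: each module’s schema gets its own XA-capable data source, and a single JTA transaction manager coordinates commits and rollbacks across them. Resource names, URLs, and credentials are illustrative; the article’s repository is the authoritative version.

```java
import java.util.Properties;
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import com.atomikos.jdbc.AtomikosDataSourceBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class JtaConfig {

    // XA data source for the Profile module's schema.
    @Bean(initMethod = "init", destroyMethod = "close")
    public AtomikosDataSourceBean profileDataSource() {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName("profileDs");
        ds.setXaDataSourceClassName("org.postgresql.xa.PGXADataSource");
        Properties props = new Properties();
        props.setProperty("url", "jdbc:postgresql://localhost:5432/employees?currentSchema=profile");
        props.setProperty("user", "profile_app");
        props.setProperty("password", "secret");
        ds.setXaProperties(props);
        return ds;
    }

    // A payroll data source would be configured the same way, with
    // uniqueResourceName "payrollDs" and the payroll schema.

    // Single JTA transaction manager spanning both data sources, so a
    // @Transactional method touching Profile and Payroll commits or
    // rolls back both as one unit.
    @Bean
    public JtaTransactionManager transactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);
        return new JtaTransactionManager(new UserTransactionImp(), utm);
    }
}
```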

Dynamic Bean Wiring for Deployment Modes

A key advantage of a modular monolith is its flexibility to adapt between deployment modes — monolith or microservices — without requiring major code changes. One way to achieve this is by dynamically wiring the appropriate bean implementation based on the deployment configuration.

For example, consider a scenario where the Profile module needs to interact with the Payroll module. Depending on the deployment mode, this interaction could either be:

  1. An HTTP call (in microservices mode), or
  2. An in-process Java call (in monolith mode).

Here’s how we can configure this dynamically using Spring’s @ConditionalOnProperty annotation:

// In microservices mode: HTTP-based implementation
@Bean
@ConditionalOnProperty(name = "deployment.mode", havingValue = "micro")
public PayrollApi payrollApi(WebClient.Builder webClientBuilder) {
    return new PayrollApiHttpImpl(webClientBuilder, payrollServiceURL);
}

// In monolith mode (the default when the property is absent): in-process implementation
@Bean
@ConditionalOnProperty(name = "deployment.mode", havingValue = "monolith", matchIfMissing = true)
public PayrollApi payrollApi(SalaryRepository salaryRepository) {
    return new PayrollApiImpl(salaryRepository);
}

In this configuration:

  • If deployment.mode is set to micro (microservices mode), the application wires an HTTP-based implementation of the PayrollApi interface.
  • If the property is not set to micro (defaulting to monolith mode), we use the in-process implementation.

Similarly, we can dynamically wire the appropriate implementation for an asynchronous event gateway:

// AsyncGateway is an interface that enables async communication between modules.
public interface AsyncGateway<T extends BaseEvent> {
    void publish(T event);
}

// ProfileHttpAsyncGateway is an HTTP-based implementation of AsyncGateway,
// while ProfileSpringEventsGateway is a Spring Events based implementation.
@Bean
public AsyncGateway<ProfileEvent> profileEventPublisher(ApplicationEventPublisher eventPublisher, PayrollApi payrollApi) {
    if (payrollApi instanceof PayrollApiHttpImpl) {
        return new ProfileHttpAsyncGateway(payrollApi);
    }
    return new ProfileSpringEventsGateway(eventPublisher);
}
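To see what sits behind such a gateway in monolith mode, here is a minimal in-process sketch of the `AsyncGateway` contract in plain Java. The event classes and subscriber mechanism are illustrative stand-ins, not the article’s Spring Events implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal stand-ins for the article's event types.
class BaseEvent {}

class ProfileEvent extends BaseEvent {
    final String employeeId;
    ProfileEvent(String employeeId) { this.employeeId = employeeId; }
}

interface AsyncGateway<T extends BaseEvent> {
    void publish(T event);
}

// In-process gateway: subscribers are plain callbacks, so in monolith
// mode "async" communication is just a method call hidden behind the
// interface. In microservices mode the same interface would front an
// HTTP call or a message broker.
class InProcessGateway<T extends BaseEvent> implements AsyncGateway<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    @Override
    public void publish(T event) {
        subscribers.forEach(s -> s.accept(event));
    }
}
```

Because publishers only know the `AsyncGateway` interface, the transport can change from in-process to networked without touching the publishing module.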

Extensibility and Sophistication

While this is a straightforward example, you can adopt more sophisticated approaches for dynamic configuration. For instance:

  1. Profiles and Conditional Beans: Use Spring profiles or externalized configuration files to drive the bean wiring.
  2. Dependency Injection Frameworks: Leverage advanced DI frameworks or factory classes for modular and decoupled bean creation.
  3. Feature Toggles: Implement feature toggle frameworks to dynamically switch between implementations at runtime.
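As one example of the first option, the `@ConditionalOnProperty` wiring shown earlier could instead be driven by Spring profiles. This is a sketch of the alternative, reusing the article’s bean names; the `payroll.service.url` property and its default are assumptions:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.web.reactive.function.client.WebClient;

// Profile-driven alternative to @ConditionalOnProperty: activate with
// -Dspring.profiles.active=micro to get the HTTP-based PayrollApi bean.
@Configuration
public class PayrollApiConfig {

    @Value("${payroll.service.url:http://localhost:8082}")
    private String payrollServiceURL;

    @Bean
    @Profile("micro")
    public PayrollApi httpPayrollApi(WebClient.Builder webClientBuilder) {
        return new PayrollApiHttpImpl(webClientBuilder, payrollServiceURL);
    }

    // "!micro" means: any run where the micro profile is not active,
    // i.e. monolith mode is the default.
    @Bean
    @Profile("!micro")
    public PayrollApi inProcessPayrollApi(SalaryRepository salaryRepository) {
        return new PayrollApiImpl(salaryRepository);
    }
}
```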

Deployment Modes: Monolith or Microservices

Using Maven Profiles for Build Flexibility

To support both deployment modes, we use Maven profiles to generate appropriate build artifacts:

  • Monolith Profile: Packages all modules into a single JAR.
  • Microservices Profiles: Builds and packages each module independently.

Monolith Profile Configuration

<profile>
  <id>monolith</id>
  <activation>
    <property>
      <name>deployment.mode</name>
      <value>monolith</value>
    </property>
  </activation>
  <build>
    <finalName>employee-management-monolith</finalName>
    <directory>${project.basedir}/target/monolith</directory>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</profile>

Microservices Profile Example (Profile Module):

<profile>
  <id>employee-profile</id>
  <activation>
    <property>
      <name>employee-profile</name>
    </property>
  </activation>
  <build>
    <finalName>employee-profile-app</finalName>
    <directory>${project.basedir}/target/profile</directory>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
          <excludes>
            <exclude>
              <groupId>com.tech</groupId>
              <artifactId>employee-payroll-impl</artifactId>
            </exclude>
          </excludes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>

Building and Running the Apps

Monolith Mode

mvn clean install -Pmonolith
java -jar target/monolith/employee-management-monolith.jar

The above command runs the application as a single Java process, with the Profile and Payroll modules embedded together and talking to each other in-process.

Microservice Mode

# Build Profile Module
mvn clean install -Pemployee-profile
java -jar target/profile/employee-profile-app.jar
# Build Payroll Module
mvn clean install -Pemployee-payroll
java -jar target/payroll/employee-payroll-app.jar

The above commands run both modules as separate services, and we didn’t change anything in the application code to enable this. While these commands illustrate how to run the application in both modes, I’d prefer to encapsulate this logic in a well-organized script for better maintainability and usability. Such a script can handle the setup, build, and deployment process in a streamlined manner.

Reference

For the complete codebase and detailed project setup, visit the Employee Management Project on GitHub.

Conclusion: Build for the Present, Plan for the Future

A modular monolith is the perfect starting point for modern applications. It lets you:

  • Deliver features quickly, without the overhead of microservices.
  • Lay a strong foundation for scalability and independence.
  • Transition to microservices seamlessly when the time comes.

My takeaway? Start modular. It keeps you out of the usual traps of monoliths and microservices while giving you room to adapt as your application evolves. A modular monolith isn’t just a stepping stone; it’s a solid foundation to build on.

What are your thoughts on modular monoliths? Have you faced similar challenges when scaling your architecture?
