There’s a moment in every engineer’s career when microservices feel like the answer to everything. I had that moment about six years ago.
Every new project, every greenfield product — same playbook: separate services, message queues, API gateways.
It felt like doing things “the right way”.
It took me shipping multiple products to realize something uncomfortable:
I was optimizing for scale before I had anything worth scaling.
The seduction of “doing it right”
If you’ve worked on systems at scale, microservices make intuitive sense.
You’ve seen:
- monoliths that became unmanageable
- deploy bottlenecks across teams
- one failure cascading into everything
So when you start something new, your instinct is:
“Let’s not repeat those mistakes.”
You split things early:
- auth service
- notification service
- billing service
Each with its own repo, deploy pipeline, and database.
On paper, it looks clean.
In reality, you just multiplied your complexity — for a product with zero users.
What actually happens at day zero
Here’s the pattern I’ve seen repeatedly when teams go microservices-first:
Week 1–2: You’re not building product. You’re building infrastructure:
- Docker setups
- service discovery
- inter-service communication
You’re solving distributed systems problems before you’ve validated your core feature.
Week 3–4: A simple change becomes a coordination exercise across:
- multiple services
- versioned APIs
- cross-service data consistency
What should’ve been one migration is now a system design problem.
Week 5–8: You realize:
- your “notification service” handles almost nothing
- your “auth service” is a thin wrapper around JWT
You built for scale you don’t have — and might never need.
Meanwhile, someone else shipped a monolith in two weeks and is already talking to users.
The monolith that actually works
The last products I’ve built started as modular monoliths.
Not spaghetti code. Not a big ball of mud.
A system with clear internal boundaries, but without distributed complexity.
```
src/
  modules/
    auth/
    flags/
    billing/
```
Each module:
- owns its domain
- exposes explicit interfaces
- stays isolated internally
But everything:
- deploys together
- shares one database
- communicates through function calls (not HTTP)
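The boundary style above can be sketched in a few lines. This is a minimal, hypothetical Python sketch — the module contents (`get_user`, `create_invoice`, the underscore convention for internals) are illustrative assumptions, not code from any real product:

```python
# --- modules/auth: private internals + one explicit interface ---

def _load_user(user_id: int) -> dict:
    # Private helper: the leading underscore signals "internal to auth".
    # Other modules should never call this directly.
    return {"id": user_id, "plan": "pro"}

def get_user(user_id: int) -> dict:
    """auth's explicit interface; the only thing other modules call."""
    return _load_user(user_id)

# --- modules/billing: depends on auth only through its interface ---

def create_invoice(user_id: int, amount: float) -> dict:
    # A plain in-process function call across the module boundary,
    # not an HTTP request to a separate service.
    user = get_user(user_id)
    return {"user": user["id"], "amount": amount}
```

The boundary is the same one you would draw between services — but crossing it costs a function call, not a network hop, a retry policy, and a versioned API.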
The result:
- faster iteration
- simpler debugging
- fewer moving parts
The key insight:
You don’t need distributed systems to have good architecture.
Where modular monoliths fail
A monolith doesn’t stay clean by default.
I’ve seen them break down when:
- modules start reaching into each other’s internals
- the database becomes a shared dumping ground
- “quick fixes” bypass boundaries
At that point, the problem isn’t architecture — it’s discipline.
A bad monolith is painful. A prematurely distributed system is worse.
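One way to make that discipline mechanical rather than aspirational is a small check in CI. Here is a minimal sketch, assuming a layout where modules live under one directory and mark internals with a leading underscore — both the layout and the naming convention are my assumptions, not something from this article (tools like import-linter do this more thoroughly):

```python
import ast
from pathlib import Path

def find_boundary_violations(modules_root: str) -> list[str]:
    """Flag imports that reach into another module's private
    (underscore-prefixed) internals, e.g. `from auth._db import ...`
    inside the billing module."""
    violations = []
    root = Path(modules_root)
    for py_file in root.rglob("*.py"):
        own_module = py_file.relative_to(root).parts[0]
        tree = ast.parse(py_file.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                parts = node.module.split(".")
                # Importing a different module's private member?
                if parts[0] != own_module and any(
                    p.startswith("_") for p in parts[1:]
                ):
                    violations.append(f"{py_file.name}: {node.module}")
    return violations
```

Run it as a test that fails when the list is non-empty, and “quick fixes” that bypass boundaries stop being quick.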
When microservices actually make sense
Microservices are not wrong. They’re often just early.
They start to make sense when:
- You have multiple teams that need independent deploy cycles
- A specific part of the system has distinct scaling needs
- You’ve observed real boundaries over time, not imagined them upfront
Notice what’s missing:
- “best practices”
- “future scaling”
- “this is how big companies do it”
The hidden tax of microservices
Every service adds cost:
- deploy pipelines to maintain
- monitoring and alerting to configure
- network failures (timeouts, retries, circuit breakers)
- data consistency challenges
- local development complexity
- cognitive load for every engineer
For a small team, this is a direct tax on velocity.
And at early stages, velocity is the only advantage you have.
The rule I follow now
Don’t design for scale. Design for change.
Scale problems are rare. Change problems are constant.
A modular monolith optimizes for change. Microservices optimize for scale.
Most early-stage products need the first — not the second.
My decision framework
When starting a product:
- From zero users to early traction → modular monolith
- Multiple teams / coordination pain → evaluate splitting
- A clear scaling bottleneck → extract that part only
Everything else is premature optimization.
What I optimize for now
My default setup:
- Modular monolith with clear boundaries
- Single PostgreSQL database
- Strong internal interfaces between modules
- Extraction only when the pain is real
Because it lets me:
- ship faster
- change direction quickly
- learn from real users
And that’s what actually matters early on.
I’d rather have a well-structured monolith serving real users than a perfectly designed microservices architecture serving nobody.