Microservices or Monoliths – Fences and Neighbors

At the end of my last post, “What Makes a Monolith Monolithic?”, I stated that I didn’t consider the term “monolithic” to be inherently derogatory. It is, rather, a descriptive term relating to the style of organizing an application’s architecture. Depending on the context the system operates within, a monolithic architectural style could lie anywhere on the continuum between perfectly suited and perfectly disastrous. Placing it on that continuum requires a sense of what qualities are most needed or desired and which can be traded off in their stead. Everything comes with a cost, and attempting to ignore that fact merely sets us up for unpleasant future surprises.

After an initial period of unbridled enthusiasm, opinion seemed to gel around the idea that highly distributed application architectures (aka microservice architectures) were not suitable for all contexts. There are prerequisites for jumping into the microservices pool in terms of problem architecture, infrastructure, and organization. Attempting to shoehorn a microservice architecture into an environment that cannot support it will be overly expensive at best and a failure of apocalyptic proportions at worst.

There are many aspects of application design that are commonly recognized as beneficial: modularity, loose coupling, high cohesion, and separation of concerns. It is critical to realize that these qualities can be found in systems with microservice architectures, in monolithic systems, and in everything in between. Distributed architectures are not necessary for modularity, nor for any of the other qualities listed. In fact, one could easily create an application with a microservice architecture whose qualities are the opposite of these desirable ones.
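To make that concrete, here is a minimal sketch (all names are hypothetical, invented for illustration) of a modular monolith: one deployable unit whose components interact only through narrow interfaces, giving modularity and loose coupling without any network boundary.

```go
// A minimal sketch of a "modular monolith": one deployable unit whose
// components interact only through narrow interfaces. Real package
// boundaries are simulated in a single file for brevity; all names are
// hypothetical.
package main

import "fmt"

// InvoiceService is all the ordering code knows about billing. The
// concrete implementation can change (or even move behind a network
// boundary later) without touching its consumers.
type InvoiceService interface {
	Invoice(orderID string, amount float64) error
}

// ledgerBilling stands in for a "billing" module whose internals are
// invisible to callers.
type ledgerBilling struct{}

func (ledgerBilling) Invoice(orderID string, amount float64) error {
	fmt.Printf("invoiced order %s for %.2f\n", orderID, amount)
	return nil
}

// OrderService stands in for an "orders" module: it depends on the
// interface, not the implementation.
type OrderService struct {
	billing InvoiceService
}

func (o *OrderService) Place(orderID string, amount float64) error {
	// ...validation, persistence, etc. would go here...
	return o.billing.Invoice(orderID, amount)
}

func main() {
	orders := &OrderService{billing: ledgerBilling{}}
	_ = orders.Place("o-42", 19.99)
}
```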

There are, however, situations where the benefits of a microservice architecture outweigh the costs and complexity. The ability to independently deploy and scale the various parts of an application is a major benefit, in my opinion. A well-designed microservice architecture can even allow for components of an application to be replaced on the fly. These capabilities are not unique to microservice architectures, but they are arguably easier to achieve with one than with other application architectures.

Real design, balancing both costs and benefits, is required. Sticking a bit of network in between the components is insufficient to ensure success. Deliberate design, especially as the boundaries multiply, is critical for an effective system. Identifying and providing for those boundaries at the conceptual level (i.e. before they become physical) is key. Good fences can either make for good neighbors, or they can create a maze of barriers.

Microservices, Monoliths, and Modularity

There are very valid reasons for considering a microservice architecture (MSA) when building/evolving an application. In my opinion, however, forcing modularity isn’t one of them.

Just the other day, I saw a tweet from Simon Brown saying this same thing.

I still like his comment from two years back: “I’ll keep saying this … if people can’t build monoliths properly, microservices won’t help”. I believe that if you’re having problems building a monolith properly, trying to use a distributed architecture to force modularity may actually cause harm.

MSAs, like any distributed application architecture, involve increased complexity and costs; table stakes, if you will. Like an iceberg, there’s both a lot more to it than just what’s showing above the waterline and a fair amount of hazard for the unwary. If a development team cannot or will not comply with design guidelines (e.g. modularity requirements), injecting additional complexity is probably not the solution you need.

Distributing an application makes it harder to accidentally entangle different concerns, but it doesn’t make it impossible.
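As an illustration, consider this deliberately bad, hypothetical sketch (service names and schema are invented): two separately deployed services that remain coupled because one reaches directly into the other’s database. The network boundary is intact; the modular boundary is not.

```go
// A deliberately *bad*, hypothetical sketch: two separately deployed
// services that are still entangled because the orders service reads
// the customer service's private table directly. (A SQL driver import
// is omitted for brevity.)
package main

import "database/sql"

// creditLimit lives in the "orders" service but depends on the internal
// schema of the "customers" service. Any schema change there now breaks
// deployments here -- distribution did not prevent the entanglement.
func creditLimit(db *sql.DB, customerID string) (float64, error) {
	var limit float64
	// customers_internal is owned by another team/service.
	err := db.QueryRow(
		`SELECT credit_limit FROM customers_internal WHERE id = $1`,
		customerID,
	).Scan(&limit)
	return limit, err
}

func main() {} // sketch only; wiring up a real database is beside the point
```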

I’d argue that making it harder to accidentally break modularity addresses neither of the groups I mentioned earlier: those that cannot or will not comply. It’s ironic, but those who fail to understand the need for modularity can be very creative in their “solutions”, regardless of the obstacles. Likewise, those who refuse to comply.

In short, distribution as a means of “ensuring” modularity fails the fitness for purpose test.

The situation becomes worse when you factor in the additional complexity inherent in a distributed system. Likewise, there’s the cost of the table stakes (infrastructure, process, staffing, etc.) mentioned above. Of course, having abandoned the principle of cause and effect, one could attempt some “creative” workarounds to avoid having to pay the price (in other words, adding more and more complexity).

When you introduce significant additional complexity (with all its attendant risk) with little chance of the technique actually achieving its goal, you’ve caused harm.

These concerns are not solely limited to the application architecture. Distributing the data architecture has the same limitations in terms of ensuring modularity and introduces additional complexity. Adding boundaries adds the need for governance. A disciplined, monolithic team can maintain modularity in a monolithic data architecture. Multiple separate teams trying to share a monolithic data architecture will either experience a crippling level of governance overhead or a complete breakdown in modularity.

MSAs can be useful when you need independently scalable and replaceable components. When you have multiple teams working on one logical application, they can be appropriate as well. Using the technique when the cost outweighs the potential payoff, however, is a losing bet.

Monolithic Applications and Enterprise Gravel

It’s been almost a year since I’ve written anything about microservices, and while a lot has been said on that subject, it’s one I still monitor to see what pops up. The opening of a blog post that I read last week caught my attention:

Coined by Melvin Conway in 1968, Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” In software development terms, Conway’s Law suggests that a given team will build apps that mirror the team’s organizational structure. Siloed functional teams produce siloed application architectures.

The result is a monolith: A massive application whose functionality is crammed into a few crowded parts. Scaling a simple pattern to the enterprise level often results in a monolith.

None of this is wrong, per se, but in reading it, one could come to a wrong conclusion. Siloed functional teams (particularly where the culture of the organization encourages siloed business units) produce siloed application architectures that are most likely monoliths. From an enterprise IT architecture perspective, though, the result is not monolithic. Googling the definition of “monolithic”, we get this:

mon·o·lith·ic
/ˌmänəˈliTHik/
adjective
  1. formed of a single large block of stone.
  2. (of an organization or system) large, powerful, and intractably indivisible and uniform.
    “rejecting any move toward a monolithic European superstate”
    synonyms: inflexible, rigid, unbending, unchanging, fossilized
    “a monolithic organization”

Rather than “a single large block of stone”, we get gravel. The architecture of the enterprise’s IT isn’t “large, powerful, and intractably indivisible and uniform”. It may well be large, but its power in relation to its size will be lacking. Too much effort is wasted reinventing wheels and maintaining redundant data (most likely with no real sense of which set of data is authoritative). Likewise, while “intractably indivisible” isn’t a virtue, being intractable while also lacking cohesion is worse. Such an IT architecture is a foundation built on shifting sand. Lastly, whether the EITA is uniform or not (and I would give good odds that it’s not), is irrelevant given the other negative aspects. Under the circumstances, worrying about uniformity would be like worrying about whether the superstructure of the Titanic had a fresh paint job.

Does this mean that microservices are the answer to having an effective EITA? Hardly.

There are prerequisites for being able to support a microservice architecture; table stakes, if you will. However, the service-oriented mindset can be of value whether it’s applied as far down as the intra-application level (i.e. microservices – it is an application architecture pattern) or inter-application (the more traditional SOA). Where the line is drawn depends on the context of the application(s) and their ecosystem. What can be afforded and supported are critical aspects of the equation at all levels.

What is necessary for an effective EITA is a full-stack approach. Governance and data architecture in particular are important aspects to consider. The goal is consistent, intentional alignment across all levels (enterprise, EITA, solution, and application), promoting a cohesive architecture throughout, not a top-down dictatorship.

Large edifices that last are built from smaller pieces that fit together on purpose.

Can you afford microservices?

Much has been written about the potential benefits of designing applications using microservices. A fair amount has also been written about the potential pitfalls. On this blog, there’s been a combination of both. As I noted in “Are Microservices the Next Big Thing?”: It’s not the technique itself that makes or breaks a design, it’s how applicable the technique is to the problem at hand.

It’s important, however, to understand that “applicable to the problem at hand” isn’t strictly a technical question. The diagram in Philippe Kruchten’s tweet captures the full picture of a workable solution.

As Kruchten pointed out in his post ‘Three “-tures”: architecture, infrastructure, and team structure’, the architecture of the system, the system’s infrastructure, and the structure of the team developing the system are mutually supporting. These aspects of the architecture of the solution must be kept aligned in order for the solution to work. In my opinion, it should be taken as a given that this architecture of the solution must also align with the architecture of the problem as a minimum condition to be considered fit for purpose.

Martin Fowler alluded to the need to align architecture, infrastructure, and team structure in “MicroservicePrerequisites” when he listed rapid provisioning, basic monitoring, and rapid deployment as pre-conditions for microservices. These capabilities not only represent infrastructure requirements, but also “…imply an important organizational shift – close collaboration between developers and operations: the DevOps culture”. Permanent product teams building and operating applications are, in my opinion, an extremely effective way to deliver IT. It must be realized, however, that effectiveness comes with a price tag, in terms of people, tools, and infrastructure.
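As a minimal sketch of the “basic monitoring” table stakes (the endpoint name and payload are illustrative assumptions, not drawn from Fowler’s post): every independently deployed piece needs, at a minimum, to report its own health so the platform can route around and restart failing instances.

```go
// A minimal sketch of the "basic monitoring" table stakes: a health
// endpoint the platform can probe before routing traffic to an
// instance. A real service would verify its dependencies (database,
// queues, downstream services) rather than unconditionally report "ok".
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type health struct {
	Status  string `json:"status"`
	Version string `json:"version"`
}

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(health{Status: "ok", Version: "1.0.0"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```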

In “MicroservicePremium”, Fowler further stated “don’t even consider microservices unless you have a system that’s too complex to manage as a monolith”, identifying “sheer size” as the biggest source of complexity. Size will encompass both technical and organizational concerns:

The microservice approach to division is different, splitting up into services organized around business capability. Such services take a broad-stack implementation of software for that business area, including user-interface, persistent storage, and any external collaborations. Consequently the teams are cross-functional, including the full range of skills required for the development: user-experience, database, and project management.

Expanding on this, the ideal organization will be one cross-functional team per microservice/bounded context. Even with very small teams, this requires either significant expenditure or a compromise of how the architectural and social aspects (i.e. Conway’s Law) work together in this architectural style.

Other requirements inherent in a microservice architecture are things like API governance and infrastructure services to support distributed processing (e.g. a service registry). Data considerations that are trivial in a monolithic environment, like transactions, referential integrity, and complex queries, are absent in a distributed environment, and facilities may need to be bought or built to compensate. In a distributed environment, even error logging requires special consideration to avoid drowning in complexity.
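One common mitigation (offered here as an illustrative assumption, not something prescribed by the post) is tagging every log entry with a correlation ID that is propagated across service boundaries, so a single request can be traced through multiple services. A minimal sketch:

```go
// A minimal sketch of correlation-ID logging: each request gets (or
// propagates) an ID that is attached to every log line and echoed to
// callers, so one failure can be traced across service boundaries.
// The header name and wiring are illustrative.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
)

const corrHeader = "X-Correlation-ID" // assumed convention

func withCorrelation(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get(corrHeader)
		if id == "" { // first hop: mint an ID
			b := make([]byte, 8)
			_, _ = rand.Read(b)
			id = hex.EncodeToString(b)
		}
		w.Header().Set(corrHeader, id) // echo for the caller
		log.Printf("corr=%s %s %s", id, r.Method, r.URL.Path)
		next(w, r)
	}
}

func main() {
	http.HandleFunc("/orders", withCorrelation(func(w http.ResponseWriter, r *http.Request) {
		// Outbound calls would copy corrHeader onto their own requests
		// so the next service logs under the same ID.
		w.WriteHeader(http.StatusAccepted)
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```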

The overhead in terms of organization, infrastructure, and tooling, whether in ideal or compromised form, will introduce complexity and cost. I would, in fact, expect compromises made to avoid costs to introduce even more complexity. If the profile of the system in terms of business value and necessary complexity (i.e. complexity inherent in the business function) warrants the additional overhead, then that overhead can represent a valid solution to the problem at hand. If, however, the complexity is solely created by the overhead, without an underlying need, the solution becomes suspect. Adding cost and complexity without offsetting benefits will likely lead to problems. Matching the solution to the problem and balancing those costs and benefits requires the attention of an architectural role at the application level, rather than relying on each team to work independently and hoping for coherence and economy.

“Microservices and API Complexity – Inside and Out” on Iasa Global

The signature benefit of a microservice architecture is that its highly granular nature allows for a great deal of flexibility in composing applications. Components are simplified by virtue of a high degree of focus. The ability to replace individual components is enhanced by the modularity inherent in the style.

A very significant drawback to microservice architecture is that its highly granular nature can lead to a great deal of complexity in composing applications. Highly focused components can force service consumers to become more involved in the internals of an interaction than they might otherwise wish. Unwanted options can become more of a source of confusion than useful modularity.
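To make that drawback concrete, here is a hypothetical sketch (the endpoints are invented) of the consumer-side burden: assembling one logical view means orchestrating several fine-grained calls and owning every sequencing and partial-failure decision that a coarser-grained interface would have hidden.

```go
// A hypothetical sketch of the consumer-side burden of very fine-grained
// services: the caller must orchestrate several calls itself. The
// endpoints are invented for illustration.
package main

import (
	"fmt"
	"net/http"
)

func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s: %s", url, resp.Status)
	}
	return nil
}

// loadOrderPage now owns the composition logic -- and every failure
// policy -- that a coarser-grained service would encapsulate.
func loadOrderPage(orderID string) error {
	for _, url := range []string{
		"http://orders.local/orders/" + orderID,
		"http://customers.local/customers/for-order/" + orderID,
		"http://pricing.local/quotes/" + orderID,
		"http://shipping.local/status/" + orderID,
	} {
		if err := fetch(url); err != nil {
			return err // or retry? degrade? the consumer must decide
		}
	}
	return nil
}

func main() {
	fmt.Println(loadOrderPage("o-42"))
}
```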

How do you resolve this paradox? See the full post on the Iasa Global site

Technical Debt and Rolling Re-writes (Who Needs Architects?)

If you think building a system is challenging, try maintaining one.

Tom Cagley’s recent post “Plan to Throw One Away Re-Read Saturday: The Mythical Man-Month, Part 11” was a good reminder that while “technical debt” may be something currently on the radar for many, it’s far from a new phenomenon. The concept of instant legacy applications was in place forty years ago when Frederick Brooks wrote his masterpiece, even if they weren’t called that. As Tom observed in the post:

Rarely is the first attempt useful to the end consumer, and the usefulness of that first attempt is less in the code than in the feedback it generates. Software development is no different. The initial conceptual design and anticipated technical architecture of a large project rarely stands up to the rigors of the discovery process, and those designs should be learned from and then thrown away.

The faulty assumptions and design flaws accumulate not only from sprint to sprint leading up to the initial release, but also from release to release. Even when a product is that seriously flawed, however, throwing it away and starting over is easier said than done. While sunk costs cannot be recovered, too sanguine an attitude towards them may not enhance your credibility with the customer. Having to pay for the same thing over and over can make them grumpy.

This sets up a dilemma, one that frequently leads to living with technical debt and attempting to incrementally patch it up. There are limits, however, to the number of band-aids that can be applied. This might make it tempting to propose a rewrite, but as Erik Dietrich stated in “The Myth of the Software Rewrite”:

Sure, they know things now that they didn’t know when they started on this code 3 years ago. But won’t the same thing be true in 3 years? Won’t the developers then be looking at the code and saying, “this is a mess — if only we knew in 2015 what we now know in 2018!” And, beyond that, what makes you think that giving the same group of people the same marching orders won’t result in the same kind of code?

The “big rewrite from scratch because this is a mess” is a losing strategy.

Fortunately, there is an alternative. Quoting Tom Cagley again from the same post as above:

If change is both inevitable and good (within limits), then both systems and organizations (a type of system) need to be engineered to support and facilitate change. Architecturally, techniques such as modularization, object-oriented design and other processes that foster simplification and incremental change create an environment in which change isn’t avoided, but rather encouraged.

While we may laugh at the image of changing a tire while the vehicle is in motion, it is an accurate metaphor. Customers expect flexibility and change on the go; waiting equals lost business. The keys to evolving in place are having an intentionally designed, modular architecture and an understanding of where the weaknesses lie. Both of these are concerns that reside squarely on the architect’s plate.

Modularity not only makes an application more easily maintainable via separation of concerns, but it also embraces change by making components replaceable. This is one of the qualities that has made microservices such a hot topic, although it would be a mistake to think that microservices are the only way (or best way in all cases) to achieve modularity.
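A minimal sketch of what that replaceability buys you (all names are hypothetical): old and rewritten implementations live behind the same interface, and a toggle shifts traffic module by module, so the system can be thrown away a little at a time rather than in a big bang.

```go
// A minimal sketch of incremental replacement: the old and rewritten
// implementations live behind the same interface, and a toggle shifts
// traffic module by module instead of via a big-bang cutover. All
// names are hypothetical.
package main

import "fmt"

type TaxCalculator interface {
	Tax(amount float64) float64
}

// legacyTax is the code being retired.
type legacyTax struct{}

func (legacyTax) Tax(a float64) float64 { return a * 0.0725 }

// rewrittenTax is the replacement: same observable behavior, new internals.
type rewrittenTax struct{}

func (rewrittenTax) Tax(a float64) float64 { return a * 0.0725 }

// pick lets the rewrite be rolled out (and rolled back) one module at a
// time, e.g. driven by configuration or a feature flag.
func pick(useRewrite bool) TaxCalculator {
	if useRewrite {
		return rewrittenTax{}
	}
	return legacyTax{}
}

func main() {
	calc := pick(true)
	fmt.Println(calc.Tax(100))
}
```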

Modularity brings benefits beyond the purely technical as well. Rewrites of a fraction of an application are more easily sold than big-bang efforts. Demonstrating forethought (while you can’t predict what the change will be, predicting the need for change is more of a sure thing) shows concern for the customer’s welfare, which should make for a better relationship.

Being able to throw a system away a little at a time allows us to keep the car on the road while it changes and adapts to changing conditions.

“Microservice Architectures aren’t for Everyone” on Iasa Global

Simon Brown says it nicely.

Architect Clippy is a bit more snarky.

In both cases, the message is the same: microservice architectures (MSAs), in and of themselves, are not necessarily “better”. There are aspects where moving to a distributed architecture from a monolithic one can improve a system (e.g. enabling selective load balancing and incremental deployment). However, if someone isn’t capable of building a modular monolith, distributing the pieces is unlikely to help matters.

See the full post on the Iasa Global site (a re-post, originally published here).

Microservices, Monoliths, and Conflicts to Resolve

Two tweets, opposites in position, and both have merit. Welcome to the wonderful world of architecture, where the only rule is that there is no rule that survives contact with reality.

Enhancing resilience via redundancy is a technique with a long pedigree. While microservices are a relatively recent and extreme example of this, they’re hardly groundbreaking in that respect. Caching, mirroring, load-balancing, etc. have been with us a long, long time. Redundancy is a path to high availability.
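As a minimal illustration (the addresses are invented), here is the essence of redundancy-for-availability: requests rotate across interchangeable replicas, so losing one instance degrades capacity rather than taking the capability down.

```go
// A minimal sketch of redundancy as a path to availability: requests
// rotate across interchangeable replicas. Addresses are invented; a
// real balancer would also health-check and evict failing backends.
package main

import (
	"fmt"
	"sync/atomic"
)

type pool struct {
	backends []string
	next     atomic.Uint64
}

// pick returns the next backend in round-robin order.
func (p *pool) pick() string {
	n := p.next.Add(1)
	return p.backends[int(n)%len(p.backends)]
}

func main() {
	p := &pool{backends: []string{"10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"}}
	for i := 0; i < 5; i++ {
		fmt.Println(p.pick())
	}
}
```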

Centralization (as exemplified by monolithic systems) can be a useful technique for simplification, increasing data and behavioral integrity, and promoting cohesion. Like redundancy, it’s a system design technique older than automation. There was a reason that “all roads lead to Rome”. Centralization provides authoritativeness and, frequently, economies of scale.

The problem with both techniques is that neither comes without costs. Redundancy introduces complexity in order to support distributing changes between multiple points and reconciling conflicts. Centralization constrains access and can introduce a single point of failure. Getting the benefits without incurring the costs remains a known issue.

The essence of architectural design is decision-making. Given that those decisions will involve costs as well as benefits, both must be taken into account to ensure that, on balance, the decision achieves its aims. Additionally, decisions must be evaluated in the greater context rather than in isolation. As Tom Graves is fond of saying, “things work better when they work together, on purpose”.

This need for designs to not only be internally optimal, but also optimized for their ecosystem means that these, as well as other principles, transcend the boundaries between application architecture, enterprise IT architecture, and even enterprise architecture. The effectiveness of this fractal architecture of systems of systems (both automated and human) is a direct result of the appropriateness of the decisions made across the full range of the organization to the contexts in play.

Since there is no one context, no rule can suffice. The answer we’re looking for is neither “microservice” nor “monolith” (or any other one tactic or technique), but fit to purpose for our context.

Form Follows Function on SPaMCast 353

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on learning styles and Steve Tendon discussing knowledge workers, along with a Form Follows Function installment on why microservice architectures are not a silver bullet.

In SPaMCast 353, Tom and I discuss my post “Microservice Architectures aren’t for Everyone”.

Form Follows Function on SPaMCast 347

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on project management in an Agile environment (aka “Project Management is Dead”) and a Software Sensei column on testing from Kim Pries, in addition to a Form Follows Function installment on microservices, DevOps, and Conway’s Law.

In SPaMCast 347, Tom and I discuss my “Fixing IT – Microservices and DevOps to the Rescue?” post, specifically on how microservice architectures are not just a technical approach but an organizational one as well.