Microservices or Monoliths – Fences and Neighbors

[Photo of a fence separating fields from a road]

At the end of my last post, “What Makes a Monolith Monolithic?”, I stated that I didn’t consider the term “monolithic” to be inherently derogatory. It is, rather, a descriptive term relating to the style of organizing an application’s architecture. Depending on the context the system operates within, a monolithic architectural style could lie anywhere on the continuum between perfectly suited and perfectly disastrous. Placing it on that continuum requires a sense of what qualities are most needed or desired and which can be traded off in their stead. Everything comes with a cost, and attempting to ignore that fact merely sets us up for unpleasant future surprises.

After an initial period of unbridled enthusiasm, opinion seemed to gel around the idea that highly distributed application architectures (aka microservice architectures) were not suitable to all contexts. There are prerequisites for jumping into the microservices pool in terms of problem architecture, infrastructure, and organization. Attempting to shoehorn a microservice architecture into an environment that cannot support it will be overly expensive at best and a failure of apocalyptic proportions at worst.

There are many aspects of application design that are commonly recognized as beneficial: modularity, loose-coupling, high cohesion, and separation of concerns. It is critical to realize that these aspects can be found in systems with microservice architectures, monolithic systems, and everything in between. Distributed architectures are not necessary for modularity, nor any of those other aspects. In fact, one could easily create an application with a microservice architecture whose qualities are opposite to these desirable ones.
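
To make that point concrete, here is a minimal sketch (in Java, with hypothetical names like BillingService and CheckoutHandler invented for illustration) of modularity without distribution: two modules share one process and one deployable, yet the ordering side is coupled only to an interface – the same loose coupling a remote boundary would provide, minus the network.

```java
// ModularMonolith.java – a self-contained sketch; all names are hypothetical.
// Two "modules" live in one process and one deployable, yet the ordering
// module depends only on the BillingService interface, never the implementation.
public class ModularMonolith {

    record Invoice(String orderId, long cents) {}

    // The published boundary of the (hypothetical) billing module.
    interface BillingService {
        Invoice invoiceFor(String orderId);
    }

    // Implementation details stay hidden behind the boundary.
    static final class DefaultBillingService implements BillingService {
        public Invoice invoiceFor(String orderId) {
            return new Invoice(orderId, 4_200); // pricing, tax, etc. elided
        }
    }

    // A separate module: coupled to the interface, not the implementation.
    static final class CheckoutHandler {
        private final BillingService billing;

        CheckoutHandler(BillingService billing) {
            this.billing = billing;
        }

        Invoice checkout(String orderId) {
            return billing.invoiceFor(orderId);
        }
    }

    public static void main(String[] args) {
        var checkout = new CheckoutHandler(new DefaultBillingService());
        System.out.println(checkout.checkout("order-17"));
    }
}
```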

There are, however, situations where the benefits of a microservice architecture outweigh the costs and complexity. The ability to independently deploy and scale the various parts of an application is a major benefit, in my opinion. A well designed microservice architecture can even allow for the components of an application to be replaced on the fly. These features are not unique to microservice architectures, but they are arguably easier to achieve this way than with other application architectures.

Real design, balancing both costs and benefits, is required. Sticking a bit of network in between the components is insufficient to ensure success. Deliberate design, especially as the boundaries multiply, is critical for an effective system. Identifying and providing for those boundaries at the conceptual level (i.e. before they become physical) is key. Good fences can either make for good neighbors, or they can create a maze of barriers.

What Makes a Monolith Monolithic?

[Photo of Stonehenge, 1877]

It seems like everybody throws around the term “monolith”, but what do we mean by that?

Sam Newman started the ball rolling yesterday with this tweet:

My first response was a (semi) joke:

I say semi joke because, in truth, semantics (i.e. meaning) is critical. The English language has a horrible tendency to overload terms as it is, and in our line of work we tend to make it even worse. Lack of specificity obscures, rather than enlightens. The problem with the term “monolith” is that, while it’s a powerfully evocative term, it isn’t a simple one to define. My second response was closer to an actual definition:

The purpose of this post is to expand on that a bit.

The “mono” portion of the term is, in my opinion, the crucial part. I believe that quality of oneness is what defines a monolithic system. As I noted in the second tweet, it’s a matter of meta-coupling, whether that coupling exists in the form of deployment, data architecture, or execution style (Jeppe Cramon’s post “Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 3” shows how temporal coupling can turn a distributed system into a runtime monolith). The following tweets between Anne Currie and Sam illustrate the amorphous nature of what is and isn’t a monolith:

Modules that can be deployed to run in a single process need not be considered monolithic, if they’re not tightly coupled. Likewise, running distributed isn’t a guarantee against being monolithic if the components are tightly coupled in any way. The qualifier “in any way” is there because any of the types of coupling I mentioned above can be a deal killer. If all the “microservices” must be deployed simultaneously for the system to work, it’s a distributed monolith. If the communication is both synchronous and fault intolerant, it’s a distributed monolith. If there’s a single data store backing the entire system, it’s a distributed monolith. It’s not the modularity that defines it (you can have a modular monolith), but the inability to separate the parts without damaging the whole system.
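
As an illustration of the “synchronous and fault intolerant” case, consider the following sketch; the service name, host, and endpoint are all hypothetical. Checkout cannot complete unless the inventory service is reachable at that instant – two deployables, one runtime fate.

```java
// TemporalCoupling.java – a sketch of the "synchronous and fault intolerant"
// failure mode; the service name, host, and endpoint are all hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TemporalCoupling {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    static void checkout(String orderId) throws Exception {
        HttpRequest reserve = HttpRequest.newBuilder(
                URI.create("http://inventory.internal/reserve?order=" + orderId))
            .POST(HttpRequest.BodyPublishers.noBody())
            .build();

        // If the inventory service is down, this call throws and checkout
        // fails with it: two deployables, one runtime fate. A queue, an event,
        // or a retry with a timeout and fallback would loosen the coupling;
        // as written, the services must be up (and deployed) in lock-step.
        HttpResponse<String> response =
            HTTP.send(reserve, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) {
            throw new IllegalStateException("reservation failed: " + response.statusCode());
        }
    }

    public static void main(String[] args) throws Exception {
        checkout("order-17");
    }
}
```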

I would also point out that I don’t consider “monolithic” to be derogatory, in and of itself. There is a trade-off involved in terms of coupling and complexity (and cost). While I generally prefer more flexibility, there is always the danger of over-engineering. If we’re hand-carving marble gargoyles to stick on a tool shed, chances are the customer won’t be pleased. The solution should bear at least a passing resemblance to the problem context it’s supposed to address.

Microservices, Monoliths, and Modularity

[Photo of an iceberg]

There are very valid reasons for considering a microservice architecture (MSA) when building/evolving an application. In my opinion, however, forcing modularity isn’t one of those very valid reasons.

Just the other day, I saw a tweet from Simon Brown saying this same thing:

I still like his comment from two years back: “I’ll keep saying this … if people can’t build monoliths properly, microservices won’t help”. I believe that if you’re having problems building a monolith properly, trying to use a distributed architecture to force modularity may actually cause harm.

MSAs, like any distributed application architecture, involve increased complexity and costs; table stakes, if you will. Like an iceberg, there’s both a lot more to it than just what’s showing above the waterline and a fair amount of hazard for the unwary. If a development team cannot or will not comply with design guidelines (e.g. modularity requirements), injecting additional complexity is probably not the solution you need.

Distributing an application makes it harder to accidentally entangle different concerns, but it doesn’t make it impossible:

I’d argue that making it harder to accidentally break modularity addresses neither of the groups I mentioned earlier: those that cannot or will not comply. It’s ironic, but those who fail to understand the need for modularity can be very creative in their “solutions”, regardless of the obstacles. Likewise, those who refuse to comply.

In short, distribution as a means of “ensuring” modularity fails the fitness for purpose test.

The situation becomes worse when you factor in the additional complexity inherent in a distributed system. Likewise, there’s the cost of the table stakes (infrastructure, process, staffing, etc.) mentioned above. Of course, having abandoned the principle of cause and effect, one could attempt some “creative” workarounds to avoid having to pay the price (in other words, adding more and more complexity).

When you introduce significant additional complexity (with all its attendant risk) with little chance of the technique actually achieving its goal, you’ve caused harm.

These concerns are not solely limited to the application architecture. Distributing the data architecture has the same limitations in terms of ensuring modularity and introduces additional complexity. Adding boundaries adds the need for governance. A disciplined, monolithic team can maintain modularity in a monolithic data architecture. Multiple separate teams trying to share a monolithic data architecture will either experience a crippling level of governance overhead or a complete breakdown in modularity.

MSAs can be useful when you need independently scalable and replaceable components. They can also be appropriate when you have multiple teams working on one logical application. Using the technique when the cost outweighs the potential payoff, however, is a losing bet.

Monolithic Applications and Enterprise Gravel

[Photo of pebbles]

It’s been almost a year since I’ve written anything about microservices, and while a lot has been said on that subject, it’s one I still monitor to see what new ideas pop up. The opening of a blog post that I read last week caught my attention:

Coined by Melvin Conway in 1968, Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” In software development terms, Conway’s Law suggests that a given team will build apps that mirror the team’s organizational structure. Siloed functional teams produce siloed application architectures.

The result is a monolith: A massive application whose functionality is crammed into a few crowded parts. Scaling a simple pattern to the enterprise level often results in a monolith.

None of this is wrong, per se, but in reading it, one could come to a wrong conclusion. Siloed functional teams (particularly where the culture of the organization encourages siloed business units) produce siloed application architectures that are most likely monoliths. From an enterprise IT architecture perspective, though, the result is not monolithic. Googling the definition of “monolithic”, we get this:

mon·o·lith·ic
/ˌmänəˈliTHik/
adjective
  1. formed of a single large block of stone.
  2. (of an organization or system) large, powerful, and intractably indivisible and uniform.
    “rejecting any move toward a monolithic European superstate”
    synonyms: inflexible, rigid, unbending, unchanging, fossilized
    “a monolithic organization”

Rather than “a single large block of stone”, we get gravel. The architecture of the enterprise’s IT isn’t “large, powerful, and intractably indivisible and uniform”. It may well be large, but its power in relation to its size will be lacking. Too much effort is wasted reinventing wheels and maintaining redundant data (most likely with no real sense of which set of data is authoritative). Likewise, while “intractably indivisible” isn’t a virtue, being intractable while also lacking cohesion is worse. Such an IT architecture is a foundation built on shifting sand. Lastly, whether the enterprise IT architecture (EITA) is uniform or not (and I would give good odds that it’s not) is irrelevant given the other negative aspects. Under the circumstances, worrying about uniformity would be like worrying about whether the superstructure of the Titanic had a fresh paint job.

Does this mean that microservices are the answer to having an effective EITA? Hardly.

There are prerequisites for being able to support a microservice architecture; table stakes, if you will. However, the service-oriented mindset can be of value whether it’s applied as far down as the intra-application level (i.e. microservices – it is an application architecture pattern) or inter-application (the more traditional SOA). Where the line is drawn depends on the context of the application(s) and their ecosystem. What can be afforded and supported are critical aspects of the equation at all levels.

What is necessary for an effective EITA is a full-stack approach. Governance and data architecture in particular are important aspects to consider. The goal is consistent, intentional alignment across all levels (enterprise, EITA, solution, and application), promoting a cohesive architecture throughout, not a top-down dictatorship.

Large edifices that last are built from smaller pieces that fit together on purpose.

Microservices, Monoliths, and Conflicts to Resolve

Two tweets, opposites in position, and both have merit. Welcome to the wonderful world of architecture, where the only rule is that there is no rule that survives contact with reality.

Enhancing resilience via redundancy is a technique with a long pedigree. While microservices are a relatively recent and extreme example of this, they’re hardly groundbreaking in that respect. Caching, mirroring, load-balancing, and the like have been with us a long, long time. Redundancy is a path to high availability.

Centralization (as exemplified by monolithic systems) can be a useful technique for simplification, increasing data and behavioral integrity, and promoting cohesion. Like redundancy, it’s a system design technique older than automation. There was a reason that “all roads lead to Rome”. Centralization provides authoritativeness and, frequently, economies of scale.

The problem with both techniques is that neither comes without costs. Redundancy introduces complexity in order to support distributing changes between multiple points and reconciling conflicts. Centralization constrains access and can introduce a single point of failure. Getting the benefits without incurring the costs remains a known issue.

The essence of architectural design is decision-making. Given that those decisions will involve costs as well as benefits, both must be taken into account to ensure that, on balance, the decision achieves its aims. Additionally, decisions must be evaluated in the greater context rather than in isolation. As Tom Graves is fond of saying, “things work better when they work together, on purpose”.

This need for designs to be not only internally optimal, but also optimized for their ecosystem, means that these principles, as well as others, transcend the boundaries between application architecture, enterprise IT architecture, and even enterprise architecture. The effectiveness of this fractal architecture of systems of systems (both automated and human) is a direct result of how well the decisions made across the full range of the organization fit the contexts in play.

Since there is no one context, no rule can suffice. The answer we’re looking for is neither “microservice” nor “monolith” (or any other one tactic or technique), but fit to purpose for our context.

Who Needs Architects? – Monoliths as Systems of Stuff

[Photo of a platypus]

In my experience, IT is not a “one size fits all” operation. In both their latest two-speed vision and their older three-speed one, Gartner’s opinion is the same – there is no one process that works for every system across the enterprise (for what it’s worth, I agree with Simon Wardley that Bimodal IT is still too restrictive and that three modes come closer to reflecting the types of systems in use). Process and governance that is appropriate to one system may be too strict for another and too loose for a third. In this light, attempting to find one compromise ensures that all are poorly served. Consequently, more than one mode of governance just makes sense.

The problem is more complex, however, than just picking trimodal or bimodal and dividing applications up according to whether they are systems of record, systems of differentiation, or systems of innovation (or digital versus traditional). Just as “accidental architecture” can result in a “Big Ball of Mud” at the application level, it can also do so in terms of enterprise IT architecture. Monoliths that have grown organically may cross boundaries of the multimodal framework taxonomy, essentially becoming incoherent systems of “stuff”. This complicates their assignment to a process that fits their nature. When an application fits more than one category, do you force it into the more restrictive category or the less restrictive one? No matter which way you choose, the answer will be problematic.

Given the fractal nature of IT, it should not be a surprise that design decisions made at the level of individual applications can bubble up to affect the IT architecture of the enterprise as a whole. Separation of concerns (logical) and modularity (physical) remain important from the lowest level to the top. Without a strategic direction, tactical excellence can lead to waste from lack of focus.

Monolithic architectures trade modularity for simplicity at the application architecture level, which may be a valid trade at that level. If, however, a monolith crosses framework category boundaries, then major architectural refactoring may be required to avoid making ugly compromises. Separation of concerns within a monolith can ease the pain of this kind of refactoring, but avoiding the need for refactoring altogether is better still. Paying attention to cohesion across all levels of granularity and designing with extra-application as well as intra-application concerns in mind is necessary to achieve this avoidance.

Knowing the issues and being able to say why you made the choices you did is key.

Microservices – Sharpening the Focus

[Photo of a motion-blurred London bus]

While it was not the genesis of the architectural style known as microservices, the March 2014 post by James Lewis and Martin Fowler certainly put it on the software development community’s radar. Although the level of interest generated has been considerable, the article was far from an unqualified endorsement:

Despite these positive experiences, however, we aren’t arguing that we are certain that microservices are the future direction for software architectures. While our experiences so far are positive compared to monolithic applications, we’re conscious of the fact that not enough time has passed for us to make a full judgement.

One reasonable argument we’ve heard is that you shouldn’t start with a microservices architecture. Instead begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem. (Although this advice isn’t ideal, since a good in-process interface is usually not a good service interface.)

So we write this with cautious optimism. So far, we’ve seen enough about the microservice style to feel that it can be a worthwhile road to tread. We can’t say for sure where we’ll end up, but one of the challenges of software development is that you can only make decisions based on the imperfect information that you currently have to hand.

In the course of roughly fourteen months, Fowler’s opinion has gelled around the “reasonable argument”:

So my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.

This mirrors what Sam Newman stated in “Microservices For Greenfield?”:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

In short, the application architectural style known as microservice architecture (MSA) is unlikely to be an appropriate choice for the early stages of an application. Rather, it is a style most likely to be migrated to from a more monolithic beginning. Some subset of applications may benefit from that form of distributed componentization at some point, but distribution, at any degree of granularity, should be based on need. Separation of concerns and modularity do not imply a need for distribution. In fact, poorly planned distribution may actually increase complexity and coupling while destroying encapsulation. Dependencies must be managed whether local or remote.
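
The parenthetical in the Lewis and Fowler quote above (“a good in-process interface is usually not a good service interface”) is worth a concrete illustration. A minimal sketch, with hypothetical names: the fine-grained interface is pleasant in process, where calls are cheap, but as a remote contract it means three round trips and three chances to fail where one would do.

```java
// Granularity.java – a sketch contrasting interface granularity; all names
// are hypothetical. In process, three small calls are cheap. Over a network,
// they become three round trips and three independent failure points.
public class Granularity {

    // Fine-grained: a good in-process interface, a chatty remote one.
    interface CustomerChatty {
        String name(String id);
        String email(String id);
        String tier(String id);
    }

    // Coarse-grained: one round trip, one failure point, one payload to version.
    record CustomerSummary(String name, String email, String tier) {}

    interface CustomerCoarse {
        CustomerSummary summary(String id);
    }

    public static void main(String[] args) {
        // A stub standing in for a remote implementation.
        CustomerCoarse service = id -> new CustomerSummary("Ada", "ada@example.com", "gold");
        System.out.println(service.summary("c-1")); // one call instead of three
    }
}
```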

This is probably a good point to note that there is a great deal of room between a purely monolithic approach and a full-blown MSA. Rather than a binary choice, there is a wide range of options between the two. The fractal nature of the environment we inhabit means that responsibilities can be described as singular and separate without being required to share the same granularity. Monoliths can be carved up, and the resulting component parts can still be considered monolithic compared to an extremely fine-grained, sub-application microservice – and that’s okay. The granularity of the partitioning (and the associated complexity) can be tailored to the desired outcome (such as making components reusable across multiple applications or more easily replaceable).

The moral of the story, at least in my opinion, is that intentional design concentrating on separation of concerns, loose coupling, and high cohesion is beneficial from the very start. Vertical (functional) slices, perhaps combined with layers (what I call “dicing”), can be used to achieve these ends. Regardless of whether the components are to be distributed at first, designing them with that in mind from the start will ease any transition that comes in the future without ill effects for the present. Neglecting these issues risks hampering, if not outright preventing, breaking components out at a later date without resorting to a rewrite.
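
As a rough illustration of that “dicing” (hypothetical slice names throughout): each vertical slice keeps its own layers behind one small seam, so a slice can later be pulled out and distributed by swapping its in-process implementation for a remote one, without touching its callers.

```java
// Dicing.java – a sketch of vertical slices combined with layers; the slice
// names and methods are hypothetical. Each slice hides its web, domain, and
// persistence layers behind one small seam. Distributing a slice later means
// replacing its in-process implementation with a remote one; callers are unchanged.
public class Dicing {

    // Seam for the ordering slice.
    interface Ordering {
        String placeOrder(String sku, int quantity);
    }

    // Seam for the billing slice.
    interface Billing {
        String invoiceFor(String orderId);
    }

    public static void main(String[] args) {
        // Stub implementations standing in for the slices' internals.
        Ordering ordering = (sku, quantity) -> "order-17";
        Billing billing = orderId -> "invoice-for-" + orderId;

        System.out.println(billing.invoiceFor(ordering.placeOrder("sku-9", 2)));
    }
}
```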

These same concerns apply at higher levels of abstraction as well. Rather than blindly growing a monolith that is all things to all people, adding new features should be treated as an opportunity to evaluate whether that functionality coheres with the existing application or is better suited to being a service from an external provider. Just as the application architecture should aim for modularity, so too should the solution architecture.

A modular design is a flexible design. While we cannot know up front the extent of change an application will undergo over its lifetime, we can be sure that there will be change. Designing with flexibility in mind means that change, when it comes, is less likely to be an existential crisis. As Hayim Makabee noted in his write-up of Rotem Hermon’s talk, “Change Driven Design”: “Change should entail extending the system rather than refactoring.”
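
A minimal sketch of what extending rather than refactoring can look like in code (the notification example is hypothetical, not from Hermon’s talk): a new requirement is met by adding an implementation, while the existing classes and the dispatch code are left untouched.

```java
// ChangeByExtension.java – a sketch; the notification example is hypothetical.
// A new channel is a new class; the existing classes and the dispatch loop
// are not modified, so change extends the system rather than refactoring it.
import java.util.List;

public class ChangeByExtension {

    interface Notifier {
        void send(String message);
    }

    static final class EmailNotifier implements Notifier {
        public void send(String message) {
            System.out.println("email: " + message);
        }
    }

    // Added later to meet a new requirement; nothing above was touched.
    static final class SmsNotifier implements Notifier {
        public void send(String message) {
            System.out.println("sms: " + message);
        }
    }

    public static void main(String[] args) {
        List<Notifier> channels = List.of(new EmailNotifier(), new SmsNotifier());
        channels.forEach(notifier -> notifier.send("order shipped"));
    }
}
```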

A full-blown MSA is one possible outcome for an application. It is, however, not the most likely outcome for most applications. What is important is to avoid unnecessary constraints and retain sufficient flexibility to deal with the needs that arise.

[London Bus Image by E01 via Wikimedia Commons.]