What Makes a Monolith Monolithic?

Photo of Stonehenge, 1877

It seems like everybody throws around the term “monolith”, but what do we mean by that?

Sam Newman started the ball rolling yesterday with this tweet:

My first response was a (semi) joke:

I say semi-joke because, in truth, semantics (i.e., meaning) is critical. The English language has a horrible tendency to overload terms as it is, and in our line of work we tend to make it even worse. Lack of specificity obscures rather than enlightens. The problem with the term “monolith” is that, while it’s a powerfully evocative term, it isn’t a simple one to define. My second response was closer to an actual definition:

The purpose of this post is to expand on that a bit.

The “mono” portion of the term is, in my opinion, the crucial part. I believe that quality of oneness is what defines a monolithic system. As I noted in the second tweet, it’s a matter of meta-coupling, whether that coupling exists in the form of deployment, data architecture, or execution style (Jeppe Cramon’s post “Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 3” shows how temporal coupling can turn a distributed system into a runtime monolith). The following tweets between Anne Currie and Sam illustrate the amorphous nature of what is and isn’t a monolith:

Modules that can be deployed to run in a single process need not be considered monolithic if they’re not tightly coupled. Likewise, running distributed isn’t a guarantee against being monolithic if the components are tightly coupled in any way. I stress “in any way” because any of the types of coupling I mentioned above can be a deal killer. If all the “microservices” must be deployed simultaneously for the system to work, it’s a distributed monolith. If the communication is both synchronous and fault intolerant, it’s a distributed monolith. If there’s a single data store backing the entire system, it’s a distributed monolith. It’s not the modularity that defines it (you can have a modular monolith), but the inability to separate the parts without damaging the whole system.
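To make the second of those failure modes concrete, here’s a minimal sketch (the service names and endpoints are hypothetical) of a synchronous, fault-intolerant call chain. Nothing about it runs in a single process, yet all the parts must be up and succeed together for anything to work:

```python
import urllib.request

def place_order(order_id: str) -> None:
    # Each call blocks, and any exception propagates: no timeout handling,
    # no retry, no fallback. All three (hypothetical) services must be up
    # simultaneously for any order to succeed, so they fail as a unit.
    for service in ("inventory", "billing", "shipping"):
        url = f"http://{service}.internal/orders/{order_id}"  # illustrative endpoints
        urllib.request.urlopen(url)  # raises on any failure, aborting the whole order
```

Deploy those three “microservices” on as many machines as you like; they still behave as one thing, a runtime monolith.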

I would also point out that I don’t consider “monolithic” to be derogatory, in and of itself. There is a trade-off involved in terms of coupling and complexity (and cost). While I generally prefer more flexibility, there is always the danger of over-engineering. If we’re hand-carving marble gargoyles to stick on a tool shed, chances are the customer won’t be pleased. The solution should bear at least a passing resemblance to the problem context it’s supposed to address.


Microservices, Monoliths, and Modularity

Iceberg

There are very valid reasons for considering a microservice architecture (MSA) when building/evolving an application. In my opinion, however, forcing modularity isn’t one of those very valid reasons.

Just the other day, I saw a tweet from Simon Brown saying this same thing:

I still like his comment from two years back: “I’ll keep saying this … if people can’t build monoliths properly, microservices won’t help”. I believe that if you’re having problems building a monolith properly, trying to use a distributed architecture to force modularity may actually cause harm.

MSAs, like any distributed application architecture, involve increased complexity and costs; table stakes, if you will. Like an iceberg, there’s both a lot more to it than just what’s showing above the waterline and a fair amount of hazard for the unwary. If a development team cannot or will not comply with design guidelines (e.g. modularity requirements), injecting additional complexity is probably not the solution you need.

Distributing an application makes it harder to accidentally entangle different concerns, but it doesn’t make it impossible:

I’d argue that making it harder to accidentally break modularity addresses neither of the groups I mentioned earlier: those that cannot or will not comply. It’s ironic, but those who fail to understand the need for modularity can be very creative in their “solutions”, regardless of the obstacles. Likewise, those who refuse to comply.

In short, distribution as a means of “ensuring” modularity fails the fitness for purpose test.

The situation becomes worse when you factor in the additional complexity inherent in a distributed system. Likewise, there’s the cost of the table stakes (infrastructure, process, staffing, etc.) mentioned above. Of course, having abandoned the principle of cause and effect, one could attempt some “creative” workarounds to avoid having to pay the price (in other words, adding more and more complexity).

When you introduce significant additional complexity (with all its attendant risk) with little chance of the technique actually achieving its goal, you’ve caused harm.

These concerns are not solely limited to the application architecture. Distributing the data architecture has the same limitations in terms of ensuring modularity and introduces additional complexity. Adding boundaries adds the need for governance. A disciplined, monolithic team can maintain modularity in a monolithic data architecture. Multiple separate teams trying to share a monolithic data architecture will either experience a crippling level of governance overhead or a complete breakdown in modularity.

MSAs can be useful when you need independently scalable and replaceable components. They can also be appropriate when you have multiple teams working on one logical application. Using the technique when the cost outweighs the potential payoff, however, is a losing bet.

Monolithic Applications and Enterprise Gravel

Pebbles

It’s been almost a year since I’ve written anything about microservices, and while a lot has been said on that subject, it’s one I still monitor to see what new developments pop up. The opening of a blog post that I read last week caught my attention:

Coined by Melvin Conway in 1968, Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” In software development terms, Conway’s Law suggests that a given team will build apps that mirror the team’s organizational structure. Siloed functional teams produce siloed application architectures.

The result is a monolith: A massive application whose functionality is crammed into a few crowded parts. Scaling a simple pattern to the enterprise level often results in a monolith.

None of this is wrong, per se, but in reading it, one could come to a wrong conclusion. Siloed functional teams (particularly where the culture of the organization encourages siloed business units) produce siloed application architectures that are most likely monoliths. From an enterprise IT architecture (EITA) perspective, though, the result is not monolithic. Googling the definition of “monolithic”, we get this:

mon·o·lith·ic
/ˌmänəˈliTHik/
adjective
  1. formed of a single large block of stone.
  2. (of an organization or system) large, powerful, and intractably indivisible and uniform.
    “rejecting any move toward a monolithic European superstate”
    synonyms: inflexible, rigid, unbending, unchanging, fossilized
    “a monolithic organization”

Rather than “a single large block of stone”, we get gravel. The architecture of the enterprise’s IT isn’t “large, powerful, and intractably indivisible and uniform”. It may well be large, but its power in relation to its size will be lacking. Too much effort is wasted reinventing wheels and maintaining redundant data (most likely with no real sense of which set of data is authoritative). Likewise, while “intractably indivisible” isn’t a virtue, being intractable while also lacking cohesion is worse. Such an IT architecture is a foundation built on shifting sand. Lastly, whether the EITA is uniform or not (and I would give good odds that it’s not) is irrelevant given the other negative aspects. Under the circumstances, worrying about uniformity would be like worrying about whether the superstructure of the Titanic had a fresh paint job.

Does this mean that microservices are the answer to having an effective EITA? Hardly.

There are prerequisites for being able to support a microservice architecture; table stakes, if you will. However, the service-oriented mindset can be of value whether it’s applied as far down as the intra-application level (i.e. microservices – it is an application architecture pattern) or inter-application (the more traditional SOA). Where the line is drawn depends on the context of the application(s) and their ecosystem. What can be afforded and supported are critical aspects of the equation at all levels.

What is necessary for an effective EITA is a full-stack approach. Governance and data architecture in particular are important aspects to consider. The goal is consistent, intentional alignment across all levels (enterprise, EITA, solution, and application), promoting a cohesive architecture throughout, not a top-down dictatorship.

Large edifices that last are built from smaller pieces that fit together on purpose.

Microservices and Data Architecture – Who Owns What Data?

Medieval Master and Scholars

An important consideration with microservice architectures is their effect on an enterprise’s data architecture. In “Carving it up – Microservices, Monoliths, & Conway’s Law”, I touched on the likelihood of data fragmentation and redundancy with this style (not that monoliths have a good record regarding these attributes either). More important than data fragmentation and redundancy is the notion of authoritativeness, which was mentioned in “More on Microservices – Boundaries, Governance, Reuse & Complexity”. When redundant data is locked up in monoliths with little or no interoperability and little or no governance, it’s very easy to have conflicting data without an approved method of determining which copy represents the true state. The answer lies not in removing all redundancy, but in recognizing the fact that systems will share concepts and managing that sharing.

Multiple systems routinely share conceptual entities. In his post on bounded contexts, Martin Fowler used the example of Customer and Product concepts spanning Sales and Support contexts. In all likelihood, these would be shared by other contexts as well, Customer perhaps also appearing in an Accounts Receivable context while Product might also be part of the Catalog and Inventory contexts. While these contexts might share concepts, the details will differ from one to another (e.g. the Price attribute of a Product may be relevant to the Sales and Catalog contexts, but irrelevant to Support and Inventory). Control over these details will likely be isolated to a single context (e.g. while Price may be relevant to both the Sales and the Catalog contexts, it’s likely that the setting of the price is only the responsibility of the Catalog context).
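As an illustration (the class and attribute names below are hypothetical, borrowing only the Product concept from Fowler’s example), the same conceptual Product might be modeled separately in each bounded context, carrying only the details that context cares about:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogProduct:
    sku: str
    description: str
    price: float  # the Catalog context owns (sets) the price

@dataclass(frozen=True)
class SalesProduct:
    sku: str
    price: float  # Sales reads the price; it never sets it

@dataclass(frozen=True)
class SupportProduct:
    sku: str
    open_tickets: int  # price is irrelevant in the Support context
```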

Why the discussion about bounded contexts? The “Organized around Business Capabilities” characteristic of microservices elaborated in the Lewis and Fowler post is analogous to bounded contexts within a system. A microservice style of architecture brings the idea of a bounded context to a system of systems.

A common characteristic of monoliths is that they may be owned by one organizational unit, but contain several contexts that are rightly the domain of another organizational unit due to a lack of integration. Re-keying or scripting extracted data from one system to another can arguably be called eventual consistency, but it’s a poor alternative to what can be done with a service-based architecture. The section “Data duplication over Events” in Jeppe Cramon’s “Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 4” illustrates how cooperating services can share data in a controlled manner with the service owning the data broadcasting out changes to those consuming that data, whether they be other transactional systems or data stores used for analytics and reporting.
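A minimal sketch of that pattern, with hypothetical event and context names and an in-memory list standing in for real messaging infrastructure:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class PriceChanged:
    sku: str
    new_price: float

# A toy in-memory bus; in practice this would be real messaging infrastructure.
subscribers: List[Callable[[PriceChanged], None]] = []

def publish(event: PriceChanged) -> None:
    for handler in subscribers:
        handler(event)

# The Sales context keeps its own (redundant, but managed) copy of prices,
# updated only via events broadcast by the owning context.
sales_prices: Dict[str, float] = {}

def on_price_changed(event: PriceChanged) -> None:
    sales_prices[event.sku] = event.new_price

subscribers.append(on_price_changed)

# The Catalog context, as the owner of the data, is the only publisher.
publish(PriceChanged(sku="ABC-123", new_price=19.99))
assert sales_prices["ABC-123"] == 19.99
```

The point isn’t the mechanism; it’s that the redundant copy held by Sales changes only in response to events published by the owning context, so there’s never a question of which copy is authoritative.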

The “bounded context writ large” nature of a microservices style allows you to use Conway’s Law to improve your data architecture. When the technology supports the same communication and governance model as the business it serves, data conflicts can be reduced, if not eliminated. Don’t underestimate, however, the effects that legacy systems may have had on the business’ communication and governance model. Years of working around the existing systems may have influenced (officially and/or unofficially) that model. As Udi Dahan noted in “People, Politics, and the Single Responsibility Principle”:

This just makes it that much harder to decide how to structure our software – there is no map with nice clean borders. We need to be able to see past the organizational dysfunction around us, possibly looking for how the company might have worked 100 years ago if everything was done by paper. While this might be possible in domains that have been around that long (like banking, shipping, etc) but even there, given the networked world we now live in, things that used to be done entirely within a single company are now spread across many different entities taking part in transnational value networks.

In short – it’s freakin’ hard.

But it’s still important.

Just don’t buy too deeply into the idea that by getting the responsibilities of your software right, that you will somehow reduce the impact that all of that business dysfunction has on you as a software developer. Part of the maturation process for a company is cleaning up its’ business processes in parallel to cleaning up its’ software processes.

It should be obvious that some governance will be needed to untangle the monoliths and keep them so. The good news is that this particular style provides the tools to do it incrementally.