Monolithic Applications and Enterprise Gravel

Pebbles

It’s been almost a year since I’ve written anything about microservices, and while a lot has been said on that subject, it’s one I still monitor to see what new ideas pop up. The opening of a blog post that I read last week caught my attention:

Coined by Melvin Conway in 1968, Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” In software development terms, Conway’s Law suggests that a given team will build apps that mirror the team’s organizational structure. Siloed functional teams produce siloed application architectures.

The result is a monolith: A massive application whose functionality is crammed into a few crowded parts. Scaling a simple pattern to the enterprise level often results in a monolith.

None of this is wrong, per se, but in reading it, one could come to a wrong conclusion. Siloed functional teams (particularly where the culture of the organization encourages siloed business units) produce siloed application architectures that are most likely monoliths. From an enterprise IT architecture perspective, though, the result is not monolithic. Googling the definition of “monolithic”, we get this:

mon·o·lith·ic
ˌmänəˈliTHik/
adjective
  1. formed of a single large block of stone.
  2. (of an organization or system) large, powerful, and intractably indivisible and uniform.
    “rejecting any move toward a monolithic European superstate”
    synonyms: inflexible, rigid, unbending, unchanging, fossilized
    “a monolithic organization”

Rather than “a single large block of stone”, we get gravel. The architecture of the enterprise’s IT isn’t “large, powerful, and intractably indivisible and uniform”. It may well be large, but its power in relation to its size will be lacking. Too much effort is wasted reinventing wheels and maintaining redundant data (most likely with no real sense of which set of data is authoritative). Likewise, while “intractably indivisible” isn’t a virtue, being intractable while also lacking cohesion is worse. Such an IT architecture is a foundation built on shifting sand. Lastly, whether the EITA is uniform or not (and I would give good odds that it’s not) is irrelevant given the other negative aspects. Under the circumstances, worrying about uniformity would be like worrying about whether the superstructure of the Titanic had a fresh paint job.

Does this mean that microservices are the answer to having an effective EITA? Hardly.

There are prerequisites for being able to support a microservice architecture; table stakes, if you will. However, the service-oriented mindset can be of value whether it’s applied as far down as the intra-application level (i.e. microservices – it is an application architecture pattern) or inter-application (the more traditional SOA). Where the line is drawn depends on the context of the application(s) and their ecosystem. What can be afforded and supported are critical aspects of the equation at all levels.

What is necessary for an effective EITA is a full-stack approach. Governance and data architecture in particular are important aspects to consider. The goal is consistent, intentional alignment across all levels (enterprise, EITA, solution, and application), promoting a cohesive architecture throughout, not a top-down dictatorship.

Large edifices that last are built from smaller pieces that fit together on purpose.

“Microservice Architectures aren’t for Everyone” on Iasa Global

Simon Brown says it nicely:

Architect Clippy is a bit more snarky:

In both cases, the message is the same: microservice architectures (MSAs), in and of themselves, are not necessarily “better”. There are aspects where moving to a distributed architecture from a monolithic one can improve a system (e.g. enabling selective load balancing and incremental deployment). However, if someone isn’t capable of building a modular monolith, distributing the pieces is unlikely to help matters.

See the full post on the Iasa Global site (a re-post, originally published here).

“Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?” on Iasa Global

In my part of the world, it’s not uncommon for people to say that someone wouldn’t recognize something if it “bit them in the [rude rump reference]”. For many organizations, that seems to be the explanation for the state of their enterprise IT architecture. For while we might claim to understand terms like “design”, “encapsulation”, and “separation of concerns”, the facts on the ground fail to show it. Just as software systems can degenerate into a “Big Ball of Mud”, so too can the systems of systems comprising our enterprise IT architectures. If we look at organizations as the systems they are, it should be obvious that this same entropy can infect the organization as well.

See the full post on the Iasa Global site (a re-post, originally published here).

Form Follows Function on SPaMCast 339

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on demonstrations and a Form Follows Function installment on microservices, SOA, and Enterprise IT Architecture.

For SPaMCast 339, Tom and I discuss my “Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?” post.

Microservices, SOA, Reuse and Replaceability

Unicorn

While it’s not as elusive as the unicorn, the concept of reuse tends to be talked about more often than seen. Over the years, object-orientation, design patterns, and services have all held out the promise of reuse of either code or, at least, design. Similar claims have been made regarding microservices.

Reuse is a creature of extremes. Very fine grained components (e.g. the classes that make up the standard libraries of Java and .Net) are highly reusable but require glue code to coordinate their interaction in order to yield something useful. This will often be the case with microservices, although not always; it is possible to have very small services with few or no dependencies on other services (it’s important to remember that, unlike libraries, services generally share both behavior and data). Coarse grained components, such as traditional SOA services, can be reused across an enterprise’s IT architecture to provide standard high-level interfaces into centralized systems for other applications.
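To make the glue-code point concrete, here is a minimal Java sketch (the class and method names are mine, purely for illustration): the standard-library pieces being reused (file access, streams, collectors) are each highly generic, but none of them delivers anything useful until application-specific code coordinates them.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Fine-grained, highly reusable building blocks (Files, Stream, Collectors)
// yield nothing useful on their own; the glue code below supplies the purpose.
public class WordFrequency {
    public static Map<String, Long> countWords(Path file) throws IOException {
        try (Stream<String> lines = Files.lines(file)) {
            return lines
                .flatMap(line -> Stream.of(line.toLowerCase().split("\\W+")))
                .filter(word -> !word.isBlank())
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
        }
    }
}

A coarse-grained SOA service sits at the other end of the spectrum: a single operation (say, a hypothetical getCustomerProfile call) hides all of that coordination, which is why it can be shared across an enterprise but reused in far fewer contexts.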

The important thing to bear in mind, though, is that reuse is not an end in itself. It can be a means of achieving consistency and/or efficiency, but its benefits come from avoiding cost and duplication rather than from the extra usage. Just as other forms of reuse have had costs in addition to benefits, so it is with microservices as well.

Anything that is reused rather than duplicated becomes a dependency of its client application. This dependency relationship is a form of coupling, tying the two codebases together and constraining the ability of the dependency to change. Within the confines of an application, it is generally better for reuse to emerge. Inter-application reuse will require more coordination and tend to be more deliberately designed. As with most things, there is no free lunch. Context is required to determine whether the trade is a good one or not.

Replaceability is, in my opinion, just as important as reuse, if not more so. Being able to switch from one dependency to another (or from one version of a dependency to another) because that dependency has its own independent lifecycle and is independently deployed enables a great deal of flexibility. That flexibility makes upgrades easier (a rolling migration rather than a big bang). Reducing the friction inherent in migrations reduces the likelihood of technical debt due to inertia.
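As a minimal sketch of what that replaceability looks like in code (all of the names below are hypothetical), the client depends only on an abstraction it owns, so one implementation of the dependency can be swapped for another, or for a newer version, without the change rippling back into the client:

// The client owns this abstraction; nothing here says how tax is computed.
public interface TaxCalculator {
    double taxFor(double amount);
}

// Today's implementation...
class FlatRateTaxCalculator implements TaxCalculator {
    public double taxFor(double amount) {
        return amount * 0.07;
    }
}

// ...and a drop-in replacement (a new version, a different vendor, a remote
// service with its own lifecycle and deployment) that the client never notices.
class RemoteTaxCalculator implements TaxCalculator {
    public double taxFor(double amount) {
        // a call to an independently deployed tax service would go here
        return amount * 0.07;
    }
}

class InvoiceService {
    private final TaxCalculator taxCalculator;

    InvoiceService(TaxCalculator taxCalculator) { // injected, not hard-wired
        this.taxCalculator = taxCalculator;
    }

    double total(double subtotal) {
        return subtotal + taxCalculator.taxFor(subtotal);
    }
}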

While a shared service may well find more constraints with each additional client, each client can determine how much replaceability is appropriate for itself.

Microservice Architectures aren’t for Everyone

Simon Brown says it nicely:

Architect Clippy is a bit more snarky:

In both cases, the message is the same: microservice architectures (MSAs), in and of themselves, are not necessarily “better”. There are aspects where moving to a distributed architecture from a monolithic one can improve a system (e.g. enabling selective load balancing and incremental deployment). However, if someone isn’t capable of building a modular monolith, distributing the pieces is unlikely to help matters.

Even where the requisite architectural design capability exists, however, it isn’t a given that distributing the components of the application is the right choice. Distributed applications will incur additional complexity from both a development and an operations standpoint. The more granularly the application is distributed, the more complexity will result. It makes little sense to incur complexity costs that outweigh any benefits received. The decision, however, is not a binary one; system architectures that fall between MSA and monolith are possible. One can selectively apply MSA and/or SOA principles from the application level all the way to the enterprise’s IT architecture.
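By way of illustration, here is a compressed Java sketch of that middle ground (the module names are invented for the example): a modular monolith keeps everything in a single deployable, but modules talk to each other only through narrow facades. If those boundaries hold up, promoting one module to a separately deployed service later is a far smaller step than carving up a tangled codebase.

import java.util.List;

// A modular monolith in miniature: one deployable, but each "module" is reached
// only through a narrow facade. The facades mark where a future service boundary
// could be drawn if, and only if, the benefits warrant it.
public class ModularMonolithSketch {

    interface OrdersFacade {                  // boundary of the orders module
        String placeOrder(String customerId, List<String> skus);
    }

    interface BillingFacade {                 // boundary of the billing module
        void invoice(String orderId);
    }

    static class Orders implements OrdersFacade {
        private final BillingFacade billing;
        Orders(BillingFacade billing) { this.billing = billing; }
        public String placeOrder(String customerId, List<String> skus) {
            String orderId = customerId + "-" + Math.abs(skus.hashCode());
            billing.invoice(orderId);         // in-process today; could become a remote call
            return orderId;
        }
    }

    static class Billing implements BillingFacade {
        public void invoice(String orderId) {
            System.out.println("Invoicing order " + orderId);
        }
    }

    public static void main(String[] args) {
        OrdersFacade orders = new Orders(new Billing());
        orders.placeOrder("c-42", List.of("sku-1", "sku-2"));
    }
}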

Daniel Bryant’s “Drafting a Proposal for a Microservice Maturity/Classification Model” contains an interesting taxonomy of application architectural styles:

  • Megalith Platform
    • Humongous single codebase resulting in a single application
  • Monolith Platform
    • Large single codebase resulting in a single application
  • Macro SOA Platform
    • Classical SOA applications, and platforms consisting of loosely-coupled large services (potentially a series of interconnected monoliths)
  • Meso Application Platform
    • ‘Meso’ or middle-sized services interconnected to form a single application or platform. Essentially a monolith and microservice hybrid
  • Microservice Platform
    • ‘Cloud native’ loosely-coupled small services focused around DDD-inspired ‘bounded contexts’
  • Nanoservice Platform
    • Extremely small single-purpose (primarily reactive) services

As a taxonomy of application types, Bryant’s work is interesting. Some of the details he’s listed for each type are highly generalized (e.g. Megaliths and Monoliths are assumed to be highly coupled, low-cohesion big balls of mud that are difficult to understand), but it could serve as a basis for categorizing applications. As a maturity model, however, it has serious issues: it would imply that nanoservices(!) represent the zenith of application design, which I would argue is inherently flawed. Not all enterprises will necessarily require applications fitting the full MSA model, and the idea that all applications would benefit from this style of architectural design is extremely hard to believe. Forcing this style onto smaller applications with limited user bases would be harmful; incurring unnecessary expense for those applications would risk discrediting the technique in circumstances where it would be appropriate.

Where a technique is chosen because of evidence that it delivers specific benefits that meet specific needs and the costs involved don’t outweigh those benefits, there is a higher probability of success. Where the decision is made on the basis of using what’s “new” or “cool”, there’s no real way to give a probability of success because the selection criteria are divorced from fitness for purpose. That being said, my bias would be towards a much lower chance of success.

Innovations can be alternatives without necessarily being “better” in all cases.

Form Follows Function on SPaMCast 331

SPaMCAST logo

I’m back for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

SPaMCast 331 features Tom on Agile Coaching, a discussion of my “Microservices vs SOA – Is there any real difference?” post and an installment of Jo Ann Sweeny’s column, “Explaining Communication”, dealing with communication channels.