Institutional Amnesia, Cargo Cults and Software Development

When George Santayana stated that “Those who cannot remember the past are condemned to repeat it,” he wasn’t talking about technology. When Brenda Michelson and Ed Featherston said much the same thing recently, they were:

It’s a sad fact of life that today’s silver bullet is likely to be yesterday’s junk, which was probably the day before yesterday’s silver bullet.

Poor design choices are made for a variety of reasons. Sometimes it’s a matter of ego. Sometimes inadequate analysis is the culprit. Focusing on technology rather than problem-solving can be another pitfall. Even attempts at post-hoc justification of a prior bad decision can drive new mistakes.

An uncritical acceptance of tradition is a significant source of problematic designs. Eberhard Wolff recently took a swipe at one old standard:

The stock reason for a tiered/distributed design is scalability. However, it’s not a given that distributing three horizontal layers across three machines (one layer per machine) will yield better results than three machines, each with all three layers deployed together. The context in which this sort of distribution makes sense is far from universal. Even when the costs of distribution are outweighed by the benefits, traditional monolithic horizontal layers will likely be less efficient than vertical slices. One of the purported benefits of microservices is the ability to independently scale according to business concerns (vertical slices organized around bounded contexts) rather than technology concerns (horizontal layers).
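To make the trade-off concrete, here’s a toy model in Python (mine, purely illustrative; all component names and instance counts are invented) contrasting the two scaling approaches:

    # Horizontal layers: adding capacity for any one feature means
    # scaling every layer, hot or not.
    layered = {
        "web_tier":   {"instances": 4},
        "logic_tier": {"instances": 4},
        "data_tier":  {"instances": 4},
    }

    # Vertical slices: each bounded context scales independently, so
    # capacity goes where the business load actually is.
    sliced = {
        "ordering":  {"instances": 6},  # the hot path gets the capacity
        "catalog":   {"instances": 2},
        "reporting": {"instances": 1},
    }

    def total_instances(topology):
        return sum(node["instances"] for node in topology.values())

    print(total_instances(layered), total_instances(sliced))  # 12 9

Under these made-up numbers, the sliced topology serves the same hot path with fewer idle instances; whether that holds for a given system is, as always, a matter of context.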

The mention of microservices brings to mind the problem of jumping on bandwagons. How many applications currently under development are being designed using this architectural style because it’s the “next big thing” rather than because the style fits the problem? Sam Newman, author of O’Reilly’s Building Microservices, in “Microservices for Greenfield?”, even states that he considers the style more suitable for evolving an existing system than for building one from scratch:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

This same over-eagerness is present in front-end development as much as back-end development. Stefan Tilkov recently tweeted regarding the trend of jumping straight into complex JavaScript framework applications rather than evolving into them based on need:

In my opinion, the key to effective design is being able to give a good answer when asked “why”. Being able to articulate the reasons behind the choices made is critical to justifying them. By reasons, I mean logical explanations of how the techniques chosen contribute to the desired ends. Neither “X recommends this” nor “This is what everybody’s doing” counts. Designing, developing, and evolving software systems is not a game of following a recipe. In the words of Grady Booch:


Are Microservices the Next Big Thing?


The question popped up in a LinkedIn discussion of my last post:

“A question for you, are microservices the next big thing?”

It was a refreshing change of pace to actually be asked for my opinion rather than be told what it is. The backlash against microservices is such that anything less than outright condemnation is seen as “touting” the latest fad (this in spite of having authored posts such as “Microservice Architectures aren’t for Everyone”, “Microservice Mistakes – Complexity as a Service”, and “Microservices – The Too Good to be True Parts”). Reflex reactions like this will most likely be so simplistic as to be useless, whether pro or con.

Architectural design is about making decisions. Good design, in my opinion, is about making decisions justified by the context at hand rather than following a recipe. It’s about understanding the situation and applying principles rather than blindly replicating a pattern. No one cares about how cutting edge the technology is; they care that the solution solves the problem without bankrupting them.

So, to return to the question of whether I think microservices are “the next big thing”:

They shouldn’t be, but they might be. Shouldn’t, because a full-blown microservice architecture, while perfectly appropriate for Netflix, isn’t likely to be appropriate for most in-house corporate applications. For most, the additional operational and development complexity will outweigh any benefit. That being said, lots of inappropriate SOA initiatives sucked up a lot of money not that long ago – if software development had a mascot, it would be an immortal lemming with amnesia.

The principles behind MSA, however, have some value IMHO. When a monolithic architecture begins to get in the way, those principles can provide guidance on how to carve it up. The wonderful thing about the single responsibility principle (SRP) is its fractal nature: you can split out responsibilities at whatever level of granularity is appropriate to your application’s situation, when it makes sense to do so. There’s no rule that you have to carve up everything at once or that you have to slice it as thin as they do at Netflix.
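As a minimal sketch of that fractal quality (the invoicing example and all names are hypothetical), a responsibility can first be isolated behind an interface inside the monolith, then carved out later if and when it earns its independence:

    from typing import Protocol

    class InvoiceService(Protocol):
        """The responsibility, named independently of where it runs."""
        def issue(self, order_id: str) -> str: ...

    class InProcessInvoiceService:
        """Today: invoicing still lives inside the monolith."""
        def issue(self, order_id: str) -> str:
            return f"invoice-for-{order_id}"

    class RemoteInvoiceService:
        """Tomorrow, perhaps: a drop-in replacement once invoicing
        becomes its own deployable."""
        def __init__(self, base_url: str):
            self.base_url = base_url
        def issue(self, order_id: str) -> str:
            raise NotImplementedError("would call the remote service")

    def checkout(order_id: str, invoices: InvoiceService) -> str:
        return invoices.issue(order_id)

    print(checkout("42", InProcessInvoiceService()))  # invoice-for-42

The callers never change; only the wiring does.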

That’s why my posts on the subject tend to sway back and forth. It’s not a recipe for the “right way” (as if there could be one right way) to design applications, it’s merely another set of ideas that, depending on the context, could help or hurt.

It’s not the technique itself that makes or breaks a design, it’s how applicable the technique is to the problem at hand.


What Makes a Microservice “Micro”?

People really like to be able to use numbers. We see this in management; we see it in requirements gathering. Numbers, after all, are objective. Objective is better than subjective, right?

The answer, of course, is “it depends”. If one can defend the proposition that a given number equates to a given level of quality, then an objective measure should be preferable. If, however, the number is arbitrarily selected, the fact that it’s a number doesn’t change the fact that it was subjectively chosen.

Take, for example, the size metric that “defines” a microservice. According to James Hughes (who considers the definition “atrocious”), “…there seems to be a consensus that a micro service is a simple application that sits around the 10-100 LOC mark”. Does this mean that when the codebase hits 101 lines of code, the result ceases to be a microservice? Does this mean that higher-level languages are required to create microservices because lower-level ones would exceed the LOC limit before expressing something useful?
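For a sense of how little the line count tells you, here is a complete, runnable HTTP service (my illustration; the port and route are arbitrary) that sits comfortably inside that 10-100 LOC band while saying nothing about whether its boundary is well chosen:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), HealthHandler).serve_forever()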

Clearly a quantitative metric that is irrelevant to what we really want to measure won’t work. This leaves us to choose a valid way to make a qualitative judgement about what makes a microservice “micro”.

There’s the cynical approach:

and there’s the pragmatic approach:

In my opinion, the defining characteristic of a microservice is that it’s an application (in the sense of having an independent codebase that is independently deployed) that exists as a component of what an end user would more likely consider the actual application. A microservice is part of the internals of the perceived application. It is important to remember that the microservice architectural style is an application architecture style.

I believe that Ben Morris’s suggestion that “…a single deployable service should be no bigger than a bounded context, but no smaller than an aggregate” is a good heuristic for identifying the service boundary. It puts the emphasis on cohesiveness rather than size. It is the single responsibility principle at the application/component level of abstraction.
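A rough sketch of the lower bound (mine, with hypothetical names): an aggregate is a consistency boundary, so its invariants live in one place and it shouldn’t be split across services:

    from dataclasses import dataclass, field

    @dataclass
    class OrderLine:
        sku: str
        quantity: int

    @dataclass
    class Order:
        """The aggregate root: the single entry point for changes."""
        order_id: str
        lines: list[OrderLine] = field(default_factory=list)

        def add_line(self, sku: str, quantity: int) -> None:
            if quantity <= 0:  # the invariant lives with the aggregate
                raise ValueError("quantity must be positive")
            self.lines.append(OrderLine(sku, quantity))

Splitting Order and OrderLine across two services would scatter that invariant; hence “no smaller than an aggregate”.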

In my opinion, a microservice is a component of an application that is independently scalable and independently deployable, possessing an independent lifecycle, codebase, and data. It is “micro” in the sense of “microcosm” rather than “micro” in terms of some arbitrary unit of size.


Form Follows Function on SPaMCast 335


It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast. This time I’m taking on Knuth’s quote: “Premature optimization is the root of all evil (or at least most of it) in programming.”

SPaMCast 335 features Tom on the meaning of effectiveness, efficiency, frameworks and methodologies; a discussion of my “Wait, did I just say Knuth was wrong?” post and an installment of Jo Ann Sweeny’s column, “Explaining Communication”, talking about content and a framework to guide the development of content.


Microservice Principles, Technical Debt, and Legacy Systems

Is there a circumstance where the answer to Architect Clippy’s question is “yes”? In “Microservice Architectures aren’t for Everyone” I used this tweet to underscore the observation that a team that can’t produce a well-modularized monolith is unlikely to be helped by trying to distribute the problem. On the other hand, a team (or teams) tasked with rehabilitating a “Big Ball of Mud” might well find some value in the principles behind microservice architectures.

Some of the relevant principles are cohesion and replaceability. As Dan North noted in “Microservices: software that fits in your head”:

One way to manage the mess is to maximise the likelihood that everyone knows what’s going on in the codebase. This requires two things: consistency and replaceability. Consistency implies you can make reasonable assumptions about unfamiliar parts of the application. Replaceability means you can kill code easily and replace it with something better.

Without achieving separation of concerns, any architectural refactoring effort will be an exercise in chasing fires across the codebase. A divide and conquer strategy that applies the single responsibility principle at a macro level will be more likely to facilitate identification and remediation of lower-level technical debt. Monoliths can benefit from being carved up, not because small is inherently better, but because they reach a point where independence of their components becomes beneficial, even crucial. Components that share fewer dependencies (such as a shared data store) and have independent release cycles offer a great deal of flexibility in structuring an application and the team(s) that develop it.
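A minimal before-and-after sketch of trading a shared data store for an explicit interface (names and structure are invented for illustration):

    # Before: two components reach into one shared store, so neither
    # can change its schema without coordinating with the other.
    shared_store = {"customer:42": {"name": "Acme", "credit_limit": 5000}}

    # After: Customers owns its data and exposes an interface; Billing
    # depends on the interface, not on the tables behind it.
    class CustomerService:
        def __init__(self):
            self._store = {"42": {"name": "Acme", "credit_limit": 5000}}

        def credit_limit(self, customer_id: str) -> int:
            return self._store[customer_id]["credit_limit"]

    class Billing:
        def __init__(self, customers: CustomerService):
            self._customers = customers

        def can_invoice(self, customer_id: str, amount: int) -> bool:
            return amount <= self._customers.credit_limit(customer_id)

    print(Billing(CustomerService()).can_invoice("42", 1200))  # True

Each side can now change its internals, and be released, without a big coordinated dance.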

In “Microservices allow for localized tech debt”, Jim Plush stated: “It’s much easier mentally to tackle $10,000 of debt across 4 credit cards at $2500 each than 1 card at the full $10,000.” Even more to the point, it’s much easier to tackle that debt when you split it with three other people (teams) each working independently.

Re-writes have a well-deserved bad reputation. Shared platforms and shared data stores will often mean that the transition from the legacy system to the re-written one will be a high-risk “big bang” affair. As Edmond Lau observed in “How to Avoid One of the Costliest Mistakes in Software Engineering”, you want to “…get as quickly as possible to a state where you’re again making incremental improvements”. Getting to this state may well happen more quickly when the parts are separated.


Microservices, SOA, Reuse and Replaceability


While it’s not as elusive as the unicorn, the concept of reuse tends to be more often talked about than seen. Over the years, object-orientation, design patterns, and services have all held out the promise of reuse of either code or, at least, design. Similar claims have been made regarding microservices.

Reuse is a creature of extremes. Very fine-grained components (e.g. the classes that make up the standard libraries of Java and .Net) are highly reusable but require glue code to coordinate their interaction in order to yield something useful. This will often be the case with microservices, although not always; it is possible to have very small services with few or no dependencies on other services (it’s important to remember that, unlike libraries, services generally share both behavior and data). Coarse-grained components, such as traditional SOA services, can be reused across an enterprise’s IT architecture to provide standard high-level interfaces into centralized systems for other applications.
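The glue-code point is easy to see with the standard library itself; the small sketch below (mine, purely illustrative) reuses three highly generic pieces, none of which does anything useful until the glue coordinates them:

    from collections import Counter
    from pathlib import Path
    import re

    def top_words(path: str, n: int = 3) -> list[tuple[str, int]]:
        text = Path(path).read_text(encoding="utf-8").lower()
        words = re.findall(r"[a-z']+", text)  # reusable pieces...
        return Counter(words).most_common(n)  # ...made useful by glue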

The important thing to bear in mind, though, is that reuse is not an end in itself. It can be a means of achieving consistency and/or efficiency, but its benefits come from avoiding cost and duplication rather than from the extra usage. Just as other forms of reuse have had costs in addition to benefits, so it is with microservices.

Anything that is reused rather than duplicated becomes a dependency of its client application. This dependency relationship is a form of coupling, tying the two codebases together and constraining the ability of the dependency to change. Within the confines of an application, it is generally better for reuse to emerge. Inter-application reuse will require more coordination and tend to be more deliberately designed. As with most things, there is no free lunch. Context is required to determine whether the trade is a good one or not.

Replaceability is, in my opinion, just as important as reuse, if not more so. Being able to switch from one dependency to another (or from one version of a dependency to another) because that dependency has its own independent lifecycle and is independently deployed enables a great deal of flexibility. That flexibility makes for easier upgrades (a rolling migration rather than a big bang). Reducing the friction inherent in migrations reduces the likelihood of technical debt due to inertia.
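As a toy illustration of replaceability enabling a rolling migration (the endpoints and configuration keys are invented for the example):

    # Two independently deployed versions of a dependency live side by
    # side; each client moves from v1 to v2 when it is ready, instead
    # of everyone cutting over in one big bang.
    RATE_ENDPOINTS = {
        "v1": "https://rates-v1.internal.example/quote",
        "v2": "https://rates-v2.internal.example/quote",
    }

    def endpoint_for(client_config: dict) -> str:
        return RATE_ENDPOINTS[client_config.get("rates_version", "v1")]

    print(endpoint_for({}))                       # still on v1
    print(endpoint_for({"rates_version": "v2"}))  # a migrated client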

While a shared service may well find more constraints with each additional client, each client can determine how much replaceability is appropriate for itself.


Of Blind Men and Elephants and Excessive Certainty


There’s an old poem about six blind men and an elephant, in which each in turn declares that an elephant is like a wall, a spear, a snake, a tree, a fan, or a rope. Each accurately described what he was able to discern from his own limited point of view, yet all were wrong about the subject as a whole. As the poet noted:

Moral:

So oft in theologic wars,
The disputants, I ween,
Rail on in utter ignorance
Of what each other mean,
And prate about an Elephant
Not one of them has seen!

Sometimes our attitudes color our perception of others.

Management is often the butt of our disdain, frequently expressed in cartoon form.

However, as Sandro Mancuso related in “Not all managers are stupid”:

I still remember the day when our managers in a large organisation told us we should still go live after we reported a major problem a couple of months before the deadline…There was a problem in a couple of unfinished flows, which would cause hundreds of thousands of trades to be misreported to the regulators. After we explained the situation, managers told us to work harder and go ahead with the release anyway.

How could they tell us to go live in a situation like that? They should all be fired. Arrested. How could they ask us to drop the quality and go live with a known problem of that size?…

More than once we made it clear that focusing our time on getting the system ready for production would not give us any time to finish the automation for the problematic flows and thousands of trades would be misreported. But they did not listen. Or so we thought.

After a few meetings with the business, we discovered a few things. They were not being irresponsible or stupid, as we developers thought. The deadline was set by the regulators and could not be moved. The cost of not reporting the trades was far higher than misreporting them. Not reporting the trades would not only be followed by heavy fines, but also by possible reputation damage. Companies would have extra time to correct any misreported trades before being fined.

For us, in the development team, it was the first time we realised that going live with a few known issues would be better than not going live at all.

Designing the architecture of a solution, at its core, is an exercise in decision-making. Whether the system in question is a software system or a human system, effective decision-making must be preceded by sense-making to identify the architecture of the problem. Contexts need to be identified in order to be synthesized into the architecture of the problem.

Bias, being too certain of our understanding to make the effort to validate it, is a good way to miss out on what’s in front of us. Failing to recognize our potential for bias makes it harder to overcome that bias. That failure restricts our ability to appreciate the full range of contexts to be synthesized and puts us in the same position as the blind men with the elephant.

It’s extremely difficult to solve a problem you don’t understand.

