Microservices, SOA, Reuse and Replaceability

[Image: Unicorn]

While it’s not as elusive as the unicorn, the concept of reuse tends to be talked about more often than seen. Over the years, object-orientation, design patterns, and services have all held out the promise of reuse of code or, at least, of design. Similar claims have been made regarding microservices.

Reuse is a creature of extremes. Very fine-grained components (e.g. the classes that make up the standard libraries of Java and .Net) are highly reusable but require glue code to coordinate their interaction in order to yield something useful. This will often be the case with microservices, although not always; it is possible to have very small services with few or no dependencies on other services (it’s important to remember that, unlike libraries, services generally share both behavior and data). Coarse-grained components, such as traditional SOA services, can be reused across an enterprise’s IT architecture to provide standard high-level interfaces into centralized systems for other applications.

The important thing to bear in mind, though, is that reuse is not an end in itself. It can be a means of achieving consistency and/or efficiency, but its benefits come from avoiding cost and duplication rather than from the extra usage. Just as other forms of reuse have had costs in addition to benefits, so it is with microservices.

Anything that is reused rather than duplicated becomes a dependency of its client application. This dependency relationship is a form of coupling, tying the two codebases together and constraining the ability of the dependency to change. Within the confines of an application, it is generally better for reuse to emerge. Inter-application reuse will require more coordination and tend to be more deliberately designed. As with most things, there is no free lunch. Context is required to determine whether the trade is a good one or not.

Replaceability is, in my opinion, just as important as reuse, if not more so. Being able to switch from one dependency to another (or from one version of a dependency to another) because that dependency has its own independent lifecycle and is independently deployed enables a great deal of flexibility. That flexibility enables easier upgrades (a rolling migration rather than a big bang). Reducing the friction inherent in migrations reduces the likelihood of technical debt due to inertia.

While a shared service may well find more constraints with each additional client, each client can determine how much replaceability is appropriate for itself.

Microservices – The Too Good to be True Parts

[Image: Label for Clark Stanley's Snake Oil Liniment]

Over the last several months, I’ve written several posts about microservices. My attitude toward this architectural style is one of guarded optimism. I consider the “purist” version of it to be overkill for most applications (are you really creating something Netflix-scale?), but see a lot of valuable ideas developing out of it. Smaller, focused, service-enabled applications are, in my opinion, an excellent way to increase systems agility. Where the benefits outweigh the costs and you’ve done your homework, systems of systems make sense.

However, the history of Service-Oriented Architecture (SOA) is instructive. A tried and true method of discrediting yourself is to over-promise and under-deliver. Microservice architectures, as the latest hot topic, currently receive a lot of uncritical press, just as SOA did a few years back. An article on ZDNet, “How Nike thinks about app development: Lots of micro services”, illustrates this (emphasis is mine):

Nike is breaking down all the parts of its apps to crate (sic) building blocks that can be reused and tweaked as needed. There’s also a redundancy benefit: Should one micro service fail the other ones will work in the app.

Reuse and agility tend to be antagonists. The governance needed to promote reuse impedes agility. Distribution increases complexity on its own; reuse adds still more. This complexity comes not only from communication issues but also from coordination and coupling. Rationalization, reuse, and the ability to compose applications from individual services are absolutely features of this style. The catch is the cost involved in achieving them.

A naive reading of Nike’s strategy would imply that breaking everything up “auto-magically” yields reuse and agility. Without an intentional design, this is very unlikely. Cohesion of the individual services, rather than their size, is the important factor in achieving those goals. As Stefan Tilkov notes in “How Small Should Your Microservice Be?”:

In other words, I think it’s not a goal to make your services as small as possible. Doing so would mean you view the separation into individual, stand-alone services as your only structuring mechanism, while it should be only one of many.

Redundancy and resilience are likewise issues that need careful consideration. The quote from the Nike article might lead you to believe that resilience and redundancy are a by-product of deploying microservices. Far from it. Resilience and distribution are orthogonal concepts; in fact, breaking up a monolith can have a negative impact on resilience if resilience is not specifically accounted for in the design. Coupling, in all its various forms, reduces resilience. Jeppe Cramon, in “SOA: synchronous communication, data ownership and coupling”, has shown that distribution, in and of itself, does not eliminate coupling. This means that “Should one micro service fail the other ones will work in the app” may prove false if the service that fails is coupled with and depended on by other services. Decoupling is unlikely to happen accidentally. Likewise, redundant instances of the same service will do little good if a resource shared by those instances (e.g. the data store) is down.
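To make that concrete, here’s a minimal sketch (Python, against a hypothetical recommendations endpoint; the URL, timeout, and payload are all illustrative) of the kind of deliberate design resilience requires. Without the timeout and the fallback branch, the dependency’s failure becomes the caller’s failure:

```python
import requests  # assumes simple HTTP between services

# Hypothetical endpoint; nothing here comes from the Nike article.
RECOMMENDER_URL = "http://recommendations.internal/api/v1/suggestions"

def product_page(product_id: str) -> dict:
    """Build a product page that degrades gracefully if a dependency fails."""
    page = {"product": product_id}
    try:
        # A bounded timeout keeps a slow dependency from stalling the caller.
        resp = requests.get(RECOMMENDER_URL, params={"id": product_id}, timeout=0.5)
        resp.raise_for_status()
        page["recommendations"] = resp.json()
    except requests.RequestException:
        # Designed-in fallback: the page still renders without suggestions.
        # Without this branch, the "independent" service fails its callers too.
        page["recommendations"] = []
    return page
```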

Even where a full-blown microservice architecture is inappropriate, many of the principles behind the style are useful. Swimming with the tide of Conway’s Law, rather than against it, is more likely to yield successful application architectures and enterprise IT architectures. The coherence that makes it successful is a product of design, however, and not serendipity. Microservices are most definitely not snake oil. Selling the style as if it were, however, is a really bad idea.

More on Microservices – Boundaries, Governance, Reuse & Complexity

[Image: The Gerry-Mander]

One of the things I love most about blogging (and never get enough of) is the feedback. My previous post, “Carving it up – Microservices, Monoliths, & Conway’s Law”, generated several comments/discussions that, taken together, warranted a follow-up post.

One such discussion was a Twitter exchange with Ruth Malan and Jeff Sussna regarding governance. Jeff regarded the concept of decentralized governance as “controversial”, although he saw centralized governance as “problematic” and as “rigidity”. Ruth observed “Or at least an axis of tradeoffs and (organizational) design…? (Even “federated” has wide interpretation)”. I defended the need for some centralized governance, but with restraint, noting “governing deeper than necessary will cause problems, e.g. problem w/ BDUF (IMO) is the B, Not UF”.

When applications span multiple teams, all deploying independently, some centralized governance will be necessary. The key is to set rules of the road that allow teams to work independently without interfering with each other, not to attempt to micro-manage. This need for orchestration extends beyond the technical. As a comment by Tom Cagley on my previous post pointed out: “Your argument can also be leveraged when considering process improvements across team and organizational boundaries”. Conflicting process models between teams could easily complicate distributed solutions.

Another comment on the “Carving it up – Microservices, Monoliths, & Conway’s Law” post, this time from Robert Tanenbaum, discussed reuse and questioned whether libraries might be a better choice in some cases. Robert also observed: “Agile principles tell us to implement the minimal architecture that satisfies the need, and only go to a more complex architecture when the need outgrows the minimal implementation”. I view microservice and SOA architectures more as a partitioning mechanism than as a tool for reuse. Because reuse carries more costs and burdens than most give it credit for, I tend to be less enthusiastic about that aspect. I definitely agree, however, that distributed architectures may be better suited to a later step in the evolutionary process than as a starting point. A quote from “Microservices: Decomposing Applications for Deployability and Scalability” by Chris Richardson puts it nicely (emphasis is mine):

Another challenge with using the microservice architecture is deciding at what point during the lifecycle of the application you should use this architecture. When developing the first version of an application, you often do not have the problems that this architecture solves. Moreover, using an elaborate, distributed architecture will slow down development.

Michael Brunton-Spall’s “What Are Microservices and Why Are They Important” provides more context on the nature of the trade-offs involved with this architectural style:

Microservices trade complicated monoliths for complex interacting systems of simple services.

A large monolith might be so complicated that it is hard to reason about certain behaviours, but it is technically possible to reason about it. Simple services interacting with each other will exhibit emergent behaviours, and these can be impossible to fully reason about.

Cascading failure is significantly more of a problem with complex systems, where failures in one simple system cause failures in the other systems around them. Luckily there are patterns, such as back-pressure, dead-man’s switch and more that can help you mitigate this, but you need to consider this.

Finally, microservices can become the solution through which you see the problem, even when the problem changes. So once built, if your requirements change such that your bounded contexts adjust, the correct solution might be to throw away two current services and create three to replace them, but it’s often easier to simply attempt to modify one or both. This means microservices can be easy to change in the small, but require more oversight to change in the large.
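Back-pressure, one of the mitigating patterns the quote mentions, can be as simple as a bounded buffer between producer and consumer. A minimal in-process sketch (the queue size and timings are illustrative stand-ins for real service boundaries):

```python
import queue
import threading
import time

# A bounded queue is one simple form of back-pressure: when the consumer
# falls behind, producers are slowed or refused instead of piling up
# unbounded work that eventually cascades into failure.
work = queue.Queue(maxsize=100)

def produce(item) -> bool:
    try:
        work.put(item, timeout=0.1)  # push back on the caller when full
        return True
    except queue.Full:
        return False  # caller must retry later or shed the load

def consume():
    while True:
        item = work.get()
        time.sleep(0.01)  # stand-in for real processing of `item`
        work.task_done()

threading.Thread(target=consume, daemon=True).start()
```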

The difference between “complicated” and “complex” in the quote is particularly important. The emergent behaviors mentioned mean that the actions of complex systems will only be completely explainable in retrospect. Distributed applications, then, cannot be treated as monoliths with space between the parts, but must be designed according to their nature. This is a dominant theme for Jeppe Cramon, whose ongoing work around microservices on his TigerTeam site is well worth the read.

Jeppe posted a pair of comments on a LinkedIn discussion of my post. In that discussion, Jeppe pointed out that smaller services focused on business capabilities were preferable to monoliths, but that is not the same as connecting several monoliths with web services. The main focus, however, was on the nature of data “owned” by the services. I touched on that very briefly in my previous post, mentioning that some notion of authoritativeness was needed (central governance again). Jeppe concurred, noting that monoliths lead to a multi-master data architecture in which multiple systems contain redundant, potentially conflicting, data on the same entity.

The bottom line? In my opinion, this architectural style has tremendous potential to enhance an enterprise’s operations, provided it’s not over-sold (“SOA cures baldness, irritable bowel syndrome, and promotes world peace…for free!!!”). Services are not a secret sauce to effortlessly transform legacy systems into the technology of the twenty-second century. There are caveats, trade-offs, and costs. A pragmatic approach that takes those into account along with the potential benefits should be much more likely to succeed than yesterday’s “build it and they will come” philosophy.

Canonical Data Models, ESBs, and a Reuse Trap

[Image: The original uphill battle]

In “Reduce, Reuse, Recycle”, I discussed how reuse introduces additional complexity and cost as the scope expands (application to family of applications to extra-organization) and in “Coping with change using the Canonical Data Model”, I illustrated how that technique could be used to manage API changes without breaking existing clients. Last week on LinkedIn, an interesting question was posted that combined both reuse and the Canonical Data Model pattern – how do you create a service-oriented architecture around a canonical data model when the teams for the component applications are unaware of the benefits of reuse?

Initially, the benefit seems to be self-evident. If you have ten applications publishing messages to each other, this can involve up to 90 mappings per service method [n(n – 1)], 180 if the services are request-response. If, however, all ten applications use the same set of messages, you have one format per call (two for request-response). Things get sticky when you realize that you must now coordinate ten development teams to code an interface to your service bus (we’re not even considering the case of a third-party product). Assuming you manage to achieve that feat, things get stickier still when the inevitable change is needed to the canonical format. Do you violate the one true message layout rule or do you try to get ten application teams to deploy an update simultaneously?
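The arithmetic is easy to verify. A quick sketch (the function names are mine) comparing point-to-point mappings with the canonical approach discussed below:

```python
def point_to_point_mappings(n: int, request_response: bool = False) -> int:
    """Every application maps directly to every other: n(n - 1) mappings."""
    mappings = n * (n - 1)
    return mappings * 2 if request_response else mappings

def canonical_mappings(n: int, request_response: bool = False) -> int:
    """Each application maps only to and from the canonical format: 2n."""
    mappings = 2 * n
    return mappings * 2 if request_response else mappings

assert point_to_point_mappings(10) == 90                          # one-way
assert point_to_point_mappings(10, request_response=True) == 180
assert canonical_mappings(10) == 20
assert canonical_mappings(10, request_response=True) == 40
```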

Using the canonical data model pattern inside the service bus (the incoming native message is transformed to the canonical format on the way in and then to the appropriate outgoing native format on the way out) allows you to vary the external formats while reducing the number of mappings to a maximum of 20 [2n] (40 if using request-response). Any logic inside the bus (e.g. orchestrations) retains the benefit of operating on the canonical format without the constraint of having all clients constantly synchronized. Obviously, nothing prevents multiple applications from using the same format, so this method can reap the benefits of reuse without constraining change.
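A minimal sketch of that pattern (the registry, the applications, and the field names are all illustrative): each application contributes exactly two mappings, and the bus composes them.

```python
# The canonical data model pattern inside a service bus: each application
# registers two mappings (to and from the canonical format) and the bus
# composes them. All names here are illustrative.

canonical_in = {}   # app name -> function: native message -> canonical
canonical_out = {}  # app name -> function: canonical -> native message

def register(app, to_canonical, from_canonical):
    canonical_in[app] = to_canonical
    canonical_out[app] = from_canonical

def route(message, source, target):
    """Transform a source-native message to canonical, then to target-native."""
    canonical = canonical_in[source](message)
    # Bus-internal logic (orchestration, auditing) operates on `canonical`
    # here, independent of any application's native format.
    return canonical_out[target](canonical)

# Two applications with different field names for the same entity:
register("crm",     lambda m: {"customer_id": m["CustId"]},
                    lambda c: {"CustId": c["customer_id"]})
register("billing", lambda m: {"customer_id": m["customerRef"]},
                    lambda c: {"customerRef": c["customer_id"]})

assert route({"CustId": 42}, source="crm", target="billing") == {"customerRef": 42}
```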

A successful service-oriented architecture will need to balance two potentially contradictory forces: stable service contracts for existing consumers and the ability to quickly accommodate new consumers. Using a canonical data model internally while allowing for a variety of external formats can allow you to meet both objectives with minimal complexity. Attempting to enforce a canonical model externally pushes changes onto all consumer applications regardless of need and will slow the pace of change.

Reduce, Reuse, Recycle


Reuse is one of those concepts that periodically rears up to sing its seductive siren song. Like the song of legend, it is exceedingly attractive, whether in the form of object-orientation, design patterns, or services. Unfortunately, it also shares that song’s quality of tempting the unwary onto the rocks to have their hopes (if not their ships) dashed.

The idea of lowering costs via writing something once, the “right way”, then reusing it everywhere, is a powerful one. The simplicity inherent in it is breathtaking. We even have a saying that illustrates the wisdom of reuse – “don’t reinvent the wheel”. And yet, as James Muren pointed out in a discussion on LinkedIn, we do just that every day. The wheels on rollerblades differ greatly from those on a bus. Each reuses the concept, yet it would be ludicrous to suggest that either could make do with the other’s implementation of that concept. This is not to say that reusable implementations (i.e. code reuse) are not possible, only that they are more tightly constrained than we might imagine at first thought.
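In code terms, the wheel example suggests that the reusable artifact is the concept, expressed as an interface, rather than any one implementation. A minimal sketch (the classes and measurements are illustrative):

```python
import math
from abc import ABC, abstractmethod

class Wheel(ABC):
    """The reusable artifact is the contract, not any implementation."""
    @abstractmethod
    def radius_m(self) -> float:
        ...

class RollerbladeWheel(Wheel):
    def radius_m(self) -> float:
        return 0.04  # tuned to its own context

class BusWheel(Wheel):
    def radius_m(self) -> float:
        return 0.5

def circumference(wheel: Wheel) -> float:
    # Code written against the concept works with either implementation,
    # even though swapping the implementations themselves would be absurd.
    return 2 * math.pi * wheel.radius_m()

print(circumference(RollerbladeWheel()), circumference(BusWheel()))
```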

Working within a given system, code reuse is merely the Don’t Repeat Yourself (DRY) principle in action. The use cases for the shared code are known. Breaking changes can be made with relatively limited consequences given that the clients are under the control of the same team as the shared component(s). Once components move outside of the team, much more in the way of planning and control is necessary and agility becomes more and more constrained.

Reusable code needs to possess a certain level of flexibility in order to be broadly useful: the more widely shared, the more flexible it must be. By the same token, the more widely used the code is, the more stability is required of the interface so as to maintain compatibility across versions. The price of flexibility is technical complexity. The price of stability is overhead and governance: administrative complexity. This administrative complexity affects not only the developing team but also the consuming ones, in the form of another dependency to manage.
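One common way to pay that stability bill is to evolve the shared interface additively, so that new flexibility never breaks existing callers. A sketch (the pricing routine and its rules are purely illustrative):

```python
# Evolving a shared interface additively: new capability arrives as optional
# parameters with compatibility-preserving defaults, so callers written
# against the original signature keep working unchanged.

def calculate_price(items, currency="USD", discount_code=None):
    """v2 of a shared pricing routine; v1 took only `items`."""
    total = sum(item["unit_price"] * item["qty"] for item in items)
    if discount_code == "LAUNCH10":  # illustrative discount rule
        total *= 0.9
    return {"currency": currency, "total": round(total, 2)}

# A caller written against v1 is untouched by the v2 additions:
assert calculate_price([{"unit_price": 5.0, "qty": 2}])["total"] == 10.0
```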

Last week, Tony DaSilva published a collection of quotes about code reuse from various big names (Steve McConnell, Larry Constantine, etc.), all of which stated the need for governance, planning, and control in order to achieve reuse. In the post, he noted: “Planned? Top-down? Upfront? In this age of ‘agile’, these words border on blasphemy.” If blasphemy, it’s blasphemy with distinguished credentials.

In a blog post (the subject of the LinkedIn discussion I mentioned above) named “The Misuse of Reuse”, Roger Sessions touches on many of the problems noted above. Additionally, he notes security issues, infrastructure overhead, and the potential for a single point of failure that can come from poorly planned reuse. His most important point, however, is this (emphasis mine):

Complexity trumps reuse. Reuse is not our goal, it is a possible path to our goal. And more often than not, it isn’t even a path, it is a distraction. Our real goal is not more reusable IT systems, it is simpler IT systems. Simpler systems are cheaper to build, easier to maintain, more secure, and more reliable. That is something you can bank on. Unlike reuse.

While I disagree that simplicity is our goal (value, in my opinion, is the goal; simplicity is just another tool to achieve that value), the highlighted portion is key. Reuse is not an end in itself, merely a technique. Where the technique does not achieve the goal, it should not be used. Rather than naively assuming that code reuse always lowers costs, it must be evaluated taking the costs and risks noted above into account. Reuse should only be pursued where the actual costs are outweighed by the benefits.

Following this to its logical conclusion, two categories emerge as best candidates for code reuse:

  • Components with a static feature set that are relatively generic (e.g. Java/.Net Framework classes, 3rd party UI controls)
  • Complex, uniform and specific processes, particularly where redundant implementations could be harmful (e.g. pricing services, application integrations)

It’s not an accident that the two examples given for generic components are commercially developed code intended for a wide audience. Designing and developing these types of components is more typical of a software vendor than an in-house development team. Corporate development teams would tend to have better results (subject to a context-specific evaluation) with the second category.

Code reuse, however, is not the only type of reuse available. Participants in the LinkedIn discussion above identified design patterns, models, business rules, requirements, processes and standards as potentially reusable artifacts. Remy Fannader has written extensively about the use of models as reusable artifacts. Two of his posts in particular, “The Cases for Reuse” and “The Economics of Reuse”, provide valuable insight into reuse of models and model elements as well as knowledge reuse across different architectural layers. As the example of the wheel points out, reuse of higher levels of abstraction may be more feasible.

Reuse of a concept as opposed to an implementation may allow you to avoid technical complexity. It definitely allows you to avoid administrative complexity. In an environment where a component’s signature is in flux, it makes little sense to try to reuse a concrete implementation. In this circumstance, DRY at the organizational level may be less of a virtue, in that it will impede multiple teams’ ability to respond to change.

Reuse at a higher level of abstraction also allows for recycling instead of reuse. Breaking the concept into parts and transforming its implementation to fit new or different contexts may well yield better results than attempting to make one size fit all.

It would be a mistake to assume that reuse is either unattainable or completely without merit. The key question is whether the technique yields the value desired. As with any other architecturally significant decision, the most important question to ask yourself is “why”.