“Microservices and API Complexity – Inside and Out” on Iasa Global

The signature benefit of a microservice architecture is that its highly granular nature allows for a great deal of flexibility in composing applications. Components are simplified by virtue of a high degree of focus. The ability to replace individual components is enhanced by the modularity inherent in the style.

A very significant drawback to microservice architecture is that its highly granular nature can lead to a great deal of complexity in composing applications. Highly focused components can force service consumers to become more involved in the internals of an interaction than they might otherwise wish. Unwanted options can become more of a source of confusion than useful modularity.

How do you resolve this paradox? See the full post on the Iasa Global site.

“Microservice Mistakes – Complexity as a Service” on Iasa Global

I’m pleased to announce that I’ve been asked to continue as a contributor to the Iasa Global site. I’m planning to post original content there on at least a monthly basis. In the interim, please enjoy a re-post of “Microservice Mistakes – Complexity as a Service”

Microservice Mistakes – Complexity as a Service

Michael Feathers’ tweet about technical empathy packs a lot of wisdom into 140 characters. Lack of technical empathy can lead to a system that is harder to both implement and maintain since no thought was given to simplifying things for the caller. Maintainability is one of those quality of service requirements that appears to be a purely technical consideration right up to the point that it begins significantly affecting development time. If this sounds suspiciously like technical debt to you, then move to the head of the class.

The issue of technical empathy is particularly on point given the popular interest in microservice architectures (MSAs). The granular nature of the MSA style brings many benefits, but it also comes with costs, increased complexity chief among them. Michael Feathers referred to this in a post from last summer titled “Microservices Until Macro Complexity”:

It is going to be interesting to see how this approach scales. Some organizations have a relatively low number of microservices. Others are pushing higher, around the 600 mark. This is a bit beyond the point where people start seeking a bigger picture. If services are often bigger than classes in OO, then the next thing to look for is a level above microservices that provides a more abstract view of an architecture. People are struggling with that right now and it was foreseeable. Along with that concern, we have the general issue of dealing with asynchrony and communication patterns between services. I strongly believe that there is a law of conservation of complexity in software. When we break up big things into small pieces we invariably push the complexity to their interaction.

The last sentence bears repeating: “When we break up big things into small pieces we invariably push the complexity to their interaction”. Breaking a monolith into microservices simplifies each individual component; however, as Eran Hammer observed in “The Fallacy of Tiny Modules”, “…at some point, someone has to put it all together and then, all the complexity is going to surface…”. As Yamen Sader illustrated in his slide deck “Microservices 101 – The Big Why” (slide #26), the structure of the organization, domain, and system will diverge in an MSA. The implication is that for a given domain task, we need to know more about the details of that task (i.e., have less encapsulation of the internals) in a microservice environment.

The further removed the consumers of these services are from the providers, the harder and less convenient it will be to transfer that knowledge. To put this in perspective, consider two fast food restaurants: one operates in the traditional manner, where money is exchanged for burgers; the other is a microservice-style operation, where you must obtain the beef, lettuce, pickles, onion, cheese, and buns separately, after which the cooking service combines them (provided the money is there also). The second operation will likely be in business for a much shorter period of time in spite of the incredible flexibility it offers. Additionally, that flexibility only truly exists when the contracts between two or more providers are swappable. As Ben Morris noted in “Do Microservices create extra challenges for distributed development?”:

Each team will develop its own interpretation of the system and view of the data model. Any service collaboration will have to involve some element of translation or mapping and will be vulnerable to subtle bugs that can arise from semantic differences.

Adopting a canonical model across the platform can help to address this problem by asserting a common vocabulary. Each service is expected to exchange information based on a shared definition of the main entities. However, this requires a level of cross-team governance which is very much at odds with the decentralised nature of microservices.
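To make the burger analogy concrete, here is a minimal sketch in Java. Every service interface and type below is hypothetical, invented purely for illustration; the point is only to show where the orchestration logic ends up living once the provider’s workflow is decomposed:

```java
// Minimal type stubs so the sketch compiles.
record Order() {}
record Payment() {}
record Ingredient(String sku) {}
record Patty() {}
record Burger() {}

// Traditional operation: the consumer expresses intent ("a burger, please")
// and the provider encapsulates the workflow.
interface BurgerShop {
    Burger orderBurger(Order order, Payment payment);
}

// Microservice-style operation: the consumer must know the recipe, the
// sequencing, and the failure modes of every collaborator.
interface InventoryService { Ingredient reserve(String sku); }
interface GrillService     { Patty cook(Ingredient beef); }
interface AssemblyService  { Burger assemble(Patty patty, Ingredient... toppings); }
interface PaymentService   { Payment charge(Order order); }

class BurgerConsumer {
    Burger order(Order order,
                 InventoryService inventory,
                 GrillService grill,
                 AssemblyService assembly,
                 PaymentService payments) {
        // The orchestration -- and any compensating action when a step
        // fails -- now lives in the consumer. The complexity has not
        // disappeared; it has been pushed to the interaction.
        Payment payment = payments.charge(order);
        Ingredient beef    = inventory.reserve("beef");
        Ingredient bun     = inventory.reserve("bun");
        Ingredient lettuce = inventory.reserve("lettuce");
        Patty patty = grill.cook(beef);
        return assembly.assemble(patty, bun, lettuce);
    }
}
```

Every consumer that is this far removed from the providers has to reproduce some version of that orchestration, along with the translation and mapping Ben Morris describes.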

None of this is meant to say that the microservice architecture style is “bad” or “wrong”. It is my opinion, however, that MSA is first and foremost an architectural style for building applications rather than systems of systems. This isn’t an absolute; I could see multiple MSA applications within an organization sharing component microservices. Where those microservices transcend the organizational boundary, however, making use of them becomes more complex. Actions at a higher level of granularity, such as placing an order for some product or service, involve coordinating multiple services in the MSA realm rather than consuming one “chunkier” service. While the principles behind microservice architectures are relevant at higher levels of abstraction, a mismatch in granularity between the concept and the implementation can be very troublesome. Maintainability issues are both technical debt and a source of customer dissatisfaction. External customers are far easier to lose than internal ones.

Hatin’ on Nulls

[Image: Dante’s Inferno; Lucifer, King of Hell]

When I first read Christian Neumanns’ “Why We Should Love ‘null’”, I found myself agreeing with his position. Yes, null references have “…led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage…” per Sir C. A. R. Hoare. Yes, many people heartily dislike null references and will go to great lengths to work around the problem. Finally, yes, these workarounds may be more detrimental than the problem they are intended to solve. While I agreed with the position that null references are a necessary inconvenience (the ill effects are ultimately the result of failure to check for null, not the null condition itself), I didn’t initially see the issue as being particularly “architectural”.

Further on in the article, however, Christian covered why null references, and the various workarounds for them, become architecturally significant. The concept of null, of nothing, is semantically important. A price of zero dollars is not intrinsically the same as a missing price. A date several millennia into the future does not universally convey “unknown” or “to be determined”. Using the null object pattern may eliminate errors due to unchecked references, but it’s far from “safe”. According to Wikipedia, “…a Null Object is very predictable and has no side effects: it does nothing”. That, however, is untrue. A Null Object masks a potential error condition and allows the user to continue on in ignorance. That, in my opinion, is very much doing something.
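To illustrate, here is a minimal sketch in Java, using a hypothetical Price type (mine, not Christian’s), of how a Null Object can quietly turn “no price on file” into a plausible-looking zero:

```java
import java.math.BigDecimal;

interface Price {
    BigDecimal amount();
}

// The Null Object: "predictable" and "side-effect free" on paper.
class NullPrice implements Price {
    public BigDecimal amount() { return BigDecimal.ZERO; }
}

class Catalog {
    // The lookup failed, but instead of null (or an error) the caller
    // gets a well-behaved object that happily answers every question.
    Price priceOf(String sku) {
        return new NullPrice();
    }
}

class Checkout {
    public static void main(String[] args) {
        Price price = new Catalog().priceOf("SKU-42");
        // No NullPointerException -- but the total is silently wrong.
        // "A missing price" has been conflated with "a price of zero
        // dollars", exactly the semantic distinction described above.
        System.out.println("Total: " + price.amount()); // prints "Total: 0"
    }
}
```

The unchecked-null version of this code would at least fail loudly at the point of sale; the Null Object version ships free product.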

A person commenting on Christian’s post stated that “…a crash is the worst kind of experience a user can have”, arguing that masking a null reference error may not be as bad for the user as a crash. There’s a kernel of truth there, but it’s a matter of risk. If an application continues on and the result is a misunderstanding of what’s been done or, worse, corrupted data, how bad is that? If the application in question is a game, there’s little real harm. What if the application in question is dealing with health information? I stand by the position that where there is an error, no news is bad news.

As more and more applications become platforms by being service-enabled, semantic issues gain importance. Versioning strategies can ensure structural compatibility, but semantic issues can still break clients. Coherence and consistency should be considered hallmarks of an API. As Erik Dietrich noted in “Notes on Writing Discoverable Framework Code”, a good API should “make screwing up impossible”. Ambiguity makes screwing up very possible.
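One way to push toward “make screwing up impossible” is to put absence into the contract itself, so that the compiler, rather than the maintainer’s discipline, forces the question. A minimal sketch, again in Java and again with a hypothetical lookup interface of my own invention:

```java
import java.math.BigDecimal;
import java.util.Optional;

// Absence is part of the signature: callers cannot get at an amount
// without first deciding what "no price" means for them.
interface PriceLookup {
    Optional<BigDecimal> priceOf(String sku);
}

class Quote {
    String quoteFor(String sku, PriceLookup catalog) {
        return catalog.priceOf(sku)
                .map(amount -> "Total: " + amount)
                // The missing case is handled explicitly -- it can be
                // neither mistaken for zero nor crash at runtime.
                .orElse("No price on file for " + sku);
    }
}
```

The semantics of “missing” are now visible in the contract instead of lurking behind a reference that may or may not be there.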