One of the things I love most about blogging (and never get enough of) is the feedback. My previous post, “Carving it up – Microservices, Monoliths, & Conway’s Law”, generated several comments/discussions that, taken together, warranted a follow-up post.
One such discussion was a Twitter exchange with Ruth Malan and Jeff Sussna regarding governance. Jeff regarded the concept of decentralized governance as “controversial”, although he saw centralized governance as “problematic” and as “rigidity”. Ruth observed, “Or at least an axis of tradeoffs and (organizational) design…? (Even “federated” has wide interpretation)”. I defended the need for some centralized governance, but with restraint, noting “governing deeper than necessary will cause problems, e.g. problem w/ BDUF (IMO) is the B, Not UF”.
When applications span multiple teams, all deploying independently, some centralized governance will be necessary. The key is to set rules of the road that allow teams to work independently without interfering with each other, not to micro-manage. This need for orchestration extends beyond the technical. As Tom Cagley pointed out in a comment on my previous post, “Your argument can also be leveraged when considering process improvements across team and organizational boundaries”. Conflicting process models between teams could easily complicate distributed solutions.
Another comment on the “Carving it up – Microservices, Monoliths, & Conway’s Law” post, this time from Robert Tanenbaum, discussed reuse and questioned whether libraries might be a better choice in some cases. Robert also observed “Agile principles tell us to implement the minimal architecture that satisfies the need, and only go to a more complex architecture when the need outgrows the minimal implementation”. I view microservice and SOA architectures as more of a partitioning mechanism than a tool for reuse. Because reuse carries more costs and burdens than most give it credit for, I tend to be less enthusiastic about that aspect. I definitely agree, however, that distributed architectures may be better suited to a later step in the evolutionary process rather than as a starting point. A quote from “Microservices: Decomposing Applications for Deployability and Scalability” by Chris Richardson puts it nicely (emphasis is mine):
Another challenge with using the microservice architecture is deciding at what point during the lifecycle of the application you should use this architecture. When developing the first version of an application, you often do not have the problems that this architecture solves. Moreover, using an elaborate, distributed architecture will slow down development.
Michael Brunton-Spall’s “What Are Microservices and Why Are They Important” provides more context on the nature of the trade-offs involved with this architectural style:
Microservices trade complicated monoliths for complex interacting systems of simple services.
A large monolith might be so complicated that it is hard to reason about certain behaviours, but it is technically possible to reason about it. Simple services interacting with each other will exhibit emergent behaviours, and these can be impossible to fully reason about.
Cascading failure is significantly more of a problem with complex systems, where failures in one simple system cause failures in the other systems around them. Luckily there are patterns, such as back-pressure, dead-man’s switch and more that can help you mitigate this, but you need to consider this.
Finally, microservices can become the solution through which you see the problem, even when the problem changes. So once built, if your requirements change such that your bounded contexts adjust, the correct solution might be to throw away two current services and create three to replace them, but it’s often easier to simply attempt to modify one or both. This means microservices can be easy to change in the small, but require more oversight to change in the large.
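The back-pressure pattern mentioned above can be sketched briefly. This is a minimal, hypothetical illustration (the queue capacity, task names, and `submit` helper are mine, not from any of the quoted sources): rather than letting requests pile up and propagate a downstream failure upstream, a saturated service sheds load explicitly.

```python
import queue

# A bounded queue standing in for a downstream service's intake.
# The capacity of 3 is purely illustrative.
work = queue.Queue(maxsize=3)

def submit(task):
    """Hand a task to the downstream worker; shed load when saturated."""
    try:
        work.put_nowait(task)
        return "accepted"
    except queue.Full:
        # Back-pressure: the caller learns immediately that the system
        # is saturated and can retry later or degrade gracefully,
        # instead of queuing unboundedly until something falls over.
        return "rejected"

results = [submit(f"task-{i}") for i in range(5)]
print(results)  # first three accepted, remaining two rejected
```

The essential point is that the failure signal travels *backwards* to the producer while the system is still healthy, which is what distinguishes back-pressure from simply timing out after the cascade has begun.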
The difference between “complicated” and “complex” in the quote is particularly important. The emergent behaviors mentioned mean that the actions of complex systems will only be completely explainable in retrospect. Distributed applications therefore cannot be treated as monoliths with space between the parts, but must be designed according to their nature. This is a dominant theme for Jeppe Cramon, whose ongoing work around microservices on his TigerTeam site is well worth the read.
Jeppe posted a pair of comments on a LinkedIn discussion of my post. In that discussion, Jeppe pointed out that smaller services focused on business capabilities were preferable to monoliths, but that is not the same as connecting several monoliths with web services. The main focus, however, was on the nature of data “owned” by the services. I touched on that very briefly in my previous post, mentioning that some notion of authoritativeness was needed (central governance again). Jeppe concurred, noting that monoliths lead to a multi-master data architecture where multiple systems contain redundant, potentially conflicting, data on the same entity.
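The multi-master problem can be made concrete with a toy example (the entity, field names, and systems here are hypothetical, invented only to illustrate the point): when two systems each hold a writable copy of the same entity, an update to one silently diverges from the other.

```python
# Two systems each keep their own copy of the same customer record.
# Neither is authoritative, so both believe they hold the truth.
billing = {"cust-42": {"email": "old@example.com"}}
shipping = {"cust-42": {"email": "old@example.com"}}

# Billing learns of a new email address; shipping is never told.
billing["cust-42"]["email"] = "new@example.com"

# The two "masters" now disagree about the same entity, and nothing
# in either system can say which value is correct.
conflict = billing["cust-42"] != shipping["cust-42"]
print(conflict)
```

Designating a single authoritative owner for each entity, with other services referencing or querying that owner rather than maintaining their own writable copies, is exactly the kind of rule-of-the-road that the centralized governance discussed earlier needs to establish.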
The bottom line? In my opinion, this architectural style has tremendous potential to enhance an enterprise’s operations, provided it’s not over-sold (“SOA cures baldness, irritable bowel syndrome, and promotes world peace…for free!!!”). Services are not a secret sauce to effortlessly transform legacy systems into the technology of the twenty-second century. There are caveats, trade-offs, and costs. A pragmatic approach that takes those into account along with the potential benefits should be much more likely to succeed than yesterday’s “build it and they will come” philosophy.