Microservice Principles, Technical Debt, and Legacy Systems

Is there a circumstance where the answer to Architect Clippy‘s question is “yes”? In “Microservice Architectures aren’t for Everyone” I used this tweet to underscore the observation that a team that can’t produce a well-modularized monolith is unlikely to be helped by trying to distribute the problem. On the other hand, a team (or teams) tasked with rehabilitating a “Big Ball of Mud” might well find some value in the principles behind microservice architectures.

Some of the relevant principles are cohesion and replaceability. As Dan North noted in “Microservices: software that fits in your head”:

One way to manage the mess is to maximise the likelihood that everyone knows what’s going on in the codebase. This requires two things: consistency and replaceability. Consistency implies you can make reasonable assumptions about unfamiliar parts of the application. Replaceability means you can kill code easily and replace it with something better.

Without achieving separation of concerns, any architectural refactoring effort will be an exercise in chasing fires across the codebase. A divide and conquer strategy that applies the single responsibility principle at a macro level will be more likely to facilitate identification and remediation of lower-level technical debt. Monoliths can benefit from being carved up, not because small is inherently better, but because they reach a point where independence of their components becomes beneficial, even crucial. Components that share fewer dependencies (such as a shared data store) and have independent release cycles offer a great deal of flexibility in structuring an application and the team(s) that develop it.
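As a rough sketch only (the component and method names below are hypothetical, not taken from any of the posts cited), macro-level single responsibility might look like this: billing and shipping each define the narrow interface they need and own their own storage, so neither depends on a shared data store or on the other's release schedule.

    // --- billing/BillingAccounts.java --- a component owned and released by one team
    package com.example.billing;

    import java.math.BigDecimal;

    public interface BillingAccounts {
        // Only the data billing actually needs, backed by billing's own store.
        BigDecimal outstandingBalance(String customerId);
    }

    // --- shipping/DeliveryAddresses.java --- a separate component with its own store
    package com.example.shipping;

    public interface DeliveryAddresses {
        // Shipping's view of a customer; no shared "Customer" table or class.
        String currentAddress(String customerId);
    }

Either component can change how it stores its data, or ship on its own schedule, without touching the other.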

In “Microservices allow for localized tech debt”, Jim Plush stated: “It’s much easier mentally to tackle $10,000 of debt across 4 credit cards at $2500 each than 1 card at the full $10,000.” Even more to the point, it’s much easier to tackle that debt when you split it with three other people (teams) each working independently.

Re-writes have a well-deserved bad reputation. Shared platforms and shared data stores will often mean that the transition from the legacy system to the re-written one will be a high-risk “big bang” affair. As Edmond Lau observed in “How to Avoid One of the Costliest Mistakes in Software Engineering”, you want to “…get as quickly as possible to a state where you’re again making incremental improvements”. Getting to this state may well happen quicker when the parts are separated.


Organizing an Application – Layering, Slicing, or Dicing?


How did you choose the architecture for your last greenfield product?

Perhaps a better question is: did you consciously choose the architecture of your last greenfield product?

When I tweeted an announcement of my previous post, “Accidental Architecture”, I received a couple of replies from Simon Brown:

While I tend to use layers in my designs, Simon’s observation is definitely on point. Purposely using a particular architectural style is a far different thing from using that style “just because”. Understanding the design considerations that drove a particular set of architectural choices can be extremely useful in making sure the design is worked within rather than around. This understanding is, of course, impossible if there was no consideration involved.

Structure is fundamental to the architecture of an application, existing as both logical relationships between modules (typically classes) and the packaging of those modules into executable components. Packaging provides a stronger constraint on relationships. It’s easier to break a convention that classes in namespace A don’t use those in namespace C except via classes in namespace B when they’re all in the same package; packaged separately, constraints on the visibility of classes can provide an extra layer of enforcement. Packaging also affects deployment, dependency management, and other capabilities. For developers, the package structure of an application is probably the most visible aspect of the application’s architecture.
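As one hedged illustration (the module and package names are hypothetical), Java 9 modules can turn that convention into something the compiler enforces: the data access module exports its API only to the business module, so UI code cannot reach it directly.

    // --- module-info.java for the data access component (roughly "namespace C") ---
    module com.example.data {
        // Qualified export: only the business module may read this package.
        exports com.example.data.api to com.example.business;
    }

    // --- module-info.java for the business component (roughly "namespace B") ---
    module com.example.business {
        requires com.example.data;
        exports com.example.business.api; // unqualified: visible to the UI and everything else
    }

A UI module that tries to use com.example.data.api fails at compile time rather than at code review. In the .NET world, internal types split across separate assemblies play a similar role.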

Various partitioning strategies exist. Many choose one that reflects a horizontally layered design, with one or more packages per layer. Some, such as Jimmy Bogard, prefer a single package strategy. Simon Brown, however, has been advocating a vertically sliced style that he calls “architecturally evident”, where the packaging reflects a bounded context and the layering is internal. Johannes Brodwall is another prominent proponent of this partitioning strategy.
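A minimal sketch of the vertically sliced style (hypothetical names; Simon Brown’s own examples are the authoritative treatment): the slice’s package exposes a single public entry point, and the layering stays package-private inside it.

    // --- orders/OrdersComponent.java --- the only public type in the slice
    package com.example.app.orders;

    public interface OrdersComponent {
        void placeOrder(String sku, int quantity);
    }

    // --- orders/OrdersComponentImpl.java --- package-private, invisible outside the slice
    package com.example.app.orders;

    class OrdersComponentImpl implements OrdersComponent {
        private final OrderRepository repository = new OrderRepository();

        public void placeOrder(String sku, int quantity) {
            repository.save(sku, quantity);
        }
    }

    // --- orders/OrderRepository.java --- data access kept internal to the slice
    package com.example.app.orders;

    class OrderRepository {
        void save(String sku, int quantity) {
            // persistence details hidden behind the slice boundary
        }
    }

A controller in a web package can depend on OrdersComponent, but a reference to OrderRepository from outside the slice won’t compile.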

The all-in-one strategy is, in my opinion, a limited one. Because so many of the products I’ve worked on have evolved to include service interfaces (often as separate sites), I favor keeping the back-end separate from the front-end as a rule. While a single package application could be refactored to tease it apart when necessary, architectural refactoring can be a much more involved process than code refactoring. With only soft limits in both the horizontal and vertical dimensions, it’s likely that the overall design will become muddled as the application grows. Both the horizontal and the vertical partitioning strategies allow for greater control over component relationships.

Determining the optimal partitioning strategy will involve understanding what goals are most important and what scenarios are most likely. Horizontal partitions promote adherence to logical layering rules (e.g. the UI does not make use of data access except via the business layer) and can be used for tiered deployments where the front-end and back-end are on different machines. Horizontal layers are also useful for dynamically configuring dependencies (e.g. swappable persistence layers). Vertical partitions promote semantic cohesion by providing hard contextual boundaries. Vertical partitioning enables easier architectural refactoring from a monolithic structure to a distributed one built around microservices.
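For the swappable-persistence case, a sketch under the same caveats (hypothetical names): the business layer depends only on an abstraction, and which concrete store backs it becomes a configuration decision rather than a code change.

    // --- domain/CustomerStore.java --- the business layer sees only this abstraction
    package com.example.app.domain;

    public interface CustomerStore {
        String findName(String customerId);
    }

    // --- persistence/JdbcCustomerStore.java --- one interchangeable implementation
    package com.example.app.persistence;

    import com.example.app.domain.CustomerStore;

    public class JdbcCustomerStore implements CustomerStore {
        public String findName(String customerId) {
            return null; // placeholder for a lookup against the relational store
        }
    }

    // --- persistence/InMemoryCustomerStore.java --- a drop-in alternative for tests
    package com.example.app.persistence;

    import com.example.app.domain.CustomerStore;
    import java.util.HashMap;
    import java.util.Map;

    public class InMemoryCustomerStore implements CustomerStore {
        private final Map<String, String> names = new HashMap<>();

        public String findName(String customerId) {
            return names.get(customerId);
        }
    }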

Another option would be to combine layering with slicing – dicing, if you will. This technique would allow you to combine the benefits of both approaches, albeit with greater complexity. There is also the danger of weakening contextual cohesion when layering a vertical slice. The common caution (at least in the .Net world) about harming performance seems to be more of an issue during coding and builds than at run time.
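One way the diced layout might look (again, hypothetical packages, shown only to make the idea concrete): the vertical slice keeps its layers as sub-packages, so both the contextual boundary and the layering rules are visible in the structure.

    // --- orders/api/OrdersFacade.java --- the slice's public face
    package com.example.app.orders.api;

    public interface OrdersFacade {
        void placeOrder(String sku, int quantity);
    }

    // --- orders/domain/PlaceOrder.java --- business rules for this context only
    package com.example.app.orders.domain;

    public class PlaceOrder {
        // validation and pricing rules live here, inside the slice
    }

    // --- orders/persistence/OrderRecords.java --- storage details for this context only
    package com.example.app.orders.persistence;

    public class OrderRecords {
        // mapping to the orders schema; no other slice touches these tables
    }

The extra sub-packages are also where the added complexity (and the coding and build-time cost noted above) tends to show up.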

As with most architectural decisions, the answer to which partitioning scheme is best is “it depends”. Knowing what you want and the priority of those wants in relation to each other is vitally important. You won’t be able to answer “The Most Important Question” otherwise.