Microservices vs SOA – Is there any real difference?

Microservice architecture has been the hot topic for 2014, so I suppose it’s appropriate that it be the subject for what I intend to be my last post until 2015. Last week, Kelly Sommers kicked off an active discussion of the nature of microservices vis-a-vis SOA:

This isn’t really a new observation. Before Lewis and Fowler published their final installment of the post that started everyone talking, Steve Jones had already published “Microservices is SOA, for those who know what SOA is”. Even Adrian Cockcroft, in response to Sommers, noted:

And yet, they’re different. In a post written for Iasa, “Microservices – The Return of SOA?”, I quoted Anne Thomas Manes’ “SOA is Dead; Long Live Services”:

Successful SOA (i.e., application re-architecture) requires disruption to the status quo. SOA is not simply a matter of deploying new technology and building service interfaces to existing applications; it requires redesign of the application portfolio. And it requires a massive shift in the way IT operates. The small select group of organizations that has seen spectacular gains from SOA did so by treating it as an agent of transformation. In each of these success stories, SOA was just one aspect of the transformation effort. And here’s the secret to success: SOA needs to be part of something bigger. If it isn’t, then you need to ask yourself why you’ve been doing it.

As I stated in that post:

Part of the problem may be that both SOA and microservices have aspects that transcend boundaries. Manes’ quote above makes it clear that SOA is concerned with enterprise IT architecture. Microservices are primarily an application architecture pattern.

Microservices can be reused by multiple applications, but need not be. SOA’s emphasis was at a higher level of abstraction. While the two share a great many principles, they apply them at different scales (application and solution architecture versus enterprise IT architecture).

Last week’s Twitter stream did produce some examples of principles that are, if not unique, then at least more heavily emphasized in the microservice style. From Marco Vermeulen:

Eugene Kalenkovich‘s post, “Can I Haz Name?”, captures the essence (in my opinion) of this style: “Independent Scalability, Independent Lifecycle and Independent Data”. It’s not about the lines of code, but about the separation of concerns within the context of the application. These same “independences” are important to SOA, but define a microservice architecture.
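Kalenkovich’s “independences” can be sketched in miniature. In the hypothetical example below (the service and method names are my own, purely for illustration), each service owns its data outright and collaborates with the other only through its public API — the structural precondition for giving each an independent lifecycle and independent scaling:

```python
# Sketch of "independent data": each service owns its store and exposes
# only an API; neither reaches into the other's internal state.

class InventoryService:
    """Owns its own data store; callers never touch _stock directly."""
    def __init__(self):
        self._stock = {"widget": 5}  # private to this service

    def reserve(self, sku, qty):
        """Reserve stock if available; True on success."""
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return True
        return False

class OrderService:
    """Owns order data; depends on InventoryService only via its API."""
    def __init__(self, inventory):
        self._orders = []           # private to this service
        self._inventory = inventory

    def place_order(self, sku, qty):
        """Record an order only if the inventory service can reserve stock."""
        if self._inventory.reserve(sku, qty):
            self._orders.append((sku, qty))
            return True
        return False

inventory = InventoryService()
orders = OrderService(inventory)
print(orders.place_order("widget", 3))  # True: stock reserved via the API
print(orders.place_order("widget", 9))  # False: insufficient stock
```

Because the order service never touches the inventory store, either side could be re-implemented, re-deployed, or scaled independently — which is precisely the point.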

One Weird Trick to Design Perfect Applications (?)

What’s the right way to design an application?

Scanning the web, one might be tempted to believe that all sites must use AngularJS Backbone.js Ember.js some sort of JavaScript framework, all services must be RESTful, and if those services aren’t of the “micro” variety, well…

Is that really the case, though?

Is there a right way, or is it a case of choosing the way that most closely matches our current context? We can say rules are rules, but experience will teach that rules lose meaning when divorced from the context they were formed in response to. There is no universal best practice. Without understanding the “why” behind a rule, you can’t determine whether it applies.

A conversation with John Evdemon captured this principle in regards to the latest application architecture “sensation” (AKA microservices):

A Kindle highlight shared by Tony DaSilva extends this principle to enterprise architecture:

There is no evidence that structure in and of itself affects the profitability of a company; different structures work best for different companies.

Meeting a need involves two architectures: the architecture of the problem and the architecture of the solution. Understanding the architecture of the problem is a necessary prerequisite to designing the architecture of the solution. Without that context providing definition of the desired end, imperfect though it may be, any proposed solution becomes a shot in the dark. Attempting to extend a technology or technique outside its range of utility risks harming its credibility (see SOA).

Thinking isn’t an option, it’s a requirement.

Form Follows Function on SPaMCAST 319

SPaMCAST logo

I’m back with another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

SPaMCast 319 features Tom’s “Why Are Requirements So Hard To Get Right?” segment, followed by Jo Ann Sweeny’s new column, “Explaining Change”. Tom and I close it out with a discussion of one of my previous posts, “Fixing IT – Credible or Cassandra?”.

Who Needs Architects? – Navigating the Fractals

Vasco da Gama

In my last post, “Microservice Principles and Enterprise IT Architecture”, I mentioned how Ruth Malan frequently notes that design is fractal. In other words, “…a natural phenomenon or a mathematical set that exhibits a repeating pattern that displays at every scale”. Software systems can generally be decomposed into sub-systems, components, classes and methods, all the way down into individual lines of code. Concerns that apply to the more granular levels (cohesion/focus, separation of concerns, managing dependencies, etc.) also apply as the level of abstraction increases.

It is important to note that software systems do not exist in a vacuum, but are themselves components of solutions, possibly of a system of systems, and definitely of an enterprise’s IT architecture. Those same concerns that apply to the hierarchy of structures within a given system also apply to the hierarchy of structures to which the system belongs. To quote Ruth again:

Russell Ackoff urged that to design a system, it must be seen in the context of the larger system of which it is part. Any system functions in a larger system (various larger systems, for that matter), and the boundaries of the system — its interaction surfaces and the capabilities it offers — are design negotiations. That is, they entail making decisions with trade-off spaces, with implications and consequences for the system and its containing system of systems. We must architect across the boundaries, not just up to the boundaries. If we don’t, we naively accept some conception of the system boundary and the consequent constraints both on the system and on its containing systems (of systems) will become clear later. But by then much of the cast will have set. Relationships and expectations, dependencies and interdependencies will create inertia. Costs of change will be higher, perhaps too high.

Ruth illustrates this relationship using this diagram:

System Context illustration, Ruth Malan

Couple this macro-illustration with the graphic breakdown of application structure from Savita Pahuja’s “Relation of Agility and Modularity”, and it should be obvious that architecture (in the form of structure and relationships) is present at all levels of abstraction.

System Granularity illustration

This fractal nature confirms what Savita states: “Agility can only be realized when the underlying entity – the organization or the software product, has structural modularity”. In other words, if you can’t break it down, replace or re-configure it, your ability to change is constrained. Moreover, modularity at one level of abstraction can be defeated by a lack of modularity at a higher level of abstraction.
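The link between modularity and the ability to change can be made concrete with a small sketch (the `Notifier` names below are hypothetical, chosen only for illustration): when callers depend on an interface rather than a concrete module, an implementation can be replaced or re-configured without touching the calling code.

```python
# Sketch of structural modularity: callers depend on an abstract
# interface, so implementations can be swapped without changing them.

from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message):
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message):
        return f"sms: {message}"

def alert(notifier: Notifier, message: str) -> str:
    # Depends only on the Notifier interface, not a concrete class;
    # this is the "replace or re-configure" lever modularity provides.
    return notifier.send(message)

print(alert(EmailNotifier(), "disk full"))  # email: disk full
print(alert(SmsNotifier(), "disk full"))    # sms: disk full
```

The caveat from the paragraph above applies here too: if both implementations secretly shared mutable global state, the modularity at this level would be defeated by the coupling at the level above.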

While this shows that it’s not turtles but “architecture all the way down”, what does that say about the need for architectural design?

I’ve long been skeptical of the idea that a coherent design can reliably “emerge” from a disparate group of people doing “the simplest thing that can possibly work”. Darwinian evolution is a story set in blind alleys carpeted in the corpses of failures. With millions of years and an unlimited budget, you might develop something in this manner, but lacking that, you need to cheat (i.e. design intentionally).

Alistair Cockburn has recently posted about just that, mixing intentional design with Test Driven Development (TDD) in pursuit of better results (which he felt he got). Kevin Rutherford’s “TDD for teams” related how Cockburn’s efforts seemed to spur some counter posts to defend TDD from the assault of up-front thinking. It appears that some feel that thinking throughout coding is antithetical to spending anything more than a trivial amount of time on thought before coding. Rutherford notes:

So now we have 5 different design approaches to one toy kata from 4 developers, all in the space of one weekend. (And there are probably more that I haven’t heard about.) Of course, it would have been weird if they had all produced identical designs. We are all different, and that’s definitely a good thing. But I worry that this can cause problems for teams doing TDD.

Rutherford then, in my opinion, captures the essence of the problem:

If a team is to produce software that exhibits internal consistency and is understandable by all of its developers, then somehow these individual differences must be surrendered or homogenized in some way. Somehow the team must create designs — and a design style — that everyone agrees on.

As one travels up the hierarchy, one layer’s internal consistency is predicated on the consistency of the external interfaces of the layer below. Traditionally, that consistency has been the realm of architects. Success or failure in achieving that consistency, in my opinion, is determined by the appropriateness of the influence exercised. An attempt to micromanage via Big Design Up Front (BDUF) will likely fail due to the information deficits inherent in trying to control far beyond the extent of your comprehension. Abstraction exists to allow for broader understanding at lower resolution. By the same token, I would question the logic of relying on luck to maintain consistency between a system and its ecosystem. A balance is necessary to match the system well to the context it will inhabit.
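One lightweight way to exercise that influence without micromanaging is a shared contract check: the architect defines what any implementation of an interface must do, and every implementation proves it. The sketch below is a hypothetical illustration (the repository names are my own, not from any of the posts quoted above):

```python
# Sketch of a contract check: one set of behavioral expectations,
# applied to every implementation of a layer's external interface.

def check_repository_contract(repo):
    """Any repository implementation must satisfy these behaviors."""
    repo.save("k1", "v1")
    assert repo.load("k1") == "v1"       # what was saved can be loaded
    assert repo.load("missing") is None  # unknown keys return None

class DictRepository:
    """One concrete implementation; others could be swapped in."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data.get(key)

check_repository_contract(DictRepository())
print("contract satisfied")
```

The contract constrains only the external interface, leaving each team free in its internals — consistency at the boundary, autonomy within it.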