Microservices Under the Microscope

Microscope

Buzz and backlash seem to describe the technology circle of life. Something (language, process, platform, etc.; it doesn’t seem to matter) gets noticed, interest increases, then the reaction sets in. We saw this with Service-Oriented Architecture (SOA) in the early 2000s. More was promised than could ever be realistically attained, and eventually the hype collapsed under its own weight (with a little help from the economic downturn). While the term SOA has died out, services, being useful, have remained.

2014 appears to be the year of microservices. While neither the term nor the architectural style itself is new, James Lewis and Martin Fowler’s post from earlier this year appears to have significantly raised the level of interest in it. In response to the enthusiasm, others have pointed out that the microservices architectural style, like any technique, involves trade-offs. Michael Feathers pointed out in “Microservices Until Macro Complexity”:

If services are often bigger than classes in OO, then the next thing to look for is a level above microservices that provides a more abstract view of an architecture. People are struggling with that right now and it was foreseeable. Along with that concern, we have the general issue of dealing with asynchrony and communication patterns between services. I strongly believe that there is a law of conservation of complexity in software. When we break up big things into small pieces we invariably push the complexity to their interaction.

Robert “Uncle Bob” Martin has recently been a prominent voice questioning the silver bullet status of microservices. In “Microservices and Jars”, he pointed out that applications can achieve separation of concerns via componentization (using jars/Gems/DLLs depending on the platform) without incurring the overhead of over-the-wire communication. According to Uncle Bob, by using a plugin scheme, components can be as independently deployable as a microservice.

Giorgio Sironi responded with the post “Microservices are not Jars”. In it, Giorgio pointed out that independent deployment is only part of the equation; independent scalability is possible via microservices but not via plugins. Giorgio questioned the safety of swapping out libraries, but I can vouch for the fact that plugins can be hot-swapped at runtime. One important point made was in regard to this quote from Uncle Bob’s post:

If I want two jars to get into a rapid chat with each other, I can. But I don’t dare do that with a MS because the communication time will kill me.

Sironi’s response:

Of course, chatty fine-grained interfaces are not a microservices trait. I prefer accept a Command, emit Events as an integration style. After all, microservices can become dangerous if integrated with purely synchronous calls so the kind of interfaces they expose to each other is necessarily different from the one of objects that work in the same process. This is a property of every distributed system, as we know from 1996.

Remember this for later.

Uncle Bob’s follow-up post, “Clean Micro-service Architecture”, concentrated on scalability. It made the point that microservices are not the only method for scaling an application (true), and stated that “the deployment model is a detail” and “details are never part of an architecture” (not true, at least in my opinion and that of others).

While Uncle Bob may consider the idea of designing for distribution to be “BDUF Baloney”, that’s wrong. That’s not only wrong, but he knows it’s wrong – see his quote above re: “a rapid chat”. In the paper that’s referenced in the Sironi quote above, Waldo et al. put it this way:

We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space. These differences are required because distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure.

You can design a system with components that can run in the same process, across multiple processes, and across multiple machines. To do so, however, you must design them as if they were going to be distributed from the start. If you begin chatty, you will find yourself jumping through hoops to adapt to a coarse-grained interface later. If you start with the assumption of synchronous and/or reliable communications, you may well find a lot of obstacles when you need to change to a model that lacks one or both of those qualities. I’ve seen systems that work reasonably well on a single machine (excluding the database server) fall over when someone attempts to load balance them because of a failure to take scaling into account. Things like invalidating and refreshing caches as well as event publication become much more complex starting with node number two if a “simplest thing that can work” approach is taken.
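The cost of chattiness is easy to sketch. The example below is mine, not from any of the posts (names are hypothetical, and Python stands in for whatever platform you prefer): both interfaces expose the same data, but the chatty one pays a round trip per attribute while the coarse-grained one pays one per request, no matter how much latency is added later.

```python
# Hypothetical sketch: chatty vs. coarse-grained interfaces.
# Every method call on the service counts as one network round trip.

class CustomerService:
    def __init__(self):
        self.round_trips = 0
        self._data = {"name": "Ada", "email": "ada@example.com", "status": "active"}

    # Chatty: one remote call per attribute.
    def get_name(self): self.round_trips += 1; return self._data["name"]
    def get_email(self): self.round_trips += 1; return self._data["email"]
    def get_status(self): self.round_trips += 1; return self._data["status"]

    # Coarse-grained: one call returns everything the client needs.
    def get_summary(self):
        self.round_trips += 1
        return dict(self._data)

chatty_svc = CustomerService()
chatty = (chatty_svc.get_name(), chatty_svc.get_email(), chatty_svc.get_status())
print(chatty_svc.round_trips)  # 3

coarse_svc = CustomerService()
summary = coarse_svc.get_summary()
print(coarse_svc.round_trips)  # 1
```

In-process, the difference is invisible; add even 50ms of latency per round trip and the chatty design pays threefold for the same answer, which is why it must be designed away from the start rather than patched in later.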

Distributed applications in general and microservice architectures in particular are not universal solutions. There are costs as well as benefits to every architectural style and sometimes having everything in-process is the right answer for a given point in time. On the other hand, you can’t expect to scale easily if you haven’t taken scalability into consideration previously.


Service Versioning Illustrated

Hogarth painting the muse

My last post, “No Structure Services”, generated some discussion on LinkedIn regarding service versioning and how an application’s architecture can enable exposing services that are well-defined, stable, internally simple and DRY. I’ve discussed these topics in the past: “Strict Versioning for Services – Applying the Open/Closed Principle” detailed the versioning process I use to design and maintain services that can evolve while maintaining backwards compatibility and “On the plane or in the plane?” covered how I decouple the service from the underlying implementation. Based on the discussion, I decided that some visuals would probably provide some additional clarity to the subject.

Note: The diagrams below are meant to simplify understanding of these two concepts (versioning and the structure to support it), not to be a 100% faithful representation of an application. If you look at them as a blueprint, rather than a conceptual outline, you’ll find a couple of SRP violations, etc. Please ignore the nits and focus on the main ideas.

Internal API Diagram

In the beginning, there was an application consisting of a class named Greeter, which had the job of constructing a greeting for some named person from another person. A user interface was created to collect the necessary information from the end-user, pass it to Greeter and display the results. The input to the Greet method is an object of type GreetRequest, which has members identifying the sender and recipient. Greeter.Greet() returns a GreetResponse, the sole member of which is a string containing the Greeting.
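A minimal sketch of that internal API (the class and member names follow the post; the implementation details are my guess, with Python standing in for whatever platform you prefer):

```python
from dataclasses import dataclass

@dataclass
class GreetRequest:
    recipient: str  # the named person being greeted
    sender: str     # the person the greeting is from

@dataclass
class GreetResponse:
    greeting: str   # the constructed greeting

class Greeter:
    def greet(self, request: GreetRequest) -> GreetResponse:
        return GreetResponse(
            greeting=f"Hello {request.recipient}, from your friend {request.sender}"
        )

print(Greeter().greet(GreetRequest("William", "Gene")).greeting)
# Hello William, from your friend Gene
```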

And it was good (actually, it was Hello World which until recently was just a cheesy little sample program but is now worth boatloads of cash – should you find yourself using these diagrams to pitch something to a VC, sending me a cut would probably be good karma ;-) ).

At some point, the decision was made to make the core functionality available to external applications (where external is defined as client applications built and deployed separately from the component in question, regardless of whether the team responsible for the client is internal or external to the organization). If the internal API were exposed directly, the ability to change Greeter, GreetRequest and GreetResponse would be severely constrained. Evolving that functionality could easily lead to non-DRY code if backwards compatibility is a concern.

Note: Backwards compatibility is always a concern unless you’re ditching the existing client. The only other option is synchronized development and deployment, which is slightly more painful than trimming your fingernails with a chainsaw – definitely not recommended.

The alternative is to create a facade/adapter class (GreetingsService) along with complementary message classes (GreetingRequest and GreetingResponse) that can serve as the published interface. The GreetingsService exists to receive the GreetingRequest, manage its transformation to a GreetRequest, delegate to Greeter and manage the transformation of the GreetResponse into a GreetingResponse which is returned to the caller (this is an example of the SRP problem I mentioned above; in actual practice, some of those tasks would be handled by other classes external to GreetingsService – an example can be found here).
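In code, the facade and its published messages might look something like this (a sketch only; the internal API bodies are repeated here so the example stands alone, and, as noted above, in practice the message transformation would be handled by separate classes):

```python
from dataclasses import dataclass

# Internal API (names from the post, implementation is my guess).
@dataclass
class GreetRequest:
    recipient: str
    sender: str

@dataclass
class GreetResponse:
    greeting: str

class Greeter:
    def greet(self, request: GreetRequest) -> GreetResponse:
        return GreetResponse(f"Hello {request.recipient}, from your friend {request.sender}")

# Published message types: the external contract, free to evolve separately.
@dataclass
class GreetingRequest:
    recipient: str
    sender: str

@dataclass
class GreetingResponse:
    greeting: str

class GreetingsService:
    """Facade/adapter: translates published messages to internal ones and back."""
    def __init__(self, greeter: Greeter):
        self._greeter = greeter

    def greet(self, request: GreetingRequest) -> GreetingResponse:
        internal = GreetRequest(request.recipient, request.sender)
        response = self._greeter.greet(internal)
        return GreetingResponse(response.greeting)

service = GreetingsService(Greeter())
print(service.greet(GreetingRequest("William", "Gene")).greeting)
```

Because clients only ever see GreetingRequest and GreetingResponse, Greeter and its messages can change shape without breaking anyone, as the next section shows.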

Internal API with External Service Adapter Diagram

Later, someone decided that the application should have multilingual capability. Wouldn’t it be cool if you could choose between “Hello William, from your friend Gene” and “Hola Guillermo, de su amigo Eugenio”? The question, however, is how to enable this without breaking clients using GreetingsService. The answer is to add the Language property to the GreetRequest (Language being of the GreetingLanguage enumeration type) and to make the default value of Language English. We can now create GreetingsServiceV1 which does everything GreetingsService does (substituting GreetingRequestV1 and GreetingResponseV1 for GreetingRequest and GreetingResponse) and adds the new language capability. The result is like this:

Internal API with External Service Adapters (2 versions) Diagram

Because Language defaults to English, there’s no need to modify GreetingsService at all. It should continue to work as-is and its clients will continue to receive the same results. The same type of results can be obtained using a loose versioning scheme (additions, which should be ignored by existing clients, are okay; you only have to add a new version if the change is something that would break the interface, like a deletion). The “can” and “should” raise flags for me – I have control issues (which is incredibly useful when you support published services).
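A sketch of why the default keeps old clients whole (simplified: Greeter returns a plain string here rather than a GreetResponse, and the adapter layer is elided; the type and member names follow the diagrams):

```python
from dataclasses import dataclass
from enum import Enum

class GreetingLanguage(Enum):
    ENGLISH = "en"
    SPANISH = "es"

@dataclass
class GreetRequest:
    recipient: str
    sender: str
    # The new property defaults to English, so callers that predate it
    # (including the original GreetingsService) behave exactly as before.
    language: GreetingLanguage = GreetingLanguage.ENGLISH

@dataclass
class GreetingRequestV1:
    """Published V1 message: exposes the new capability to external clients."""
    recipient: str
    sender: str
    language: GreetingLanguage = GreetingLanguage.ENGLISH

class Greeter:
    TEMPLATES = {
        GreetingLanguage.ENGLISH: "Hello {r}, from your friend {s}",
        GreetingLanguage.SPANISH: "Hola {r}, de su amigo {s}",
    }

    def greet(self, req: GreetRequest) -> str:
        return self.TEMPLATES[req.language].format(r=req.recipient, s=req.sender)

greeter = Greeter()
# The original adapter never mentions Language, so its results are unchanged:
print(greeter.greet(GreetRequest("William", "Gene")))
# The V1 adapter passes the client's choice through:
v1 = GreetingRequestV1("Guillermo", "Eugenio", GreetingLanguage.SPANISH)
print(greeter.greet(GreetRequest(v1.recipient, v1.sender, v1.language)))
```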

Control is the best reason for preferring a strict versioning scheme. If, for example, we wanted to change the default language to Spanish going forward while maintaining backward compatibility, we could not do that under a loose regime without introducing a lot of kludgy complexity. With the strict scheme, it would be trivial (just change the default on GreetingRequestV1 to Spanish and you’re done). With the strict scheme I can even retire the GreetingsService once GreetingsServiceV1 is operational and the old clients have had a chance to migrate to the new version.

Our last illustration is just to reinforce what’s been said above. This time a property has been added to control the number of times the greeting is generated. GreetingsServiceV2 and its messages support that and all prior functionality, while GreetingsService and GreetingsServiceV1 are unchanged.

Internal API with External Service Adapters (3 versions) Diagram

As noted above, being well-defined, stable, internally simple and DRY are all positive attributes for published services. A strict versioning scheme provides those attributes and control over what versions are available.


No Structure Services

Amoeba sketch

Some people seem to think that flexibility is universally a virtue. Flexibility, in their opinion, is key to interoperability. Postel’s Principle, “…be conservative in what you do, be liberal in what you accept from others”, is often used to justify this belief. While this sounds wonderful in theory, in practice it’s problematic. As Tom Stuart pointed out in “Postel’s Principle is a Bad Idea”:

Postel’s Principle is wrong, or perhaps wrongly applied. The problem is that although implementations will handle well formed messages consistently, they all handle errors differently. If some data means two different things to different parts of your program or network, it can be exploited—Interoperability is achieved at the expense of security.

These problems exist in TCP, the poster child for Postel’s principle. It is possible to make different machines see different input, by building packets that one machine accepts and the other rejects. In Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection, the authors use features like IP fragmentation, corrupt packets, and other ambiguous bits of the standard, to smuggle attacks through firewalls and early warning systems.

In his defense, the environment in which Postel proposed this principle is far different from what we have now. Eric Allman, writing for the ACM Queue, noted in “The Robustness Principle Reconsidered”:

The Robustness Principle was formulated in an Internet of cooperators. The world has changed a lot since then. Everything, even services that you may think you control, is suspect.

Flexibility, often sold as extensibility, too often introduces ambiguity and uncertainty. Ambiguity and uncertainty are antithetical to APIs. This is why two of John Sonmez’s “3 Simple Techniques to Make APIs Easier to Use and Understand” are “Using enumerations to limit choices” and “Using default values to reduce required parameters”. Constraints provide structure and structure simplifies.

Taken to the extreme, I’ve seen flexibility used to justify “string in, string out” service method signatures. “Send us a string containing XML and we’ll send you one back”. There’s no need to worry about versioning, etc. because all the versions for all the clients are handled by a single endpoint. Of course, behind the scenes there’s a lot of conditional logic and “hope for the best” parsing. For the client, there’s no automated generation of messages nor even guarantee of structure. Validation of the structure can only occur at runtime.
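The contrast is easy to demonstrate (hypothetical names, Python standing in for any platform): the “flexible” endpoint accepts anything, and only discovers, or worse, silently swallows, structural problems at runtime, while the typed contract makes the structure explicit at the boundary.

```python
from dataclasses import dataclass
import xml.etree.ElementTree as ET

# "Flexible" endpoint: string in, string out. All validation is deferred
# to runtime parsing, and malformed-but-parseable input degrades silently.
def greet_stringly(xml_text: str) -> str:
    root = ET.fromstring(xml_text)   # raises ParseError at runtime on bad XML
    name = root.findtext("name")     # silently None if the element is missing
    return f"Hello {name}"

# Typed contract: the expected structure is explicit and tool-checkable.
@dataclass
class GreetRequest:
    name: str

def greet_typed(request: GreetRequest) -> str:
    return f"Hello {request.name}"

print(greet_stringly("<greet><name>Ada</name></greet>"))  # works...
print(greet_stringly("<greet><nam>Ada</nam></greet>"))    # ...prints "Hello None"
print(greet_typed(GreetRequest("Ada")))                   # structure guaranteed by the type
```

The misspelled `<nam>` element produces a nonsense greeting instead of an error, which is exactly the “hope for the best” parsing described above.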

Does this really sound robust?

I often suspect the reluctance to tie endpoints to defined contracts is due to excessive coupling between the code exposing the service and the code performing the function of the service. If domain logic is intermingled with presentation logic (which a service is), then a strict versioning scheme, an application of the Open/Closed Principle to services, now violates Don’t Repeat Yourself (DRY). If, however, the two concerns are kept separate within the application, multiple endpoints can be handled without duplicating business logic. This provides flexibility for both divergent client needs and client migrations from one message format to another with less complexity and ambiguity.

Stable interfaces don’t buy you much when they’re achieved by unsustainable complexity on the back end. The effect of ambiguity on ease of use doesn’t help either.


“Microservices: The Return of SOA?” on Iasa Global

In the early 2000s, Service-Oriented Architecture (SOA) was the hot topic. Much ink was spilled in touting its potential. Effort and money were expended in attempts to secure its promised benefits. Like object-orientation and reuse before it, the reality of SOA was not able to live up to the hype. Unlike them, SOA had better penetration into the business realm and, having soared higher in the corporate hierarchy, fell further into disrepute. Before the decade was done, SOA had become a dirty word (although services had become ubiquitous).

Read “Microservices: The Return of SOA?” on Iasa Global for my take on how microservices compares to SOA (for better or worse).


#NoEstimates? #NoProjects? #NoManagers? #NoJustNo

Drawing of Oliver Cromwell

#NoX seems to be the current pattern for hashtags designed to attract attention. While they certainly seem to serve that purpose, they also seem to attract more than their fair share of polarized viewpoints. Crowding out the middle ground makes for better propaganda than discussion.

So why does a picture of a long dead English political figure grace a post about hashtags? Oliver Cromwell was certainly a polarizing figure, so much so that when the English monarchy was restored after his death, he was disinterred, posthumously hung in chains, and his head was displayed on a spike. Royalists hated him for overthrowing the monarchy and executing King Charles I. The more democratically inclined hated him for overthrowing Parliament to reign as Lord Protector (which had a nicer sound to it than “dictator”) for life. To his credit, however, he did have a way with words. One of my favorites of his quotes:

I beseech you, in the bowels of Christ, think it possible you may be mistaken.

That particular exhortation would well serve those who have latched onto the spirit of those hashtags without much reflection on the details. Latching onto the negation of something you dislike without any notion of a replacement doesn’t convey depth of thought. Deb Hartmann Preuss put it well:

For many of the #NoX movements, abuse of the X concept seems to be the rationale for doing away with it. Someone has done X badly, therefore X is flawed, irrational, or even evil. Another Cromwell quote addresses this:

Your pretended fear lest error should step in, is like the man that would keep all the wine out of the country lest men should be drunk. It will be found an unjust and unwise jealousy, to deny a man the liberty he hath by nature upon a supposition that he may abuse it.

That some who do things will do those things badly should not come as a surprise. My singing voice is wretched, but to universally condemn singing as a practice because of that does not follow. Blatant fallacious thinking reflects poorly on advocates of a position.

In many cases, there seems to be a disconnect between those advocating these positions and the realities of business:


In response to his posting a link to “The CIO Golden Rule – Talking in the Language of the Business”, Peter Kretzman reported “…I saw people interpret “you need to talk in language of the biz” as being an “us vs. them” statement!” Another objected to the idea that the client’s wishes should be paramount: “the idea that the person with the purse has more of a voting right is one I don’t live under. i can vote to leave the table.” Now, I will give that one credit in that they recognize that “leaving the table” is the price for insisting on their own way (I’m assuming that they know that means quitting), but it still betrays a lack of maturity. In most cases, we work for a client, not ourselves. How many times can one “leave the table” before they’re no longer invited to it in the first place?

The business is not a life support system for developers following their passion. Rather than it being their job to fund us, it is our job to meet their needs. Putting our interests ahead of the clients’ is wrong and arrogant. It is the same variety of arrogance that attempts to keep BYOD at bay. It is the same arrogance that tried to prevent the introduction of the PC into the enterprise thirty years ago. It failed then and, given the rising technological savvy of our customers, has even less of a chance of succeeding now. Should we, as a profession, continue to attempt to dictate to our customers, we risk hearing yet another of Cromwell’s orations:

You have sat too long for any good you have been doing lately… Depart, I say; and let us have done with you. In the name of God, go!

This is not to say that the various #NoX movements are completely wrong. Some treat estimates as a permanent commitment rather than a projection based on current knowledge. That’s unrealistic and working to change that misconception has value. Management does not always equate to leadership and improving agility is a worthy goal provided focus is not lost. I’ve even written on the danger of focusing on projects to the detriment of the product. What is needed, however, is a balanced assessment that doesn’t stray into shouting slogans.

I’ve seen some defend their #NoX hashtag as a springboard to dialog (see the parts re: slogans, hyperbole, and propaganda above). They contend that “of course, I don’t mean never do X, that’s just semantics”. John Allspaw, however, has an excellent response to that:


Product Owner or Live Customer?

Wooden Doll

Who calls the shots for a given application? Who gets to decide which features go on the roadmap (and when) and which are left by the curb? Is there one universal answer?

Many of my recent posts have dealt in some way with the correlation between organizational structure and application/solution architecture. While this is largely to be expected due to Conway’s Law, as I noted in “Making and Taming Monoliths”, many business processes depend on other business processes. While a given system may track a particular related set of business capabilities, concepts belonging to other capabilities, and features related to maintaining them, tend to creep in. This tends to multiply the number of business stakeholders for any given system, and that’s before taking into account the various IT groups that will also have an interest in the system.

Scrum has the role of Product Owner, whose job it is to represent the customer. Allen Holub, in a blog post for Dr. Dobb’s, terms this “flawed”, preferring Extreme Programming’s (XP) requirement for an on-site customer. For Holub, introducing an intermediary introduces risk because the product owner is not a true customer:

Agile processes use incremental design, done bit by bit as the product evolves. There is no up-front requirements gathering, but you still need the answers you would have gleaned in that up-front process. The question comes up as you’re working. The customer sitting next to you gives you the answers. Same questions, same answers, different time.

Put simply, agile processes replace up-front interviews with ongoing interviews. Without an on-site customer, there are no interviews, and you end up with no design.

For Holub, the product owner introduces either guesses or delay. Worst of all in his opinion is when the product owner is “…just the mouthpiece of a separate product-development group (or a CEO) who thinks that they know more about what the customer wants than the actual customer…”. For Holub, the cost of hiring an on-site customer is negligible when you factor in the risks and costs of wrong product decisions.

Hiring a customer, however, pretty much presumes an ISV environment. For in-house corporate work, the customer is already on board. However, in both cases, two issues exist: one is that a customer who is the decision-maker risks giving the product a very narrow appeal; the second is that the longer the “customer” is away from being a customer, the less current their knowledge of the domain becomes. Put together, you wind up with a product that reflects one person’s outdated opinion of how to solve an issue.

Narrowness of opinion should be sufficient to give pause. As Jim Bird observed in “How Product Ownership works in the Real World”:

There are too many functional and operational and technical details for one person to understand and manage, too many important decisions for one person to make, too many problems to solve, too many questions and loose ends to follow-up on, and not enough time. It requires too many different perspectives: business needs and business risks, technical dependencies and technical risks, user experience, supportability, compliance and security constraints, cost factors. And there are too many stakeholders who need to be consulted and managed, especially in a big, live business-critical system.

As is the case with everything related to software development, there is a balance to be met. Coherence of vision enhances design to a point, but then can lead to a too-insular focus. Mega-monoliths that attempt to deal comprehensively with every aspect of an enterprise, if they ever launch, are unwieldy to maintain. By the same token, business dependencies become system dependencies and cannot be eliminated, only re-arranged (microservices being a classic example – the dependencies are not removed, only their location has changed). The idea that we can have only one voice directing development is, in my opinion, just too simplistic.


New Posts on Iasa Global

IASA has re-vamped its site. With the launch, you can now find two new items, both by yours truly.

“Your Code is Not Enough” (a re-post, originally published here) discusses the need for design documentation. Code tells the “what”, but it can’t convey the “why”.

‘”Not my Way” is not the same as “Wrong”‘ (original content, exclusive to Iasa Global) points out the need for soft skills necessary for crafting compromises that reconcile competing viewpoints.
