#NoX seems to be the current pattern for hashtags designed to attract attention. While they certainly seem to serve that purpose, they also seem to attract more than their fair share of polarized viewpoints. Crowding out the middle ground makes for better propaganda than discussion.
So why does a picture of a long-dead English political figure grace a post about hashtags? Oliver Cromwell was certainly a polarizing figure, so much so that when the English monarchy was restored after his death, he was disinterred, posthumously hanged in chains, and his head was displayed on a spike. Royalists hated him for overthrowing the monarchy and executing King Charles I. The more democratically inclined hated him for overthrowing Parliament to reign as Lord Protector (which had a nicer sound to it than “dictator”) for life. To his credit, however, he did have a way with words. One of my favorite quotes of his:
I beseech you, in the bowels of Christ, think it possible you may be mistaken.
That particular exhortation would well serve those who have latched onto the spirit of those hashtags without much reflection on the details. Latching onto the negation of something you dislike without any notion of a replacement doesn’t convey depth of thought. Deb Hartmann Preuss put it well:
Deb Hartmann Preuss (@deborahh) August 29, 2014
For many of the #NoX movements, abuse of the X concept seems to be the rationale for doing away with it. Someone has done X badly, therefore X is flawed, irrational, or even evil. Another Cromwell quote addresses this:
Your pretended fear lest error should step in, is like the man that would keep all the wine out of the country lest men should be drunk. It will be found an unjust and unwise jealousy, to deny a man the liberty he hath by nature upon a supposition that he may abuse it.
That some who do things will do those things badly should not come as a surprise. My singing voice is wretched, but to universally condemn singing as a practice because of that does not follow. Blatant fallacious thinking reflects poorly on advocates of a position.
In many cases, there seems to be a disconnect between those advocating these positions and the realities of business:
Peter Kretzman (@PeterKretzman) September 10, 2014
In response to his posting a link to “The CIO Golden Rule – Talking in the Language of the Business”, Peter Kretzman reported “…I saw people interpret “you need to talk in language of the biz” as being an “us vs. them” statement!” Another objected to the idea that the client’s wishes should be paramount: “the idea that the person with the purse has more of a voting right is one I don’t live under. i can vote to leave the table.” Now, I will give that one credit in that they recognize that “leaving the table” is the price for insisting on their own way (I’m assuming that they know that means quitting), but it still betrays a lack of maturity. In most cases, we work for a client, not ourselves. How many times can one “leave the table” before they’re no longer invited to it in the first place?
The business is not a life support system for developers following their passion. Rather than it being their job to fund us, it is our job to meet their needs. Putting our interests ahead of the clients’ is wrong and arrogant. It is the same variety of arrogance that attempts to keep BYOD at bay. It is the same arrogance that tried to prevent the introduction of the PC into the enterprise thirty years ago. It failed then, and given the rising technological savvy of our customers, it has even less chance of succeeding now. Should we, as a profession, continue to attempt to dictate to our customers, we risk hearing yet another of Cromwell’s orations:
You have sat too long for any good you have been doing lately… Depart, I say; and let us have done with you. In the name of God, go!
This is not to say that the various #NoX movements are completely wrong. Some treat estimates as a permanent commitment rather than a projection based on current knowledge. That’s unrealistic and working to change that misconception has value. Management does not always equate to leadership and improving agility is a worthy goal provided focus is not lost. I’ve even written on the danger of focusing on projects to the detriment of the product. What is needed, however, is a balanced assessment that doesn’t stray into shouting slogans.
I’ve seen some defend their #NoX hashtag as a springboard to dialog (see the parts re: slogans, hyperbole, and propaganda above). They contend that “of course, I don’t mean never do X, that’s just semantics”. John Allspaw, however, has an excellent response to that:
"Oh that's just semantics" - dismissively said by everyone who thinks words, language, and meaning don't matter. (they do) #DevOps — John Allspaw (@allspaw) August 29, 2014
Who calls the shots for a given application? Who gets to decide which features go on the roadmap (and when) and which are left by the curb? Is there one universal answer?
Many of my recent posts have dealt in some way with the correlation between organizational structure and application/solution architecture. While this is largely to be expected due to Conway’s Law, as I noted in “Making and Taming Monoliths”, many business processes depend on other business processes. While a given system may track a particular related set of business capabilities, concepts belonging to other capabilities, and the features related to maintaining them, tend to creep in. This tends to multiply the number of business stakeholders for any given system, and that’s before taking into account the various IT groups that will also have an interest in the system.
Scrum has the role of Product Owner, whose job it is to represent the customer. Allen Holub, in a blog post for Dr. Dobb’s, terms this “flawed”, preferring Extreme Programming’s (XP) requirement for an on-site customer. For Holub, introducing an intermediary introduces risk because the product owner is not a true customer:
Agile processes use incremental design, done bit by bit as the product evolves. There is no up-front requirements gathering, but you still need the answers you would have gleaned in that up-front process. The question comes up as you’re working. The customer sitting next to you gives you the answers. Same questions, same answers, different time.
Put simply, agile processes replace up-front interviews with ongoing interviews. Without an on-site customer, there are no interviews, and you end up with no design.
For Holub, the product owner introduces either guesses or delay. Worst of all in his opinion is when the product owner is “…just the mouthpiece of a separate product-development group (or a CEO) who thinks that they know more about what the customer wants than the actual customer…”. For Holub, the cost of hiring an on-site customer is negligible when you factor in the risks and costs of wrong product decisions.
Hiring a customer, however, pretty much presumes an ISV environment. For in-house corporate work, the customer is already on board. However, in both cases, two issues exist: one is that a customer who is the sole decision-maker risks giving the product a very narrow appeal; the second is that the longer the “customer” is away from actually being a customer, the less current their knowledge of the domain becomes. Put together, you wind up with a product that reflects one person’s outdated opinion of how to solve an issue.
Narrowness of opinion should be sufficient to give pause. As Jim Bird observed in “How Product Ownership works in the Real World”:
There are too many functional and operational and technical details for one person to understand and manage, too many important decisions for one person to make, too many problems to solve, too many questions and loose ends to follow-up on, and not enough time. It requires too many different perspectives: business needs and business risks, technical dependencies and technical risks, user experience, supportability, compliance and security constraints, cost factors. And there are too many stakeholders who need to be consulted and managed, especially in a big, live business-critical system.
As is the case with everything related to software development, there is a balance to be met. Coherence of vision enhances design to a point, but then can lead to a too-insular focus. Mega-monoliths that attempt to deal comprehensively with every aspect of an enterprise, if they ever launch, are unwieldy to maintain. By the same token, business dependencies become system dependencies and cannot be eliminated, only re-arranged (microservices being a classic example – the dependencies are not removed, only their location has changed). The idea that we can have only one voice directing development is, in my opinion, just too simplistic.
In Meditation XVII of Devotions upon Emergent Occasions, John Donne wrote the immortal line “No man is an island, entire of itself; every man is a piece of the continent, a part of the main.” If we skip ahead nearly 400 years and change our focus from mortality to information systems (lots of parallels there), we might re-word it like this: No system is an island, entire of itself; every system is a piece of the enterprise, a part of the main.
In “Carving it up – Microservices, Monoliths, & Conway’s Law”, I discussed how the new/old idea of microservice architectures related to the more monolithic classic style of application architecture. An important take-away was the applicability of Conway’s Law to application and solution architectures. Monoliths are monolithic for a reason; a great many business activities are dependent on other business activities performed by actors from other business units. In Ruth Malan’s words: “The organizational divides are going to drive the true seams in the system.” Rather than a miscellaneous collection of business capabilities, monoliths typically represent related capabilities stitched together in one application.
The problem with monoliths is that there is a tendency for capabilities/data to be duplicated across multiple systems. The dominant concern of one system (e.g. products in an inventory system) will typically be a subordinate concern in another (e.g. products in an order fulfillment system). This leads to different units using different systems based on their needs vis-a-vis a particular concern. Without adequate governance, this leads to potential conflicts over which system is authoritative regarding data about those concerns.
Consider the following diagram of a hypothetical family of systems consisting of an order management system and a system to track the outsourced activities of the subset of orders that are fulfilled by vendors. Both use miscellaneous data (such as the list of US states and counties) that may or may not be managed by a separate system. They also use and maintain duplicate data from other systems. The order system does this with data from the pricing, customer management, and product management systems. It also exports data for manual import into the accounts receivable system. The outsourced fulfillment system does this with data from the product management and vendor management systems as well as the order system. It exports data for manual import into the accounts payable system. Because of the lack of integration, there is redundant data with no clear way to designate any one set as the master. Additionally, there is redundant code, as the functions of maintaining this data are repeated (and likely repeated in an inconsistent manner) across multiple systems.
There are different responses to the issue of uncontrolled data duplication, ranging from “hope for the best” (not recommended) to various database replication schemes. Jeppe Cramon’s “Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 4″ outlines the advantages of publishing data as events from authoritative source systems for use by consumer systems. This provides near real-time updates and avoids the pitfall of trying to treat services like a database (hint, latency will kill you). Caching this data locally should not only improve performance but also allows for the consumer applications to manage attributes of those concerns that are unique to them with referential integrity (if desired).
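The event-publishing approach Cramon describes can be sketched minimally as follows. The names here (`publish_product_updated`, `OrderSystemProductCache`) are hypothetical, and a plain list stands in for a real message broker:

```python
import time

# Hypothetical sketch: the authoritative product management system
# publishes change events; a consuming system maintains a local cache
# rather than querying the source remotely on every use.

event_bus = []  # stand-in for a real message broker / event channel

def publish_product_updated(product_id, name, price):
    """Authoritative source emits an event describing the change."""
    event_bus.append({
        "type": "ProductUpdated",
        "occurred_at": time.time(),
        "payload": {"id": product_id, "name": name, "price": price},
    })

class OrderSystemProductCache:
    """Consumer-side cache, updated asynchronously from events."""
    def __init__(self):
        self.products = {}

    def apply(self, event):
        if event["type"] == "ProductUpdated":
            p = event["payload"]
            # The local copy could also carry attributes unique to
            # this consumer, with referential integrity if desired.
            self.products[p["id"]] = {"name": p["name"], "price": p["price"]}

# The source system records a change...
publish_product_updated(42, "Widget", 9.99)

# ...and the consumer applies queued events on its own schedule,
# giving near real-time updates without treating a service like a database.
cache = OrderSystemProductCache()
for event in event_bus:
    cache.apply(event)

print(cache.products[42]["name"])  # Widget
```

The consumer never blocks on the source system; latency shows up only as a short delay before the cache reflects a change, not in every read.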
In the diagram below, the architecture of this family of systems has been rationalized by the introduction of different types of services appropriate to the different use cases. The order system now uses the pricing system solely as a service (with its very narrow focus, the pricing system is definitely amenable to the microservice architectural style), no longer contains data or code related to that concern, and retains only enough code to trigger asynchronous messages to the accounts receivable system. Likewise, the outsourced fulfillment system contains only a thin shell for the accounts payable system. Other concerns (customers, vendors, products, etc.) are now managed by their respective owning systems, and updates are published asynchronously as events. The mechanism for distributing these events (the cloud element in the diagram) is purposely not named, as there are multiple ways to accomplish this (ESBs are one mechanism, but far from the only one). While the consuming systems cache the data locally, much of the code that was previously required to maintain it is no longer needed. In many cases, entire swathes of “administration” UI can be done away with. Where necessary, it can be pared down to edit only the data that is both unique and restricted to the consuming application.
I will take the time to re-emphasize the point that not all communication need be, nor should it be, the same. Synchronous communication (shown as a solid line in the diagram) may be appropriate for some usages (particularly where an end user is waiting on an immediate success/fail type of response). An event-driven style (shown as a dotted line in the diagram) will be much more performant and scalable for many more situations. As I noted in “Coordinating Microservices – Playing Well with Others”, designing and coding distributed applications in the same manner as a monolith will result in a web of service dependencies, a distributed big ball of mud. Coupling that is a code smell in a monolith can become a crippling issue in distributed systems.
Disassembling monoliths in this manner is complex. It can be done incrementally, removing the need for expensive “big bang” initiatives. There is, however, a need for strategy and governance at the right level of detail (micromanagement and too-granular design are still unlikely to yield the best results). Done well, you can wind up with both better data integrity and less code to maintain.
I remember when reuse was the Holy Grail. First it was object-oriented languages, then object modeling, then components, then services. While none of these has lived up to the promise of rampant reuse, one thing has – language. We have learned to overload and reuse terms to the point that most are no more descriptive than using the word “thing”. “Service” is one of these.
The OASIS Reference Model for Service Oriented Architecture 1.0 defines a service as “…a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description”. This definition includes nothing about protocol, media types, messaging patterns, etc. Essentially, this definition applies to anything that exposes something to consumers via a defined interface in a defined manner.
Well, that’s helpful. It doesn’t even resolve whether it’s referring to the application providing services or the specific endpoints exposed by that application. In the context of Domain-Driven Design, a service may even share the same process with its client.
Some services are meant to be interactive. A request is made and a response is returned. Such a service is extremely useful when the response is being presented to an actual human user in response to some action of theirs. This immediate gratification provides an excellent user experience, provided latency is minimized. When interactive services are composed within another interactive service, this latency burden quickly increases, as the remote client can’t regain control until the service regains it from each remote service it has called (barring a timeout).
Some services are meant to work asynchronously. These work particularly well in system-to-system scenarios where there’s no need for temporal coupling. A significant trade-off for this model is the additional complexity involved, particularly around error condition feedback.
Some services (typically SOAP-based) are meant to receive and return highly structured, well-defined messages invoking tasks to be performed. This works extremely well for many system-to-system communications. Others (typically RESTful services) provide a variety of media types that are manipulated using CRUD operations. These work particularly well when the client is mainly presenting the response (perhaps with some minimal transformation) to an end user. If the messages are less well-defined, the level of client complexity increases. Services using well-defined messages that are exposed externally will typically have different versioning requirements than services meant for consumption by internal clients (where “internal” is defined as built and deployed contemporaneously with the service).
The point behind this litany of service definitions and usage patterns? Different styles are appropriate to different situations, and precision in communication is important to avoid conflating those styles.
Providing functionality via a service, regardless of the type of service, is not sufficient to meet a need. Functionality should be exposed via the flavor of service that corresponds to the needs of the client. Otherwise, the client may incur more effort and complexity dealing with the mismatch than the functionality is worth.
The concept of a message-oriented API for the back-end allows it to meet all of these needs without violating the DRY principle. Just like UI components, service endpoints (SOAP or REST style) should not contain domain logic, but delegate to domain services. As such, this provides a thin layer primarily concerned with translation between the format and messaging pattern required externally and that required by the domain services. This allows internal consumers (again, where “internal” is defined as built and deployed contemporaneously with the back-end) to work either directly in-process with domain services or remotely via a very thin shim that retains the same message structure and solely provides the network communication capabilities. External consumers should have their own endpoints (perhaps even separate sites) that cater to their needs.
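A minimal sketch of this structure, with hypothetical names (`place_order`, `OrderEndpoint`): the domain service owns all the logic, while the endpoint is a thin translation layer, so internal consumers can call the domain service in-process with the same message structure:

```python
# Sketch of a message-oriented back-end. All domain logic lives in the
# domain service; the endpoint only translates between the external wire
# format and the domain message, so nothing is duplicated (DRY).

def place_order(message):
    """Domain service: validates and processes the order message."""
    if message["quantity"] <= 0:
        return {"accepted": False, "reason": "quantity must be positive"}
    return {"accepted": True, "order_id": 1001}

class OrderEndpoint:
    """Thin service layer: translation only, no domain logic."""
    def handle_http(self, json_body):
        # Translate the external format into the domain message...
        message = {"sku": json_body["sku"], "quantity": int(json_body["qty"])}
        # ...delegate to the domain service...
        reply = place_order(message)
        # ...and translate the reply back into a wire-level response.
        return {"status": 200 if reply["accepted"] else 422, "body": reply}

# Internal consumer: same message structure, direct in-process call.
print(place_order({"sku": "ABC", "quantity": 2})["accepted"])             # True

# External consumer: same domain logic, reached via the thin endpoint.
print(OrderEndpoint().handle_http({"sku": "ABC", "qty": "0"})["status"])  # 422
```

Because the endpoint contains nothing but translation, standing up a second endpoint with a different format (or versioning policy) for external clients costs very little.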
Having the thinnest possible service layer not only prevents duplication of logic but also lessens the burden of standing up new endpoints. Lowering the cost of an endpoint makes it easier to justify providing tailored APIs. APIs that are tailored to specific client types are simpler to consume, further reducing costs.
In my last post, I asked “do you need a service layer?” The answer remains, “it depends”. But now there’s a new wrinkle, “if you need a service layer, does it need to be a remote one?”
One of the perils of popularity (at least in technology) is that some people equate popularity with efficacy. Everyone’s using this everywhere, so it must be great for everything!
Actually, not so much.
No one, to my knowledge, has created the secret sauce that makes everything better with no trade-offs or side effects. Context still rules. Martin Fowler recently published “Microservices and the First Law of Distributed Objects”, which touches on this principle. In it, Fowler refers to what he calls the First Law of Distributed Object Design: “don’t distribute your objects”. To make a long story short, distributed objects are a very bad idea because object interfaces tend to be fine-grained, and fine-grained interfaces perform poorly in a distributed environment. “Chunkier” interfaces are more appropriate to distributed scenarios due to the difference in context between in-process and out-of-process communication (particularly when out of process also crosses machine, or even network, boundaries). Additionally, distributed architectures introduce complexity, adding another contextual element to be accounted for when evaluating whether to use them.
In his post, under “Further Readings”, Fowler references an older post of his on Dr. Dobb’s titled “Errant Architectures” (a re-packaging of Chapter 7 of his book Patterns of Enterprise Application Architecture). In that, he discusses encountering what was a common architectural anti-pattern of the time (early 2000s):
There’s a recurring presentation I used to see two or three times a year during design reviews. Proudly, the system architect of a new OO system lays out his plan for a new distributed object system—let’s pretend it’s some kind of ordering system. He shows me a design that looks rather like “Architect’s Dream, Developer’s Nightmare” with separate remote objects for customers, orders, products and deliveries. Each one is a separate component that can be placed in a separate processing node.
I ask, “Why do you do this?”
“Performance, of course,” the architect replies, looking at me a little oddly. “We can run each component on a separate box. If one component gets too busy, we add extra boxes for it so we can load-balance our application.” The look is now curious, as if he wonders if I really know anything about real distributed object stuff at all.
The quintessential example of this style was the web site presentation layer with its business and data layer behind a set of web services. Particularly in the .Net world, web services were the cutting edge and everyone needed them. Of course, the system architect in the story was confusing scalability with performance (as were most who jumped onto the web service layer bandwagon). In order to increase the former, a hit is taken to the latter. In most cases, this hit is entirely unnecessary. A web application with an in-process back-end can be load-balanced much more simply than separate UI and service sites, without suffering the performance penalty of remote communication.
There are valid use cases for a service layer, such as when an application both provides an interactive user interface and is service-enabled. When there are multiple user interfaces, some front ends will benefit from physical separation from the back-end (native mobile apps, desktop clients, SharePoint web parts, etc.). Applications can be structured to accommodate the same back-end either in-process or out of process. This structure can be particularly important for applications that have both internal (where internal is defined as built and deployed simultaneously with the back-end) and external service clients, as these have different needs in terms of versioning. Wrapping the back-end with a service layer rather than allowing it to invade a particular service implementation can allow one back-end to serve internal and external customers either directly or via services (RESTful and/or SOAP) without violating the DRY principle.
Many of the same caveats that apply to composite applications formed from microservices (network latency, serialization costs, etc.) apply to these types of applications. It makes little sense to take on these downsides in the absence of a clear demonstrable need. Doing so looks more like falling prey to a fad than creating a well thought out design.
A recent post on The Daily WTF highlighted a system that “…throws the fewest errors of any of our code, so it should be very stable”. The punchline, of course, was that the system threw so few errors because it was catching and suppressing almost all the errors that were occurring. Once the “no news is good news” code was removed, the dysfunctional nature of the system was revealed.
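The anti-pattern reduces to a familiar shape; the code below is a hypothetical reconstruction of the idea, not taken from the Daily WTF post:

```python
# The "no news is good news" anti-pattern, reduced to its essence.

def save_record_suppressing(record, store):
    """Catches and swallows every error -- hence 'the fewest errors'."""
    try:
        store[record["id"]] = record["value"]
    except Exception:
        pass  # no news is good news

def save_record(record, store):
    """The honest version: failures surface immediately."""
    store[record["id"]] = record["value"]

broken_store = None  # a misconfigured dependency

# The suppressing version hides the failure entirely -- the caller
# has no idea nothing was saved:
save_record_suppressing({"id": 1, "value": "x"}, broken_store)

# Removing the suppression reveals the dysfunction at once:
try:
    save_record({"id": 1, "value": "x"}, broken_store)
except TypeError:
    print("failure surfaced")  # prints "failure surfaced"
```

A bare `except ... pass` converts a loud, diagnosable failure into silent data loss, which is exactly why the system in the post looked "very stable" right up until someone checked the data.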
See the full post on the Iasa Global Blog (a re-post, originally published here).