“Microservices, Monoliths, Modularity – Shearing Layers for Flexibility” on Iasa Global and a Milestone

First the milestone – this is my 200th post since starting this blog in October 2011. I look forward to many, many more (and hope my readers do as well!).

And now for the meat – “Microservices, Monoliths, Modularity – Shearing Layers for Flexibility” is up on the Iasa Global site:

Over the last fifteen months, many electrons have been expended discussing the relative merits of the application architecture styles commonly referred to as microservices and monoliths. Both styles have their advocates, and the interesting aspect is not their differences, but their agreement on one core principle – modularity. Both camps seem to agree that “good” architecture is modular and loosely coupled. The disagreements lie more in the realm of whether the enforcement of modularity via physical distribution is worth the increase in complexity, latency, etc.

Read the rest there.
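To make the excerpt's core point concrete, here's a minimal sketch (all names are hypothetical and illustrative only, not taken from the article): the same modular contract can be satisfied in-process, monolith-style, or via physical distribution, microservice-style. The modularity comes from the design; the distribution only changes the cost of the call.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// The shared contract: this is where the modularity lives.
interface InventoryService {
    int quantityOnHand(String sku);
}

// Monolith-style module: the contract satisfied by an ordinary in-process call.
class LocalInventoryService implements InventoryService {
    private final Map<String, Integer> stock = Map.of("WIDGET-1", 42);

    @Override
    public int quantityOnHand(String sku) {
        return stock.getOrDefault(sku, 0);
    }
}

// Microservice-style module: the same contract, with modularity enforced by
// physical distribution. The caller now inherits latency and partial failure.
class RemoteInventoryService implements InventoryService {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI baseUri;

    RemoteInventoryService(URI baseUri) {
        this.baseUri = baseUri;
    }

    @Override
    public int quantityOnHand(String sku) {
        HttpRequest request =
            HttpRequest.newBuilder(baseUri.resolve("/inventory/" + sku)).GET().build();
        try {
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            return Integer.parseInt(response.body().trim());
        } catch (Exception e) {
            // Distribution turns a method call into something that can time out.
            throw new IllegalStateException("Inventory service unavailable", e);
        }
    }
}
```

Callers depend on InventoryService either way; what changes with distribution is the failure and latency profile of the call, which is precisely the trade-off the two camps are weighing.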

When Reality Gets in the Way – Applying Systems Thinking to Design

It’s easy to sympathize with this:

It’s also more than a little dangerous if our desire for simplicity moves us to act as if reality isn’t as complex as it is. Take, for example, a recent tweet from John Allspaw about over-simplification:

My observation in return:

As I noted in my previous post, it’s part of human nature to gravitate towards easy answers. We are conditioned to try to impose rules on reality, even when those rules are mistaken. Sometimes this is the result of treating symptoms in an ad hoc manner, as evidenced by this recent Twitter exchange:

This goes by the name of the “balloon effect”: pressure on one area of the problem just pushes it into another, in the way that squeezing a balloon displaces the air inside.

Sometimes our response is born of bias. In sociology, for example, this phenomenon has its own name: “normative sociology”:

The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B.

Some historians likewise have a tendency to over-simplify, fixating on aspects that “ought to be” rather than determining what is (which is another way of saying what can be reasonably defended).

Decision-making is the essence of design. Thought processes that poorly match reality, whether due to bias or insufficient analysis or both, are unlikely to yield optimal results. Systems thinking, “…viewing ‘problems’ as parts of an overall system, rather than reacting to specific parts, outcomes or events, and thereby potentially contributing to further development of unintended consequences”, is an approach more likely to achieve a successful outcome.

When the end result will be a software system integrated into a social system (i.e. a system that is a component of an ecosystem), it makes sense to understand the problem space as the as-is system to be remediated. This holds true whether that as-is system is an automated one or not. While it is not feasible to minutely analyze the problem space, much less design the solution in detail, failing to appreciate the full context at a high level presents risks. These risks include not only those inherent in satisfying the needs of the overlooked context(s), but also the challenges that emerge from the interactions of the various contexts that make up the problem space.

Deciding on a particular design direction is, obviously, a decision. Deferring that determination is, likewise, a decision. Refusing to make a definite decision is a decision as well. The answer is not to push all decisions off to as late a date as possible, but to make decisions in the moment that are defensible given the information at hand. Looking at the problem space as a whole in the context of its ecosystem provides the perspective required to make the optimal decision.

Form Follows Function on SPaMCast 347

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on project management in an Agile environment (aka “Project Management is Dead”) and a Software Sensei column on testing from Kim Pries, in addition to a Form Follows Function installment on microservices, DevOps, and Conway’s Law.

In SPaMCast 347, Tom and I discuss my “Fixing IT – Microservices and DevOps to the Rescue?” post, specifically how microservice architectures are not just a technical approach, but an organizational one as well.

Cargo Cult Architecture

According to Mark Little, Red Hat VP of Engineering, the microservice backlash has arrived, coming from “people who were really pushing it at the beginning and who are now just starting to realise it’s not all sunshine and roses, or people who never felt the need for it at all”. The Twitterverse seems to agree:

This post, however, is less about microservices, and more about what their rise and fall (and, no doubt, recovery as we violently discover equilibrium) says about software development as a discipline.

As Sander Mak observed in his post “On monoliths, microservices and critical thinking” (h/t Paul Bakker):

What does it mean if public software engineering opinion flips 180 degrees in a matter of weeks? It’s too easy to chalk it all up to people needing authority figures. Yes, I know: not everybody was all over microservices. But you have to admit there’s something fundamentally unsound going on here.

This is hardly a new problem. The same Mark Little mentioned in the opening wrote an article for InfoQ almost three years ago titled “IT Values Technologies Over Thought” where he stated “If the people delivering the implementations that are supposed to be solutions to business problems aren’t looking beyond the hype and considering alternatives, especially when those alternatives may have been tried and tested for many years, then we are in for some very interesting times ahead”.

It’s a known problem. We even laugh at articles that trade on our tendency to jump from silver bullet to silver bullet (although I’m not sure if that laughter is based on sangfroid or fatalism).

It’s not even a problem that’s exclusively ours. An article in Forbes, “Why So Many Management Strategies Become Fads That Fade Away”, refers to it as “idea surfing”. When complexity, unrealistic expectations, cultural resistance, or poor fit lead to management souring on the current strategy du jour, there’s always a shinier object just down the road that promises to be the recipe for success.

According to “Rats Can Be Smarter Than People” in January’s Harvard Business Review, our predilection for easy answers is deeply rooted (emphasis added):

Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.

In spite of how much it’s part of our nature, we have to overcome the desire for easy answers. No matter how many jumps we make, the magic recipe will not be found:

Ignore that last guy 😉

Planning and Designing – Intentional, Accidental, Emergent

Over the last three years, I’ve written eleven posts tagged with “Emergence”. In a discussion of my previous post over the past week, I’ve come to the realization that I’ve been misusing that term. In essence, I’ve been conflating emergent architecture with accidental architecture when they’re very different concepts:

In both cases, aspects of the architecture emerge, but the circumstances under which that occurs are vastly different. When architectural design is intentional, emergence occurs as a result of learning. For example, as multiple contexts are reconciled, challenges will emerge. This learning process will continue over the lifetime of the product. With accidental architecture, emergence occurs via lack of preparation, either through inadequate analysis or, perversely, through intentionally ignoring needs that aren’t required for the task at hand (even when those needs are confirmed). With this type of emergence, lack of a clear direction leads to conflicting ad hoc responses. If time is not spent reworking these responses, then system coherence suffers. The fix for the problem of Big Design Up Front (BDUF) is appropriate design, not absence of design.
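As a deliberately contrived sketch of the accidental variety (all names hypothetical, and the defect simplified): two modules each answer the same question, how to represent money, ad hoc, and the lack of a shared decision surfaces as a defect at their seam.

```java
// Team A decided, ad hoc, to represent money as floating-point dollars.
class BillingModule {
    double invoiceTotal() {
        return 19.99;
    }
}

// Team B decided, just as reasonably and just as independently, on integer cents.
class LedgerModule {
    void record(long amountInCents) {
        System.out.println("Recorded " + amountInCents + " cents");
    }
}

public class Reconciliation {
    public static void main(String[] args) {
        BillingModule billing = new BillingModule();
        LedgerModule ledger = new LedgerModule();

        // The seam between two locally defensible decisions: a lossy conversion.
        // 19.99 is not exactly representable as a double, so truncation yields
        // 1998 cents rather than 1999. An intentional, shared representation
        // decision would have made this conversion unnecessary.
        ledger.record((long) (billing.invoiceTotal() * 100));
    }
}
```

Each choice is fine in isolation; the incoherence is what emerges, and reworking it later costs more than deciding once would have.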

James Coplien, in his recent post “Procrastination”, takes issue with the idea of purposeful ignorance:

There is a catch phrase for it which we’ll examine in a moment: “defer decisions to the last responsible moment.” The agile folks add an interesting twist (with a grain of truth) that the later one defers a decision, the more information there will be on which to base the decision.

Alarmingly, this agile posture is used either as an excuse or as an admonition to temper up-front planning. The attitude perhaps arose as a rationalisation against the planning fanaticism of 1980s methodologies. It’s true that time uncovers more insight, but the march of time also opens the door both to entropy and “progress.” Both constrain options. And to add our own twist, acting early allows more time for feedback and course correction. A stitch in time saves nine. If you’re on a journey and you wait until the end to make course corrections, when you’re 40% off-course, it takes longer to remedy than if you adjust your path from the beginning. Procrastination is the thief of time.

Rebecca Wirfs-Brock has also blogged on feeling “discomfort” and “stress” when making decisions at the last responsible moment. That stress is significant, given study findings she quoted:

Giora Keinan, in a 1987 Journal of Personality and Social Psychology article, reports on a study that examined whether “deficient decision making” under stress was largely due to not systematically considering all relevant alternatives. He exposed college student test subjects to “controllable stress”, “uncontrollable stress”, or no stress, and measured how it affected their ability to solve interactive decision problems. In a nutshell, being stressed didn’t affect their overall performance. However, those who were exposed to stress of any kind tended to offer solutions before they considered all available alternatives. And they did not systematically examine the alternatives.

Admittedly, the test subjects were college students doing word analogy puzzles. And the uncontrolled stress was the threat of a small random electric shock….but still…the study demonstrated that once you think you have a reasonable answer, you jump to it more quickly under stress.

It should be noted that although this study didn’t show a drop in performance due to stress, the problems involved were more black and white than design decisions, which are “best fit” problems. Failure to “systematically examine the alternatives” and the tendency to “offer solutions before they considered all available alternatives” should be considered red flags.

Coplien’s connection of design and planning is significant. Merriam-Webster defines “design” as a form of planning (and the reverse works as well if you consider organizations to be social systems). A tweet from J. B. Rainsberger illustrates an extremely important point about planning (and by extension, design):

In my opinion, a response to “unexpected results” is more likely to be effective if you have conducted the systematic examination of the alternatives beforehand, when the stress that leads to jumping at a solution is absent. What needs to be avoided is failing to ensure that the plan/design aligns with the context. This type of intentional planning/design can provide resilience by taking future needs and foreseeable issues into account, giving you options for when the context changes. Even if those needs are never implemented, you can avoid constraints that would make dealing with them when they arise more difficult. Likewise, having options in place for dealing with likely issues can make the difference between a brief problem and a prolonged outage. YAGNI is only a virtue when you really aren’t going to need it.
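As a minimal sketch of what keeping an option open might look like (hypothetical names; one illustration, not a prescription): only the confirmed need, email, is implemented, but callers depend on a seam rather than on the channel, so the foreseeable need for another channel constrains nothing today.

```java
// The seam: callers know about notification, not about email.
interface Notifier {
    void send(String recipient, String message);
}

// Only today's confirmed need is implemented; no speculative SMS/push code.
class EmailNotifier implements Notifier {
    @Override
    public void send(String recipient, String message) {
        System.out.printf("Emailing %s: %s%n", recipient, message);
    }
}

class OrderProcessor {
    private final Notifier notifier; // depends on the contract, not the channel

    OrderProcessor(Notifier notifier) {
        this.notifier = notifier;
    }

    void confirm(String customer) {
        notifier.send(customer, "Your order is confirmed.");
    }
}

public class Checkout {
    public static void main(String[] args) {
        // Adding a channel later is additive: implement Notifier, change one line here.
        OrderProcessor processor = new OrderProcessor(new EmailNotifier());
        processor.confirm("pat@example.com");
    }
}
```

The future need costs nothing to leave unimplemented, and nothing in the design will fight its arrival.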

As Ruth Malan has noted, architectural design involves shaping:

Would you expect that shaping to result in something coherent if it was merely a collection of disconnected tactical responses?

“Microservice Mistakes – Complexity as a Service” on Iasa Global

I’m pleased to announce that I’ve been asked to continue as a contributor to the Iasa Global site. I’m planning to post original content there on at least a monthly basis. In the interim, please enjoy a re-post of “Microservice Mistakes – Complexity as a Service”

Who Needs Architects? – Are You Committed to Reaching Your Goals?

A good way to avoid writer’s block is to have a smart and engaged readership. They tend to ask questions that keep you on your toes. Thomas Cagley did the honors on my last post dealing with separation of concerns and the need for architectural design up to the level of enterprise IT architecture:

Is there an approach you would suggest to ensure this type of thinking occurs when it really matters rather than a reaction to problems?

I responded with:

The short answer is commitment, which is best demonstrated by having goals (both in terms of process and in terms of application, solution, and enterprise IT architecture) and having people responsible for making sure those goals are accounted for in day to day operations.

The long answer will be this week’s post. 🙂

“Accidental architecture”, the organic, undirected evolution of architecture, describes the state of many organizations today. This state of affairs exists across the entire range of application, solution, and enterprise IT architectures. Without coherent goals and a plan of how the parts will work together toward these common goals, how could it be otherwise? As Tom Graves noted in “Governance is not an end in itself”:

…things work better when they work together, on purpose.

Purposeful governance is needed to direct efforts towards the same goals. Not because people need to be directed to perform, but because people need to know what is valued to direct their own performance in a way that benefits the organization as a whole. Purposeful governance is the opposite of governance for the sake of being able to say we have governance (Tom again):

…governance should never be ‘an end in itself’. Instead, governance exists solely to support a business need – or, more specifically, to keep things on track towards that business-need.

Anyone can get caught by surprise, even under the best of circumstances, but failing to plan practically ensures that problems will arise. Returning to a policy of neglect as soon as a brushfire is put out effectively guarantees that another fire (and likely one from the same source) will break out. Effective governance (IT and otherwise) requires commitment: commitment to determine a direction towards organizational goals, commitment to follow through with action to achieve those goals, and commitment to monitor that the goals remain the same and that the direction continues to point towards them. This requires full-time commitment to, and ownership of, those considerations.

Would you ride with a driver who wouldn’t commit to paying attention to the road unless there was a problem, and then only until the problem was “solved”?

Effective governance requires commitment to collaboration. Just as those responsible for the solution and application architectures need direction to mesh with the IT architecture of the enterprise, those responsible for the solution and enterprise IT architectures need the feedback of those responsible for the application architectures. Failure to listen can lead to catastrophe, as can failure to achieve engagement:

As I noted above, many organizations have an “accidental architecture”. There are many reasons why they find themselves where they are now. The more important question is this: are they willing to remain there, or will they make the commitment to take control of their future?