Herve Lourdin (@HerveLourdin) June 15, 2015
Herve Lourdin’s tweet wasn’t aimed at modeling, but the image nicely illustrates a critical deficiency in modeling languages: the inability to show the evolution of a system over time. Structure and behavior are captured, but only for a given point in time. Systems and their ecosystems, however, are not static. A map of the destination, without reference to the point of origin or the rationale for choices, is of limited use in communicating the what, how, and why behind architectural decisions.
- formal ALs’ need for specialized competencies with insufficient perceived return on investment,
- overspecification as well as the inability to model design decisions explicitly in the AL, and
- lack of integration in the software life cycle, lack of mature tools, and usability issues.
All of the items emphasized above represent usability and value issues: a failure to communicate. As Simon Brown observed in “Simple Sketches for Diagramming Your Software Architecture”:
In today’s world of agile delivery and lean startups, some software teams have lost the ability to communicate what it is they are building and it’s no surprise that these teams often seem to lack technical leadership, direction and consistency. If you want to ensure that everybody is contributing to the same end-goal, you need to be able to effectively communicate the vision of what it is you’re building. And if you want agility and the ability to move fast, you need to be able to communicate that vision efficiently too.
Simon is a proponent of a sketching technique that answers many of these communication failures:
The goal with these sketches is to help teams communicate their software designs in an effective and efficient way rather than creating another comprehensive modelling notation. UML provides both a common set of abstractions and a common notation to describe them, but I rarely find teams that are using either effectively. I’d rather see teams able to discuss their software systems with a common set of abstractions in mind rather than struggling to understand what the various notational elements are trying to show.
Simon’s colleague, Robert Annett, recently posted “Diagrams for System Evolution”, which proposes using the color-coding scheme from diff tools to indicate change: red = remove, blue = change, green = new. Simon followed this up with two posts of his own, “Diff’ing software architecture diagrams” and “Diff’ing software architecture diagrams again”, which dealt with applying Robert’s ideas to Simon’s structurizr.com tool.
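To make the mechanic concrete, here is a minimal, hypothetical sketch (in Java, and not taken from Robert’s or Simon’s posts) of what a diagram diff boils down to: compare the as-is and to-be versions of a model’s elements, classify each as new, changed, or removed, and map that classification to Robert’s red/blue/green scheme. The element names and color codes are purely illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: colour-code architecture diagram elements by comparing
// a "before" and "after" version of the model, following the
// red = removed, blue = changed, green = new convention.
public class DiagramDiff {

    enum Change { UNCHANGED, NEW, CHANGED, REMOVED }

    static String colourFor(Change change) {
        switch (change) {
            case NEW:     return "#00aa00"; // green
            case CHANGED: return "#0000aa"; // blue
            case REMOVED: return "#aa0000"; // red
            default:      return "#777777"; // grey for unchanged elements
        }
    }

    // Each map goes from element name to a description (or any comparable detail).
    static Map<String, Change> diff(Map<String, String> before, Map<String, String> after) {
        Map<String, Change> result = new LinkedHashMap<>();
        before.forEach((name, detail) -> {
            if (!after.containsKey(name)) {
                result.put(name, Change.REMOVED);
            } else if (!after.get(name).equals(detail)) {
                result.put(name, Change.CHANGED);
            } else {
                result.put(name, Change.UNCHANGED);
            }
        });
        // Anything only present in the "after" model is new.
        after.forEach((name, detail) -> result.putIfAbsent(name, Change.NEW));
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> asIs = new LinkedHashMap<>();
        asIs.put("Web App", "Monolithic UI and business logic");
        asIs.put("Reporting DB", "Shared reporting database");

        Map<String, String> toBe = new LinkedHashMap<>();
        toBe.put("Web App", "UI only; business logic extracted");
        toBe.put("Order Service", "New service for order processing");

        diff(asIs, toBe).forEach((name, change) ->
            System.out.println(name + " -> " + change + " (" + colourFor(change) + ")"));
    }
}
```

Running the sketch flags “Reporting DB” as removed, “Order Service” as new, and “Web App” as changed, which is exactly the information a color-coded diagram diff is meant to surface at a glance.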
Simon’s work, coupled with Robert’s ideas, addresses many of the highlighted deficiencies listed above (it even touches on the third bullet that I didn’t emphasize). Ruth Malan’s work also contains some ideas that are vital (in my opinion) to being able to visualize and communicate important design considerations – explicit quality of service and rationale elements along with organizational context elements. A further enhancement might be incorporating these into a platform that can tie elements of software architecture together with elements of solution and enterprise architecture, such as the one proposed by Tom Graves.
Given the need for agility, it might seem strange to be talking about modeling, design documentation, and architectural languages. The fact is, however, that many of us deal with inherently complex systems in inherently complex ecosystems. Without the ability to visualize a design in its context, we run the risk of either slowing down or going down. Not everyone can afford to “move fast and break things”.
“Microservices, Monoliths, Modularity – Shearing Layers for Flexibility” on Iasa Global and a Milestone
First the milestone – this is my 200th post since starting this blog in October 2011. I look forward to many, many more (and hope my readers do as well!).
And now for the meat – “Microservices, Monoliths, Modularity – Shearing Layers for Flexibility” is up on the Iasa Global site:
Over the last fifteen months, many electrons have been expended discussing the relative merits of the application architecture styles commonly referred to as microservices and monoliths. Both styles have their advocates, and the interesting aspect is not their differences, but their agreement on one core principle – modularity. Both camps seem to agree that “good” architecture is modular and loosely coupled. The disagreements lie more in the realm of whether the enforcement of modularity via physical distribution is worth the increase in complexity, latency, etc.
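As a rough illustration of that shared principle (my sketch, not from the post), the modular boundary both camps care about can be expressed as the same interface, whether it is honored in-process or across the network; only the latter pays the distribution tax. The service names and endpoint are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The modular boundary both camps agree on: a well-defined, loosely coupled contract.
interface InventoryService {
    int stockLevel(String sku);
}

// "Monolith" style: the contract is honoured inside a single deployable unit.
class InProcessInventoryService implements InventoryService {
    @Override
    public int stockLevel(String sku) {
        return 42; // would consult a local repository in a real system
    }
}

// "Microservice" style: the same contract, enforced by physical distribution,
// which buys independent deployment at the cost of latency and partial failure.
class RemoteInventoryService implements InventoryService {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl; // e.g. "http://inventory.internal" (hypothetical)

    RemoteInventoryService(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public int stockLevel(String sku) {
        try {
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create(baseUrl + "/inventory/" + sku + "/stock"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return Integer.parseInt(response.body().trim());
        } catch (Exception e) {
            throw new IllegalStateException("Remote inventory call failed for " + sku, e);
        }
    }
}
```

Callers depend only on InventoryService either way; the decision to distribute then becomes an explicit trade-off rather than a prerequisite for modularity.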
It’s easy to sympathize with this:
At least it's almost the weekend! http://t.co/unmInBWDaU—
Calvin and Hobbes (@Calvinn_Hobbes) June 18, 2015
It’s also more than a little dangerous if our desire for simplicity moves us to act as if reality isn’t as complex as it is. Take, for example, a recent tweet from John Allspaw about over-simplification:
John Allspaw (@allspaw) June 15, 2015
My observation in return:
Trying to squeeze complex reality into simplistic abstractions creates additional unnecessary complexity twitter.com/allspaw/status…—
Gene Hughson (@GeneHughson) June 18, 2015
As I noted in my previous post, it’s part of human nature to gravitate towards easy answers. We are conditioned to try to impose rules on reality, even when those rules are mistaken. Sometimes this is the result of treating symptoms in an ad hoc manner, as evidenced by this recent Twitter exchange:
Medical pre-reg: "have you traveled outside US in last 21 days?", the legacy of Kaci Hickox to Maine healthcare system—
brenda m. michelson (@bmichelson) June 12, 2015
James Urquhart (@jamesurquhart) June 12, 2015
brenda m. michelson (@bmichelson) June 12, 2015
Jeff Sussna (@jeffsussna) June 12, 2015
This goes by the name of the “balloon effect”: pressure on one area of the problem just pushes it into another, in the same way that squeezing a balloon displaces the air inside it.
Sometimes our response is born of bias. In sociology, for example, this phenomenon has its own name: “normative sociology”:
The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B.
Some historians likewise have a tendency to over-simplify, fixating on aspects that “ought to be” rather than determining what is (which is another way of saying what can be reasonably defended).
Decision-making is the essence of design. Thought processes that poorly match reality, whether due to bias or insufficient analysis or both, are unlikely to yield optimal results. Systems thinking, “…viewing ‘problems’ as parts of an overall system, rather than reacting to specific parts, outcomes or events, and thereby potentially contributing to further development of unintended consequences”, is an approach more likely to achieve a successful outcome.
When the end result will be a software system integrated into a social system (i.e. a system that is a component of an ecosystem), it makes sense to understand the problem space as the as-is system to be remediated. This holds true whether that as-is system is an automated one or not. While it is not feasible to minutely analyze the problem space, much less design in detail the solution, failing to appreciate the full context on a high level presents risks. These risks include not only those inherent in satisfying the needs of the overlooked context(s), but also those challenges that emerge from the interactions of the various contexts that make up the problem space.
Deciding on a particular design direction is, obviously, a decision. Deferring that determination is, likewise, a decision. Refusing to make a definite decision is a decision as well. The answer is not to push all decisions off to as late a date as possible, but to make decisions in the moment that are defensible given the information at hand. Looking at the problem space as a whole in the context of its ecosystem provides the perspective required to make the optimal decision.
This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on project management in an Agile environment (aka “Project Management is Dead”) and a Software Sensei column on testing from Kim Pries, in addition to a Form Follows Function installment on microservices, DevOps, and Conway’s Law.
In SPaMCast 347, Tom and I discuss my “Fixing IT – Microservices and DevOps to the Rescue?” post, specifically on how microservice architectures are not just a technical approach but an organizational one as well.
According to Mark Little, Red Hat VP of Engineering, the microservice backlash has arrived, coming from “people who were really pushing it at the beginning and who are now just starting to realise it’s not all sunshine and roses, or people who never felt the need for it at all”. The Twitterverse seems to agree:
So, have I got this straight? We should start with a monolith, except when we want microservices, in case we shouldn't, except when we...—
Chris Oldwood (@chrisoldwood) June 09, 2015
Microservices are not something you should aim for. They are a means to an end. Focus on what's important - building useful software.—
Sam Newman (@samnewman) June 10, 2015
Start w monilith, don't start w monilith... Really doesn't matter as much as the ability to use grey matter.—
Ashic Mahtab (@ashic) June 11, 2015
This post, however, is less about microservices, and more about what their rise and fall (and, no doubt, recovery as we violently discover equilibrium) says about software development as a discipline.
As Sander Mak observed in his post “On monoliths, microservices and critical thinking” (h/t Paul Bakker):
What does it mean if public software engineering opinion flips 180 degrees in a matter of weeks? It’s too easy to chalk it all up to people needing authority figures. Yes, I know: not everybody was all over microservices. But you have to admit there’s something fundamentally unsound going on here.
This is hardly a new problem. The same Mark Little mentioned in the opening wrote an article for InfoQ almost three years ago titled “IT Values Technologies Over Thought” where he stated “If the people delivering the implementations that are supposed to be solutions to business problems aren’t looking beyond the hype and considering alternatives, especially when those alternatives may have been tried and tested for many years, then we are in for some very interesting times ahead”.
It’s a known problem. We even laugh at articles that trade on our tendency to jump from silver bullet to silver bullet (although I’m not sure if that laughter is based on sangfroid or fatalism).
It’s not even a problem that’s exclusively ours. An article in Forbes, “Why So Many Management Strategies Become Fads That Fade Away”, refers to it as “idea surfing”. When complexity, unrealistic expectations, cultural resistance, or poor fit lead to management souring on the current strategy du jour, there’s always a shinier object just down the road that promises to be the recipe for success.
According to “Rats Can Be Smarter Than People” in January’s Harvard Business Review, our predilection for easy answers is deeply rooted (emphasis added):
Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.
In spite of how much it’s part of our nature, we have to overcome the desire for easy answers. No matter how many jumps we make, the magic recipe will not be found:
Matthew Skelton (@matthewpskelton) June 09, 2015
Gene Hughson (@GeneHughson) June 09, 2015
Ignore that last guy ;-)
Over the last three years, I’ve written eleven posts tagged with “Emergence”. In a discussion over the past week of my previous post, I’ve come to the realization that I’ve been misusing that term. In essence, I’ve been conflating emergent architecture with accidental architecture when they’re very different concepts:
The question is intentional vs accidental architecture, not BDUF vs emergent...all architecture involves emergence—
Gene Hughson (@GeneHughson) June 03, 2015
In both cases, aspects of the architecture emerge, but the circumstances under which that occurs are vastly different. When architectural design is intentional, emergence occurs as a result of learning. For example, as multiple contexts are reconciled, challenges will emerge. This learning process will continue over the lifetime of the product. With accidental architecture, emergence occurs via lack of preparation, either through inadequate analysis or, perversely, through intentionally ignoring needs that aren’t required for the task at hand (even if those needs are confirmed). With this type of emergence, lack of a clear direction leads to conflicting ad hoc responses. If time is not spent reworking these responses, then system coherence suffers. The fix for the problem of Big Design Up Front (BDUF) is appropriate design, not absence of design.
James Coplien, in his recent post “Procrastination”, takes issue with the idea of purposeful ignorance:
There is a catch phrase for it which we’ll examine in a moment: “defer decisions to the last responsible moment.” The agile folks add an interesting twist (with a grain of truth) that the later one defers a decision, the more information there will be on which to base the decision.
Alarmingly, this agile posture is used either as an excuse or as an admonition to temper up-front planning. The attitude perhaps arose as a rationalisation against the planning fanaticism of 1980s methodologies. It’s true that time uncovers more insight, but the march of time also opens the door both to entropy and “progress.” Both constrain options. And to add our own twist, acting early allows more time for feedback and course correction. A stitch in time saves nine. If you’re on a journey and you wait until the end to make course corrections, when you’re 40% off-course, it takes longer to remedy than if you adjust your path from the beginning. Procrastination is the thief of time.
Rebecca Wirfs-Brock has also blogged on feeling “discomfort” and “stress” when making decisions at the last responsible moment. That stress is significant, given study findings she quoted:
Giora Keinan, in a 1987 Journal of Personal and Social Psychology article, reports on a study that examined whether “deficient decision making” under stress was largely due to not systematically considering all relevant alternatives. He exposed college student test subjects to “controllable stress”, “uncontrollable stress”, or no stress, and measured how it affected their ability to solve interactive decision problems. In a nutshell being stressed didn’t affect their overall performance. However, those who were exposed to stress of any kind tended to offer solutions before they considered all available alternatives. And they did not systematically examine the alternatives.
Admittedly, the test subjects were college students doing word analogy puzzles. And the uncontrolled stress was the threat of a small random electric shock….but still…the study demonstrated that once you think you have a reasonable answer, you jump to it more quickly under stress.
It should be noted that although this study didn’t show a drop in performance due to stress, the problems involved were more black and white than design decisions, which are best-fit problems. Failure to “systematically examine the alternatives” and the tendency to “offer solutions before they considered all available alternatives” should be considered red flags.
Coplien’s connection of design and planning is significant. Merriam-Webster defines “design” as a form of planning (and the reverse works as well if you consider organizations to be social systems). A tweet from J. B. Rainsberger illustrates an extremely important point about planning (and by extension, design):
Plan all you want. What you do in the face of unexpected results matters much, much more. #agile—
☕ J. B. Rainsberger (@jbrains) June 05, 2015
In my opinion, a response to “unexpected results” is more likely to be effective if you have systematically examined the alternatives beforehand, free of the stress that pushes you to jump to a solution before considering all of the options. What needs to be avoided is failing to ensure that the plan/design aligns with the context. This type of intentional planning/design can provide resilience for systems by taking future needs and foreseeable issues into account, giving you options for when the context changes. Even if those needs are not implemented, you can avoid constraints that would make dealing with them more difficult when they arise. Likewise, having options in place for dealing with likely issues can make the difference between a brief problem and a prolonged outage. YAGNI is only a virtue when you really aren’t going to need it.
As Ruth Malan has noted, architectural design involves shaping:
ruth malan (@ruthmalan) June 08, 2015
Would you expect that shaping to result in something coherent if it was merely a collection of disconnected tactical responses?
I’m pleased to announce that I’ve been asked to continue as a contributor to the Iasa Global site. I’m planning to post original content there on at least a monthly basis. In the interim, please enjoy a re-post of “Microservice Mistakes – Complexity as a Service”