We Deliver Decisions (Who Needs Architects?)

Broken Window

What do medicine, situational awareness, economics, confirmation bias, and value all have to do with the architectural design of software systems?

Quite a lot, actually. To connect the dots, we need to start from the point of view that the architecture is essentially a set of design decisions intended to solve a problem. The architecture of that problem consists of a set of contexts. The fitness of a solution architecture will depend on how well it addresses the problem architecture. While challenges will emerge in the course of resolving a set of contexts, understanding up front what can be known provides reserves to deal with what cannot.

About two weeks ago, during a Twitter discussion with Greger Wikstrand, I mentioned that the topic (learning via observational studies rather than controlled experiment) coincided with a post I was publishing that day, “First Do No Harm – the Practice of Software Development” (comparing software development to medicine). That triggered the following exchange:

A few days later, I stumbled across a reference to Frédéric Bastiat’s classic essay on economics, “What Is Seen and What Is Not Seen”. For those who aren’t motivated to read the work of 19th century French economists, it deals with the concepts of opportunity costs and the law of unintended consequences via a parable that attacks the notion that broken windows benefit the economy by putting glaziers to work.

A couple more days went by and Greger posted “Confirmation bias in software engineering” on the pitfalls of being too willing to believe information that conforms to our own preferences. That same day, I posted “Let’s Talk Value (Who Needs Architects?)”, discussing the effects of perception on determining value. Matt Ballantine made a comment on that post, and coincidentally, “confirmation bias” came up again:

I think it’s always going to be a balance of expediency and pragmatism when it comes to architecture. And to an extent it relates back to your initial point about value – I’m not sure that value is *anything* but perception, no matter what logic might tell us. Think about the things that you truly value in your life outside of work, and I’d wager that few of them would fit neatly into the equation…

So why should we expect the world of work to be any different? The reality is that it isn’t, and we just have a fashion these days in business for everything to be attributable to numbers that masks what is otherwise a bunch of activities happening under the cognitive process of confirmation bias.

So when it comes to arguing the case for architecture, despite the logic of the long-term gain, short term expedience will always win out. I’d argue that architectural approaches need to flex to accommodate that… http://mmitii.mattballantine.com/2012/11/27/the-joy-of-hindsight/

The common thread through all this is cognition: perceiving and making sense of circumstances, then deciding how best to respond. The quality of the decision(s) will be directly related to the quality of the cognition. Failing to take a holistic view (big picture and details, not either-or) will impair our perception of the problem, sabotaging our ability to design effective solutions. Our biases can lead us to embrace fallacies like the one in Bastiat’s parable, and stakeholders will likely be sensitive to the opportunity costs of avoidable architectural refactoring (the unintended consequence of applying YAGNI at the architectural level). That sensitivity will color their perception of the value of the solution, and their perception is the one that counts.

Making the argument that you did well by costing someone money is a lot easier in the abstract than it is in reality.

Let’s Talk Value (Who Needs Architects?)

Gold Bars

Value is a term that’s heard often these days, but I wonder how well it’s understood. Too often, it seems, value is taken to mean raw benefit rather than its actual meaning, benefit after cost (i.e. “bang for the buck”). An even better understanding of the concept can be had from Tom Cagley’s “Breaking Down Value”: “Value = (Benefit + Perception) – (Cost + Perception)”.
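Cagley’s formula can be made concrete with a toy calculation. The sketch below is purely illustrative (the numbers and the `perceived_value` helper are hypothetical, not from Cagley’s post); it shows how identical raw benefit and cost can yield very different value once perception enters the equation:

```python
def perceived_value(benefit, benefit_perception, cost, cost_perception):
    """Tom Cagley's formula: Value = (Benefit + Perception) - (Cost + Perception).
    Perception terms adjust the raw numbers up or down."""
    return (benefit + benefit_perception) - (cost + cost_perception)

# Same raw benefit (100) and raw cost (60), different perceptions.
# A change seen as necessary: benefit is appreciated, cost is discounted.
necessary = perceived_value(benefit=100, benefit_perception=20,
                            cost=60, cost_perception=-10)

# The same change seen as avoidable rework: no credit for the benefit,
# and the cost feels inflated.
avoidable = perceived_value(benefit=100, benefit_perception=0,
                            cost=60, cost_perception=30)

print(necessary)  # 70
print(avoidable)  # 10
```

Same work, same bill; the perceived value differs by a factor of seven. That gap is the point of the posts that follow.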

The point being?

Change involves costs and, one would hope, benefits. Not only does the magnitude of the cost matter, but so does its perception. Where a cost is seen as unnecessary, or as incurred to benefit someone other than the one paying the bills, its perception will likely be unfavorable. Changes that come about due to unforeseen circumstances are more likely to be seen as necessary than those stemming from foreseeable ones. Changes to accommodate known needs are the least likely to be seen as reasonable. This is why I’ve always maintained that YAGNI doesn’t scale beyond low-level design into the realm of architectural decisions. Where cost of change determines architectural significance, decision churn is problematic.

After I posted “Who Needs Architects? Who’s Minding the Architecture?”, Charlie Alfred tweeted:

One way to be seen as an asset is to provide value. As Cesare Pautasso put it:

This is not to say that architectural refactoring is without value, but that refactoring will be seen as redundant work. When that work is for foreseeable needs, it will be perceived as costlier and less beneficial than strictly new functionality. Refactoring to accommodate known needs will suffer an even greater perception problem.

YAGNI presumes that the risk of flexible design being unnecessary outweighs the risk of refactoring being unnecessary. In my opinion, that is far too simplistic a view. Some functionality and qualities may be speculative, but the need for others (e.g. security) will be more certain.

Studies have shown that the ability to modify an application is a prime quality concern for stakeholders. Flexible design enables easier and cheaper change. “Big bang” changes (expensive and painful) are more likely where coherent design is lacking. Holistic design based on context seems to provide more value (both tangible and perceived) than a dogma-driven process of stringing together tactical decisions and hoping for the best.

“Laziness as a Virtue in Software Architecture” on Iasa Global

Laziness may be one of the Seven Deadly Sins, but it can be a virtue in software development. As Matt Osbun observed:

Robert Heinlein noted the benefits of laziness:

See the full post on the Iasa Global Site (a re-post, originally published here).

Who Needs Architects? Because YAGNI Doesn’t Scale

If you choose not to decide
You still have made a choice

Rush – “Free Will”

Bumper sticker philosophies (sayings that are short, pithy, attractive to some, and so lacking in nuance as to be dangerous) are pet peeves of mine. YAGNI (“You Ain’t Gonna Need It”) is right at the top of my list. I find it particularly dangerous because it contains a kernel of truth, but expressed in a manner that makes it very easy to get in trouble. This is particularly the case when it’s paired with other ones like “the simplest thing that could possibly work” and “defer decisions to the last responsible moment”.

Much has already been written (including some from me) about why it’s a bad idea to implement speculative features just because you think you might need them. The danger there is so easy to see that it’s not worth reiterating here. What some seem to miss, however, is that there is a difference between implementing something you think you might need and implementing something that you know (or should know) you will need. This is where “the simplest thing that could possibly work” can cause problems.

In his post “Yagni”, Martin Fowler detailed a scenario where there’s a temptation to implement features that are planned for the future but not yet ready to be used by customers:

Let’s imagine I’m working with a startup in Minas Tirith selling insurance for the shipping business. Their software system is broken into two main components: one for pricing, and one for sales. The dependencies are such that they can’t usefully build sales software until the relevant pricing software is completed.

At the moment, the team is working on updating the pricing component to add support for risks from storms. They know that in six months time, they will need to also support pricing for piracy risks. Since they are currently working on the pricing engine they consider building the presumptive feature [2] for piracy pricing now, since that way the pricing service will be complete before they start working on the sales software.

Even knowing that a feature is planned is not enough to jump the gun and implement it ahead of time. As Fowler points out, there are carrying costs and opportunity costs involved in doing so, not to mention the risk that the feature drops off the radar or its details change. That being said, there is a difference between prudently waiting to implement the feature until it’s time and ignoring it altogether:

Now we understand why yagni is important we can dig into a common confusion about yagni. Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify. Yagni is only a viable strategy if the code is easy to modify, so expending effort on refactoring isn’t a violation of yagni because refactoring makes the code more malleable. Similar reasoning applies for practices like SelfTestingCode and ContinuousDelivery. These are enabling practices for evolutionary design, without them yagni turns from a beneficial practice into a curse.

In other words, while it makes sense to defer the implementation of the planned feature, it’s also smart to ensure that there’s sufficient flexibility in the design so that the upcoming feature won’t cause refactoring to the extent that it’s almost a rewrite. Deferring the decision to do necessary work now is a decision to incur unnecessary rework later. Likewise, techniques implemented to ameliorate failure (circuit breakers in distributed systems, feature flags, etc.) should not be given the YAGNI treatment, even when you hope they’re never needed. The “simplest thing that could possibly work” may well be completely inadequate for the messy world the system inhabits.
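The circuit breaker mentioned above is a good example of a mechanism you build hoping it never trips. A minimal sketch of the pattern follows; the class name, thresholds, and exception choice are illustrative assumptions, not any particular library’s API:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker sketch: after max_failures consecutive
    failures, calls fail fast for reset_timeout seconds, after which
    a single trial call (the 'half-open' state) is allowed through."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering a broken dependency.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker
            raise
        self.failures = 0  # any success closes the circuit
        return result
```

The point isn’t this particular implementation; it’s that such a safeguard has to exist before the failure it guards against, which is exactly why it can’t be deferred on YAGNI grounds.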

It’s probably impossible to architect a system that is perfect, since the definition of perfection changes. It is possible, however, to architect systems that deal gracefully with change. That requires thought, not adherence to simplistic slogans.

Planning and Designing – Intentional, Accidental, Emergent

Over the last three years, I’ve written eleven posts tagged with “Emergence”. In a discussion of my previous post over the past week, I’ve come to the realization that I’ve been misusing that term. In essence, I’ve been conflating emergent architecture with accidental architecture, when they’re very different concepts:

In both cases, aspects of the architecture emerge, but the circumstances under which that occurs are vastly different. When architectural design is intentional, emergence occurs as a result of learning. For example, as multiple contexts are reconciled, challenges will emerge. This learning process will continue over the lifetime of the product. With accidental architecture, emergence occurs via lack of preparation, either through inadequate analysis or, perversely, through intentionally ignoring needs that aren’t required for the task at hand (even if those needs are confirmed). With this type of emergence, lack of a clear direction leads to conflicting ad hoc responses. If time is not spent reworking these responses, system coherence suffers. The fix for the problem of Big Design Up Front (BDUF) is appropriate design, not absence of design.

James Coplien, in his recent post “Procrastination”, takes issue with the idea of purposeful ignorance:

There is a catch phrase for it which we’ll examine in a moment: “defer decisions to the last responsible moment.” The agile folks add an interesting twist (with a grain of truth) that the later one defers a decision, the more information there will be on which to base the decision.

Alarmingly, this agile posture is used either as an excuse or as an admonition to temper up-front planning. The attitude perhaps arose as a rationalisation against the planning fanaticism of 1980s methodologies. It’s true that time uncovers more insight, but the march of time also opens the door both to entropy and “progress.” Both constrain options. And to add our own twist, acting early allows more time for feedback and course correction. A stitch in time saves nine. If you’re on a journey and you wait until the end to make course corrections, when you’re 40% off-course, it takes longer to remedy than if you adjust your path from the beginning. Procrastination is the thief of time.

Rebecca Wirfs-Brock has also blogged on feeling “discomfort” and “stress” when making decisions at the last responsible moment. That stress is significant, given study findings she quoted:

Giora Keinan, in a 1987 Journal of Personality and Social Psychology article, reports on a study that examined whether “deficient decision making” under stress was largely due to not systematically considering all relevant alternatives. He exposed college student test subjects to “controllable stress”, “uncontrollable stress”, or no stress, and measured how it affected their ability to solve interactive decision problems. In a nutshell being stressed didn’t affect their overall performance. However, those who were exposed to stress of any kind tended to offer solutions before they considered all available alternatives. And they did not systematically examine the alternatives.

Admittedly, the test subjects were college students doing word analogy puzzles. And the uncontrolled stress was the threat of a small random electric shock….but still…the study demonstrated that once you think you have a reasonable answer, you jump to it more quickly under stress.

It should be noted that although this study didn’t show a drop in performance due to stress, the problems involved were more black and white than design decisions, which are best-fit problems. Failure to “systematically examine the alternatives” and the tendency to “offer solutions before they considered all available alternatives” should be considered red flags.

Coplien’s connection of design and planning is significant. Merriam-Webster defines “design” as a form of planning (and the reverse works as well if you consider organizations to be social systems). A tweet from J. B. Rainsberger illustrates an extremely important point about planning (and by extension, design):

In my opinion, a response to “unexpected results” is more likely to be effective if you have systematically examined the alternatives beforehand, when the stress that leads you to jump to a solution prematurely is absent. What needs to be avoided is failing to ensure that the plan/design aligns with the context. This type of intentional planning/design can provide resilience by taking future needs and foreseeable issues into account, giving you options for when the context changes. Even if those needs are never implemented, you can avoid constraints that would make dealing with them more difficult when they do arise. Likewise, having options in place for dealing with likely issues can make the difference between a brief problem and a prolonged outage. YAGNI is only a virtue when you really aren’t going to need it.

As Ruth Malan has noted, architectural design involves shaping:

Would you expect that shaping to result in something coherent if it was merely a collection of disconnected tactical responses?

Laziness as a Virtue in Software Architecture

Laziness may be one of the Seven Deadly Sins, but it can be a virtue in software development. As Matt Osbun observed:

Robert Heinlein noted the benefits of laziness:

Even in the military, laziness carries potential greatness (emphasis mine):

I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent — their place is the General Staff. The next lot are stupid and lazy — they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.

Generaloberst Kurt von Hammerstein-Equord

The lazy architect will ignore slogans like YAGNI and the Rule of Three when experience and/or information tells them that it’s far more likely than not that the need will arise. As Matt stated in “Foreseeable and Imaginary Design”, architects must ask “What changes are possible and which of those changes are foreseeable”. The slogans point out that engineering costs, but so does the re-work resulting from deferred decisions. Avoiding that extra work (laziness) avoids the cost associated with it (frugality).

Likewise, lazy architects are less likely to cave when confronted with the sayings of some notable person. Rather than resign themselves to the extra work, they’re more likely to examine the statement as Kevlin Henney did:

It’s far cheaper to design and build a system according to its context than re-build a system to fix issues that were foreseeable. The re-work born of being too focused on the tactical to the detriment of the strategic is as much a form of technical debt as cutting corners. The lazy architect knows that time spent identifying and reconciling contexts can allow them to avoid the extra work caused by blind incremental design.

“Bumper Sticker Philosophy” on Iasa Global Blog

Yeah!  Wait, what?

YAGNI: You Ain’t Gonna Need It.

Sound bites are great – short, sweet, clear, and simple. Just like real life, right?

Seductive simple certainty is what makes slogans so problematic. Uncertainty and ambiguity occur far more frequently in the real world. Context and nuance add complexity, but not all complexity can be avoided. In fact, removing essential complexity risks misapplication of otherwise valid principles.

See the full post on the Iasa Global Blog (a re-post, originally published here).