Ignorance Isn’t Bliss, Just Good Tactics


There’s an old saying about what happens when you assume.

The fast lane to asininity seems to run through the land of hubris. Anshu Sharma’s TechCrunch article, “Why Big Companies Keep Failing: The Stack Fallacy”, illustrates this:

Stack fallacy has caused many companies to attempt to capture new markets and fail spectacularly. When you see a database company thinking apps are easy, or a VM company thinking big data is easy — they are suffering from stack fallacy.

Stack fallacy is the mistaken belief that it is trivial to build the layer above yours.

Why do people fall prey to this fallacy?

The stack fallacy is a result of human nature — we (over) value what we know. In real terms, imagine you work for a large database company and the CEO asks, “Can we compete with Intel or SAP?” Very few people will imagine they can build a computer chip just because they can build relational database software, but because of our familiarity with building blocks of the layer up, it is easy to believe you can build the ERP app. After all, we know tables and workflows.

The bottleneck for success often is not knowledge of the tools, but lack of understanding of the customer needs. Database engineers know almost nothing about what supply chain software customers want or need.

This kind of assumption can cost an ISV a significant amount of money and a lot of good will on the part of the customer(s) they attempt to disrupt. Assumptions about the needs of the customer (rather than the customer’s customer) can be even more expensive. The smaller your pool of customers, the more damage that’s likely to result. Absent a captive customer circumstance, incorrect assumptions in the world of bespoke software can be particularly costly (even if only in terms of good will). Even comprehensive requirements are of little benefit without the knowledge necessary to interpret them.

But, that being said, familiarity with a domain carries its own risks.

This would seem to pose a dichotomy: domain knowledge as both something vital and an impediment. In reality, there’s no contradiction. As the old saying goes, “a little knowledge is a dangerous thing”. When we couple that with another cliché, “familiarity breeds contempt”, we wind up with Sharma’s stack fallacy, or as xkcd expressed it:

'Purity' on xkcd.com

In order to create and evolve effective systems, we obviously have a need for domain knowledge. We also have a need to understand that what we possess is not domain knowledge per se, but domain knowledge filtered through (and likely adulterated by) our own experiences and biases. Without that understanding, we risk what Richard Martin described in “The myopia of expertise”:

In the world of hyperspecialism, there is always a danger that we get stuck in the furrows we have ploughed. Digging ever deeper, we fail to pause to scan the skies or peer over the ridge of the trench. We lose context, forgetting the overall geography of the field in which we stand. Our connection to the surrounding region therefore breaks down. We construct our own localised, closed system. Until entropy inevitably has its way. Our system then fails, our specialism suddenly rendered redundant. The expertise we valued so highly has served to narrow and shorten our vision. It has blinded us to potential and opportunity.

The Clean Room pattern on CivicPatterns.org puts it this way:

Most people hate dealing with bureaucracies. You have to jump through lots of seemingly pointless hoops, just for the sake of the system. But the more you’re exposed to it, the more sense it starts to make, and the harder it is to see things through a beginner’s eyes.

So, how do we get those beginner’s eyes? Or, at least, how do we get closer to having a beginner’s eyes?

The first step is to set aside the notion that we already understand the problem space. Lacking that innate understanding, we must then do the hard work of determining the architecture of the problem, our context. As Paul Preiss noted, this doesn’t happen at a desk:

Architecture happens in the field, the operating room, the sales floor. Architecture is business technology innovation turned to strategy and then executed in reality. Architecture is reducing the time it takes to produce a barrel of oil, decreasing mortality rates in the hospital, increasing product margin.

Being willing to ask “dumb” questions is just as important. Perception without validation may be just an assumption. Seeing isn’t believing. Seeing, and validating what you’ve seen, is believing.

It’s equally important to understand that validating our assumptions goes beyond just asking for requirements. Stakeholders can be subject to biases and myopic viewpoints as well. While it’s true that Henry Ford’s customers would probably have asked for faster horses, it’s also true that, in a way, that’s exactly what he delivered.

We earn our money best when we learn what’s needed and synthesize those needs into an effective solution. That learning depends on communication unimpeded by our pride or prejudice.


Form Follows Function on SPaMCast 373


This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 373, features Tom’s essay on #NotImplementedNoValue and a Form Follows Function installment on simplistic mental models.

Tom and I discuss my post “All models may be wrong, but it’s not a contest to see how wrong you can be”, talking about cognitive biases and how overly simplistic mental models fail us.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

All models may be wrong, but it’s not a contest to see how wrong you can be


The one thing you can be sure of is that nothing is dependent on only one thing.

Michael Feathers’ tweet last week brought this to mind.

Too often we construct simplistic mental models that fail to account for outcomes that are possible, but inconvenient for us in some way. As Aneel noted while discussing OODA loops in his post “All Models Are Wrong Some Are Useful [In Some Context]”:

OODA is just a vehicle for the larger issue of models, biases, and model-based blindness — Taleb’s Procrustean Bed. Where we chop off the disconfirmatory evidence that suggests our models are wrong AND manipulate [or manufacture] confirmatory evidence.

Because if we allowed the wrongness to be true, or if we allowed ourselves to see that differentness works, we’d want/have to change. That hurts.

Furthermore:

Our attachment [and self-identification] to particular models and ideas about how things are in the face of evidence to the contrary — even about how we ourselves are — is the source of avoidable disasters like the derivatives driven financial crisis. Black Swans.

I wonder how many black swans are only “unpredictable” because we blind ourselves to possibilities.

It’s possible that we cannot eradicate self-deception. “Rats Can Be Smarter Than People” speaks to this via study results that found rats outperforming humans in one of a pair of learning tasks:

The first task involved rules. The second focused on information integration. Humans learn in both ways. Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.

In other words, we have a model about learning (a meta-model?) that works well in many situations. It works so well that we resist looking for contexts where it’s not the appropriate model. While this shows that a propensity for self-deception is natural, the fact that “self” is in there suggests that we have some control. Having some control obligates us to exercise what control we have and work to gain more.
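
To put the distinction in software terms, here’s a rough sketch (my own illustrative analogy; the berry features, thresholds, and data are invented): the first function is rule-based learning, the second integrates several signals at once with no explicit rule.

```python
# Illustrative sketch only: rule-based vs. "information integration"
# classification. Features, thresholds, and data are hypothetical.
from math import dist

def rule_based_is_edible(berry: dict) -> bool:
    """One explicit rule: small and red means edible."""
    return berry["size"] < 1.0 and berry["color"] == "red"

def integrated_is_edible(berry: dict, labeled_examples: list[dict], k: int = 3) -> bool:
    """No explicit rule: integrate several features at once by comparing
    against prior labeled experience (a k-nearest-neighbor vote)."""
    def features(b):
        return (b["size"], b["sweetness"], b["firmness"])
    nearest = sorted(labeled_examples,
                     key=lambda ex: dist(features(ex), features(berry)))[:k]
    return sum(ex["edible"] for ex in nearest) > k / 2

# The rule is cheap and usually right, which is exactly why it is hard
# to notice the contexts where it quietly stops applying.
```

Recognizing which kind of problem we’re facing, and resisting the urge to force a rule onto one that doesn’t have any, is part of the control we can exercise.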

Why?

A critical argument for the practice of software architecture is that the design of the solution must cohere with the problem space in order to be effective. In order to deliver decisions that achieve this goal, we need to be able to make sense of the problem space. Systems thinking, described by Tom Cagley as “…an approach to problem solving that emphasizes viewing problems as the output of the whole process, including the environment the system operates within”, is a technique to do so. The more we force ourselves to be aware of our biases and work to counteract them, the better our decisions will be.

Simple is good, but not when it’s too good to be true.

We Deliver Decisions (Who Needs Architects?)


What do medicine, situational awareness, economics, confirmation bias, and value all have to do with the architectural design of software systems?

Quite a lot, actually. To connect the dots, we need to start from the point of view that the architecture is essentially a set of design decisions intended to solve a problem. The architecture of that problem consists of a set of contexts. The fitness of a solution architecture will depend on how well it addresses the problem architecture. While challenges will emerge in the course of resolving a set of contexts, understanding up front what can be known provides reserves to deal with what cannot.
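
To make that framing concrete, here’s a minimal sketch (the context names and decisions are invented for illustration, not a prescribed method): treat the problem architecture as a set of named contexts and the solution architecture as a set of decisions that record which contexts they address. Any context left unaddressed is a known gap rather than a surprise.

```python
# Hypothetical sketch: problem architecture as contexts, solution
# architecture as decisions that address (some of) those contexts.
from dataclasses import dataclass, field

@dataclass
class Decision:
    title: str
    rationale: str
    addresses: set[str] = field(default_factory=set)  # context names

problem_contexts = {"regulatory reporting", "trade capture",
                    "operations support", "customer self-service"}

decisions = [
    Decision("Use message queue for trade events",
             "decouple capture from reporting",
             addresses={"trade capture", "regulatory reporting"}),
    Decision("Expose REST API for account data",
             "enable new channels",
             addresses={"customer self-service"}),
]

addressed = set().union(*(d.addresses for d in decisions))
unaddressed = problem_contexts - addressed
print("Contexts with no supporting decision:", unaddressed)
# e.g. {'operations support'} -- a gap to investigate now,
# not one to discover in production
```

The value isn’t in the code, it’s in being forced to state explicitly which parts of the problem each decision is meant to serve.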

About two weeks ago, during a Twitter discussion with Greger Wikstrand, I mentioned that the topic (learning via observational studies rather than controlled experiment) coincided with a post I was publishing that day, “First Do No Harm – the Practice of Software Development” (comparing software development to medicine). That triggered a further exchange on the subject.

A few days later, I stumbled across a reference to Frédéric Bastiat‘s classic essay on economics, “What Is Seen and What Is Not Seen”. For those who aren’t motivated to read the work of 19th-century French economists, it deals with the concepts of opportunity costs and the law of unintended consequences via a parable that attacks the notion that broken windows benefit the economy by putting glaziers to work.

A couple more days went by and Greger posted “Confirmation bias in software engineering” on the pitfalls of being too willing to believe information that conforms to our own preferences. That same day, I posted “Let’s Talk Value (Who Needs Architects?)”, discussing the effects of perception on determining value. Matt Ballantine made a comment on that post, and coincidentally, “confirmation bias” came up again:

I think it’s always going to be a balance of expediency and pragmatism when it comes to architecture. And to an extent it relates back to your initial point about value – I’m not sure that value is *anything* but perception, no matter what logic might tell us. Think about the things that you truly value in your life outside of work, and I’d wager that few of them would fit neatly into the equation…

So why should we expect the world of work to be any different? The reality is that it isn’t, and we just have a fashion these days in business for everything to be attributable to numbers that masks what is otherwise a bunch of activities happening under the cognitive process of confirmation bias.

So when it comes to arguing the case for architecture, despite the logic of the long-term gain, short term expedience will always win out. I’d argue that architectural approaches need to flex to accommodate that… http://mmitii.mattballantine.com/2012/11/27/the-joy-of-hindsight/

The common thread through all this is cognition. Perceiving and making sense of circumstances, then deciding how best to respond. The quality of the decision(s) will be directly related to the quality of the cognition. Failing to take a holistic view (big picture and details, not either-or) will impair our perception of the problem, sabotaging our ability to design effective solutions. Our biases can lead to embracing fallacies like the one in Bastiat’s parable, but stakeholders will likely be sensitive to the opportunity costs of avoidable architectural refactoring (the unintended consequence of applying YAGNI at the architectural level). That sensitivity will color their perception of the value of the solution, and their perception is the one that counts.

Making the argument that you did well by costing someone money is a lot easier in the abstract than it is in reality.

When Reality Gets in the Way – Applying Systems Thinking to Design

It’s easy to sympathize with the desire to keep things simple.

It’s also more than a little dangerous if our desire for simplicity moves us to act as if reality isn’t as complex as it is. Take, for example, a recent tweet from John Allspaw on the subject of over-simplification.


As I noted in my previous post, it’s part of human nature to gravitate towards easy answers. We are conditioned to try to impose rules on reality, even when those rules are mistaken. Sometimes this is the result of treating symptoms in an ad hoc manner, as evidenced by a recent Twitter exchange.

This goes by the name of the “balloon effect”: pressure on one area of the problem just pushes it into another, in the way that squeezing a balloon displaces the air inside.

Sometimes our response is born of bias. In sociology, for example, this phenomenon has its own name: “normative sociology”:

The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B.

Some historians likewise have a tendency to over-simplify, fixating on aspects that “ought to be” rather than determining what is (which is another way of saying what can be reasonably defended).

Decision-making is the essence of design. Thought processes that poorly match reality, whether due to bias or insufficient analysis or both, are unlikely to yield optimal results. Systems thinking, “…viewing ‘problems’ as parts of an overall system, rather than reacting to specific parts, outcomes or events, and thereby potentially contributing to further development of unintended consequences”, is an approach more likely to achieve a successful outcome.

When the end result will be a software system integrated into a social system (i.e. a system that is a component of an ecosystem), it makes sense to understand the problem space as the as-is system to be remediated. This holds true whether that as-is system is an automated one or not. While it is not feasible to minutely analyze the problem space, much less design in detail the solution, failing to appreciate the full context on a high level presents risks. These risks include not only those inherent in satisfying the needs of the overlooked context(s), but also those challenges that emerge from the interactions of the various contexts that make up the problem space.

Deciding on a particular design direction is, obviously, a decision. Deferring that determination is, likewise, a decision. Refusing to make a definite decision is a decision as well. The answer is not to push all decisions off to as late a date as possible, but to make decisions in the moment that are defensible given the information at hand. Looking at the problem space as a whole in the context of its ecosystem provides the perspective required to make the optimal decision.
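
One lightweight way to keep those in-the-moment decisions defensible is to record the information they rested on and the conditions that should cause them to be revisited. The sketch below is one possible convention, not a standard; the example and its numbers are invented.

```python
# Hypothetical sketch: a decision record that captures what was known
# at the time and what should cause the decision to be re-examined.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    decision: str
    made_on: date
    known_at_the_time: list[str]   # the information the decision rested on
    assumptions: list[str]         # what we believed but did not verify
    revisit_if: list[str]          # triggers for re-opening the decision

defer_sharding = DecisionRecord(
    decision="Defer database sharding; run a single instance for now",
    made_on=date(2016, 2, 1),
    known_at_the_time=["~200 transactions/minute at peak", "one region"],
    assumptions=["growth stays under 5x over the next year"],
    revisit_if=["sustained load above 1,000 transactions/minute",
                "expansion to a second region"],
)
```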

Of Blind Men and Elephants and Excessive Certainty

Blind men and the elephant

There’s an old poem about six blind men and an elephant, where each in turn declares that an elephant is like a wall, a spear, a snake, a tree, a fan, or a rope. Each accurately described what he was able to discern from his own limited point of view, yet all were wrong about the subject as a whole. As the poet noted:

Moral:

So oft in theologic wars,
The disputants, I ween,
Rail on in utter ignorance
Of what each other mean,
And prate about an Elephant
Not one of them has seen!

Sometimes our attitudes color our perception of others.

Management is often the butt of our disdain, frequently expressed in cartoon form.

However, as Sandro Mancuso related in “Not all managers are stupid”:

I still remember the day when our managers in a large organisation told us we should still go live after we reported a major problem a couple of months before the deadline…There was a problem in a couple of unfinished flows, which would cause hundreds of thousands of trades to be misreported to the regulators. After we explained the situation, managers told us to work harder and go ahead with the release anyway.

How could they tell us to go live in a situation like that? They should all be fired. Arrested. How could they ask us to drop the quality and go live with a known problem of that size?…

More than once we made it clear that focusing our time on getting the system ready for production would not give us any time to finish the automation for the problematic flows and thousands of trades would be misreported. But they did not listen. Or so we thought.

After a few meetings with the business, we discovered a few things. They were not being irresponsible or stupid, as we developers thought. The deadline was set by the regulators and could not be moved. The cost of not reporting the trades was far higher than misreporting them. Not reporting the trades would not only be followed by heavy fines, but also by possible reputation damage. Companies would have extra time to correct any misreported trades before being fined.

For us, in the development team, it was the first time we realised that going live with a few known issues would be better than not going live at all.
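
To see why the business call can be rational, it helps to sketch the comparison as a rough expected cost. The figures below are entirely hypothetical; what matters is the shape of the trade-off described above, not the numbers.

```python
# Hypothetical numbers only -- a sketch of the trade-off the managers saw.
fine_for_not_reporting = 5_000_000    # regulatory fine for missing the deadline
reputation_damage = 2_000_000         # estimated cost of public non-compliance
cost_to_correct_later = 300_000       # rework to fix misreported trades in the grace period
risk_of_fine_if_corrected_in_time = 0.05  # small residual chance of still being fined

cost_of_not_shipping = fine_for_not_reporting + reputation_damage
cost_of_shipping_with_known_issue = (
    cost_to_correct_later
    + risk_of_fine_if_corrected_in_time * fine_for_not_reporting
)

print(f"Hold the release:       {cost_of_not_shipping:>12,.0f}")
print(f"Ship and correct later: {cost_of_shipping_with_known_issue:>12,.0f}")
# With anything like these proportions, going live with a known,
# correctable defect is the cheaper (and less damaging) option.
```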

Designing the architecture of a solution, at its core, is an exercise in decision-making. Whether the system in question is a software system or a human system, effective decision-making must be preceded by sense-making to identify the architecture of the problem. Contexts need to be identified in order to be synthesized into the architecture of the problem.

Bias, being too certain of our understanding to make the effort to validate it, is a good way to miss out on what’s in front of us. Failing to recognize our potential for bias makes it harder to overcome that bias. That failure restricts our ability to appreciate the full range of contexts to be synthesized and puts us in the same position as the blind men with the elephant.

It’s extremely difficult to solve a problem you don’t understand.

“Who’s your predator?” on Iasa Global Blog


See the full post on the Iasa Global Blog (a re-post, originally published here).

Design Follies – ‘Why can’t I do that?’


It’s ironic that the traits we think of as making a good developer are also those that can get in the way of design and testing, but that is very much the case. Think of how many times you’ve heard (or perhaps said) “no one would ever do that”. Yet, given the event-driven, non-linear nature of modern systems, if a given execution path can occur, it will occur. Our cognitive biases can blind us to potential issues that arise when our product is used in ways we did not intend. As Thomas Wendt observed in “The Broken Worldview of Experience Design”:

To a certain extent, the designer’s intent is irrelevant once the product launches. That is, intent can drive the design process, but that’s not the interesting part; the ways in which users adopt the product to their own needs is where the most insight comes from. Designer intent is a theoretical, speculative formulation even when based on the most rigorous research methods and valid interpretations. That is not to say intention and strategic positioning is not important, but simply that we need to consider more than idealized outcomes.

Abhi Rele, in “APIs and Data: Journey to the Center of the Customer Experience”, put it in more concrete terms:

If you think you’re in full control of your customers’ experience, you’re wrong.

Customers increasingly have taken charge—they know what they want, when they want it, and how they want it. They are using their mobile phones more often for an ever-growing list of tasks—be it searching for information, looking up directions, or buying products. According to Google, 34% of consumers turn to the device that’s closest to them. More often than not, they’re switching from one channel or device mid-transaction; Google found that 67% of consumers do just that. They might start their product research on the web, but complete the purchase on a smartphone.

Switch device in mid-transaction? No one would ever do that! Oops.

We could, of course, decide to block those paths that we don’t consider “reasonable” (as opposed to stopping actual error conditions). The problem with that approach is that our definition of “reasonable” may conflict with the customer’s definition. When “conflict” and “customer” are in the same sentence, there’s generally a problem.
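
To make the distinction concrete, here’s a hypothetical sketch (the checkout service, its names, and its rules are invented for illustration): genuine error conditions are rejected, while a merely “unreasonable” path, like the mid-transaction device switch described above, is left open.

```python
# Hypothetical sketch: reject genuine error conditions, not paths we
# merely didn't anticipate (e.g., resuming checkout from another device).
from dataclasses import dataclass, field

class ValidationError(Exception):
    pass

@dataclass
class Cart:
    owner_id: str
    created_on_device: str
    items: list = field(default_factory=list)

def resume_checkout(cart: Cart, customer_id: str, device_id: str) -> str:
    # Actual error conditions: these states can never be valid.
    if not cart.items:
        raise ValidationError("Cannot check out an empty cart")
    if cart.owner_id != customer_id:
        raise ValidationError("Cart does not belong to this customer")

    # Tempting, but it blocks a legitimate (if unanticipated) path:
    # the customer who started on the web and finishes on a phone.
    # if cart.created_on_device != device_id:
    #     raise ValidationError("Checkout must finish on the device it started on")

    return "proceed to payment"

# The cart is keyed to the customer, not the device, so the
# "no one would ever do that" path works anyway.
cart = Cart(owner_id="c-42", created_on_device="web", items=["book"])
print(resume_checkout(cart, customer_id="c-42", device_id="phone"))
```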

These conflicts, in the right domain, can even have deadly results. In investigating the Asiana Airlines crash of July 2013, the National Transportation Safety Board (NTSB) found that the crew’s beliefs about what the autopilot system would do did not coincide with what it actually did (my emphasis):

The NTSB found that the pilots had “misconceptions” about the plane’s autopilot systems, specifically what the autothrottle would do in the event that the plane’s airspeed got too low.

In the setting that the autopilot was in at the time of the accident, the autothrottles that are used to maintain specific airspeeds, much like cruise control in a car, were not programmed to wake up and intervene by adding power if the plane got too slow. The pilots believed otherwise, in part because in other autopilot modes on the Boeing 777, the autothrottles would in fact do this.

“NTSB Blames Pilots in July 2013 Asiana Airlines Crash” on Mashable.com

Even if it doesn’t contribute to a tragedy, a poor user experience (inconsistent, unstable, or overly restrictive) can lead to unintended consequences, customer dissatisfaction, or both. Basing that user experience on assumptions instead of research and/or testing increases the risk. As I’ve stated previously, risky assumptions are an assumption of risk.
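
The design lesson carries over directly to the systems we build: a protective behavior that engages in some modes and is silently absent in others invites exactly the kind of misconception described above. The sketch below is a loose, hypothetical analogy in code, not a model of the actual aircraft systems.

```python
# Hypothetical analogy: a safeguard that exists only in some modes
# teaches users to expect it in all of them.

MIN_SAFE_SPEED = 137  # knots; illustrative value only

def autothrottle_response(mode: str, airspeed: float) -> str:
    """Mode-dependent protection: surprising when the mode changes."""
    if mode == "SPEED" and airspeed < MIN_SAFE_SPEED:
        return "add thrust"              # protection engages in this mode...
    return "maintain current setting"    # ...but silently not in others

def consistent_response(mode: str, airspeed: float) -> str:
    """Same invariant in every mode: the low-speed floor always applies."""
    if airspeed < MIN_SAFE_SPEED:
        return "add thrust"
    return "maintain current setting"

# A user trained by the first behavior in SPEED mode can reasonably,
# and wrongly, expect the same floor protection everywhere.
print(autothrottle_response("SPEED", 120))  # add thrust
print(autothrottle_response("HOLD", 120))   # maintain current setting
```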

Who’s Your Predator?


Have you ever worked with someone with a talent for asking inconvenient questions? They’re the kind of person who asks “Why?” when you really don’t have a good reason or “How?” when you’re really not sure. Even worse, they’re always capable of finding those scenarios where your otherwise foolproof plan unravels.

Do you have someone like that?

If so, treasure them!

In a Twitter conversation with Charlie Alfred, he pointed out that you need a predator, “…an entity that seeks the weakest of the designs that evolve from a base to ensure survival of fittest”. You need this predator, because “…without a predator or three, there’s no limit to the number of unfit designs that can evolve”.

Preach it, brother. Sometimes the best friend you can have is the person who tells you what you don’t want to hear. It’s easier (and far cheaper) to deal with problems early rather than late.

I’ve posted previously regarding the benefits of designing collaboratively and the pitfalls of too much self-reliance, but it’s one of those topics that merits an occasional reminder. As Richard Feynman noted, “The first principle is that you must not fool yourself – and you are the easiest person to fool.”

Self-confidence is a natural and desirable trait. Obviously, if you’re not confident in your decision, you should probably defer committing to it. That confidence, however, can blind us to potential flaws. Just as innovators overestimate consumer interest in their product, we can place too much faith in our own decisions and beliefs. Our lack of emotional distance can make it easy to fool ourselves. In that case, having someone to challenge our assumptions can be invaluable.

Working collaboratively increases the odds that flaws will be found, but does not guarantee it. Encouraging questions, even challenges, is a good start – you don’t want to cause a failure because people were afraid to question the design. However, groups can be as subject to cognitive biases as individuals (for a great overview, see Thomas Cagley Jr.’s excellent series on the subject: July 8th, July 9th, July 10th, July 11th, and July 12th). An absence of bad news is not necessarily good news.

Sometimes you have to be your own predator.

Getting ideas out of your head is helpful in getting the distance you need to evaluate your ideas in a more objective manner. Likewise, writing and/or diagramming forces a bit of rigor and organization, making it easier to spot gaps in the design. The more scrutiny a design can withstand, the more likely it is to survive in the wild.

When the wolf’s at the door, you’ll want to rely on something that’s been proven, not pampered.