Form Follows Function on SPaMCast 471

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

Last week's episode, number 471, features Tom's essay on the top 20 transformation killers. Jeremy Berriault's QA corner is about involving testers in the requirements process. My Form Follows Function segment rounds out the podcast, covering my post "Systems Thinking Complicates Things".

In this installment, Tom and I talk about how simplistic analysis is unlikely to fit a complex problem. We illustrate this by talking about the game of “rock, paper, scissors” (then graduate to “rock, paper, scissors, lizard, Spock”!). I don’t really think that’s what’s meant by game theory, but hey, it’s fun (and illustrative) nonetheless.

You can find all the SPaMCast episodes I’m in under the SPaMCast Appearances category on this blog. Enjoy!


Systems Thinking Complicates Things

4th UK Rock Paper Scissors Championships by James Bamber via Wikimedia

 

I've had the honor and pleasure of appearing as a regular on Tom Cagley's SPaMCast podcast for almost three years now. Before I write one of my "Form Follows Function on SPaMCast x" posts, I always listen to the podcast to make sure that the summary is right (the implication being, relying purely on my memory won't be right). I got a bonus while writing up last week's appearance, because Tom asked an excellent question that deserved its own post: does thinking about a problem (legacy systems, in the instance of last week's discussion) holistically/systemically complicate things?

Abso-freakin’-lutely.

It is much easier to avoid all the twists and turns and possibilities inherent in systems thinking. A simpler approach, picking one lever to pull/one button to push, makes it much easier to come up with a solution.

It just doesn’t work very well at coming up with solutions that actually work.

When there is a mismatch in complexity between problem and solution architectures, the mismatch itself becomes an additional problem to deal with. This holds both when the solution is more complex than the problem space warrants and when it is simpler. Solutions that fail to account for the context they will encounter are vulnerable. This is the idea behind the quote attributed to Albert Einstein: "Everything should be made as simple as possible, but not simpler."

Human nature can push us to fix problems quickly, and quick will generally equate to simple. It takes time to analyse the angles and consider the alternatives. How often have you seen people ask for “the best” way to do something absent any context? How often have you seen people ask “why would someone ever do that?”

I’ll answer that by asking 3 questions:

  • since Rock beats Scissors, why would anyone ever choose Scissors?
  • since Paper beats Rock, why would anyone ever choose Rock?
  • since Scissors beats Paper, why would anyone ever choose Paper?

Reality isn't binary. It's not about what's "best"; it's about what's fit for purpose in a given context, and there are lots and lots of contexts out there.
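
To make that concrete, here's a minimal Python sketch (my own illustration, not something from the podcast) encoding the full "rock, paper, scissors, lizard, Spock" rules. Every move beats exactly two others and loses to the remaining two, so the answer to all three questions above is the same: no choice is best in every context.

```python
# The "beats" relation for rock, paper, scissors, lizard, Spock.
# Every move beats exactly two others and loses to the other two,
# so there is no context-free "best" choice.
BEATS = {
    "rock":     {"scissors", "lizard"},
    "paper":    {"rock", "spock"},
    "scissors": {"paper", "lizard"},
    "lizard":   {"spock", "paper"},
    "spock":    {"scissors", "rock"},
}

def winner(a, b):
    """Return the winning move, or None on a tie."""
    if a == b:
        return None
    return a if b in BEATS[a] else b

# Every move wins exactly two of its four possible matchups.
for move in BEATS:
    wins = [other for other in BEATS if winner(move, other) == move]
    print(f"{move} beats {len(wins)} of 4: {sorted(wins)}")
```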

This isn't to say that all quick, simple interventions are wrong. If you find yourself in a house fire, more action and less comprehensive deliberation may well be in order. The key is matching the cost (largely in terms of time) of defining the problem space with the cost (in terms of both effort and risk that the intervention adds to the problem) of crafting the solution.

Rock, Paper, Scissors, Lizard, Spock rules diagram

It's almost guaranteed that the system contexts we deal with (both technical and social) will evolve toward more and more complexity. Surprises will emerge as a matter of course. We don't need to create more of them by failing to take a holistic view when we have the time to do so.

Innovation, Intention, Planning and Execution

Napoleon at Wagram

 

Convergence is an interesting thing. Greger Wikstrand and I have been trading posts back and forth on the subject of innovation for almost eighteen months now (forty posts in total). I've also been writing a lot on the concept of organizations as systems (twenty-two posts over the last year, with some overlap with innovation). The need for architectural design (and make no mistake, social systems like organizations require as much architectural design over their lifetime as any software system) and the superiority, in my opinion, of intentional architecture over accidental architecture are also long-standing themes on this site.

My last post, "Architecture Corner: Good at innovation – Seven Deadly Sins of IT", linked to a YouTube video produced by and starring Greger and Casimir Artmann. It's worth watching, so I won't give away the plot, but I will say that it demonstrates how these concepts interrelate.

Effectiveness requires reasoned, intentional action. I've used this quote from Tom Graves many times before, but it still applies: "things work better when they work together, on purpose".

The word "purpose" is critical to that sentence. The difference between intentional and accidental activity is the difference between being goal-directed and flailing blindly (n.b. experimenting, done right, is the former, not the latter). An understanding of purpose can allow a goal to be reached even when the initial route to that goal is closed off. Merely completing a required set of tasks lacks that flexibility. This appreciation of the utility of purpose-oriented direction over micro-management is an old one that the military periodically re-visits:

An understanding of the purpose aids the joint force in exercising disciplined initiative to facilitate the commander’s visualized end state. Moreover, the purpose itself not only drives why tasks must happen, but also how subordinate commanders choose to execute their assigned mission(s).

Purposes must be carefully crafted, nested, and organized not only to achieve unity of effort, but also the intended outcomes (selected tasks to execute, method of execution, and/or desired effects). They also must give subordinates the latitude to find better, innovative solutions to tactical and operational problems. Finally, the operational purpose must ultimately nest back to the strategic national interest in order to affect change in the human domain. Purposes for the subordinate operations must be well thought out, nested within the desired operational objectives, and be the correct purpose in order to achieve the desired operational end state. Therefore, it is incumbent upon commanders to develop purposes for subordinate operations first and subsequently build the tasks. The “why” trumps the “how” both in importance and in priority.

What to accomplish and why are more important than how to accomplish something. As the author of the article above noted, communicating purpose "…enables subordinates to take advantage of emergent opportunities that arise by enabling shared understanding of the commander's purpose and end state." It should also force those providing direction to examine their rationale for what they're asking for. "Why" is the most important question that can be asked. Activity that is not tailored to achieving a particular aim will be ineffective. This includes chasing the latest silver bullet. A recent article on International Business Times, "As a term of description, 'digital' is now an anachronism", had this to say:

As a term of description, digital is an anachronism. It reflects an organisational mindset that views technological transformation itself as the aim. It’s a common mistake. At the height of the dotcom boom, suddenly everyone needed a website, but not everyone understood why.

Over the last few years, the drive to digitisation has intensified. Business models, brands, products and services, customer relationships and business processes are increasingly governed by digital elements such as data.

But much the same as building a website in 1999, it’s not a question of becoming “more digital”. It’s a question of what you want digital to do.

Confusing means and ends is both futile and expensive. No matter how many tools I buy, buying tools won’t make me a carpenter (though my bank balance will continue to shrink regardless of whether the purchase helped or not). Dropping tools and techniques into a culture that is not able or prepared to use them accomplishes nothing. Likewise, becoming more “digital” (or for that matter, more “agile”), will not help an organization if it’s heading in the wrong direction. Efficiency and effectiveness are two different things that may well not go hand in hand. Just as important to understand, efficiency must take a subordinate position to effectiveness. You cannot do the wrong thing efficiently enough to turn it into the right thing.

You need to understand what you want to do and what the constraints, if any, are. That understanding will allow you to figure out how you're going to try to do it and determine whether the tools and techniques will get you there. The alternative is delay (waiting for new instructions) caused by the bottleneck of over-centralized decision-making, with a high probability of something getting lost in translation.

Work together purposefully so things work better.

Microservices, Monoliths, and Modularity

Iceberg

 

There are very valid reasons for considering a microservice architecture (MSA) when building/evolving an application. In my opinion, however, forcing modularity isn’t one of those very valid reasons.

Just the other day, I saw a tweet from Simon Brown saying the same thing.

I still like his comment from two years back: “I’ll keep saying this … if people can’t build monoliths properly, microservices won’t help”. I believe that if you’re having problems building a monolith properly, trying to use a distributed architecture to force modularity may actually cause harm.

MSAs, like any distributed application architecture, involve increased complexity and costs; table stakes, if you will. Like an iceberg, there’s both a lot more to it than just what’s showing above the waterline and a fair amount of hazard for the unwary. If a development team cannot or will not comply with design guidelines (e.g. modularity requirements), injecting additional complexity is probably not the solution you need.

Distributing an application makes it harder to accidentally entangle different concerns, but it doesn't make it impossible.

I’d argue that making it harder to accidentally break modularity addresses neither of the groups I mentioned earlier: those that cannot or will not comply. It’s ironic, but those who fail to understand the need for modularity can be very creative in their “solutions”, regardless of the obstacles. Likewise, those who refuse to comply.

In short, distribution as a means of “ensuring” modularity fails the fitness for purpose test.
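
If the real goal is modularity, there are far cheaper ways to encourage it inside a monolith than distributing it. As a rough sketch (the package names and layout here are hypothetical, not from any particular codebase), a build-time check using Python's standard ast module can flag imports that cross declared module boundaries:

```python
# Hypothetical monolith layout: src/orders, src/billing, src/shared.
# The check fails loudly when a module imports across a declared boundary.
import ast
import pathlib

# Which top-level packages each package is allowed to import from.
ALLOWED = {
    "orders":  {"shared"},
    "billing": {"shared"},
    "shared":  set(),
}

def violations(src_root):
    """Yield (file, offending import) pairs that cross a declared boundary."""
    root = pathlib.Path(src_root)
    for path in root.rglob("*.py"):
        package = path.relative_to(root).parts[0]
        if package not in ALLOWED:
            continue
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                top = name.split(".")[0]
                if top in ALLOWED and top != package and top not in ALLOWED[package]:
                    yield path, name

if __name__ == "__main__":
    found = list(violations("src"))
    for path, name in found:
        print(f"boundary violation: {path} imports {name}")
    raise SystemExit(1 if found else 0)
```

A check like this costs almost nothing compared to standing up a distributed system, though, as noted above, no tooling will save you from a team that cannot or will not respect the boundaries.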

The situation becomes worse when you factor in the additional complexity inherent in a distributed system. Likewise, there’s the cost of the table stakes (infrastructure, process, staffing, etc.) mentioned above. Of course, having abandoned the principle of cause and effect, one could attempt some “creative” workarounds to avoid having to pay the price (in other words, adding more and more complexity).

When you introduce significant additional complexity (with all its attendant risk) with little chance of the technique actually achieving its goal, you’ve caused harm.

These concerns are not limited solely to the application architecture. Distributing the data architecture has the same limitations in terms of ensuring modularity and introduces additional complexity. Adding boundaries adds the need for governance. A single, disciplined team can maintain modularity in a monolithic data architecture. Multiple separate teams trying to share a monolithic data architecture will either experience a crippling level of governance overhead or a complete breakdown in modularity.

MSAs can be useful when you need independently scalable and replaceable components. They can also be appropriate when you have multiple teams working on one logical application. Using the technique when the cost outweighs the potential payoff, however, is a losing bet.

Emergence: Babies and Bathwater, Plans and Planning

blueprints

 

“Emergent” is a word that I run into from time to time. When I do run into it, I’m reminded of an exchange from the movie Gallipoli:

Archy Hamilton: I’ll see you when I see you.
Frank Dunne: Yeah. Not if I see you first.

The reason for my ambivalent relationship with the word is that it’s frequently used in a sense that doesn’t actually fit its definition. Dictionary.com defines it like this:

adjective

1. coming into view or notice; issuing.
2. emerging; rising from a liquid or other surrounding medium.
3. coming into existence, especially with political independence: the emergent nations of Africa.
4. arising casually or unexpectedly.
5. calling for immediate action; urgent.
6. Evolution. displaying emergence.

noun

7. Ecology. an aquatic plant having its stem, leaves, etc., extending above the surface of the water.

Most of the adjective definitions apply to planning and design (which I consider to be a specialized form of planning). Number 3 is somewhat tenuous for that sense and 5 only applies sometimes, but 6 is dead on.

My problem, however, starts when it's used as a euphemism for directionless activity. The idea that a cohesive, coherent result will "emerge" from responding tactically (whether in software development or in managing a business) is, in my opinion, a dangerous one. I've never heard an explanation of how strategic success emerges from uncoordinated tactical excellence that doesn't eventually come down to faith. It's why I started tagging posts on the subject "Intentional vs Accidental Architecture". Success that arises from lack of coordination is accidental rather than by design (not to mention ironic when the lack of intentional coordination or planning/design is itself intentional):

If you don’t know where you are going, any road will get you there.

 

The problem, of course, is whether you want to be at the "there" you wind up at. There's also the issue of cost associated with a meandering path when a more direct route was available.

None of this, however, should be taken as a rejection of emergence. In fact, a dogmatic attachment to a plan in the face of emergent facts is as problematic as pursuing an accidental approach. Placing your faith in a plan that has been invalidated by circumstances is as blinkered an approach as refusing to plan at all. Neither extreme makes much sense.

We lack the ability to foresee everything that can occur, but that limit does not mean we should ignore what we can foresee. A purely tactical focus can lead us down obvious blind alleys that will be more costly to back out of in the long run. Experience is an excellent teacher, but the tuition is expensive. In other words, learning from our mistakes is good, but learning from others' mistakes is better.

Darwinian evolution can lead to some amazing things, provided you can spare millions of years and lots of failed attempts. An intentional approach allows you to tip the scales in your favor.


Many thanks to Andrew Campbell and Adrian Campbell for the multi-day twitter conversation that spawned this post. Normally, I unplug from almost all social media on the weekends, but I enjoyed the discussion so much I bent the rules. Cheers gentlemen!

Managing Fast and Slow

Tortoise and Hare Illustration

People have a complicated relationship with the concept of cause and effect. In spite of the old saying about the insanity of doing the same old thing while looking for a different result, we hope against hope that this time it will work. Sometimes we inject unnecessary complexity into what should be very simple tasks; other times we over-simplify, looking for shortcuts to success. Greger Wikstrand recently spoke to one aspect of this in his post "Cargo cult innovation, play buzzword bingo to spot it" (part of our ongoing conversation on innovation):

I am not saying that there is no basis of truth in what they say. The problem is that innovation is much more complex than they would have you believe. If you fall for the siren song of cargo cult innovationism, you will have all the effort and all the trouble of real innovation work but you will have none of the benefits.

I ran across an interesting example of this kind of simplistic thought not long ago in a Forbes article by Bill Fischer, titled "The Death of Strategy":

Strategy is dead!

Or, is it tactics?

In a world of never-ending change, it’s either one or the other; we can no longer count on having both. As innovation accelerates its assault on what we formerly referred to as “our planning process,” and as S-curves accordingly collapse, each one on top another, time is compressed. In the rubble of what is left of our strategy structure, we find that what we’ve lost is the orderly and measured progression of time. Tim Brown, of IDEO, recently put it this way at the Global Peter Drucker Forum 2016, in Vienna: “So many things that used to have a beginning, a middle and an end, no longer have a middle or an end.” Which is gone: strategy or tactics? And, does it matter?

Without a proper middle, or end, for any initiative, the distinction between strategy and tactics blurs: tactics become strategy, especially if they are performed in a coherent and consistent fashion. Strategy, in turn, now takes place in the moment, in the form of an agglomeration of a series (or not) of tactics.

The pace of change certainly feels faster than ever before (I’m curious, though, as to when the world has not been one of “never-ending change”). However, that nugget of truth is wrapped in layers of fallacy and a huge misunderstanding of the definitions of “tactics” and “strategy”. “Tactical and Strategic Interdependence”, a commentary from the Clausewitzian viewpoint, contrasts the terms in this manner:

Both strategy and tactics depend on combat, but, and this is their essential difference, they differ in their specific connection to it. Tactics are considered “the formation and conduct of these single combats in themselves” while strategy is “the combination of them with one another, with a view to the ultimate object of the War.”[8] Through the notion of combat we begin to see the differentiation forming between tactics and strategy. Tactics deals with the discrete employment of a single combat, while strategy handles their multiplicity and interdependence. Still we need a rigorous conception. Clausewitz strictly defines “tactics [as] the theory of the use of military forces in combat,” while “Strategy is the theory of the use of combats for the object of the War.”[9] These definitions highlight the difference between the means and ends of tactics and strategy. Tactics considers the permutations of military forces, strategy the combinations of combats, actual and possible.

In other words, tactics are the day to day methods you use to do things. Strategy is how you achieve your long term goals by doing the things you do. Tactics without strategy is a pile of bricks without an idea of what you’re going to build. Strategy without tactics is an idea of what to build without a clue as to how you’d build it.

Fischer is correct that strategy executed is the “…agglomeration of a series (or not) of tactics”, but his contention that it “…now takes place in the moment…” is suspect, predicated as it is on the idea that things suddenly lack “…a proper middle or end…”. I would argue that any notion of a middle or end that was determined in advance rather than retroactively, is an artificial one. Furthermore, the idea that there are no more endings due to the pace of change is more than a little ludicrous. If anything, the faster the pace, the more likely endings become as those who can’t keep up drop out. Best of all is the line “…tactics become strategy, especially if they are performed in a coherent and consistent fashion”. Tactics performed in “…a coherent and consistent fashion” is pretty much the definition of executing a strategy (negating the premise of the article).

Flailing around without direction will not result in innovation, no matter how fast you flail. While change is inevitable, innovation is not. Innovating, making “significant positive change”, is not a matter of doing a lot of things fast and hoping for the best. Breakthroughs may occasionally be “happy accidents”, but even then are generally ones where intentional effort has been expended towards making them likely.

In today’s business environment, organizations must be moving forward just to maintain the status quo, much less innovate. This requires knowing where you are, where you’re headed, and what obstacles you’re likely to face. This assessment of your operating context is known as situational awareness. It’s not simple, because your context isn’t simple. It’s not a recipe, because your context is ever-changing. It’s not a product you can buy nor a project you can finish and be done with. It’s an ongoing, deliberate process of making sense of your context and reacting accordingly.

Situational awareness exists on multiple levels, tactical through strategic. While the pace of change is high, the relative pace between the tactical and strategic is still one of faster and slower. Adjustments to strategic goals may come more frequently, but daily changes in long-term goals would be a red flag. Not having any long-term goals would be another. Very specific, very static long-range plans are probably wasted effort, but having some idea of what you’ll be doing twelve months down the road is a healthy sign.

Form Follows Function on SPaMCast 411

SPaMCAST logo

This week's episode of Tom Cagley's Software Process and Measurement (SPaMCast) podcast, number 411, features Tom's essay on Servant Leadership (which I highly recommend), John Quigley on managing requirements as a part of product management, a Form Follows Function installment based on my post "Organizations as Systems – 'Uneasy Lies the Head that Wears the Crown'", and Kim Pries on software craftsmanship.

Tom and I discuss the danger of trying to use simplistic explanations for the interactions that make up complex human systems. No one has the power to force things in a particular direction; rather, the direction comes about as a result of the actions and interactions of everyone involved. It might be comforting to believe that there's one single lever for change, but it's wrong.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Organizations as Systems – “Uneasy Lies the Head that Wears the Crown”

Bavarian Crown and Regalia, Royal Treasury Munich

 

One of the benefits of having a (very) wide range of interests is that every so often a flash of insight gets dropped into my lap. In this case, it was a matter of "We must recognise that single events have multiple causes" showing up as a suggested read from Aeon on the same day that Thomas Power retweeted this.

The image in the tweet is an excerpt from an interview with Rory Stewart, Conservative Member of Parliament for Penrith in the UK. The collision of themes between the two articles struck me.

“You get there and you pull the lever, and nothing happens.”

The behavior of a system is determined not by the structure of the components of that system, but by the relationships and interactions between those components. Moreover, those relationships and interactions are dynamic and complex, even when that’s contrary to the designer’s intent. In fact, the gap between the behavior as intended and as experienced introduces a tension. I would argue that it’s less a matter of nothing happening when the “lever” is pulled and more that something different from what’s expected happens. Rather than simple cause and effect, “if this, then that”, multiple factors are in play.

In mechanical systems, parts wear, subtly changing the physics of the mechanism. Foreign objects invading the system can impose change in a more dramatic fashion. Context, both that of the system’s internals and its environment, influences its operation.

As was noted in the Aeon article, agency adds to the complexity. In social systems, all of the “components” are individuals with agency, making those systems chaotic in at least the colloquial sense of the word. Using Tom Graves’ sense-making framework, SCAN, these interactions fall into the more uncertain quadrants, either “Ambiguous but Actionable” or “Not-known, None-of-the-above”. Attempting to deal with them as though they fell into the “Simple and Straightforward” quadrant increases the likelihood of getting unexpected results.

Learning/sense-making is critical to dealing with change, whether internal or external (or both). The manner in which change is appreciated and reacted to, affects the health of the system. Consider three boilers: one where pressure is continuously monitored and adjusted, one which is equipped with a pressure relief valve which will open prior to a catastrophic failure, and one where problems are signaled by an explosion. It’s a trivial exercise to come up with examples of social systems, from businesses all the way up to political systems, using the third method. It’s probably a more interesting exercise to consider why that’s the case for so many.
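
As a toy illustration (mine, not from the Aeon article), the three boilers differ only in the feedback mechanism designed into them:

```python
# Three ways a system can handle mounting pressure: continuous adjustment,
# a relief valve near the failure point, or nothing at all.
def run_boiler(strategy, limit=100.0, steps=50):
    pressure = 0.0
    for _ in range(steps):
        pressure += 5.0                     # change keeps arriving
        if strategy == "monitor":
            pressure = min(pressure, 60.0)  # continuously sensed and adjusted
        elif strategy == "relief_valve":
            if pressure > 90.0:             # acts only near the failure point
                pressure = 75.0
        elif pressure > limit:              # problems signaled by an explosion
            return "BOOM"
    return f"survived at pressure {pressure:.0f}"

for s in ("monitor", "relief_valve", "explosion"):
    print(s, "->", run_boiler(s))
```

Only the third "design" learns nothing until the failure is catastrophic.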

In a recent post, “Architecting the shadows”, Tom Graves discussed the phenomenon of ad hoc, unofficial “shadow” organizational interactions that arise in order to get work done:

In SCAN terms, we could summarise the generic positioning of all ‘shadow’ functions – shadow-IT, shadow-business-models, shadow-management and more – much as follows:

Scan Diagram: Official vs. Shadow

In other words, the ‘shadow’-world exists to deal with and resolve all the uncertainties and over-simplifications that overly-mechanistic management models tend to overlook. Even in more aware management-models, in which some exploration of the uncertain is officially sanctioned and allowed, the shadow-world will still always need to exist – particularly whenever the work gets closer towards real-time action:

Scan Diagram: Official vs. Shadow showing sanctioned Shadow Activity

In closing the post, Tom makes the following observation:

As the literal ‘the architecture of the enterprise’, a real enterprise-architecture must, by definition, cover every aspect of the enterprise – including all of the ‘shadow’-elements. And yet, also by definition, those ‘shadow’-elements cannot be brought ‘under control’ – not least because they deal with the themes and factors that are beyond the reach of conventional concepts of ‘control’.

The "conventional concepts of 'control'", the deluded belief that complex interactions can be managed as though they were simple, pose immense risks to organizations. Even attempting to treat those interactions as merely complicated, rather than complex, introduces a gap between reality and perception, between "the way we do things" and the way things actually get done. When the concept and reality of the system's interactions differ, it's more likely that the components of the system will wind up working at cross-purposes.

In a comment on Tom’s post, I noted that where the shadow elements are a “French Resistance”, flouting the rules in order to actually get work done, that’s a red flag.

The most important thing to learn about management and governance is knowing when and how to manage or govern and, more importantly, when not to. Knowing what can actually be controlled is an important first step.

All models may be wrong, but it’s not a contest to see how wrong you can be

HO Scale locomotive beside a pencil

The one thing you can be sure of is that nothing is dependent on only one thing.

Michael Feathers' tweet last week brought this to mind.

Too often we construct simplistic mental models that fail to account for outcomes that are possible, but inconvenient for us in some way. As Aneel noted while discussing OODA loops in his post “All Models Are Wrong Some Are Useful [In Some Context]”:

OODA is just a vehicle for the larger issue of models, biases, and model-based blindness — Taleb’s Procrustean Bed. Where we chop off the disconfirmatory evidence that suggests our models are wrong AND manipulate [or manufacture] confirmatory evidence.

Because if we allowed the wrongness to be true, or if we allowed ourselves to see that differentness works, we’d want/have to change. That hurts.

Furthermore:

Our attachment [and self-identification] to particular models and ideas about how things are in the face of evidence to the contrary — even about how we ourselves are — is the source of avoidable disasters like the derivatives driven financial crisis. Black Swans.

I wonder how many black swans are only “unpredictable” because we blind ourselves to possibilities.

It’s possible that we cannot eradicate self-deception. “Rats Can Be Smarter Than People” speaks to this via study results that found rats outperforming humans in one of a pair of learning tasks:

The first task involved rules. The second focused on information integration. Humans learn in both ways. Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.

In other words, we have a model about learning (meta-model?) that works well in many situations. It works so well that we resist looking for contexts where it's not the appropriate model. While this shows that a propensity for self-deception is natural, the fact that "self" is in there suggests that we have some control. Having some control obligates us to exercise what control we have and work to gain more.
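
That rule-seeking bias is easy to demonstrate. Here's a toy sketch (my own, not from the study): a one-rule learner that thresholds a single feature, versus an instance-based learner that judges by overall similarity to past examples:

```python
# "Good" berries here require both features (size and redness) to line up,
# so any rule that thresholds one feature alone will sometimes misfire.
import math

# (size, redness) -> label; toy training data, entirely made up.
train = [((0.2, 0.9), "good"), ((0.3, 0.8), "good"),
         ((0.8, 0.9), "bad"),  ((0.2, 0.2), "bad")]

def one_rule(point):
    """Rule-based learning: 'small berries are good', everything else ignored."""
    return "good" if point[0] < 0.5 else "bad"

def nearest_neighbor(point):
    """Information integration: judge by similarity to everything seen so far."""
    return min(train, key=lambda ex: math.dist(ex[0], point))[1]

test = (0.25, 0.15)                # small but not red: a poor berry
print(one_rule(test))              # -> good (the rule misfires)
print(nearest_neighbor(test))      # -> bad  (closest example is (0.2, 0.2))
```

The one-rule learner is cheap and usually right, which is exactly why it's so hard to notice the contexts where it isn't.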

Why?

A critical argument for the practice of software architecture is that the design of the solution must cohere with the problem space in order to be effective. In order to deliver decisions that achieve this goal, we need to be able to make sense of the problem space. Systems thinking, described by Tom Cagley as “…an approach to problem solving that emphasizes viewing problems as the output of the whole process, including the environment the system operates within”, is a technique to do so. The more we force ourselves to be aware of our biases and work to counteract them, the better our decisions will be.

Simple is good, but not when it’s too good to be true.

Cause and Effect – Cargo Cults and Carts Before Horses

Sometimes our love of shortcuts can make us really stupid. Take, for example, the idea of “Fail Fast”. As Jeff Sussna observed in his post “Rethinking Failure”, “Suddenly failure is all the rage.” He also noted:

By itself, failure is anything but good. Making the same mistake over and over again doesn’t help anyone. Failure leads to success when I learn from it by changing my behavior or understanding in response to it. Even then, it’s impossible to guarantee that my response will in fact lead to success. The validity of any given response can only be evaluated in hindsight.

Dan McClure, in “Why the “Fail Fast” Philosophy Doesn’t Work”, agreed:

If your only strategy for exploring the unknown is to pick up rocks and look underneath, then the more rocks you turn over the better. The problem is that for real world innovations, test and reject doesn’t scale well. For disruptive ideas with the potential to make a difference in the market there are lots and lots of rocks.

The value in “Fail Fast” lies in the “Fast” part; there’s no magic in the “Fail”. If you’re going to fail, finding out about it sooner, rather than later, is less costly. Less costly, however, is a far cry from best. Succeeding obviously works much better than failing fast, meaning that methods which allow you to evaluate without incurring the time, pain, and expense of a failure are a better choice when available.

Another example of this phenomenon is what Matt Ballantine recently referred to as "investor-centric" development.

That, of course, creates an interesting rabbit hole – investors chasing products that will be “hot” and products designed to appeal to investors rather than customers (which would result in the product becoming “hot”). Matt’s comment from his post “What if the answer isn’t software?” applies, “I’ve no doubt that we are seeing real issues and opportunities being ignored in the pursuit of the rainbow-pooping unicorns.”

Yet another example of magical thinking comes via belief in the "Great Man Theory". People like Steve Jobs and Elon Musk have achieved great things, but as a result of what they did, not who they are. It's a hard sell to argue that, divorced from their context, they would have been equally successful.

Effectiveness is more likely to come from systems thinking than magical thinking. Understanding cause and effect as well as interrelationships and context makes the difference between rational decision-making and superstition.