Emergence: Babies and Bathwater, Plans and Planning


“Emergent” is a word that I run into from time to time. When I do run into it, I’m reminded of an exchange from the movie Gallipoli:

Archy Hamilton: I’ll see you when I see you.
Frank Dunne: Yeah. Not if I see you first.

The reason for my ambivalent relationship with the word is that it’s frequently used in a sense that doesn’t actually fit its definition. Dictionary.com defines it like this:

adjective

1. coming into view or notice; issuing.
2. emerging; rising from a liquid or other surrounding medium.
3. coming into existence, especially with political independence: the emergent nations of Africa.
4. arising casually or unexpectedly.
5. calling for immediate action; urgent.
6. Evolution. displaying emergence.

noun

7. Ecology. an aquatic plant having its stem, leaves, etc., extending above the surface of the water.

Most of the adjective definitions apply to planning and design (which I consider to be a specialized form of planning). Number 3 is somewhat tenuous for that sense and number 5 only applies sometimes, but 6 is dead on.

My problem, however, starts when it’s used as a euphemism for a directionless approach. The idea that a cohesive, coherent result will “emerge” from responding tactically (whether in software development or in managing a business) is, in my opinion, a dangerous one. I’ve never heard an explanation of how strategic success emerges from uncoordinated tactical excellence that doesn’t eventually come down to faith. It’s why I started tagging posts on the subject “Intentional vs Accidental Architecture”. Success that arises from lack of coordination is accidental rather than by design (not to mention ironic when the lack of intentional coordination or planning/design is itself intentional):

If you don’t know where you are going, any road will get you there.

 

The problem, of course, is whether the “there” you wind up at is where you want to be. There’s also the cost of a meandering path when a more direct route was available.

None of this, however, should be taken as a rejection of emergence. In fact, dogmatic attachment to a plan in the face of emergent facts is as problematic as pursuing an accidental approach; placing your faith in a plan that circumstances have invalidated is as blinkered as refusing to plan at all. Neither extreme makes much sense.

We lack the ability to foresee everything that can occur, but that limit does not mean that we should ignore what we can foresee. A purely tactical focus can lead us down obvious blind alleys that will be more costly to back out of in the long run. Experience is an excellent teacher, but the tuition is expensive. In other words, learning from our mistakes is good, but learning from others’ mistakes is better.

Darwinian evolution can lead to some amazing things, provided you can spare millions of years and lots of failed attempts. An intentional approach allows you to tip the scales in your favor.
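To make the contrast concrete, here’s a toy sketch in the spirit of Dawkins’ “weasel” program (the target string, character set, and mutation scheme below are my own arbitrary choices, purely for illustration). Blind variation has to get lucky all at once; a search that keeps what works and feeds the results back toward a goal gets there in a few thousand attempts:

```python
import random
import string

# Toy illustration only -- the target and mutation scheme are arbitrary.
TARGET = "COHERENT DESIGN"
CHARS = string.ascii_uppercase + " "

def random_guess():
    """Blind variation: every attempt starts over from scratch."""
    return "".join(random.choice(CHARS) for _ in TARGET)

def score(candidate):
    """How many positions already match the goal."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def directed_search():
    """Mutate one character at a time, keeping changes that move
    toward the goal -- feedback tips the scales."""
    candidate, attempts = random_guess(), 0
    while candidate != TARGET:
        i = random.randrange(len(TARGET))
        mutated = candidate[:i] + random.choice(CHARS) + candidate[i + 1:]
        if score(mutated) >= score(candidate):
            candidate = mutated
        attempts += 1
    return attempts

# Pure chance would need on the order of 27**15 guesses to hit the
# target; goal-directed mutation typically needs a few thousand.
print(directed_search())
```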


Many thanks to Andrew Campbell and Adrian Campbell for the multi-day Twitter conversation that spawned this post. Normally, I unplug from almost all social media on the weekends, but I enjoyed the discussion so much I bent the rules. Cheers, gentlemen!

Managing Fast and Slow


People have a complicated relationship with the concept of cause and effect. In spite of the old saying about the insanity of doing the same thing while expecting a different result, we hope against hope that this time it will work. Sometimes we inject unnecessary complexity into what should be very simple tasks; other times we over-simplify, looking for shortcuts to success. Greger Wikstrand recently spoke to one aspect of this in his post “Cargo cult innovation, play buzzword bingo to spot it” (part of our ongoing conversation on innovation):

I am not saying that there is no basis of truth in what they say. The problem is that innovation is much more complex than they would have you believe. If you fall for the siren song of cargo cult innovationism, you will have all the effort and all the trouble of real innovation work but you will have none of the benefits.

I ran across an interesting example of this kind of simplistic thought not long ago on Forbes, titled “The Death of Strategy”, by Bill Fischer:

Strategy is dead!

Or, is it tactics?

In a world of never-ending change, it’s either one or the other; we can no longer count on having both. As innovation accelerates its assault on what we formerly referred to as “our planning process,” and as S-curves accordingly collapse, each one on top another, time is compressed. In the rubble of what is left of our strategy structure, we find that what we’ve lost is the orderly and measured progression of time. Tim Brown, of IDEO, recently put it this way at the Global Peter Drucker Forum 2016, in Vienna: “So many things that used to have a beginning, a middle and an end, no longer have a middle or an end.” Which is gone: strategy or tactics? And, does it matter?

Without a proper middle, or end, for any initiative, the distinction between strategy and tactics blurs: tactics become strategy, especially if they are performed in a coherent and consistent fashion. Strategy, in turn, now takes place in the moment, in the form of an agglomeration of a series (or not) of tactics.

The pace of change certainly feels faster than ever before (I’m curious, though, as to when the world has not been one of “never-ending change”). However, that nugget of truth is wrapped in layers of fallacy and a huge misunderstanding of the definitions of “tactics” and “strategy”. “Tactical and Strategic Interdependence”, a commentary from the Clausewitzian viewpoint, contrasts the terms in this manner:

Both strategy and tactics depend on combat, but, and this is their essential difference, they differ in their specific connection to it. Tactics are considered “the formation and conduct of these single combats in themselves” while strategy is “the combination of them with one another, with a view to the ultimate object of the War.”[8] Through the notion of combat we begin to see the differentiation forming between tactics and strategy. Tactics deals with the discrete employment of a single combat, while strategy handles their multiplicity and interdependence. Still we need a rigorous conception. Clausewitz strictly defines “tactics [as] the theory of the use of military forces in combat,” while “Strategy is the theory of the use of combats for the object of the War.”[9] These definitions highlight the difference between the means and ends of tactics and strategy. Tactics considers the permutations of military forces, strategy the combinations of combats, actual and possible.

In other words, tactics are the day-to-day methods you use to do things. Strategy is how you achieve your long-term goals by doing the things you do. Tactics without strategy is a pile of bricks without an idea of what you’re going to build. Strategy without tactics is an idea of what to build without a clue as to how you’d build it.

Fischer is correct that strategy executed is the “…agglomeration of a series (or not) of tactics”, but his contention that it “…now takes place in the moment…” is suspect, predicated as it is on the idea that things suddenly lack “…a proper middle or end…”. I would argue that any notion of a middle or end determined in advance, rather than retroactively, is an artificial one. Furthermore, the idea that there are no more endings due to the pace of change is more than a little ludicrous. If anything, the faster the pace, the more likely endings become, as those who can’t keep up drop out. Best of all is the line “…tactics become strategy, especially if they are performed in a coherent and consistent fashion”. Performing tactics in “…a coherent and consistent fashion” is pretty much the definition of executing a strategy (negating the premise of the article).

Flailing around without direction will not result in innovation, no matter how fast you flail. While change is inevitable, innovation is not. Innovating, making “significant positive change”, is not a matter of doing a lot of things fast and hoping for the best. Breakthroughs may occasionally be “happy accidents”, but even then they are generally ones where intentional effort has been expended toward making them likely.

In today’s business environment, organizations must be moving forward just to maintain the status quo, much less innovate. This requires knowing where you are, where you’re headed, and what obstacles you’re likely to face. This assessment of your operating context is known as situational awareness. It’s not simple, because your context isn’t simple. It’s not a recipe, because your context is ever-changing. It’s not a product you can buy nor a project you can finish and be done with. It’s an ongoing, deliberate process of making sense of your context and reacting accordingly.

Situational awareness exists on multiple levels, tactical through strategic. While the pace of change is high, the relative pace between the tactical and strategic is still one of faster and slower. Adjustments to strategic goals may come more frequently, but daily changes in long-term goals would be a red flag. Not having any long-term goals would be another. Very specific, very static long-range plans are probably wasted effort, but having some idea of what you’ll be doing twelve months down the road is a healthy sign.

Form Follows Function on SPaMCast 411


This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 411, features Tom’s essay on Servant Leadership (which I highly recommend), John Quigley on managing requirements as a part of product management, a Form Follows Function installment based on my post “Organizations as Systems – ‘Uneasy Lies the Head that Wears the Crown'”, and Kim Pries on software craftsmanship.

Tom and I discuss the danger of trying to use simplistic explanations for the interactions that make up complex human systems. No one has the power to force things in a particular direction; rather, the direction comes about as a result of the actions and interactions of everyone involved. It might be comforting to believe that there’s one single lever for change, but it’s wrong.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Organizations as Systems – “Uneasy Lies the Head that Wears the Crown”


One of the benefits of having a (very) wide range of interests is that every so often a flash of insight gets dropped into my lap. In this case, it was a matter of “We must recognise that single events have multiple causes” showing up as a suggested read from Aeon on the same day that Thomas Power retweeted this:

The image in the tweet is an excerpt from an interview with Rory Stewart, Conservative Member of Parliament for Penrith in the UK. The collision of themes between the two articles struck me.

“You get there and you pull the lever, and nothing happens.”

The behavior of a system is determined not by the structure of the components of that system, but by the relationships and interactions between those components. Moreover, those relationships and interactions are dynamic and complex, even when that’s contrary to the designer’s intent. In fact, the gap between the behavior as intended and as experienced introduces a tension. I would argue that it’s less a matter of nothing happening when the “lever” is pulled and more that something different from what’s expected happens. Rather than simple cause and effect, “if this, then that”, multiple factors are in play.

In mechanical systems, parts wear, subtly changing the physics of the mechanism. Foreign objects invading the system can impose change in a more dramatic fashion. Context, both that of the system’s internals and its environment, influences its operation.

As was noted in the Aeon article, agency adds to the complexity. In social systems, all of the “components” are individuals with agency, making those systems chaotic in at least the colloquial sense of the word. Using Tom Graves’ sense-making framework, SCAN, these interactions fall into the more uncertain quadrants, either “Ambiguous but Actionable” or “Not-known, None-of-the-above”. Attempting to deal with them as though they fell into the “Simple and Straightforward” quadrant increases the likelihood of getting unexpected results.

Learning/sense-making is critical to dealing with change, whether internal or external (or both). The manner in which change is appreciated and reacted to, affects the health of the system. Consider three boilers: one where pressure is continuously monitored and adjusted, one which is equipped with a pressure relief valve which will open prior to a catastrophic failure, and one where problems are signaled by an explosion. It’s a trivial exercise to come up with examples of social systems, from businesses all the way up to political systems, using the third method. It’s probably a more interesting exercise to consider why that’s the case for so many.
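The three responses can be sketched in code. This toy simulation is my own (not from any of the sources above), and the drift, thresholds, and correction factor are invented for illustration: continuous monitoring makes small, frequent corrections; a relief valve acts only at the edge of disaster; the third “strategy” just waits for the bang:

```python
import random

# All values invented for illustration.
TARGET, RELIEF_POINT, BURST_POINT = 50.0, 90.0, 100.0

def run_boiler(strategy, steps=500):
    """Toy model: pressure drifts upward each step; the strategy
    determines how (or whether) the system notices and responds."""
    pressure = TARGET
    for step in range(steps):
        pressure += random.uniform(0.0, 5.0)        # demand pushes pressure up
        if strategy == "monitor":
            pressure -= 0.5 * (pressure - TARGET)   # continuous feedback: small corrections
        elif strategy == "relief valve" and pressure >= RELIEF_POINT:
            pressure = TARGET                       # abrupt correction at the brink
        if pressure >= BURST_POINT:                 # the third strategy ends up here
            return f"{strategy}: exploded at step {step}"
    return f"{strategy}: survived at {pressure:.1f}"

for s in ("monitor", "relief valve", "explosion"):
    print(run_boiler(s))
```

The point isn’t the physics; it’s that the earlier and more continuously a system senses and responds to change, the less violent the corrections have to be.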

In a recent post, “Architecting the shadows”, Tom Graves discussed the phenomenon of ad hoc, unofficial “shadow” organizational interactions that arise in order to get work done:

In SCAN terms, we could summarise the generic positioning of all ‘shadow’ functions – shadow-IT, shadow-business-models, shadow-management and more – much as follows:

[SCAN diagram: Official vs. Shadow]

In other words, the ‘shadow’-world exists to deal with and resolve all the uncertainties and over-simplifications that overly-mechanistic management models tend to overlook. Even in more aware management-models, in which some exploration of the uncertain is officially sanctioned and allowed, the shadow-world will still always need to exist – particularly whenever the work gets closer towards real-time action:

[SCAN diagram: Official vs. Shadow, showing sanctioned shadow activity]

In closing the post, Tom makes the following observation:

As the literal ‘the architecture of the enterprise’, a real enterprise-architecture must, by definition, cover every aspect of the enterprise – including all of the ‘shadow’-elements. And yet, also by definition, those ‘shadow’-elements cannot be brought ‘under control’ – not least because they deal with the themes and factors that are beyond the reach of conventional concepts of ‘control’.

The “conventional concepts of ‘control'”, the deluded belief that complex interactions can be managed as though they were simple, poses an immense risk to organizations. Even attempting to treat those interactions as merely complicated, rather than complex, introduces a gap between reality and perception, between “the way we do things” and the way things actually get done. When the concept and reality of the system’s interactions differ, it’s more likely that the components of the system will wind up working at cross-purposes.

In a comment on Tom’s post, I noted that where the shadow elements are a “French Resistance”, flouting the rules in order to actually get work done, that’s a red flag.

The most important thing to learn about management and governance is knowing when and how to manage or govern and more importantly, when not to. Knowing what can actually be controlled is an important first step.

All models may be wrong, but it’s not a contest to see how wrong you can be


The one thing you can be sure of is that nothing is dependent on only one thing.

Michael Feathers’ tweet last week brought this to mind:

Too often we construct simplistic mental models that fail to account for outcomes that are possible, but inconvenient for us in some way. As Aneel noted while discussing OODA loops in his post “All Models Are Wrong Some Are Useful [In Some Context]”:

OODA is just a vehicle for the larger issue of models, biases, and model-based blindness — Taleb’s Procrustean Bed. Where we chop off the disconfirmatory evidence that suggests our models are wrong AND manipulate [or manufacture] confirmatory evidence.

Because if we allowed the wrongness to be true, or if we allowed ourselves to see that differentness works, we’d want/have to change. That hurts.

Furthermore:

Our attachment [and self-identification] to particular models and ideas about how things are in the face of evidence to the contrary — even about how we ourselves are — is the source of avoidable disasters like the derivatives driven financial crisis. Black Swans.

I wonder how many black swans are only “unpredictable” because we blind ourselves to possibilities.

It’s possible that we cannot eradicate self-deception. “Rats Can Be Smarter Than People” speaks to this via study results that found rats outperforming humans in one of a pair of learning tasks:

The first task involved rules. The second focused on information integration. Humans learn in both ways. Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.

In other words, we have a model about learning (a meta-model?) that works well in many situations. It works so well that we resist looking for contexts where it’s not the appropriate model. While this shows that a propensity for self-deception is natural, the fact that “self” is in there suggests that we have some control. Having some control obligates us to exercise what control we have and work to gain more.
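A minimal sketch of the two modes of learning (my own construction — the cues, weights, and labels are invented, and the toy “world” is rigged so that the right answer depends on a weighted combination of cues rather than on any single rule):

```python
import random

random.seed(42)
WEIGHTS = [0.4, 0.3, 0.15, 0.1, 0.05]  # invented "true" structure of the world

def make_case():
    """Each case has five weak cues; the label depends on all of them."""
    cues = [random.uniform(-1, 1) for _ in range(5)]
    label = sum(w * c for w, c in zip(WEIGHTS, cues)) > 0
    return cues, label

cases = [make_case() for _ in range(1000)]

# Rule-based: latch onto the single most salient cue
# ("the small red berries are the good ones").
rule_hits = sum((cues[0] > 0) == label for cues, label in cases)

# Information integration: weigh all the cues together, the way the
# radiologist's trained eye does. (Perfect here only because the toy
# world was generated from these very weights.)
integration_hits = sum(
    (sum(w * c for w, c in zip(WEIGHTS, cues)) > 0) == label
    for cues, label in cases)

print(f"single rule: {rule_hits / 10:.1f}% correct")
print(f"integration: {integration_hits / 10:.1f}% correct")
```

The rule does well enough to feel right, which is exactly the trap: a model that works most of the time invites us to stop asking whether we’re in a context where it doesn’t.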

Why?

A critical argument for the practice of software architecture is that the design of the solution must cohere with the problem space in order to be effective. In order to deliver decisions that achieve this goal, we need to be able to make sense of the problem space. Systems thinking, described by Tom Cagley as “…an approach to problem solving that emphasizes viewing problems as the output of the whole process, including the environment the system operates within”, is a technique to do so. The more we force ourselves to be aware of our biases and work to counteract them, the better our decisions will be.

Simple is good, but not when it’s too good to be true.

Cause and Effect – Cargo Cults and Carts Before Horses

Sometimes our love of shortcuts can make us really stupid. Take, for example, the idea of “Fail Fast”. As Jeff Sussna observed in his post “Rethinking Failure”, “Suddenly failure is all the rage.” He also noted:

By itself, failure is anything but good. Making the same mistake over and over again doesn’t help anyone. Failure leads to success when I learn from it by changing my behavior or understanding in response to it. Even then, it’s impossible to guarantee that my response will in fact lead to success. The validity of any given response can only be evaluated in hindsight.

Dan McClure, in “Why the “Fail Fast” Philosophy Doesn’t Work”, agreed:

If your only strategy for exploring the unknown is to pick up rocks and look underneath, then the more rocks you turn over the better. The problem is that for real world innovations, test and reject doesn’t scale well. For disruptive ideas with the potential to make a difference in the market there are lots and lots of rocks.

The value in “Fail Fast” lies in the “Fast” part; there’s no magic in the “Fail”. If you’re going to fail, finding out about it sooner, rather than later, is less costly. Less costly, however, is a far cry from best. Succeeding obviously works much better than failing fast, meaning that methods which allow you to evaluate without incurring the time, pain, and expense of a failure are a better choice when available.
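The cost argument is simple arithmetic. A toy comparison (every figure below is invented for illustration): twenty candidate ideas, one winner, and three points at which a dud can be caught:

```python
# All figures invented for illustration.
n_duds = 19  # twenty ideas, one winner

cost_per_dud = {
    "fail slow (full build-out)":   100_000,
    "fail fast (quick prototype)":   10_000,
    "evaluate up front (analysis)":   1_000,
}

for approach, cost in cost_per_dud.items():
    print(f"{approach}: ${n_duds * cost:,} spent on dead ends")
# Failing fast beats failing slow by an order of magnitude, but
# evaluating without failing at all is cheaper still.
```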

Another example of this phenomenon is what Matt Balantine recently referred to as “investor-centric” development:

That, of course, creates an interesting rabbit hole – investors chasing products that will be “hot” and products designed to appeal to investors rather than customers (which would result in the product becoming “hot”). Matt’s comment from his post “What if the answer isn’t software?” applies, “I’ve no doubt that we are seeing real issues and opportunities being ignored in the pursuit of the rainbow-pooping unicorns.”

Yet another example of magical thinking comes via belief in the “Great Man Theory”. People like Steve Jobs and Elon Musk have achieved great things, but as a result of what they did, not who they are. It’s a hard sell to argue that, divorced from their context, they would have been equally successful.

Effectiveness is more likely to come from systems thinking than magical thinking. Understanding cause and effect as well as interrelationships and context makes the difference between rational decision-making and superstition.