Systems Thinking Complicates Things

4th UK Rock Paper Scissors Championships by James Bamber via Wikimedia

 

I’ve had the honor and pleasure of appearing as a regular on Tom Cagley’s SPaMCast podcast for almost three years now. Before I write one of my “Form Follows Function on SPaMCast x” posts, I always listen to the podcast to make sure that the summary is right (the implication being that relying purely on my memory won’t be). I got a bonus while writing up last week’s appearance, because Tom asked an excellent question that deserved its own post: does thinking about a problem (legacy systems, in the case of last week’s discussion) holistically/systemically complicate things?

Abso-freakin’-lutely.

It is much easier to avoid all the twists, turns, and possibilities inherent in systems thinking. A simpler approach, picking one lever to pull or one button to push, makes it much quicker to come up with a solution.

It just doesn’t work very well at coming up with solutions that actually work.

When there is a mismatch in complexity between the problem and solution architectures, that mismatch becomes an additional problem to deal with. This applies both when the solution is more complex than the problem space warrants and when it is simpler. Solutions that fail to account for the context they will encounter are vulnerable. This is the idea behind the quote attributed to Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

Human nature can push us to fix problems quickly, and quick generally equates to simple. It takes time to analyze the angles and consider the alternatives. How often have you seen people ask for “the best” way to do something, absent any context? How often have you seen people ask “why would someone ever do that?”

I’ll answer that by asking three questions:

  • since Rock beats Scissors, why would anyone ever choose Scissors?
  • since Paper beats Rock, why would anyone ever choose Rock?
  • since Scissors beats Paper, why would anyone ever choose Paper?

Reality isn’t binary. It’s not what’s “best”; it’s what’s fit for purpose in a given context, and there are lots and lots of contexts out there.
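To make that non-transitivity concrete, here’s a minimal sketch (Python, purely illustrative; the names are mine, not from any of the posts discussed) of the Rock-Paper-Scissors “beats” relation. Every choice beats exactly one option and loses to exactly one other, so no choice dominates; “best” exists only relative to what the opponent throws.

```python
# A minimal sketch of the non-transitive "beats" relation in
# Rock-Paper-Scissors: each throw beats one option and loses to another,
# so there is no single "best" throw absent context.

BEATS = {
    "Rock": "Scissors",   # Rock blunts Scissors
    "Scissors": "Paper",  # Scissors cut Paper
    "Paper": "Rock",      # Paper covers Rock
}

def winner(a, b):
    """Return the winning throw, or "Draw" if the throws match."""
    if a == b:
        return "Draw"
    return a if BEATS[a] == b else b

# Every throw both beats something and is beaten by something:
for choice, victim in BEATS.items():
    beaten_by = next(c for c, v in BEATS.items() if v == choice)
    print(f"{choice} beats {victim}, but loses to {beaten_by}")
```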

This isn’t to say that all quick, simple interventions are wrong. If you find yourself in a house fire, more action and less comprehensive deliberation may well be in order. The key is matching the cost (largely in terms of time) of defining the problem space with the cost (in terms of both effort and the risk that the intervention adds to the problem) of crafting the solution.

Rock, Paper, Scissors, Lizard, Spock rules diagram

It’s almost guaranteed that the system contexts we deal with (both technical and social) will evolve toward more and more complexity. Surprises will emerge as a matter of course. We don’t need to create more of them by failing to take a more holistic view when we have the time to do so.


Organizations as Systems – Kurosawa, Clausewitz, and Chess

16th Century Market Scene

In order to respond appropriately to the context we find ourselves in, we need to be able to correctly define that context. It’s something humans aren’t always good at.

Not too long ago, Sun Tzu’s The Art of War was all the rage among executives. While the book contains some excellent lessons that have applications beyond the purely military, as someone in my Twitter feed noted recently, “Business is not war”.

[Had I realized that the tweet, in combination with another article, would trigger something in my byzantine thought processes, I would have bookmarked it to give them credit – sorry!]

Business is, indeed, not war. In fact, one of the nuggets of wisdom to be found in Clausewitz’s treatise, On War, is that war is often not war. Specifically, what he is saying is that the reality of a concept often diverges from our (mis)understanding of that concept. Our perception is colored by factors such as our experience, beliefs, and interests. Additionally, our tendency to employ abstraction can be both tool and trap. Ignoring irrelevant detail can simplify reasoning about something, assuming that the detail ignored is actually irrelevant. Ignoring relevant detail can quickly lead to problems.

The game of chess illustrates this. Chess involves strategy and has its origins as an abstract simulation of war. Beyond promoting a very rudimentary type of strategic thought, chess is far from capable of simulating the complex social system of warfare. Perhaps if all the pieces were sentient and had both agency and agenda (bonus points for contradictory ones potentially conflicting with the player’s agenda), it might come closer. Perhaps if the boundaries of the arena were indeterminate, it might come closer. Perhaps if the state of the terrain, the composition and disposition of forces (friend, as well as foe), and the goals of the opponent were less transparent, it might come closer.

In short, the more certainty there is, the less accuracy there is. Where the human aspect is ignored or minimized, you may gain certainty, but it comes at the cost of losing contact with reality. Social systems are highly complex and treating them otherwise is like looking for a gas leak with a lighter – you may be able to do so, but your chances of liking the results are pretty small.

This post was originally planned for last week, but I stumbled into a Twitter conversation that illustrates my point (specifically re: leadership and management), so I wrote that up first as a preamble. Systems of practice designed for a context where value equals effort expended are unlikely to work well in a knowledge work context where the relationship between effort and value is less direct (where, in fact, the value curve may invert past a certain point). Putting an updated veneer on the technique with data and algorithms won’t improve the results if the technique is fundamentally mismatched to the context (or if there is a disconnect between what you can measure and what you actually want). Sometimes, the most important thing to learn about management is when not to manage.

Disconnects between complex contexts and simplistic practices transcend the management of an organization, reaching into the very architecture of the enterprise itself (both in the organization and its relationship to its ecosystem). Poorly designed organizations (which includes those with no intentional design) can wind up with their employees faced with perverse incentives to act in a manner that conflicts with the best interests of the organization. When the employee is actually under pressure from the organization to sabotage the organization, the problem is not with the employee.

Just as with a software system, social systems have both problem and solution architectures. Likewise, in both cases the quality of the solution architecture depends on how well (or not) it addresses the architecture of the problem. Recognizing the various contexts in play and then resolving the conflicts between them (including resolving challenges arising from the resolution of the original conflicts) is the essence of architectural design, regardless of the type of system (software or social). Rather than a static, one-time activity, it is an ongoing need for sensing system health and responding appropriately throughout the lifecycle of the system (in fact, stopping the process will likely hasten the end of the lifecycle by way of achieving a state where the system cannot be corrected).

Innovation in Inner Space

KGL dragoons at the Battle of Garcia Hernandez

 

Long-time readers know that I have a rather varied set of interests and that I’ve got a “thing” for history, particularly military history. Knowing that, it shouldn’t come as a surprise that I was recently reading an article titled “Cyber is the fourth dimension of war” (ground, sea and air being the first three dimensions). It’s not a bad article, but it is mistaken. Cyberwar is the fifth dimension of war. The first dimension, today and for all of time past, is the human mind. Contests are won or lost, not on some field of battle, virtual or physical, but in the minds of the combatants. For example, if you believe you’ve lost, then you have.

The painting shown above illustrates this nicely. During the Napoleonic period, infantry charged by cavalry would form a square, presenting a hedge of bayonets to all sides. Horses, being intelligent creatures, will not impale themselves on pointy things, so the formation protected the infantry, who were free to fire at the encircling cavalry. Charging disciplined, unbroken infantry was a losing proposition for the cavalry under almost all circumstances. Note the use of “almost”.

At the Battle of García Hernández, July 1812, something unusual happened. One French formation was late in firing, and a wounded horse ran blindly into the square, breaking it up. The attacking British (Hanoverian, to be precise) cavalry rode into the gap and forced the surrender of the French infantry that composed it. This, of course, was simply a matter of physics. However, two further squares broke up when charged, due to the effect the first square’s fate had on their morale. Believing they were beaten, they failed to maintain cohesion, and their anticipated defeat became a reality.

So, what’s the point?

Greger Wikstrand and I have been trading posts on the topic of innovation since late 2015. Greger’s latest, “Spring clean your mind”, deals with the concepts of infowar and propaganda (aka “fake news”). This is another example of what Greger’s written about in the past, a concept he dubbed black hat innovation: “Whenever there is innovation or invention there is also misuse”.

Whether you call it black hat innovation or abuse cases (my term), it’s a concept we need to be aware of. It is a concept that affects us, not just as technologists, but as ordinary human beings. We need to be aware of the potential for active abuse. We also need to be aware of the potential for problems caused by things that make our lives more convenient or more pleasant.

This isn’t to say that Facebook is some evil empire, but that we need to bear some responsibility for not allowing ourselves to become trapped in an echo chamber.

It’s something we need to take responsibility for. We can’t hope for a technological deus ex machina to bail us out. As Tim Bass recently noted on his Cyberspace Event Processing Blog:

The big “AI” processing “pie in the sky” plan for cyber defense we all read about is not going to work “as advertised” because we cannot program machines to solve problems that we cannot solve ourselves. There is no substitute for the advancement and development of the human mind to solve complex problems. Delegating the task of “thinking” to machines is doomed to fail, and fail “big time”. It seems like humanity has, in a manner of speaking, “given up” on humans developing the intelligence to manage and defend cyberspace, so they have decided to turn it all over to machines.

Wrong approach!

The right approach, in my opinion, is to be intentional and active in learning. Consuming information should not be a matter of sitting back and shoveling it in, but one of filtering, testing, and appraising. How much time do you spend reading viewpoints you absolutely disagree with? How much time do you spend exploring information?

In 1645, as he was looking back at his long and successful career as a samurai, where a single loss often meant death, Miyamoto Musashi concluded that although rigorous sword practice was essential, it wasn’t enough. At the end of the first chapter of A Book of Five Rings, he also admonishes aspiring warriors to “Cultivate a wide variety of interests in the arts” and “Be knowledgeable in a wide variety of occupations.”

Similarly, Boyd, who was a keen student of Musashi, described his method as looking across a wide variety of fields — “domains” he called them — searching for underlying principles, “invariants.” He would then experiment with syntheses involving these principles until he evolved a solution to the problem he was working on. Because they involved bits and pieces from a variety of domains, he called these syntheses “snowmobiles” (skis, handlebars from a bicycle, etc.).

 

Perception is critical. We are made or unmade, less by our circumstances and more by our perception of them. Companies that have suffered disruptions have done so not because they were unable to respond, but because they either believed themselves invulnerable or believed themselves incapable. Likewise, as individuals, we have control over what information we expose ourselves to and how we manage that exposure.

Sense-making is a critical skill that requires active involvement. The passive get passed by.

[Painting of the battle of Garcia Hernandez by Adolf Northen, housed in the Landesmuseum Hannover. Photo by Michael Ritter via Wikimedia Commons]

Abstract Dangers – When ‘And’ Meets ‘Or’

There’s an old saying that if you put one foot in a bucket of ice and the other in a bucket of boiling water, on average you’re comfortable. Sometimes analyzing information in the aggregate obscures rather than enlightens.

A statistician named Francis Anscombe pointed out this same principle in a more visual (though less colorful) manner more than forty years ago, with what’s now known as Anscombe’s quartet: four small datasets with nearly identical summary statistics that produce wildly different graphs.
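For anyone who wants to verify that for themselves, here’s a short sketch (Python; the data values are from Anscombe’s 1973 paper, the code itself is just an illustration) showing that all four datasets share essentially the same mean, variance, and correlation, even though plotting them reveals four completely different shapes:

```python
# Anscombe's quartet: four datasets with nearly identical summary
# statistics but radically different shapes when plotted.
# Requires Python 3.10+ for statistics.correlation.
from statistics import mean, variance, correlation

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    print(f"{name:>3}: mean(y) = {mean(y):.2f}, var(y) = {variance(y):.2f}, "
          f"corr(x, y) = {correlation(x, y):.3f}")
# Each line prints ~7.50, ~4.13, and ~0.816 -- the aggregate numbers
# completely mask how different the four datasets actually are.
```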

It’s an idea that I’ve been meaning to write about for a while, but it was brought back to mind last week while reading an article on the Austrian school of economic theory posted on a site about medical practice and health care in the U.S. (diversity of interests and a very broad reading list are things I find useful, but that’s a topic for another day). The relevant passage:

When Ludwig von Mises began to establish a systematic theory of economics, he insisted on what he called the principle of methodological dualism: the scientific methods of the hard sciences are great to study rocks, stars, atoms, and molecules, but they should not be applied to the study of human beings. In stating this principle, he was voicing opposition to the introduction into economics of concepts such as “market equilibrium,” which were largely inspired by the physical sciences, and were perhaps motivated by a desire on the part of some economists to establish their field as a science on par with physics.

Mises remarked that human beings distinguish themselves from other natural things by making intentional (and usually rational) choices when they act, which is not the case for stones falling to the ground or animals acting on instinct. The sciences of human affairs therefore deserve their own methods and should not be tempted to apply the tools of the physical sciences willy-nilly. In that respect, Mises agreed with Aristotle’s famous dictum that “It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits.”

I find myself agreeing and disagreeing with this at the same time. Human behavior is far from being as predictable as gravity, and I agree with that point for exactly the reason I disagree (at least in part) with the second paragraph: it is a mistake to characterize human action as intentional and rational. That’s not to say that all our choices are irrational and reactionary, but that there is a blend. Not only will different people respond in different ways to the same circumstance for different reasons, but the same person may react differently, with different motivations, on another occasion. Human nature isn’t rigidly deterministic, and we treat it as if it were at our peril.

Tom Graves’ post “Control, complex, chaotic” makes the same observation:

Attempting to ‘control’ complexity just doesn’t work: we need to treat the complex as complex, not as a ‘controllable problem for which we don’t quite know all the rules (but will know them all Real Soon Now, honest…)’.

Yet I’m also noticing another deeper problem: misguided attempts to apply complexity-theory to things that are neither rule-bound control nor pattern-based complexity, but are inherently ‘chaotic’ – a ‘market-of-one’. Although we can identify definite patterns in health and health-care – that’s the whole basis of epidemiology, for example – neither rules nor statistics can help us deal with the blunt fact that everyone is different. The kind of patterns that we’d use in a complexity-model – probabilities, Bell-curve distributions, outliers, all that kind of thing – can all too easily mask the real underlying fact of uniqueness, from which that supposed ‘pattern’ will actually arise: somewhat like the barely-visible deep-randomness that underlies the visible patterns of Brownian-motion.

Trying to force something into a mold which it doesn’t fit is unlikely to work well.

Abstraction can be useful in understanding the contexts that influence the architecture of the problem. Designing an effective solution, however, will involve not just integrating the concerns of those contexts, but also dealing with any emergent challenges. The variability of human nature (in other words, sometimes the members of those contexts will not all think and act alike) can be one such emergent challenge.

Tom Graves again, this time from his “On mass-uniqueness”:

In practice, the scope of every system will comprise a mix of sameness and uniqueness – of predictable and unpredictable, certain and uncertain. If we design only on an assumption of sameness – as IT-systems often are – we set ourselves up for guaranteed failure. The same applies if – as is all too common – we say that our IT-system will handle all of the ‘sameness’ part of the context, and that the ‘not-sameness’ will be Somebody Else’s Problem – without giving any means for that supposed ‘somebody else’ to be able to address the rest of the problem, or to link it up with the parts of the context that our system does handle.

The first requirement to make something that works in the real-world is to design for uniqueness, not against it.

In other words, a solution based on a poorly understood problem is unlikely to be a good fit. Abstraction is one tool for understanding the problem, but it doesn’t provide the whole picture. Shades of gray (black and white) are more likely than black or white.

Form Follows Function on SPaMCast 377

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 377, features Tom’s essay on empathy, Kim Pries talking about the application of David Allen’s concepts for Getting Things Done, and the first Form Follows Function installment for 2016 on organizations and innovation.

Tom and I discuss my post “Changing Organizations Without Changing People”, talking about the need to work with, not against, human nature in the design and operation of organizations.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Changing Organizations Without Changing People

The Thin Red Line at Balaclava

Prof. Bo Molander once pointed out to me and the other students in the class that when you try to change people, you go up against billions of years of evolution (“good luck with that”), and when you try to change groups, you go up against millions of years of evolution (“good luck with that too”). The only thing you can hope to change is the organization.

Greger Wikstrand and I have been carrying on a discussion about architecture, innovation, and organizations as systems. Here’s the background so far:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation: organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time when innovation yielded uneven results, because effective innovation requires technology, an understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued with the same theme, the social and technological aspects of innovation.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing on in the same vein, Greger points out that resistance to change is futile (though probably inevitable). This post contained the wonderful quote above.

What an intriguing statement: you can’t change the behavior of individual people; you can’t change the behavior of groups; you have to change the behavior of the organization. What?

The rest of the paragraph sheds some light:

It is the same with my sheep, I do not try to change them as individuals or as a flock but by managing their access to shelter, food and water and by managing onboarding and offboarding of individual sheep in the flock I do manage the whole organization according to my goals.

Rather than changing the nature of sheep, individually or as a group, Greger uses his knowledge of their nature to structure things so that compliance is the natural outcome. Changing their nature, assuming it’s even possible, would take millions of years. Working with the grain of their nature is considerably easier. Military organizations have recognized this since ancient times, using individual and group characteristics to promote unit cohesion.

In the post “Locking Down the Prisoners: Control, Conflict and Compliance for Organizations”, I noted something similar. You get a lot more compliance when you make it easier to comply. Conversely, making it difficult for someone to do their job well is an excellent way to kill both motivation and effectiveness. I’ve used the quote from Tom Graves before, but it bears repeating: “…things work better when they work together, on purpose”.

Matt Ballantine, in his post “Best Practice versus Good Ideas”, showed how an organization promoted innovation. Rather than imposing “best practices”, which depending on context might not actually be “best”, the company promoted learning and sharing. Because these behaviors were rewarded, people engaged in them and innovation was fostered. Both the organization and the people that made it up benefited.

Congruence between what is said and what is done is critical. I’ve seen it said that changing culture is hard. Changing culture is impossible if you claim to value one thing but your actions demonstrate that you really don’t.