You can’t always get what you want…

You can’t always get what you want
But if you try sometimes, well, you just might find
You get what you need

The Rolling Stones

When it comes to systems, you can’t always get what you want, but you do get what you design (intentionally or not), whether it’s what you need or not.

In other words, the architecture of systems, both social and software, evolves through some combination of intentional design and accidental emergence. Regardless of which end of the continuum the system leans toward, the end result will reflect the decisions made (or not made) in response to various stimuli. Regarding businesses (a social system), Ruth Malan, in her February 2012 “Trace in Sand” post, put it this way:

I have been talking about agility in terms of evolutionary ecology, but with the explicit recognition that companies, comprised after all of individuals, attempt to speed and alter and intervene and interject and intercede and (I’m looking for the right word here) shape evolutionary processes with intentional actions — concerted, but also emergent from more and less choreographed, actions and intentions. Being bumped along by the unpredictable interactions of others, some from within, but also from “invasive species” from other ecosystems looking for new applications for their adaptable, mutable capability set.

Organizations create and participate in business ecologies. They build up the relationships that stabilize parts of the broader ecosystem, and create conditions for organizational forms to thrive there. They create products, they create the seeds of the next generation of harvest. They produce variants on their family tree, to target and develop niches.

Ruth further notes that while businesses adapt and improve in some cases, in other cases they have “…become too closely adapted to and integrated within an ecosystem that has been replaced or significantly restructured by some landscape reshaping change…”. People generally refer to this phenomenon as disruption, and the way they refer to it would seem to imply that it’s something that happens to or is done to a company. The role of the organization in its own difficulties (or demise) isn’t, in my opinion, well understood.

Last Friday, I saw a tweet from Noah Sussman that provides a useful heuristic for predicting the behavior of any large social system, along with the actual reason behind that behavior.

It’s not that people are actively working to harm the organization; rather, where there is no leadership, no design, and no learning, the system ossifies and breaks down. Being perfectly adapted to an ecosystem that no longer exists is indistinguishable from being poorly adapted to the present context. I’m reminded of what Tom Graves stated in “The game of enterprise-architecture”: “things work better when they work together, on purpose”.

Without direction, entropy emerges where coherence is needed.

This is not to say, however, that micro-management is the answer. Too much design/control is as toxic as too little. This is particularly the case when the management system isn’t intentionally designed. Management that is both ad hoc and rigid can cause new problems while trying to solve existing ones. This is illustrated by another tweet from Friday:

The desire to avoid the “…embarrassment of cancellation” led to the decision to risk the lives of a planeload of “…foreign TV and radio journalists and also other foreign notables…”. I suspect that the fiery deaths of those individuals would have been an even bigger embarrassment. The system, however, is what led the person with the decision-making power to take that gamble.

The system works the way you built it, even when you didn’t intend to build it to work that way.

Regulating Software Development

'Belvidere Street construction, pouring concrete', Library of Virginia


Another weekend, another too-good-to-pass-up Twitter conversation during my “unplugged” time. This weekend, Grady Booch hooked me by retweeting Mike Potts’ tweet:

Mike’s tweet was a reply to Grady’s comment on the latest news out of Uber:

It’s an understandable question. It’s a reasonable question. It’s one that came up back during the healthcare.gov fiasco and it’s one raised by Volkswagen’s recent criminal misconduct.

However, when contemplating how to fix a problem, we need to be extremely mindful of the potential for creating harm as a result of the “fix”. In particular, we should be wary of creating harm out of proportion to any good we do (i.e., we don’t want to kill roaches by burning down the house). I chose the image at the top to illustrate something key to this discussion – changing laws (the software of our meta-enterprise) is only slightly harder than moving a roadway once it’s been laid down.

Now for the caveats:

  • I do my utmost to avoid politics on this site – I really doubt you’re looking to me for guidance or even just my opinion. I’m not intending this post as a political statement. I’m not asserting that government is never the answer, merely that it’s a rather blunt instrument that we need to use with care.
  • I agree with Grady and Mike that those who took part in this are a disgrace. Moreover, I believe everyone involved, top to bottom, needs to be prosecuted and, if convicted, punished to the fullest extent of the law.
  • My tl;dr position is this: if we have regulation, it should be effective and without avoidable harmful side effects.

As I noted above, it’s human nature to respond to problems with some proposal to fix them. It also seems to be human nature to respond in a manner that doesn’t deal with the issue from a systemic perspective. We tend to concentrate on the need to “do something” and ignore the hard work of making sure that what we do is effective (and doesn’t cause further problems). In other words, we put band-aids on bullet wounds.

In both the case of VW and Uber, the conduct alleged is criminal. We could pass new laws making it a crime to commit a crime, but that seems to be an exercise in recursive futility. If the potential penalty in the first case was insufficient to induce compliance, should we really believe adding another layer will make it better?

An element that’s present in both cases is that the illegal conduct involves creating software to help avoid detection of the fact that the company was breaking another law. Regulatory pressures coupled with a corrupt culture can create perverse incentives to cheat. This does not in any way excuse the conduct, particularly in the case of VW. It is, however, one of the systemic factors that should be taken into account.
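
To make that concrete, here’s a deliberately schematic caricature in Python. The names and the detection heuristic are invented for illustration and are not drawn from either company’s actual code; the point is only the shape of the thing: a branch on whether the software believes a regulator is watching.

```python
# A schematic caricature of a "defeat device" (hypothetical names throughout;
# not actual code from either case): behave well only when observed.

def under_inspection(telemetry: dict) -> bool:
    # Hypothetical heuristic: regulatory test cycles look different from
    # real-world use (fixed speed profile, no steering input).
    return (telemetry.get("speed_profile") == "test_cycle"
            and telemetry.get("steering_input", 0.0) == 0.0)

def operating_mode(telemetry: dict) -> str:
    # The entire "cheat" is one conditional.
    return "compliant" if under_inspection(telemetry) else "business_as_usual"

print(operating_mode({"speed_profile": "test_cycle", "steering_input": 0.0}))
print(operating_mode({"speed_profile": "highway", "steering_input": 5.2}))
```

What’s notable is how little code it takes; the systemic factors that make writing it seem worthwhile are the real problem.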

In my experience, the most effective compliance program is one where compliance is the path of least resistance. Compliance that is self-imposed will always be more effective than compliance enforced from outside. Corrupt agents will still violate the rules, but ideally you want to make it so that the lazy way out is the desired behavior.
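
As an illustration of “compliance as the path of least resistance”, here’s a minimal sketch of a deployment gate. The specific tools and names are my own choices for the example, not anything prescribed; the idea is that the pipeline runs the required checks itself, so doing nothing special is the compliant behavior.

```python
import subprocess
import sys

# Illustrative checks; swap in whatever your compliance program requires.
REQUIRED_CHECKS = [
    ["pytest", "--quiet"],  # tests must pass
    ["pip-audit"],          # no known-vulnerable dependencies
]

def deploy(artifact: str) -> None:
    """Deploy only after every required check has run and passed."""
    for cmd in REQUIRED_CHECKS:
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"blocked: {' '.join(cmd)} failed; fix and retry")
    print(f"deploying {artifact}...")

if __name__ == "__main__":
    deploy("example-service")
```

When the gate is automatic, bypassing it takes more effort than complying with it, which is exactly the incentive structure you want.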

Another aspect of regulation that comes up is something along the lines of professional standards similar to those of attorneys, accountants, and doctors. Increasing the level of professionalism is laudable, but would it be an effective response to the issue of criminal misconduct? Additionally, assuming such standards were legally enforced, what would the cost be? Everything from administration of the program to salary increases would introduce new costs and would likely affect the pace of innovation (due to the impact on both supply and demand). Again, without justifying the conduct, what was Uber’s motivation to develop code to defeat detection by regulators?

I can well imagine other potential issues with a regulatory regime that requires a license to code. Not only would commercial innovation suffer, but the effects on the open source community could be disastrous if the licensing regime were expensive.

Doing “something” is easy. Doing something effective is a bit harder. I’m all aboard for punishing the guilty (each and every one), but we should move carefully when considering actions that might be more difficult to undo.

Fear of Failure, Fear and Failure

Capricho 43, Goya's 'The Sleep of Reason Produces Monsters'

Some things seem so logically inconsistent that you just have to check them out.

Such was the title of a post on LinkedIn that I saw the other day: “Innovation In Fear-Based Cultures? Or, why hire lions to be dogs?”. In it, Michael Graber noted that “…top-down organizations have the most trouble innovating”:

In particular, the fearful mindsets that review, align, and sign off on “decks” to be presented to Vice President-level colleagues often edit out the insights and recommendations that have the power to grow the business in new ways.

These well-trained, obedient keepers of the status quo are rewarded for not taking risks and for not thinking outside of the existing paradigm of the business.

None of this is particularly shocking: a culture of fear is pretty much the antithesis of a learning culture, and innovation in the absence of a learning culture is a bit like snow in the desert – not impossible, but certainly remarkable.

Learning involves risk. Whether the method is “move fast and break things” or something more deliberate and considered (such as that outlined in Greger Wikstrand’s post “Jobs to be done innovation”), there is a risk of failure. Where there is a culture of fear, people will avoid all failure. Even limited-risk failure in the context of an acknowledged experiment will be avoided, because people won’t trust the powers that be not to punish it. In avoiding this type of failure, the learning that leads to innovation is avoided as well. You can still learn from what others have done (or failed to do), but even then there’s the problem of finding someone foolhardy enough to propose an action that’s out of the norm for the organization.
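
A toy model makes the cost of “avoid all failure” visible. This is my illustration, not something from Greger’s post: an epsilon-greedy bandit, where epsilon is the fraction of the time the agent risks an experiment that might fail.

```python
import random

ARMS = [0.3, 0.5, 0.8]  # true payoff probabilities; the best option is the last

def run(epsilon: float, pulls: int = 10_000, seed: int = 42) -> float:
    """Average payoff per pull for an agent that experiments with probability epsilon."""
    rng = random.Random(seed)
    counts = [0] * len(ARMS)
    values = [0.0] * len(ARMS)  # running estimate of each arm's payoff
    total = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(ARMS))  # experiment: risk a visible failure
        else:
            arm = max(range(len(ARMS)), key=lambda a: values[a])  # play it safe
        reward = 1.0 if rng.random() < ARMS[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return total / pulls

print("no experiments :", run(epsilon=0.0))  # typically stuck on the first workable option
print("10% experiments:", run(epsilon=0.1))  # usually finds the best option
```

The agent that never risks failure locks onto the first thing that works at all and never learns whether anything better exists, a crude but fair stand-in for the organization described above.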

Why would an organization foster this kind of culture?

Seth Godin’s post, “What bureaucracy can’t do for you”, holds the key:

It lets us off the hook in many ways. It creates systems and momentum and eliminates many decisions for its members.

“I’m just doing my job.”

“That’s the way the system works.”

Decisions involve risk: someone could make the wrong one. For that reason, the logic goes, the number of people making decisions should be minimized (not a position I endorse, mind you).

That’s the irony of top-down, bureaucratic organizations – often the culture is by design, intended to eliminate risk. By succeeding in doing so on the mundane level, the organization actually introduces an existential risk, the risk of stagnation. The law of unintended consequences has a very long arm.

This type of culture also introduces perverse incentives that further threaten the organization’s long-term health. Creativity is a huge risk: you could be wrong, and even if you’re right, you’ve become noticeable. Visibility becomes synonymous with risk. Likewise, responsibility means appearing on the radar. This not only discourages positive actions, but can easily become a corrupting influence.

Fear isn’t the only thing we have to fear, but sometimes it’s something we really need to be concerned about.


This post is another installment of an ongoing conversation about innovation with Greger Wikstrand.

I fought the law (of unintended consequences) and the law won

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, a lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses to issues that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on the Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing but obsolete field in the database. Using a field labeled “Fax” for an entirely different purpose is far from best practice, but following the rules would mean being seen as responsible for delaying a release. This is an example of a moral hazard, such as Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, potential for abuse abounds. Abuse becomes particularly likely when the person taking shortcuts can claim a “moral” rationale for doing so (such as “getting it done” for the customer).
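
For those who haven’t read the article, the shape of the shortcut looks something like this sketch (illustrative names; the article itself doesn’t include code):

```python
class Customer:
    def __init__(self, name: str, fax: str = ""):
        self.name = name
        # The "fax" field is obsolete, so it gets quietly repurposed to hold
        # a loyalty tier. Every future maintainer inherits this unwritten rule.
        self.fax = fax

gold_member = Customer("Acme Co.", fax="GOLD")  # works... until it doesn't

# What the change process was supposed to produce: an explicit attribute,
# added via an ordinary schema migration.
class CustomerAfterMigration:
    def __init__(self, name: str, loyalty_tier: str = "STANDARD"):
        self.name = name
        self.loyalty_tier = loyalty_tier
```

The second version is what the process exists to produce; the first is what you get when the process makes the right way the slow way.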

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence because it’s so easy to conflate argument over means with disagreement regarding the ends. If you’re not in favor of being strip-searched on arrival and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will likely resonate for many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of methods, then that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, in the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their incidence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or just clumsy design like the change management process described above is irrelevant. In all of those cases, the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.

Barriers to Innovation

US Soldiers crossing Siegfried Line in WWII


Is innovation inevitable?

Greger Wikstrand and I have been trading blog posts on innovation since last November. In his latest post, “Credit card fraud and stalled innovation”, Greger discusses the relatively slow pace of innovation in credit card security. Those best placed to increase security neglect it because they don’t own the risk (a concept called “moral hazard”).

Sometimes, a potential innovation is not the best route to take; for example, the situation I discussed in “What’s Innovation Worth”, where change was avoided because the payoff didn’t justify the cost. Sometimes, however, potentially valuable innovation can be blocked in ways similar to what Greger outlined.

Laws and regulations can introduce perverse incentives that distort economic conditions. Ironically, where this hurts some innovations (e.g. Uber and other “gig economy” companies), it can also unintentionally push others. Increases in minimum wage laws are making automation more likely for some jobs.

External factors are far from the only barriers to innovation. Technological innovations, no matter how promising, will fail to flourish when placed in an inhospitable ecosystem. If all the systems, social and technological, fail to complement each other, then effectiveness will be diminished via friction. Technology, organization (structure), and process are all intertwined and interdependent.

Culture and structure are aspects of social systems that can impede innovation. Organizations that are highly focused on efficiency and stability will be disposed to avoid the risk inherent in experimenting. Likewise, rigidly siloed organizations will have difficulty with activities that require crossing reporting structures. This can be the result of deliberate and destructive office politics, or of less obvious (and therefore more insidious) cognitive biases that lead to evidence being overlooked:

Yet ‘evidence’ literally means ‘that which is seen’. And here we hit right up against a fundamental problem of cognitive-bias, sometimes known as Gooch’s Paradox: that “things not only have to be seen to be believed, but also have to be believed to be seen”.


Inertia, the “indisposition to motion, exertion, or change”, is another social-system innovation killer. In the seventh installment of our series on innovation, “Organizations and Innovation – Swim or Die!”, I made the point that organizations need to adapt constantly to their changing contexts or risk “death”. Sitting still in a changing world is a losing tactic.

It should be obvious that all the barriers to innovation I’ve listed are aspects of the social systems involved. The technology part is relatively easy compared to the social. Technology (at least at present) isn’t lazy, complacent, biased, fearful, or malicious. The upside is that organizations, being composed of people, can change.

To return to the question above, is innovation inevitable?

Perhaps. The better question is whether it’s inevitable for your organization. The more your organization is subject to the barriers listed above, the more likely it is that an organization not subject to them will eat your lunch.