I fought the law (of unintended consequences) and the law won

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, the lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on the Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing but obsolete field in the database. Using a field labeled “Fax” for an entirely different purpose is far from best practice, but following the rules would mean being seen as responsible for delaying a release. This is an example of a moral hazard, such as Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, the potential for abuse abounds. Abuse becomes particularly likely when the person taking the shortcut can claim a “moral” rationale for doing so (such as “getting it done” for the customer).
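
To make the shortcut concrete, here is a minimal sketch of the anti-pattern. The Customer model, the field names, and the “loyalty tier” scenario are hypothetical illustrations, not details from the Daily WTF article:

```python
# A minimal, hypothetical sketch of the "field reuse" shortcut.
# None of these names come from the Daily WTF article.
from dataclasses import dataclass


@dataclass
class Customer:
    name: str
    email: str
    fax: str  # Obsolete: nobody faxes this business anymore, so it sits empty.


def set_loyalty_tier_shortcut(customer: Customer, tier: str) -> None:
    """The shortcut: smuggle new data into the dead 'fax' field to dodge
    the slow database change process. The release ships on time, but every
    future reader of the schema is now misled about what 'fax' holds."""
    customer.fax = tier


@dataclass
class CustomerV2:
    """What the change process should have produced: a field whose name
    matches its contents, bought at the cost of following the process."""
    name: str
    email: str
    loyalty_tier: str


if __name__ == "__main__":
    customer = Customer(name="Ada", email="ada@example.com", fax="")
    set_loyalty_tier_shortcut(customer, "gold")
    # Anyone who later trusts the label will try to send a fax to "gold".
    print(customer)
```

The shortcut costs the person taking it nothing up front; the price is paid later, by whoever trusts the schema. That asymmetry is the moral hazard in miniature.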

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence, because it’s so easy to conflate argument over means with disagreement about ends. If you’re not in favor of being strip-searched on arrival at and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will resonate with many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of the methods, that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, in the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their incidence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or just clumsy design like the change management process described above is irrelevant. In all of those cases, the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.


Barriers to Innovation

[Image: US soldiers crossing the Siegfried Line in WWII]

Is innovation inevitable?

Greger Wikstrand and I have been trading blog posts on innovation since last November. In his latest post, “Credit card fraud and stalled innovation”, Greger discusses the relatively slow pace of innovation in credit card security. Those best placed to increase security neglect it because they don’t own the risk, the same moral hazard discussed above.

Sometimes a potential innovation is not the best route to take. One example is the situation I discussed in “What’s Innovation Worth”, where change was avoided because the payoff didn’t justify the cost. Sometimes, however, a potentially valuable innovation can be blocked in ways similar to what Greger outlined.

Laws and regulations can introduce perverse incentives that distort economic conditions. Ironically, while this hurts some innovations (e.g., Uber and other “gig economy” companies), it can unintentionally push others: increases in minimum wage laws are making automation more likely for some jobs.

External factors are far from the only barriers to innovation. Technological innovations, no matter how promising, will fail to flourish in an inhospitable ecosystem. When the systems involved, social and technological, fail to complement each other, friction diminishes their effectiveness. Technology, organization (structure), and process are all intertwined and interdependent.

Culture and structure are aspects of social systems that can impede innovation. Organizations that are highly focused on efficiency and stability will be disposed to avoid the risk inherent in experimentation. Likewise, rigidly siloed organizations will have difficulty with activities that require crossing reporting structures. This can be the result of deliberate and destructive office politics, or of less obvious (and therefore more insidious) cognitive biases that lead to evidence being overlooked:

Yet ‘evidence’ literally means ‘that which is seen’. And here we hit right up against a fundamental problem of cognitive-bias, sometimes known as Gooch’s Paradox: that “things not only have to be seen to be believed, but also have to be believed to be seen”.

Inertia, the “indisposition to motion, exertion, or change”, is another social-system innovation killer. In the seventh installment of our series on innovation, “Organizations and Innovation – Swim or Die!”, I made the point that organizations need to adapt constantly to their changing contexts or risk “death”. Sitting still in a changing world is a losing tactic.

It should be obvious that all the barriers to innovation I’ve listed are aspects of the social systems involved. The technology part is relatively easy compared to the social. Technology (at least at present) isn’t lazy, complacent, biased, fearful, or malicious. The upside is that organizations, being composed of people, can change.

To return to the question above, is innovation inevitable?

Perhaps. The better question is whether it’s inevitable for your organization. The more your organization is subject to the barriers listed above, the more likely it is that an organization not subject to them will eat your lunch.