You can’t always get what you want…

The Rolling Stones

You can’t always get what you want
But if you try sometimes well you just might find
You get what you need

When it comes to systems, you can’t always get what you want, but you do get what you design (intentionally or not), whether it’s what you need or not.

In other words, the architecture of systems, both social and software, evolves through some combination of intentional design and accidental emergence. Regardless of which end of the continuum a system leans toward, the end result will reflect the decisions made (or not made) in response to various stimuli. Regarding businesses (a social system), Ruth Malan, in her February 2012 “Trace in the Sand” post, put it this way:

I have been talking about agility in terms of evolutionary ecology, but with the explicit recognition that companies, comprised after all of individuals, attempt to speed and alter and intervene and interject and intercede and (I’m looking for the right word here) shape evolutionary processes with intentional actions — concerted, but also emergent from more and less choreographed, actions and intentions. Being bumped along by the unpredictable interactions of others, some from within, but also from “invasive species” from other ecosystems looking for new applications for their adaptable, mutable capability set.

Organizations create and participate in business ecologies. They build up the relationships that stabilize parts of the broader ecosystem, and create conditions for organizational forms to thrive there. They create products, they create the seeds of the next generation of harvest. They produce variants on their family tree, to target and develop niches.

Ruth further notes that while businesses adapt and improve in some cases, in other cases they have “…become too closely adapted to and integrated within an ecosystem that has been replaced or significantly restructured by some landscape reshaping change…”. People generally refer to this phenomenon as disruption, and the way they refer to it would seem to imply that it’s something that happens to, or is done to, a company. The role of the organization in its own difficulties (or demise) isn’t, in my opinion, well understood.

Last Friday, I saw a tweet from Noah Sussman that provides a useful heuristic for predicting the behavior of any large social system, along with the actual reason behind that behavior:

It’s not that people are actively working to harm the organization; rather, where there is no leadership, no design, and no learning, the system ossifies and breaks down. Being perfectly adapted to an ecosystem that no longer exists is indistinguishable from being poorly adapted to the present context. I’m reminded of what Tom Graves stated in “The game of enterprise-architecture”: “things work better when they work together, on purpose”.

Without direction, entropy emerges where coherence is needed.

This is not to say, however, that micro-management is the answer. Too much design/control is as toxic as too little. This is particularly the case when the management system isn’t intentionally designed. Management that is both ad hoc and rigid can cause new problems while trying to solve existing ones. This is illustrated by another tweet from Friday:

The desire to avoid the “…embarrassment of cancellation” led to the decision to risk the lives of a planeload of “…foreign TV and radio journalists and also other foreign notables…”. I suspect that the fiery deaths of those individuals would have been an even bigger embarrassment. The system, however, was what put the decision-making power in the hands of the person willing to take that gamble.

The system works the way you built it, even when you didn’t intend to build it to work that way.


Regulating Software Development

'Belvidere Street construction, pouring concrete', Library of Virginia

 

Another weekend, another too-good-to-pass-up Twitter conversation during my “unplugged” time. This weekend, Grady Booch hooked me by retweeting Mike Potts’ tweet:

Mike’s tweet was a reply to Grady’s comment on the latest news out of Uber:

It’s an understandable question. It’s a reasonable question. It’s one that came up back during the healthcare.gov fiasco and it’s one raised by Volkswagen’s recent criminal misconduct.

However, when contemplating fixing a problem, we need to be extremely mindful of the potential for creating harm as a result of the “fix”. Particularly we should be wary of creating harm out of proportion to any good we do (i.e. we don’t want to kill roaches by burning down the house). I chose the image at the top to illustrate something key to this discussion – changing laws (the software of our meta-enterprise) is only slightly harder than moving a roadway once laid down.

Now for the caveats:

  • I do my utmost to avoid politics on this site – I really doubt you’re looking to me for guidance or even just my opinion. I’m not intending this post as a political statement. I’m not asserting that government is never the answer, merely that it’s a rather blunt instrument that we need to use with care.
  • I agree with Grady and Mike that those who took part in this are a disgrace. Moreover, I believe everyone involved, top to bottom, needs to be prosecuted and, if convicted, punished to the fullest extent of the law.
  • My tl;dr position is this: if we have regulation, it should be effective and without avoidable harmful side effects.

As I noted above, it’s human nature to respond to a problem with some proposal to fix it. It also seems to be human nature to respond in a manner that doesn’t deal with the issue from a systemic perspective. We tend to concentrate on the need to “do something” and ignore the hard work of making sure that what we do is effective (and doesn’t cause further problems). In other words, we put band-aids on bullet wounds.

In the cases of both VW and Uber, the conduct alleged is criminal. We could pass new laws making it a crime to commit a crime, but that seems an exercise in recursive futility. If the potential penalty in the first instance was insufficient to induce compliance, should we really believe adding another layer will make things better?

An element that’s present in both cases is that the illegal conduct involves creating software to help avoid detection of the fact that the company was breaking another law. Regulatory pressures coupled with a corrupt culture can create perverse incentives to cheat. This does not in any way excuse the conduct, particularly in the case of VW. It is, however, one of the systemic factors that should be taken into account.

In my experience, the most effective compliance program is one where compliance is the path of least resistance. Compliance that is self-imposed will always be more effective than compliance that has to be enforced from outside. Corrupt agents will still violate the rules, but ideally you want to make the lazy way out the desired behavior.
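As a minimal, hypothetical sketch of what “making compliance the path of least resistance” can look like in code (every name, header, and URL here is invented for illustration), consider an internal helper that makes the convenient call and the compliant call the same call:

```python
# Hypothetical sketch: the easy way to make an internal HTTP call is also the
# compliant way, so developers don't have to remember the rules each time.
import requests


def compliant_session(audit_user: str) -> requests.Session:
    """Return a session with the organization's defaults already applied."""
    session = requests.Session()
    session.verify = True                          # TLS certificate checks stay on
    session.headers["X-Audit-User"] = audit_user   # invented audit header, for illustration
    return session


# The path of least resistance is now the desired behavior:
# response = compliant_session("jdoe").get("https://internal.example/api/orders")
```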

Another aspect of regulation that comes up is the idea of professional standards similar to those of attorneys, accountants, and doctors. Increasing the level of professionalism is laudable, but would it be an effective response to the issue of criminal misconduct? Additionally, assuming it was legally enforced, what would the cost be? Everything from administration of the program to salary increases would introduce new costs and would likely affect the pace of innovation (due to the impact on both supply and demand). Again, without justifying the conduct, what was Uber’s motivation to develop its code to defeat detection by regulators?

I can well imagine other potential issues with a regulatory regime that requires a license to code. Not only would commercial innovation suffer, but the effects on the Open Source community could be disastrous if the licensing regime were expensive.

Doing “something” is easy. Doing something effective is a bit harder. I’m all aboard for punishing the guilty (each and every one), but we should move carefully when considering actions that might be more difficult to undo.

Fear of Failure, Fear and Failure

Capricho 43, Goya's 'The Sleep of Reason Produces Monsters'

Some things seem so logically inconsistent that you just have to check them out.

Such was the title of a post on LinkedIn that I saw the other day: “Innovation In Fear-Based Cultures? Or, why hire lions to be dogs?”. In it, Michael Graber noted that “…top-down organizations have the most trouble innovating”:

In particular, the fearful mindsets that review, align, and sign off on “decks” to be presented to Vice President-level colleagues often edit out the insights and recommendations that have the power to grow the business in new ways.

These well-trained, obedient keepers of the status quo are rewarded for not taking risks and for not thinking outside of the existing paradigm of the business.

None of this is particularly shocking; a culture of fear is pretty much the antithesis of a learning culture, and innovation in the absence of a learning culture is a bit like snow in the desert – not impossible, but certainly remarkable.

Learning involves risk. Whether the method is “move fast and break things” or something more deliberate and considered (such as that outlined in Greger Wikstrand‘s post “Jobs to be done innovation”), there is a risk of failure. Where there is a culture of fear, people will avoid all failure. Even limited risk failure in the context of an acknowledged experiment will be avoided because people won’t trust in the powers that be not to punish the failure. In avoiding this type of failure, learning that leads to innovation is avoided as well. You can still learn from what others have done (or failed to do), but even then there’s the problem of finding someone foolhardy enough to propose an action that’s out of the norm for the organization.

Why would an organization foster this kind of culture?

Seth Godin’s post, “What bureaucracy can’t do for you”, holds the key:

It lets us off the hook in many ways. It creates systems and momentum and eliminates many decisions for its members.

“I’m just doing my job.”

“That’s the way the system works.”

Decisions involve risk; someone could make the wrong one. For that reason, the number of people making decisions should be minimized (not a position I endorse, mind you).

That’s the irony of top-down, bureaucratic organizations – often the culture is by design, intended to eliminate risk. By succeeding in doing so at the mundane level, the organization actually introduces an existential risk: the risk of stagnation. The law of unintended consequences has a very long arm.

This type of culture also introduces perverse incentives that further threaten the organization’s long-term health. Creativity is a huge risk: you could be wrong, and even if you’re right, you’ve become noticeable. Visibility becomes synonymous with risk. Likewise, responsibility means appearing on the radar. This not only discourages positive actions, but can easily become a corrupting influence.

Fear isn’t the only thing we have to fear, but sometimes it’s something we really need to be concerned about.


This post is another installment of an ongoing conversation about innovation with Greger Wikstrand.

I fought the law (of unintended consequences) and the law won

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, a lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses to issues that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on the Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing, but obsolete, field in the database. Using a field labeled “Fax” for an entirely different purpose is far from “best practice”, but following the rules would mean being seen as responsible for delaying a release. This is an example of a moral hazard, of the sort Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, potential for abuse abounds. The risk becomes particularly acute when the person taking shortcuts can claim a “moral” rationale for doing so (such as “getting it done” for the customer).
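For anyone who hasn’t seen this particular anti-pattern in the wild, here is a minimal sketch (all names invented) of the shortcut the Daily WTF story describes:

```python
# Illustrative only: smuggling new data into an obsolete column because the
# sanctioned database change process is too slow and too painful.
from dataclasses import dataclass


@dataclass
class Customer:
    name: str
    email: str
    fax: str = ""   # obsolete field, never removed because schema changes take months


def save_loyalty_tier(customer: Customer, tier: str) -> None:
    # The "quick win": park the new value in the unused fax field rather than
    # wait for approval of a new column. Every future reader of this schema now
    # has to know (or guess) that "fax" no longer means fax.
    customer.fax = tier
```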

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence because it’s so easy to conflate argument over means with disagreement regarding the ends. If you’re not in favor of being strip-searched on arrival and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will likely resonate for many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of methods, then that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, in the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their incidence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or just clumsy design like the change management process described above is irrelevant. In all of those cases, the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.

We Deliver Decisions (Who Needs Architects?)

Broken Window

What do medicine, situational awareness, economics, confirmation bias, and value all have to do with the architectural design of software systems?

Quite a lot, actually. To connect the dots, we need to start from the point of view that the architecture is essentially a set of design decisions intended to solve a problem. The architecture of that problem consists of a set of contexts. The fitness of a solution architecture will depend on how well it addresses the problem architecture. While challenges will emerge in the course of resolving a set of contexts, understanding up front what can be known provides reserves to deal with what cannot.
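As a toy illustration of that framing (my own sketch, not a formal method), treat the solution architecture as a set of decisions, the problem architecture as a set of contexts, and a first cut at fitness as whatever contexts are left unaddressed. The contexts and decisions below are invented:

```python
# Toy model: which contexts of the "problem architecture" does the current set
# of design decisions actually address?
problem_contexts = {"availability", "data residency", "peak load", "auditability"}

design_decisions = {
    "multi-region deployment": {"availability", "data residency"},
    "read-through cache": {"peak load"},
}

addressed = set().union(*design_decisions.values())
unaddressed = problem_contexts - addressed

print(f"Contexts without a supporting decision: {unaddressed}")
# -> {'auditability'}: a gap to close intentionally rather than discover by accident
```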

About two weeks ago, during a Twitter discussion with Greger Wikstrand, I mentioned that the topic (learning via observational studies rather than controlled experiment) coincided with a post I was publishing that day, “First Do No Harm – the Practice of Software Development” (comparing software development to medicine). That triggered the following exchange:

A few days later, I stumbled across a reference to Frédéric Bastiat‘s classic essay on economics, “What Is Seen and What Is Not Seen”. For those that aren’t motivated to read the work of 19th century French economists, it deals with the concepts of opportunity costs and the law of unintended consequences via a parable that attacks the notion that broken windows benefit the economy by putting glaziers to work.

A couple more days went by and Greger posted “Confirmation bias in software engineering” on the pitfalls of being too willing to believe information that conforms to our own preferences. That same day, I posted “Let’s Talk Value (Who Needs Architects?)”, discussing the effects of perception on determining value. Matt Ballantine made a comment on that post, and coincidentally, “confirmation bias” came up again:

I think it’s always going to be a balance of expediency and pragmatism when it comes to architecture. And to an extend it relates back to your initial point about value – I’m not sure that value is *anything* but perception, no matter what logical might tell us. Think about the things that you truly value in your life outside of work, and I’d wager that few of them would fit neatly into the equation…

So why should we expect the world of work to be any different? The reality is that it isn’t, and we just have a fashion these days in business for everything to be attributable to numbers that masks what is otherwise a bunch of activities happening under the cognitive process of confirmation bias.

So when it comes to arguing the case for architecture, despite the logic of the long-term gain, short term expedience will always win out. I’d argue that architectural approaches need to flex to accommodate that… http://mmitii.mattballantine.com/2012/11/27/the-joy-of-hindsight/

The common thread through all this is cognition: perceiving and making sense of circumstances, then deciding how best to respond. The quality of the decision(s) will be directly related to the quality of the cognition. Failing to take a holistic view (big picture and details, not either-or) will impair our perception of the problem, sabotaging our ability to design effective solutions. Our biases can lead us to embrace fallacies like the one in Bastiat’s parable, but stakeholders will likely be sensitive to the opportunity costs of avoidable architectural refactoring (the unintended consequence of applying YAGNI at the architectural level). That sensitivity will color their perception of the value of the solution, and their perception is the one that counts.

Making the argument that you did well by costing someone money is a lot easier in the abstract than it is in reality.

Locking Down the Prisoners: Control, Conflict and Compliance for Organizations

Newgate Prison Inmates

The most important thing to learn about management and governance is knowing when and how to manage or govern and, more importantly, when not to.

The story is told about a very new and modern penal facility, the very epitome of security and control. Each night, precisely at 11:00 PM, the televisions were shut off and the inmates were herded into their cells for lights out. Since the inmates tended to dislike their enforced bedtime, fights would ensue during the lockdown and throughout the night when the cells needed to be opened (both for purposes of head counts and to respond to the inevitable conflicts caused by locking people in close quarters). If the problems were pervasive enough, an entire housing unit might be punished by – wait for it – being confined to their cells (perpetuating the cycle).

Management of the facility was at a loss on what to do. The conflict was causing disruption in the day-to-day activities. This disruption further exacerbated tensions. The fights led to injuries to both staff and inmates, raising costs and risk of civil litigation, as well as causing staffing problems.

The answer was simple – stop the lockdowns. When the policy was reviewed objectively, it was obvious that enforcement was yielding no benefits to offset the many costs. In fact, stopping enforcement actually increased security by reducing tensions and causing the night owls to sleep in during the day. In a real-life zen moment, it was realized that letting go of the illusion of control provided real control (or at least something closer to it).

Most organizations could benefit from a similar epiphany.

This is not to suggest that process, management, and governance are unnecessary; far from it. Instead, it’s important that the system by which things are run is…systemic. As Tom Graves likes to say, “…things work better when they work together, on purpose”. Intentional design applies to social systems just as it applies to software systems. Ad hoc evolution, by way of disjointed decisions unencumbered by any coherence, leads to accidental structures. Entropy emerges.

This can be seen in a tweet from Charles T. Betz:

Or, as Gary Hamel tweeted:

The alternative is to do as Yves Morieux stated in his TED talk: “We need to create organizations in which it becomes individually useful for people to cooperate.” This involves a ruthless attention to cause and effect. This involves creating environments where unnecessary friction is removed and necessary friction is understood to be necessary by all involved. It’s a lot easier to get compliance when it’s easier to comply and a lot easier to get conflict when you provoke it.

Law of Unintended Consequences – Security Edition

Bank Vault

More isn’t always better. When it comes to security, more can even be worse.

As the use of encryption has increased, management of encryption keys has emerged as a pain point for many organizations. The amount of encrypted data passing through corporate firewalls, which has doubled over the last year, poses a severe challenge to security professionals responsible for protecting corporate data. The mechanism that’s intended to protect information in transit does so regardless of whether the transmission is legitimate or not.
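As one small, hedged example of the key-management work referred to above, here is a sketch of rotating stored data to a new key with the Python cryptography package’s MultiFernet; real key storage, distribution, and access control are glossed over entirely:

```python
# Simplified illustration of key rotation; keys would normally come from a
# secrets manager or HSM, not be generated inline like this.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())    # key currently protecting stored data
new_key = Fernet(Fernet.generate_key())    # its replacement

rotator = MultiFernet([new_key, old_key])  # decrypts with either key, encrypts with the new one

token = old_key.encrypt(b"customer record")
rotated = rotator.rotate(token)            # re-encrypted under the new key

assert rotator.decrypt(rotated) == b"customer record"
```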

Greater complexity, which means greater inconvenience, can lead to decreased security. Usability increases security by increasing compliance. Alarm fatigue means that as the number of warnings increases, so does the likelihood of their being ignored.
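A small, illustrative sketch of one way to push back against alarm fatigue: suppress repeats and surface only the alerts worth a human’s attention. The severity scale, thresholds, and alert fields are assumptions made for the example, not a recommendation for any particular tool:

```python
# Illustration only: fewer, higher-signal warnings are less likely to be ignored.
from collections import Counter


def triage(alerts, min_severity=7, max_repeats=3):
    """Yield alerts that clear the severity bar and haven't already been seen too often."""
    seen = Counter()
    for alert in alerts:
        key = (alert["rule"], alert["host"])
        seen[key] += 1
        if alert["severity"] >= min_severity and seen[key] <= max_repeats:
            yield alert   # everything else goes to a summary report, not silently dropped


alerts = [
    {"rule": "port-scan", "host": "web01", "severity": 8},
    {"rule": "port-scan", "host": "web01", "severity": 8},
    {"rule": "failed-login", "host": "db01", "severity": 4},
]
print(list(triage(alerts)))   # only the two high-severity alerts surface
```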

Like any design issue, security should be approached from a systems thinking viewpoint (at least in my opinion). Rather than a one-dimensional, naive approach, a holistic one that recognizes and deals with the interrelationships is more likely to get it right. Thinking solely in terms of actions while ignoring the reactions that result from them hampers effective decision-making.

To be effective, security should be comprehensive, coordinated, collaborative, and contextual.

Comprehensive security addresses the entire range of security concerns: application, network, platform (OS, etc.), and physical. Strength in one or more of these areas means little if even one of the others is fatally compromised. Coordination of the efforts of those responsible for these aspects is essential to ensure that the various measures enhance rather than hinder overall security. This coordination is better achieved via a collaborative process that reconciles the costs and benefits systemically than via a prescriptive one imposed without regard to those factors. Lastly, practices should be tailored to the context of the problem at hand. Value at risk and amount of exposure are two factors that should help determine the effort expended. Putting a bank vault door on the garden shed not only wastes money, but also hinders security by taking those resources away from an area of greater need.

As with most quality of service concerns, security is not a binary toggle but a continuum. Matching the response to the need is a good way to stay on the right side of the law of unintended consequences.

#4U2U – Canned Competency, Values & Pragmatism

home canned food

Not quite two years ago, I put up a quick post entitled “The Iron Law of Tools”, which in its essence was: “that which does for you, can do to you” (whence comes the #4U2U in the title of this post). That particular post focused on ORMs (Entity Framework to be specific), but the warning equally applies to libraries and frameworks for other technical issues as well as process, methodology and techniques.

Libraries, frameworks, and processes (“tools” from this point forward) can make things easier by allowing you to concentrate on what to do rather than how to do it (via high-level abstractions and/or conventions). However, tools are not a substitute for understanding. Neither the Law of Unintended Consequences nor Murphy’s Law has been repealed. Without an adequate understanding of how something works, you cannot assess the costs of the trade-offs that are being made (and there are trade-offs involved, you can rely on that). Understanding is likewise necessary to recognize and fix those situations where the tool causes more problems than it solves. As Pawel Brodzinski observed in his post “A Fool With a Tool Is Still a Fool”:

Any time a discussion goes toward tools, any tools really, it’s a good idea to challenge the understanding of a tool itself and principles behind its successes. Without that shared success stories bear little value in other contexts, thus end result of applying the same tools will frequently result in yet another case of a cargo cult. While it may be good for training and consulting businesses (aka prophets) it won’t help to improve our organizations.

A fool with a tool will remain a fool, only more dangerous since now they’re armed.

Pawel’s point regarding cargo cults is particularly important. Lack of understanding of how a particular effect proceeds from a given cause often manifests as dogmatic assertions in defense of some “universal truth”. The closest thing I’ve found to a universal truth of software development is that it’s very unlikely that anything is universally applicable to every context.
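To ground the ORM example from “The Iron Law of Tools” in something concrete, here is a hedged sketch of a trade-off a tool can quietly make for you: lazy loading is convenient, but used without understanding it produces the classic N+1 query problem. SQLAlchemy stands in here purely for illustration, and the Order model and its lines relationship are assumed to be defined elsewhere:

```python
# Sketch only: the hypothetical Order model and its "lines" relationship are
# assumed to be mapped elsewhere.
from sqlalchemy import select
from sqlalchemy.orm import Session, selectinload


def total_lazily(session: Session, Order) -> float:
    # Convenient but costly: one query for the orders, then one more query per
    # order the first time its .lines collection is touched (the N+1 problem).
    return sum(line.amount
               for order in session.scalars(select(Order))
               for line in order.lines)


def total_eagerly(session: Session, Order) -> float:
    # Same result with the loading strategy chosen deliberately: two queries total.
    stmt = select(Order).options(selectinload(Order.lines))
    return sum(line.amount
               for order in session.scalars(stmt)
               for line in order.lines)
```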

It’s dangerous to conflate adherence to a tool with one’s core values, such that anyone who disagrees is “wrong” or “deluded” or “unprofessional”. That being said, values can provide a frame of reference in understanding someone’s position in regard to a tool. In “The TDD Divide: Everyone is Right”, Cory House addresses the current dispute over Test-Driven Development and notes (rightly, in my opinion):

The world is a messy place. Deadlines loom, team skills vary widely, and the impact of bugs varies greatly by industry. Ultimately, we write software to make money and solve problems. Tests are a tool that help us do both. Consider the context to determine which testing style fits for your project.

Uncle Bob is right. Quality matters. Separation of concerns and unit testing help assure the utmost quality, speed, and flexibility.

DHH is right. Sometimes the cost of unit tests exceed their benefit. Some of the benefit of automated testing can be achieved through automated integration testing instead.

You need to understand what a tool offers and what it costs and the result of that equation in relation to what’s important in your context. With that understanding, you can make a rational choice.
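For illustration only, here is a pytest-style sketch of the trade-off Cory describes: the same behavior checked by a fast unit test with a stub and by a slower integration test against a real dependency. All names are invented, and the integration fixture is assumed to be provided elsewhere (e.g., in a conftest.py):

```python
# Illustrative sketch: which of these tests earns its keep depends on context --
# the cost of bugs, the cost of the test, and how often it runs.

class StubGateway:
    """Stands in for a real payment provider in unit tests."""
    def charge(self, amount):
        self.charged = amount
        return True


def checkout(cart_total, gateway):
    return "paid" if gateway.charge(cart_total) else "failed"


def test_checkout_unit():
    # Cheap and fast: runs on every commit.
    assert checkout(42, StubGateway()) == "paid"


def test_checkout_integration(sandbox_gateway):
    # Pricier and slower: exercises the real (sandbox) dependency less often.
    # The sandbox_gateway fixture is hypothetical and assumed to live in conftest.py.
    assert checkout(42, sandbox_gateway) == "paid"
```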