Form Follows Function on SPaMCast 446

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 446, features Tom’s essay on questions, a powerful tool for coaches and facilitators. A Form Follows Function installment based on my post “Go-to People Considered Harmful” comes next and Kim Pries rounds out the podcast with a Software Sensei column on servant leadership.

Our conversation in this episode continues with the organizations as systems concept and how concentrating institutional knowledge in go-to people creates a dependency management nightmare. Social systems run on relationships, and when we allow knowledge and skill bottlenecks to form, we set our organization up for failure. Specialists with deep knowledge are great, but if they don’t spread that knowledge around, we risk avoidable disasters when they’re unavailable. Redundancy aids resilience.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!


Innovation in Inner Space

KGL dragoons at the Battle of Garcia Hernandez

 

Long-time readers know that I have a rather varied set of interests and that I’ve got a “thing” for history, particularly military history. Knowing that, it shouldn’t come as a surprise that I was recently reading an article titled “Cyber is the fourth dimension of war” (ground, sea and air being the first three dimensions). It’s not a bad article, but it is mistaken. Cyberwar is the fifth dimension of war. The first dimension, today and for all of time past, is the human mind. Contests are won or lost, not on some field of battle, virtual or physical, but in the minds of the combatants. For example, if you believe you’ve lost, then you have.

The painting shown above illustrates this nicely. During the Napoleonic period, infantry charged by cavalry would form a square, presenting a hedge of bayonets to all sides. Horses, being intelligent creatures, will not impale themselves on pointy things, so the formation protected the infantry, who were free to fire at the encircling cavalry. Charging disciplined, unbroken infantry was a losing proposition for the cavalry under almost all circumstances. Note the use of “almost”.

At the Battle of García Hernández, July 1812, something unusual happened. One French formation was late in firing, and a wounded horse ran blindly into the square, breaking it up. The attacking British (Hanoverian, to be precise) cavalry rode into the gap and forced the surrender of the French infantry that comprised it. This, of course, was simply a matter of physics. However, two further squares broke up when charged, due to the effect the first square’s fate had on their morale. Believing they were beaten, they failed to maintain cohesion and their anticipated defeat became a reality.

So, what’s the point?

Greger Wikstrand and I have been trading posts on the topic of innovation since late 2015. Greger’s latest, “Spring clean your mind”, deals with the concepts of infowar and propaganda (aka “fake news”). This is another example of what Greger’s written about in the past, a concept he dubbed black hat innovation: “Whenever there is innovation or invention there is also misuse”.

Whether you call it black hat innovation or abuse cases (my term), it’s a concept we need to be aware of. It affects us, not just as technologists, but as ordinary human beings. We need to be aware of the potential for active abuse, as well as the potential for problems caused by things that make our lives more convenient or more pleasant.

This isn’t to say that Facebook is some evil empire, but that we need to bear some responsibility for not allowing ourselves to become trapped in an echo chamber.

It’s something we need to take responsibility for. We can’t hope for a technological deus ex machina to bail us out. As Tim Bass recently noted on his Cyberspace Event Processing Blog:

The big “AI” processing “pie in the sky” plan for cyber defense we all read about is not going to work “as advertised” because we cannot program machines to solve problems that we cannot solve ourselves. There is no substitute for the advancement and development of the human mind to solve complex problems. Delegating the task of “thinking” to machines is doomed to fail, and fail “big time”. It seems like humanity has, in a manner of speaking, “given up” on humans developing the intelligence to manage and defend cyberspace, so they have decided to turn it all over to machines.

Wrong approach!

The right approach, in my opinion, is to be intentional and active in learning. Consuming information should not be a matter of sitting back and shoveling it in, but one of filtering, testing, and appraising. How much time do you spend reading viewpoints you absolutely disagree with? How much time do you spend exploring information?

In 1645, as he was looking back at his long and successful career as a samurai, where a single loss often meant death, Miyamoto Musashi concluded that although rigorous sword practice was essential, it wasn’t enough. At the end of the first chapter of A Book of Five Rings, he also admonishes aspiring warriors to “Cultivate a wide variety of interests in the arts” and “Be knowledgeable in a wide variety of occupations.”

Similarly, John Boyd, who was a keen student of Musashi, described his method as looking across a wide variety of fields — “domains” he called them — searching for underlying principles, “invariants.” He would then experiment with syntheses involving these principles until he evolved a solution to the problem he was working on. Because they involved bits and pieces from a variety of domains, he called these syntheses “snowmobiles” (skis, handlebars from a bicycle, etc.).

 

Perception is critical. We are made or unmade, less by our circumstances and more by our perception of them. Companies that have suffered disruptions have done so not because they were unable to respond, but because they either believed themselves invulnerable or believed themselves incapable. Likewise, as individuals, we have control over what information we expose ourselves to and how we manage that exposure.

Sense-making is a critical skill that requires active involvement. The passive get passed by.

[Painting of the battle of Garcia Hernandez by Adolf Northen, housed in the Landesmuseum Hannover. Photo by Michael Ritter via Wikimedia Commons]

Regulating Software Development

'Belvidere Street construction, pouring concrete', Library of Virginia

 

Another weekend, another too-good-to-pass-up Twitter conversation during my “unplugged” time. This weekend, Grady Booch hooked me by retweeting Mike Potts’ tweet.

Mike’s tweet was a reply to Grady’s comment on the latest news out of Uber.

It’s an understandable question. It’s a reasonable question. It’s one that came up back during the healthcare.gov fiasco and it’s one raised by Volkswagen’s recent criminal misconduct.

However, when contemplating fixing a problem, we need to be extremely mindful of the potential for creating harm as a result of the “fix”. In particular, we should be wary of creating harm out of proportion to any good we do (i.e. we don’t want to kill roaches by burning down the house). I chose the image at the top to illustrate something key to this discussion – changing laws (the software of our meta-enterprise) is only slightly harder than moving a roadway once laid down.

Now for the caveats:

  • I do my utmost to avoid politics on this site – I really doubt you’re looking to me for guidance or even just my opinion. I’m not intending this post as a political statement. I’m not asserting that government is never the answer, merely that it’s a rather blunt instrument that we need to use with care.
  • I agree with Grady and Mike that those who took part in this are a disgrace. Moreover, I believe everyone involved, top to bottom, needs to be prosecuted and, if convicted, punished to the fullest extent of the law.
  • My tl;dr position is this: if we have regulation, it should be effective and without avoidable harmful side effects.

As I noted above, it’s human nature to respond to problems with some proposal to fix them. It also seems to be human nature to respond in a manner that doesn’t necessarily deal with an issue from a systemic perspective. We tend to concentrate on the need to “do something” and ignore the hard work of making sure what we do is effective (and doesn’t cause further problems). In other words, we put band-aids on bullet wounds.

In the cases of both VW and Uber, the conduct alleged is criminal. We could pass new laws making it a crime to commit a crime, but that seems an exercise in recursive futility. If the potential penalty in the first case was insufficient to induce compliance, should we really believe adding another layer will make it better?

An element present in both cases is that the illegal conduct involved creating software to help the company avoid detection while it was breaking another law. Regulatory pressures coupled with a corrupt culture can create perverse incentives to cheat. This does not in any way excuse the conduct, particularly in the case of VW. It is, however, one of the systemic factors that should be taken into account.

In my experience, the most effective compliance program is one where compliance is the path of least resistance. Self-imposed compliance is bound to be more effective than compliance enforced externally. Corrupt agents will still violate the rules, but ideally you want to make it so that the lazy way out is the desired behavior.

Another aspect of regulation that comes up is something along the lines of professional standards similar to those for attorneys, accountants, and doctors. Increasing the level of professionalism is laudable, but would it be an effective response to the issue of criminal misconduct? Additionally, assuming it was legally enforced, what would the cost be? Everything from administration of the program to salary increases would introduce new costs and would likely affect the pace of innovation (due to the impact on both supply and demand). Again, without justifying the conduct, what was Uber’s motivation to develop its code to defeat detection by regulators?

I can well imagine other potential issues with a regulatory regime that requires a license to code. Not only would commercial innovation suffer, but the effects on the Open Source community could be disastrous if the licensing regime were expensive.

Doing “something” is easy. Doing something effective is a bit harder. I’m all aboard for punishing the guilty (each and every one), but we should move carefully when considering actions that might be more difficult to undo.

Go-to People Considered Harmful

Neck of Codd bottle

Okay, so the title’s a little derivative, but it’s accurate and it fits in with the “organizations as systems” theme of recent posts. Just as dependency management is important for software systems, it’s just as critical for social systems. Failures anywhere along the chain of execution can bring the whole system to a halt if resilience isn’t considered in the design (and evolution) of the system.

Dependency issues in social systems can take a variety of forms. One that comes easily to mind is what is referred to as the “bus factor” – how badly the team is affected if a person is lost (e.g. hit by a bus). Roy Osherove’s post from today, “A Critical Chain of Bus Factors”, expands on this. Interlocking chains of dependencies can multiply the bus factor:

A chain of bus factors happens when you have bus factors depending on bus factors:

Your one developer who knows how to configure the pipeline can’t test the changes because the agent is down. The one guy in IT who has access to the agent needs to reboot it, but does not have access. The one person who has access to reboot it (in the Infra team) is sick, so now there are three people waiting, and there is nothing in this earth that can help that situation.

The “bus factor”, either individually or as a cascading chain, is only part of the problem, however. A column on CIO.com, “The hazards of go-to people”, identifies the potential negative impacts on the go-to person:

They may:

  • Resent that they shoulder so much of the burden for the entire group.
  • Feel underpaid.
  • Burn out from the stress of being on the never-ending-crisis treadmill.
  • Feel trapped and unable to progress in their careers since they are so important in the role that they are in.
  • Become arrogant and condescending to their peers, drunk with the glory of being important.

The same column also lists potential problems for those who are not the go-to person:

When they realize that they are not one of the go-to people they might:

  • Feel underappreciated and untrusted.
  • Lose the desire to work hard since they don’t feel that their work will be recognized or rewarded.
  • Miss out on the opportunities to work on exciting or important things, since they are not considered dedicated and capable.

A particularly nasty effect of relying on go-to people is that it’s self-reinforcing if not recognized and actively worked against. People get used to relying on the specialist (which is, admittedly, very effective right up until the bus arrives) and neglect learning to do for themselves. Osherove suggests several methods to mitigate these problems: pairing, teaching, rotating positions, etc. The key idea is spreading the knowledge around.
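Since the same dynamic plays out in software teams, it helps to make knowledge concentration visible rather than waiting for the bus to arrive. Below is a minimal sketch in Python, assuming a hypothetical mapping of components to the people who can cover them (the names and components are invented for illustration; a real inventory might come from a skills matrix, an on-call roster, or version-control history). It treats each component’s bus factor as the number of people able to cover it and flags anything with a bus factor of one, because a chain of dependencies is only as resilient as its weakest link.

    # Minimal sketch: flag knowledge bottlenecks in a chain of dependencies.
    # The mapping is hypothetical; a real one might be derived from a skills
    # matrix, an on-call roster, or version-control history.
    knowledge = {
        "build pipeline": {"Dana"},
        "build agent access": {"Lee"},
        "infra reboot rights": {"Pat"},
        "application code": {"Dana", "Sam", "Alex"},
    }

    def bus_factor(people):
        """Bus factor of a component = how many people can cover it."""
        return len(people)

    # Walk the chain of dependencies and flag single points of failure.
    chain = ["application code", "build pipeline",
             "build agent access", "infra reboot rights"]

    for step in chain:
        factor = bus_factor(knowledge[step])
        flag = "  <-- single point of failure" if factor == 1 else ""
        print(f"{step}: bus factor {factor}{flag}")

    # The chain as a whole is only as resilient as its weakest link.
    print("Chain bus factor:", min(bus_factor(knowledge[s]) for s in chain))

Anything flagged with a bus factor of one is a candidate for the pairing, teaching, and rotation that Osherove recommends.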

Having individuals with deep knowledge can be a good thing if they’re a reservoir supplying others and not a pipeline constraining the flow. Intentional management of dependencies is just as important in social systems as in software systems.

I fought the law (of unintended consequences) and the law won

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, a lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses to issues that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on the Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing, but obsolete, field in the database. Using a field labeled “Fax” for an entirely different purpose is far from “best practice”, but following the rules would lead to being seen as responsible for delaying a release. This is an example of a moral hazard, of the sort Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, potential for abuse abounds. The risk is particularly acute when the person taking shortcuts can claim a “moral” rationale for doing so (such as “getting it done” for the customer).

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence because it’s so easy to conflate argument over means with disagreement regarding the ends. If you’re not in favor of being strip-searched on arrival and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will likely resonate for many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of methods, then that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, with the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their incidence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or even just clumsy design like the change management process described above is irrelevant. In all of those cases, the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.

Form Follows Function on SPaMCast 403

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 403, features Tom’s essay on Agile practices at scale, Kim Pries on transformations, and a Form Follows Function installment based on my post “NPM, Tay, and the Need for Design”.

Although the specific controversies have died down since we recorded the segment, the issues remain. Designs that fail to account for foreseeable issues are born vulnerable. Fixing problems after they “emerge” can be much more expensive than a little defensive design.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

When Will We Learn?

Plato's Academy mosaic from Pompeii

We’ve all heard the sayings about history repeating. Did we pay attention? Did we actually hear what was said, or were we just in the room when it was mentioned? Did we learn anything?

Greger Wikstrand and I have been trading posts on innovation for more than seven months. His last post, “Black hat innovation”, touched on the dark side of innovation:

I think the following are good examples of black hat innovation in the digital space: credit card fraud, ransomware and identity theft. There are many other black hat innovations that does not rely on tech such as chain letters, counterfeit money and even weighted dice.

Greger noted, “Sometimes, it might seem as if black hat innovation outpaces white hat innovation.” Certainly, a black hat innovator faces fewer barriers to innovation. Following rules is less of a consideration when breaking rules. We also have to be aware of the advantage we give them when we fail to learn from the past. None of the abuse cases that Greger mentioned are black swans. They’re merely new ways to commit old crimes. Waiting to react to a foreseeable issue means we start from behind because we failed to learn.

Failure to learn can manifest in a variety of ways. In his post, “Enterprise challenges”, Peter Murchland noted:

All enterprises face challenges (of varying magnitude and complexity). These challenges are either problems to be solved or opportunities to be pursued. One way of considering these challenges is through the following three lenses – those arising due to:

  • external change
  • internal change
  • growth and development

Peter asserts that the health of an enterprise depends on integrating coherent responses to these challenges into its architecture. This is a position shared by Aaron Dignan in “How To Eliminate Organizational Debt”:

Organizational Debt: The interest companies pay when their structure and policies stay fixed and/or accumulate as the world changes.

Let’s unpack that. As time passes, companies create roles, structures, rules, policies, and other norms that become fixed, and often, difficult to change. This is by design. For example, a company’s travel budget may balloon one year, only to be restricted by a travel policy the next — a well intentioned control designed to reduce expense. If that policy starts costing more than it’s saving (e.g. by reducing commercial success due to a lack of face time, frustrating top talent, etc.), it becomes an unacknowledged debt. The “interest” comes in the form of reduced speed, capacity, engagement, flexibility, and innovation that ultimately undermine the macro objectives of the firm: to survive, thrive, and achieve its purpose.

In other words, failure to learn leads to failure to adapt to a changing context, internal and/or external. This mismatch between the enterprise and its context represents a destructive friction. Since an enterprise is a social system and the components of a social system are people, the effects of this friction erode morale. Disengaged, demotivated employees will hinder an organization’s ability to deliver effectively. The only question is how much of a hindrance it will be.

In my post “Learning to Deal with the Inevitable”, I talked about the value of a learning culture for an organization. Effective execution requires effective decision-making, which requires learning. If we haven’t learned from the past, then there’s no rational basis for our future actions. We’re winging it solely on the basis of hope.

An organization is unlikely to develop a learning culture by accident. Just as a tornado hitting an auto parts store is unlikely to spit out a sports car, it’s unlikely that a system of sensing and adjusting to an ever-changing environment will emerge without intentional action. The social system that is the enterprise has a design and requires design, much the way an automated system has a design and requires design. Part of that design should incorporate learning.

Different organizations will have different needs and different styles of learning. Getting groups of people together to play with technology (as described by Matt Ballantine in his post “The Art of Play”) may not work for every organization. However, it is shortsighted to make no provision for learning whatsoever. Even the British Army in World War I, a conservative institution with an aristocratic officer corps in a conflict that can be described as 19th century warfare with 20th century weapons, “…developed a number of different methods to disseminate knowledge, catering for a variety of different circumstances and needs”. With a hundred years’ worth of extra experience and technology, it would be hard to justify taking a less rigorous approach.