Form Follows Function on SPaMCast 442

SPaMCAST logo

A new month brings a new appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 442, features Tom’s excellent essay on capability teams (highly recommended!), followed by a Form Follows Function installment based on my post “Systems of Social Systems and the Software Systems They Create”. Kim Pries bats cleanup with a Software Sensei column, “Software Quality and the Art of Skateboard Maintenance”.

In this episode, Tom and I continue our discussion of the organizations-as-systems concept and how systems must fit into their context and ecosystem. In my previous posts on the subject, I took more of a top-down approach. With this post, I flipped things around to a bottom-up view. Understanding how the social and software systems interact (including the social system involved in creating/maintaining the software system) is critical to avoid throwing sand in the gears.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Form Follows Function on SPaMCast 438

SPaMCAST logo

Once again, I’m making an appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 438, features Tom’s essay on using sizing for software testing, Kim Pries with a Software Sensei column (canned solutions), and a Form Follows Function installment based on my post “Organizations as Systems and Innovation”.

In this episode, Tom and I discuss how systems must fit into their context and ecosystem; otherwise, it can be like dropping a high-performance sports car engine into a VW Beetle. Disney-physics may work in the movies, but it’s unlikely to be successful in the real world. If all the parts don’t fit together, friction ensues.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Square Pegs, Round Holes, and Silver Bullets

Werewolf

People like easy answers.

Why spend time analyzing and evaluating when you can just take some thing or some technique that someone else has already put to use and be done with it?

Why indeed?

I mean, “me too” is a valid strategy, right?

And we don’t want people to get off message, right?

And we can always find a low cost, minimal disruption way of dealing with issues, right?

I mean, after all, we’ve got data and algorithms, and stuff.

The thing is, actions need to make sense in context. Striking a match is probably a good idea in the dark, but it’s probably less so in daylight. In the presence of gasoline fumes, it’s a bad idea regardless of ambient light.

A recent post on Medium, “Design Sprints Are Snake Oil”, is a good example. Erika Hall’s title was a bit click-baitish, but as she responded to one commenter:

The point is that the original snake oil was legitimate and effective. It ended up with a bad reputation from copycats who over-promised results under the same name while missing the essential ingredients.

Sprints are legitimate and effective. And now there is a lot of follow-up hype treating them as a panacea and a replacement for other types of work.

Good things (techniques, technologies, strategies, etc.) are “good”, not because they are innately right, but because they fit the context of the situation at hand. Those that don’t fit cease being “good” for that very reason. Form absent function is just a facade. Whether it’s business strategy, management technique, innovation efforts, or process, there is no recipe. The hard work to match the action with the context has to be done.

Imitation might be the sincerest form of flattery, but it’s a really poor substitute for strategy.

Fear of Failure, Fear and Failure

Capricho 43, Goya's 'The Sleep of Reason Produces Monsters'

Some things seem so logically inconsistent that you just have to check them out.

Such was the title of a post on LinkedIn that I saw the other day: “Innovation In Fear-Based Cultures? Or, why hire lions to be dogs?”. In it, Michael Graber noted that “…top-down organizations have the most trouble innovating”:

In particular, the fearful mindsets that review, align, and sign off on “decks” to be presented to Vice President-level colleagues often edit out the insights and recommendations that have the power to grow the business in new ways.

These well-trained, obedient keepers of the status quo are rewarded for not taking risks and for not thinking outside of the existing paradigm of the business.

None of this is particularly shocking; a culture of fear is pretty much the antithesis of a learning culture, and innovation in the absence of a learning culture is a bit like snow in the desert – not impossible, but certainly remarkable.

Learning involves risk. Whether the method is “move fast and break things” or something more deliberate and considered (such as that outlined in Greger Wikstrand‘s post “Jobs to be done innovation”), there is a risk of failure. Where there is a culture of fear, people will avoid all failure. Even limited risk failure in the context of an acknowledged experiment will be avoided because people won’t trust in the powers that be not to punish the failure. In avoiding this type of failure, learning that leads to innovation is avoided as well. You can still learn from what others have done (or failed to do), but even then there’s the problem of finding someone foolhardy enough to propose an action that’s out of the norm for the organization.

Why would an organization foster this kind of culture?

Seth Godin’s post, “What bureaucracy can’t do for you”, holds the key:

It lets us off the hook in many ways. It creates systems and momentum and eliminates many decisions for its members.

“I’m just doing my job.”

“That’s the way the system works.”

Decisions involve risk; someone could make the wrong one. For that reason, the number of people making decisions should be minimized (not a position I endorse, mind you).

That’s the irony of top-down, bureaucratic organizations – often the culture is by design, intended to eliminate risk. By succeeding in doing so at the mundane level, the organization actually introduces an existential risk: the risk of stagnation. The law of unintended consequences has a very long arm.

This type of culture also introduces perverse incentives that further threaten the organization’s long-term health. Creativity is a huge risk: you could be wrong. Even if you’re right, you’ve become noticeable. Visibility becomes synonymous with risk. Likewise, responsibility means appearing on the radar. This not only discourages positive actions, but can easily be a corrupting influence.

Fear isn’t the only thing we have to fear, but sometimes it’s something we really need to be concerned about.


This post is another installment of an ongoing conversation about innovation with Greger Wikstrand.

Emergence: Babies and Bathwater, Plans and Planning

blueprints


“Emergent” is a word that I run into from time to time. When I do run into it, I’m reminded of an exchange from the movie Gallipoli:

Archy Hamilton: I’ll see you when I see you.
Frank Dunne: Yeah. Not if I see you first.

The reason for my ambivalent relationship with the word is that it’s frequently used in a sense that doesn’t actually fit its definition. Dictionary.com defines it like this:

adjective

1. coming into view or notice; issuing.
2. emerging; rising from a liquid or other surrounding medium.
3. coming into existence, especially with political independence: the emergent nations of Africa.
4. arising casually or unexpectedly.
5. calling for immediate action; urgent.
6. Evolution. displaying emergence.

noun

7. Ecology. an aquatic plant having its stem, leaves, etc., extending above the surface of the water.

Most of the adjective definitions apply to planning and design (which I consider to be a specialized form of planning). Number 3 is somewhat tenuous for that sense, and 5 only applies sometimes, but 6 is dead on.

My problem, however, starts when it’s used as a euphemism for a directionless approach. The idea that a cohesive, coherent result will “emerge” from responding tactically (whether in software development or in managing a business) is, in my opinion, a dangerous one. I’ve never heard an explanation of how strategic success emerges from uncoordinated tactical excellence that doesn’t eventually come down to faith. It’s why I started tagging posts on the subject “Intentional vs Accidental Architecture”. Success that arises from lack of coordination is accidental rather than by design (not to mention ironic when the lack of intentional coordination or planning/design is itself intentional):

If you don’t know where you are going, any road will get you there.


The problem, of course, is whether you want to be at the “there” you wind up at. There’s also the issue of cost associated with a meandering path when a more direct route was available.

None of this, however, should be taken as a rejection of emergence. In fact, a dogmatic attachment to a plan in the face of emergent facts is as problematic as pursuing an accidental approach. Placing your faith in a plan that has been invalidated by circumstances is as blinkered an approach as refusing to plan at all. Neither extreme makes much sense.

We lack the ability to foresee everything that can occur, but that limit does not mean that we should ignore what we can foresee. A purely tactical focus can lead us down obvious blind alleys that will be more costly to back out of in the long run. Experience is an excellent teacher, but the tuition is expensive. In other words, learning from our mistakes is good, but learning from others’ mistakes is better.

Darwinian evolution can lead to some amazing things, provided you can spare millions of years and lots of failed attempts. An intentional approach allows you to tip the scales in your favor.


Many thanks to Andrew Campbell and Adrian Campbell for the multi-day Twitter conversation that spawned this post. Normally, I unplug from almost all social media on the weekends, but I enjoyed the discussion so much I bent the rules. Cheers, gentlemen!

Go-to People Considered Harmful

Neck of Codd bottle

Okay, so the title’s a little derivative, but it’s both accurate and in keeping with the “organizations as systems” theme of recent posts. Just as dependency management is important for software systems, it’s just as critical for social systems. Failures anywhere along the chain of execution can potentially bring the whole system to a halt if resilience isn’t considered in the design (and evolution) of the system.

Dependency issues in social systems can take a variety of forms. One that comes easily to mind is what is referred to as the “bus factor” – how badly the team is affected if a person is lost (e.g. hit by a bus). Roy Osherove’s post from today, “A Critical Chain of Bus Factors”, expands on this. Interlocking chains of dependencies can multiply the bus factor:

A chain of bus factors happens when you have bus factors depending on bus factors:

Your one developer who knows how to configure the pipeline can’t test the changes because the agent is down. The one guy in IT who has access to the agent needs to reboot it, but does not have access. The one person who has access to reboot it (in the Infra team) is sick, so now there are three people waiting, and there is nothing in this earth that can help that situation.
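
To get a feel for how quickly chained bus factors compound, here’s a minimal back-of-the-envelope model (my illustration, not something from Osherove’s post). Assume each link in the chain depends on a single person who is independently unavailable with some probability; the chain stalls if any one of them is out:

    # Minimal model of a chain of single-person dependencies. Assumes each
    # "go-to" person is independently unavailable (sick, on vacation, hit
    # by the proverbial bus) with probability p; the chain stalls if ANY
    # link is unavailable.
    def chain_stall_probability(p: float, links: int) -> float:
        """Probability that at least one of `links` single-person
        dependencies is unavailable: 1 - (1 - p) ** links."""
        return 1 - (1 - p) ** links

    # Even a modest 5% per-person unavailability adds up quickly:
    for links in (1, 3, 5, 10):
        print(links, round(chain_stall_probability(0.05, links), 3))
    # -> 1 0.05, 3 0.143, 5 0.226, 10 0.401

Three single points of failure (Osherove’s pipeline scenario) nearly triple the odds that work stops, and ten of them push the odds past 40%.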

The “bus factor”, either individually or as a cascading chain, is only part of the problem, however. A column on CIO.com, “The hazards of go-to people”, identifies the potential negative impacts on the go-to person:

They may:

  • Resent that they shoulder so much of the burden for the entire group.
  • Feel underpaid.
  • Burn out from the stress of being on the never-ending-crisis treadmill.
  • Feel trapped and unable to progress in their careers since they are so important in the role that they are in.
  • Become arrogant and condescending to their peers, drunk with the glory of being important.

The same column also lists potential problems for those who are not the go-to person:

When they realize that they are not one of the go-to people they might:

  • Feel underappreciated and untrusted.
  • Lose the desire to work hard since they don’t feel that their work will be recognized or rewarded.
  • Miss out on the opportunities to work on exciting or important things, since they are not considered dedicated and capable.

A particularly nasty effect of relying on go-to people is that it’s self-reinforcing if not recognized and actively worked against. People get used to relying on the specialist (which is, admittedly, very effective right up until the bus arrives) and neglect learning to do for themselves. Osherove suggests several methods to mitigate these problems: pairing, teaching, rotating positions, etc. The key idea is spreading the knowledge around.
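
One way to act on “spreading the knowledge around” is to make the single points of failure visible first. A skills matrix is a simple device for that; the sketch below is hypothetical (the tasks and names are invented, not drawn from either article):

    # Hypothetical skills matrix: map each critical task to the people who
    # can perform it, then flag any task with a bus factor of one.
    coverage = {
        "configure the build pipeline": {"Alice"},
        "reboot the build agent":       {"Bob"},
        "deploy to production":         {"Alice", "Carol"},
        "restore the database":         {"Carol", "Dave"},
    }

    for task, people in coverage.items():
        if len(people) < 2:
            # A single name here is a candidate for pairing, teaching,
            # or rotation.
            print(f"Bus factor 1: '{task}' depends solely on {next(iter(people))}")

Driving every row to at least two names is what turns a go-to person from a pipeline into the reservoir described below.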

Having individuals with deep knowledge can be a good thing if they’re a reservoir supplying others and not a pipeline constraining the flow. Intentional management of dependencies is just as important in social systems as in software systems.

Systems of Social Systems and the Software Systems They Create

I’ve mentioned before that the idea of looking at organizations as systems is one that I’ve been focusing on for quite a while now. From a top-down perspective, this makes sense – an organization is a system that works better when its component parts (both machine and human) intentionally work together.

It also works from the bottom up. For example, from a purely technical perspective, we have a system:

Generic System

However, without considering those who use the system, we have a limited picture of the context the system operates within. The better we understand that context, the better we can shape the system to fit it; otherwise we risk a square-peg-in-a-round-hole situation:

Generic System with Users

Of course, the users who own the system are also only a part of the context. We have to consider the customers as well:

Generic System with Users and customers

Likewise, we need to consider that the customers of some systems can be internal to the organization while others are external. Some of the “customers” may not even be human. For that matter, sometimes the customer’s interface might be a human (user) rather than software. Things get complicated when we begin adding in the social systems:

Generic EITA with Users and customers

The situation is even more complicated than what’s seen above. We need to account for the team developing and operating the automated system:

Generic System with Users, customers, and IT team

And if that team is not a unified whole, then the picture gets a whole lot more interesting:

Generic System with Users, customers, and IT teams

Zoomed out to the enterprise level, that’s a lot of social systems. When multiplied by the number of automated systems involved, the number easily becomes staggering. What’s even more sobering is reflecting on whether those interactions have been intentionally structured or have grown organically over time. The interrelationship of social and software systems is under-appreciated; a series of tweets from Gregory Brown last week made the same case.

A number of questions come to mind:

  • Is anyone aware of all the systems (social and software) in play?
  • Is anyone aware of all the interactions between these systems?
  • Are the relationships and interactions a result of intentional design or have they “just happened”?
  • Are you comfortable with the answers to the first three questions above?
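
To put some rough numbers behind “staggering” (my arithmetic, not from the original post): if each pair of systems, social or software, can potentially interact, the number of relationships to understand grows quadratically with the number of systems:

    # Worst case: every pair of n systems (social or software) can
    # interact, giving n * (n - 1) / 2 potential relationships to map.
    def potential_interactions(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 20, 100):
        print(n, potential_interactions(n))
    # -> 5 10, 20 190, 100 4950

Even if only a fraction of those pairs actually interact, knowing which ones do (and whether by design or by accident) is exactly what the questions above are probing.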