Form Follows Function on SPaMCast 373

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 373, features Tom’s essay on #NotImplementedNoValue and a Form Follows Function installment on simplistic mental models.

Tom and I discuss my post “All models may be wrong, but it’s not a contest to see how wrong you can be”, talking about cognitive biases and how overly simplistic mental models fail us.

You can find all of my SPaMCast episodes under the SPAMCast Appearances category on this blog. Enjoy!


The Seductive Myth of Greenfield Development

Greger Wikstrand’s tweet from earlier this week packed a wealth of inspiration into one image:

The second statement particularly resonated with me: “The present is built on the past.”

How often do we, or those around us, long for a chance to do things “from scratch”? The idea is that, without the constraints of “legacy” code, we could do things “right”. It’s a nice idea, but it has no basis in reality.

Rewrites, of course, will involve dealing with existing data. I’ve yet to encounter a system where no one was interested in the data when the system was replaced (I’ve shut down a few where there was no interest, but that’s a different story). The need for that existing data, and for its existing structure, will be a potent influence on what can and cannot be done with the replacement system. It’s not reasonable to assume that the data will be any less “legacy” than the code.
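
To make that influence concrete, here’s a minimal sketch in Python. The table name, column names, and status codes below are invented for illustration, not taken from any real system; the point is simply that even a “clean” new model still has to understand the shapes and codes it inherits.

# Hypothetical example: even a "from scratch" rewrite has to honor the
# shape of the data it inherits. The names and codes below are invented.
from dataclasses import dataclass

# Status codes as they exist in the (invented) legacy CUSTOMER table.
LEGACY_STATUS = {"A": "active", "S": "suspended", "X": "closed"}

@dataclass
class Customer:
    """The 'clean' new model -- still shaped by what the old data can supply."""
    first_name: str
    last_name: str
    status: str

def from_legacy_row(row: dict) -> Customer:
    # The old table stores "LAST, FIRST" in a single column, so the new
    # system inherits the job of splitting it, ambiguity and all.
    last, _, first = row["CUST_NAME"].partition(",")
    return Customer(
        first_name=first.strip(),
        last_name=last.strip(),
        status=LEGACY_STATUS.get(row["CUST_STAT"], "unknown"),
    )

# e.g. from_legacy_row({"CUST_NAME": "DOE, JANE", "CUST_STAT": "A"})
# yields Customer(first_name='JANE', last_name='DOE', status='active')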

We might be tempted to believe that brand new systems escape this pitfall. In doing so, we fail to consider that new systems still must deal with the wants, needs, and attitudes of their stakeholders. People, processes, and organization form the ecosystem that new systems must fit into as surely as replacement systems must.

A crucial part of problem solving is having an adequate understanding of the problem. Everything has a backstory. Understanding the backstory is dependent on understanding the ecosystem the thing fits into. This is what Sullivan was talking about when he said “…form ever follows function”.

Nothing’s Ex Nihilo.

If You Had a Choice, Would You Buy Your Brand of IT?

Acme Brand Anvil

People of a certain age might remember the Road Runner cartoons from their childhood. In each episode, Wile E. Coyote suffered numerous accidents attempting to snare the bird using products from Acme, Inc. Aside from the opportunities for a product liability lawsuit, I always wondered why he didn’t just quit buying from them.

Sometimes I wonder the same thing about IT organizations and the business units they serve. How many business units, given a choice, would choose their own in-house IT as their provider?

Recent research by Cisco (as reported on CIO.com) suggests that quite a few would not:

Consulting with CIOs and analyzing network traffic in a set of large enterprises in a variety of industries, Cisco determined that the typical firm has on the order of 15 to 22 times more cloud applications running in the workplace than have been authorized by the IT department.

And by Cisco’s tally, there is quite a bit that CIOs aren’t seeing. On average, CIOs surveyed estimated that there were 51 cloud services running within their organization. According to Cisco’s analysis, the actual number is 730.

And it cuts across sectors. Even in highly regulated industries such as healthcare and financial services, Cisco found between 17 and 20 times more cloud applications running than the IT department estimated.

What’s worse, many recommend heavy-handed tactics on the part of IT to deal with their “unruly” customers:

Now, note the potential benefits, “…can improve productivity and collaboration with little or no financial cost to the company…” versus the potential downside, “…corporate data may be put at risk or even lost if the employee leaves the company”. Given that both are valid, it would certainly make sense to evaluate the risks involved and whether they can be mitigated. Instead, a draconian knee-jerk reaction is recommended.

The advice to tighten controls on users and consider replicating applications in-house where necessary is laughable. IT traditionally has a customer service problem to begin with; getting stricter probably won’t help. You would also think someone would have noticed that offering to replicate products in-house seems like an empty promise when the reason for going outside is that IT isn’t able to provide what’s needed. It seems like piling on to mention that replicating commercially available services probably won’t make a CIO seem very business-savvy.

Building trust in the process and trust in the product would be a good start to making the customer a partner rather than an antagonist. Or you can rely on their being forced to use you. Monopolies can be a sweet deal…while they last.

Would you buy what you’re selling?

All models may be wrong, but it’s not a contest to see how wrong you can be

HO Scale locomotive beside a pencil

The one thing you can be sure of is that nothing is dependent on only one thing.

Michael Feathers’ tweet last week brought this to mind:

Too often we construct simplistic mental models that fail to account for outcomes that are possible, but inconvenient for us in some way. As Aneel noted while discussing OODA loops in his post “All Models Are Wrong Some Are Useful [In Some Context]”:

OODA is just a vehicle for the larger issue of models, biases, and model-based blindness — Taleb’s Procrustean Bed. Where we chop off the disconfirmatory evidence that suggests our models are wrong AND manipulate [or manufacture] confirmatory evidence.

Because if we allowed the wrongness to be true, or if we allowed ourselves to see that differentness works, we’d want/have to change. That hurts.

Furthermore:

Our attachment [and self-identification] to particular models and ideas about how things are in the face of evidence to the contrary — even about how we ourselves are — is the source of avoidable disasters like the derivatives driven financial crisis. Black Swans.

I wonder how many black swans are only “unpredictable” because we blind ourselves to possibilities.

It’s possible that we cannot eradicate self-deception. “Rats Can Be Smarter Than People” speaks to this via study results that found rats outperforming humans in one of a pair of learning tasks:

The first task involved rules. The second focused on information integration. Humans learn in both ways. Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.

In other words, we have a model about learning (a meta-model?) that works well in many situations. It works so well that we resist looking for contexts where it’s not the appropriate model. While this shows that a propensity for self-deception is natural, the fact that “self” is in there suggests that we have some control. Having some control obligates us to exercise what control we have and to work to gain more.

Why?

A critical argument for the practice of software architecture is that the design of the solution must cohere with the problem space in order to be effective. In order to deliver decisions that achieve this goal, we need to be able to make sense of the problem space. Systems thinking, described by Tom Cagley as “…an approach to problem solving that emphasizes viewing problems as the output of the whole process, including the environment the system operates within”, is a technique to do so. The more we force ourselves to be aware of our biases and work to counteract them, the better our decisions will be.

Simple is good, but not when it’s too good to be true.

Changing Organizations Without Changing People

The Thin Red Line at Balaclava

Prof Bo Molander once pointed out to me and the other students in the class that when you try to change people, you go up against billions of years of evolution, “good luck with that”, and when you try to change groups, you go up against millions of years of evolution, “good luck with that too”. The only thing you can hope to change is the organization.

Greger Wikstrand and I have been carrying on a discussion about architecture, innovation, and organizations as systems. Here’s the background so far:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and to avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation: organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time when innovation yielded uneven results because effective innovation requires technology, an understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued the theme, exploring how the social and technological aspects of innovation go hand in hand.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing in the same vein, Greger pointed out that resistance to change is futile (though probably inevitable). This post contained the wonderful quote above.

What an intriguing statement: you can’t change the behavior of individual people; you can’t change the behavior of groups; you have to change the behavior of the organization. What?

The rest of the paragraph sheds some light:

It is the same with my sheep, I do not try to change them as individuals or as a flock but by managing their access to shelter, food and water and by managing onboarding and offboarding of individual sheep in the flock I do manage the whole organization according to my goals.

Rather than changing the nature of sheep, individually or as a group, Greger uses his knowledge of their nature to structure things so that compliance is the natural outcome. Changing their nature, assuming it’s even possible, would take millions of years. Working with the grain of their nature is considerably easier. Military organizations have recognized this since ancient times, using individual and group characteristics to promote unit cohesion.

In the post “Locking Down the Prisoners: Control, Conflict and Compliance for Organizations”, I noted something similar. You get a lot more compliance when you make it easier to comply. Conversely, making it difficult for someone to do their job well is an excellent way to kill both motivation and effectiveness. I’ve used the quote from Tom Graves before, but it bears repeating: “…things work better when they work together, on purpose”.

Matt Ballantine, in his post “Best Practice versus Good Ideas”, showed how an organization promoted innovation. Rather than imposing “best practices”, which depending on context might not actually be “best”, the company promoted learning and sharing. Because these behaviors were rewarded, people engaged in them and innovation was fostered. Both the organization and the people that made it up benefited.

Congruence between what is said and what is done is critical. I’ve seen it said that changing culture is hard. Changing culture is impossible if you claim to value one thing but your actions demonstrate that you really don’t.