I fought the law (of unintended consequences) and the law won

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, a lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses to issues that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on the Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing but obsolete field in the database. Using a field labeled “Fax” for an entirely different purpose is far from “best practice”, but following the rules would mean being seen as responsible for delaying a release. This is an example of a moral hazard, such as Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, the potential for abuse abounds. Abuse becomes particularly likely when the person taking shortcuts can claim a “moral” rationale for doing so (such as “getting it done” for the customer).
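
To make the shortcut concrete, here’s a minimal, hypothetical sketch of that kind of field “reuse”; the names below are invented for illustration and aren’t taken from the Daily WTF article:

```python
# Hypothetical sketch only: the entity and column names are invented,
# not taken from the Daily WTF article.
from dataclasses import dataclass

@dataclass
class Customer:
    id: int
    name: str
    fax: str = ""  # obsolete contact field, no longer populated

def set_loyalty_tier_shortcut(customer: Customer, tier: str) -> None:
    """The shortcut: smuggle a new attribute into the unused 'fax' field to
    dodge the database change process. The schema no longer says what the
    data means, and every future reader has to be let in on the secret."""
    customer.fax = tier

def add_loyalty_tier_properly(columns: list[str]) -> list[str]:
    """What the change process would have required: an explicit, reviewed
    column named for what it actually holds."""
    return columns + ["loyalty_tier VARCHAR(20) NULL"]
```

The shortcut ships on time; the cost of the now-misleading schema is paid later, by someone else.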

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence, because it’s so easy to conflate argument over means with disagreement regarding the ends. If you’re not in favor of being strip-searched on arrival and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will likely resonate with many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of the methods, that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, in the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their occurrence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or even just clumsy design like the change management process described above is irrelevant. In all of those cases the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.

What Customer-Centric Looks Like

My last post, “Defense Against the Dark Art of Disruption”, went into some detail about notable failures in customer-experience for 2016. This week, however, I ran across a counter-example (h/t to Tim Worstall) showing that a little social media awareness and a customer-centric culture can make magic:

A baby products company is launching a special run of ‘little blue cups’ for a 13-year-old boy with autism following a global appeal by his father.

Ben Carter, from Devon, will only drink from a blue Tommee Tippee cup, prompting father Marc to put out an appeal on social media after becoming concerned the cup was wearing out.

Ben would refuse drinks that were not in the cup and had been to hospital with severe dehydration.
His father, tweeting as @GrumpyCarer, prompted people across the world to look through their cupboards for identical cups or to spread the #cupforben message. His request was retweeted more than 12,000 times.

Tommee Tippee, based in Northumberland, said it was nearly 20 years since it had manufactured that product, but has now rediscovered the design and found the mould used to make the two-handled originals, stored in a usable condition in China.

It has said it will make a run of 500 cups to ensure ‘that Ben has a lifetime supply and that his family won’t ever have to worry about finding another cup’.


While I don’t know what it cost them to find the molds and run a one-off batch of cups, I suspect that the value of the positive global media coverage should substantially offset it. As a father, I know that the gesture was priceless.

Win-win.

Defense Against the Dark Art of Disruption

Woman with Crystal Ball

My first post for 2016 was titled “Is 2016 the Year for Customer-Focused IT?”. The closing line was “If 2016 isn’t the year for customer-focused IT, I wonder just what kind of year it will be for IT?”.

I am so sorry for jinxing so many things for so many people.🙂

So far, the year has brought us great moments in customer experience like:

  • Google Mic Drop – an automated kiss-off for email (“you meant to hit that button, right?”)
  • Google/Nest and the Revolv home automation hub – retiring a product by bricking it (“it’s just not working; it’s not you, it’s us”)
  • Apple Music – cloud access to your music and freed-up disk space (“nice little music collection you have here, it’d be a shame if you quit paying for access to it”)
  • Evernote’s downsizing – because when the free plan is good enough for too many people, taking away features is the way to get them to pay, right?

Apple, of course, probably won the prize with their “courageous” iPhone 7 rollout:

Using “courage” in such a way was basically a lethal combination of a giant middle finger mixed with a swift kick in the nuts, all wrapped in a seemingly tone-deaf soundbite. This is the kind of stuff critics dream about.

Because Schiller said exactly what he said, he left the company open to not only mockery, but also bolstered a common line of criticism that often gets leveled upon Apple: that they think they know best, and everyone else can hit the road. You can argue that this is a good mentality to have in some cases — the whole “faster horse” thing — but it’s not a savvy move for a company to say this so directly in such a manner.

Apple then continued its tradition of “courage” with the new MacBook Pro models.

So, is there a point to all this?

Beyond the obvious, “it’s my site and I’ll snark if I want to”, there’s a very important point. Matt Ballantine captured it perfectly in his post, “Ripe for Disruption”: “You’re less likely to be disrupted if you are in sync with your customers’ view of your value proposition.” His definitive example:

I think that most of the classic cases of organisational extinction through disruption can be framed in this way: Kodak thought their value was in film and cameras. Their customers wanted to capture memories. Kodak missed digital (even though they kind of invented it).

The quote bears repeating with emphasis: “You’re less likely to be disrupted if you are in sync with your customers’ view of your value proposition.” What you think your value proposition is means a whole lot less than your customer’s perception of the value of what you’re delivering. This is a really good way to poison that perception:

Disappointment and betrayal (and perception is reality here) are not conducive to a positive customer experience. Customer acquisition is important, but retention is far more important to gaining market share (h/t to Matt Collins). The key to retention is relating to your customers: understand what they need, then provide it. Having them pay for what’s in your best interest, rather than theirs (hello Kodak), is a much harder sell.

Capability Now, Capability Later

Mock tank, British Army in Italy, WWII

In my post “Strategic Tunnel Vision”, I touched on the concept of capability. I discussed how focusing on new capabilities can crowd out existing ones, and how detrimental that can be when those existing capabilities are still necessary. I also spoke to how choices about strategic capabilities can trickle down to affect tactical capabilities.

What I failed to do, however, was define what I meant by the term “capability”. That’s a pretty big oversight on my part because, in my opinion, understanding the concept is critical across all levels of architectural concern.

Tom Graves, in his “Definitions on capability”, defines the term (along with some related concepts):

— Capability: the ability to do something.

— Capability-based planning: planning to do something, based on capabilities that already exist, and/or that will be added to the existing suite of capabilities; also, identifying the capabilities that would be needed to implement and execute a plan.

— Capability increment: an extension to an existing capability; also, a plan to extend a capability.

— Capability map: a visual and/or textual description of (usually) an organisation’s capabilities.

Yes, I do know that those definitions are terribly bland and generic – and they need to be that way. That’s the whole point: they need to be generic enough to be valid and usable at every possible level and in every possible context – otherwise we’ll introduce yet more confusion to something that’s often way too confused already.

That last paragraph is critical. The concept of “capability” is a high-level one that is useful across multiple levels of architectural concern (i.e., application, solution, enterprise IT, and the enterprise itself). Quoting Tom again:

Note what else is intentionally not in that definition of ‘capability‘:

  • there’s no actual doing – it’s just an ability to do something, not the usage of that ability
  • there’s no ‘how’ – we don’t assume anything about how that capability works, or what it’s made up of
  • there’s no ‘why‘ – we don’t assume any particular purpose
  • there’s no ‘who‘ – we don’t assume anything about who’s responsible for this capability, or where it sits in an organisational hierarchy or suchlike

We do need all of those items, of course, as we start to flesh out the details of how the capabilities would be implemented and used in real-world practice. But in the core-definition itself, we very carefully don’t – they must not be included in the definition itself.

The reason why we have to be so careful and pedantic about this is because the relationship between service, capability, function and the rest is inherently recursive and fractal: each of them contains all of the others, which in turn each contain all of the others, and so on almost to infinity. If we don’t use deliberately-generic definitions for all of those items, we get ourselves into a tangle very quickly indeed – as can be seen all too easily in the endless definitional-battles about the relationships between ‘business-function’ versus ‘business-process’ versus ‘business-service’ versus ‘business-capability’ and so on.

In short, it’s a crucial building block in our designs and plans (which is redundant, since design is a form of planning). If we don’t have, and can’t get, the ability to do something, it’s game over. However, as Tom noted, we need to move beyond the raw ability in order to make effective use of capabilities. We need to think about timing and personnel (which will probably largely drive timing anyway). A capability later may well not be as valuable as the same capability right now.
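
To make that distinction concrete, here’s a minimal sketch (the Capability record, lead times, and example entries are all invented for illustration; they’re not part of Tom’s definitions) of a capability map that records not just the ability, but when it could actually be exercised:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    lead_time_days: int  # how long before the ability can actually be exercised

def available_by(capabilities: list[Capability], deadline_days: int) -> list[Capability]:
    """Filter a capability map down to what can actually be fielded in time.
    An ability with a year-long lead time is not the same asset as one that
    can be exercised today, even if both appear on the map."""
    return [c for c in capabilities if c.lead_time_days <= deadline_days]

catalog = [
    Capability("stand up a new analytics platform", lead_time_days=365),
    Capability("report from the existing data warehouse", lead_time_days=30),
]
print([c.name for c in available_by(catalog, deadline_days=90)])
# -> ['report from the existing data warehouse']
```

The point isn’t the code; it’s that “what’s available when” has to be part of the assessment, not an afterthought.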

This was brought to mind while skimming a book review on a military strategy site (emphasis added by me):

In March 2015, then-Chief of Staff of the U.S. Army General Raymond T. Odierno admitted to the British newspaper The Telegraph that the so-called special relationship between the United States and Great Britain isn’t what it used to be. “In the past we would have a British Army division working alongside an American army division,” he said, but he feared that in the future British battalions and brigades would have to operate “inside” American units. “What has changed,” Odierno declared, “is the level of capability.”

Later that week, I asked a senior British general about Odierno’s remarks. He replied, deadpan, that although Odierno’s candor was appreciated, his statement was factually incorrect. “We can still field a division,” the general insisted. “It is just a question of how long it takes us to field one.” Potential tanks, he seemed to think, were just as relevant as actual ones.

The highlighted portion of the quote illustrates my point. Having the capability to do something immediately and the capability to do that same thing at some point in the future are not equivalent (just to be fair to the British Army, the US Army was in this same position during Operation Desert Shield – the initial ground forces that could be deployed were extremely thin). Treating them as equivalent potentially risks disaster.

It should be noted, however, that level of concern will color the perception of the value of a future capability versus a current one. At the tactical level, in business as well as in war, “…first with the most…” is likely a winning move. At the strategic level, however, where resources must be budgeted across multiple initiatives, priorities should dictate which capabilities get preference. Tactical leaders may have to be satisfied with “on time with just enough”.

Regardless of level, a clear assessment of capabilities, what’s available when, is key to making effective decisions.

Pragmatic Application Architecture

I saw a tweet on Friday about a SlideShare deck that looked interesting, so I bookmarked it to read later. As I was reading it this morning, I found myself agreeing with the points being made. When I got to the next-to-last slide, I found myself (or at least, this blog) listed alongside some very distinguished company under “Reading Material”.

Thanks, Bart Blommaerts and nice job!

Strategic Tunnel Vision

Mouth of a Tunnel


Change and innovation are topics that have been prominent on this blog over the last year. In fact, Greger Wikstrand and I have traded a total of twenty-six posts (twenty-seven counting this one) on the subject.

Greger’s last post, “Successful digitization requires focus on the entire customer experience – not just a neat app” (it’s in Swedish, but it translates well to English), discussed the critical nature of customer experience to digital innovation. According to Greger, without taking customer experience into account:

One can make the world’s best app without getting more, more satisfied, or more profitable customers. It’s like trying to make a boring game more exciting by spraying gold paint on the playing pieces.

Change and innovation are not the same thing. Change is inevitable; innovation is not (with a h/t to Tom Cagley for that quote). As Greger pointed out in his latest article, to get improved customer experience, you need depth. Sprinkling digital fairy dust over something is not likely to result in innovation. New and different can be really great, but new and different solely for the sake of new and different doesn’t win the prize. Context is critical.

If you’ve read more than a couple of my posts, you’ve probably realized that among my rather varied interests, history is a major one. I lean heavily on military history in particular when discussing innovation. This post won’t break with that tradition.

The blog Defense in Depth, operated by the Defence Studies Department, King’s College London, has published two posts this week dealing with the Suez Crisis of 1956, primarily in terms of the Anglo-French forces. One deals with the land operations and the other with naval operations. They struck a chord because they both illustrated how an overreaction to change can have drastic consequences from the strategic level down to the tactical.

Buying into a fad can be extremely expensive.

The advent of the nuclear age at the end of World War II dramatically transformed military and political thought. The atomic bomb was the ultimate game-changer in that respect. In the time-honored tradition, the response was overreaction. “Atomic” was the “digital” of the late 40s into the 60s. They even developed a recoilless gun that could launch a 50-pound nuclear warhead 1.25 to 2.5 miles. “Move fast and break things” was serious business back in the day.

This extreme focus on what had changed, however, led to a rather common problem: tunnel vision. Nuclear capability became such an overarching consideration that other capabilities were neglected. Due to this neglect of more conventional capabilities, the UK’s forces were seriously hampered in their ability to perform their mission effectively. Misguided thinking at the strategic level affected operations all the way down to the lowest tactical formations.

It’s easy to imagine present-day IT scenarios that fall prey to the same issues. A cloud or digital initiative given top priority without regard to maintaining necessary capabilities could easily wind up failing in a costly manner and impairing the existing capability. It’s important to understand that time, money, and attention are finite resources. Adding capability requires increasing the resources available for it, either through adding new resources or freeing up existing ones by reducing the commitment to less important capabilities. If there is no real appreciation of what capabilities exist and what the relative value of each is, making this decision becomes a shot in the dark.

Situational awareness across all levels is required. To be effective, that awareness must integrate changes to the context while not losing sight of what already was. Otherwise, to use a metaphor from my high school football days, you risk acting like a “blind dog in a meat-packing plant”.

Monolithic Applications and Enterprise Gravel

Pebbles

It’s been almost a year since I’ve written anything about microservices, and while a lot has been said on that subject, it’s one I still monitor to see what’s new. The opening of a blog post that I read last week caught my attention:

Coined by Melvin Conway in 1968, Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” In software development terms, Conway’s Law suggests that a given team will build apps that mirror the team’s organizational structure. Siloed functional teams produce siloed application architectures.

The result is a monolith: A massive application whose functionality is crammed into a few crowded parts. Scaling a simple pattern to the enterprise level often results in a monolith.

None of this is wrong, per se, but in reading it, one could come to a wrong conclusion. Siloed functional teams (particularly where the culture of the organization encourages siloed business units) produce siloed application architectures that are most likely monoliths. From an enterprise IT architecture perspective, though, the result is not monolithic. Googling the definition of “monolithic”, we get this:

mon·o·lith·ic
/ˌmänəˈliTHik/
adjective
  1. formed of a single large block of stone.
  2. (of an organization or system) large, powerful, and intractably indivisible and uniform.
    “rejecting any move toward a monolithic European superstate”
    synonyms: inflexible, rigid, unbending, unchanging, fossilized
    “a monolithic organization”

Rather than “a single large block of stone”, we get gravel. The architecture of the enterprise’s IT isn’t “large, powerful, and intractably indivisible and uniform”. It may well be large, but its power in relation to its size will be lacking. Too much effort is wasted reinventing wheels and maintaining redundant data (most likely with no real sense of which set of data is authoritative). Likewise, while “intractably indivisible” isn’t a virtue, being intractable while also lacking cohesion is worse. Such an IT architecture is a foundation built on shifting sand. Lastly, whether the EITA is uniform or not (and I would give good odds that it’s not) is irrelevant given the other negative aspects. Under the circumstances, worrying about uniformity would be like worrying about whether the superstructure of the Titanic had a fresh paint job.
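
A small, hypothetical illustration of the gravel problem (all names and values invented): two siloed applications each hold their own copy of the “same” customer, and nothing in the landscape says which one is authoritative:

```python
# Hypothetical illustration: two siloed applications, two copies of the
# "same" customer, and no designated system of record. All names invented.
billing_customers = {"C-1001": {"name": "Acme Corp", "address": "12 Old Mill Rd"}}
crm_customers = {"C-1001": {"name": "ACME Corporation", "address": "14 New Mill Rd"}}

def merged_view(customer_id: str) -> dict:
    """Without an authoritative source, a 'merged' view is really a guess:
    whichever copy is applied last silently wins."""
    billing = billing_customers.get(customer_id, {})
    crm = crm_customers.get(customer_id, {})
    return {**billing, **crm}  # which address is right? the architecture doesn't say

print(merged_view("C-1001"))
```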

Does this mean that microservices are the answer to having an effective EITA? Hardly.

There are prerequisites for being able to support a microservice architecture; table stakes, if you will. However, the service-oriented mindset can be of value whether it’s applied as far down as the intra-application level (i.e. microservices – it is an application architecture pattern) or inter-application (the more traditional SOA). Where the line is drawn depends on the context of the application(s) and their ecosystem. What can be afforded and supported are critical aspects of the equation at all levels.
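
As a rough sketch of that mindset (the names, types, and URL below are invented for illustration), the consumer can depend on the ability itself, while the decision about whether that ability is fulfilled in-process or across a network boundary is made, and changed, separately:

```python
from abc import ABC, abstractmethod
import json
import urllib.request

class OrderLookup(ABC):
    """The ability the consumer cares about, independent of where it runs."""
    @abstractmethod
    def get_order(self, order_id: str) -> dict: ...

class InProcessOrderLookup(OrderLookup):
    """Intra-application: a module boundary inside a single deployable."""
    def __init__(self, orders: dict[str, dict]):
        self._orders = orders

    def get_order(self, order_id: str) -> dict:
        return self._orders[order_id]

class RemoteOrderLookup(OrderLookup):
    """Inter-application: the same contract fulfilled over HTTP
    (a microservice or a more traditional SOA service)."""
    def __init__(self, base_url: str):
        self._base_url = base_url

    def get_order(self, order_id: str) -> dict:
        with urllib.request.urlopen(f"{self._base_url}/orders/{order_id}") as resp:
            return json.load(resp)
```

Either implementation honors the same contract; where the line gets drawn is a question of context, cost, and what can be supported, not something the consumer needs to know.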

What is necessary for an effective EITA is a full-stack approach. Governance and data architecture in particular are important aspects to consider. The goal is consistent, intentional alignment across all levels (enterprise, EITA, solution, and application), promoting a cohesive architecture throughout, not a top-down dictatorship.

Large edifices that last are built from smaller pieces that fit together on purpose.