Innovation, Intention, Planning and Execution

Napoleon at Wagram

 

Convergence is an interesting thing. Greger Wikstrand and I have been trading posts back and forth on the subject of innovation for almost eighteen months now (forty posts in total). I’ve also been writing a lot on the concept of organizations as systems (twenty-two posts over the last year, with some overlap with innovation). The need for architectural design (and make no mistake, social systems like organizations require as much architectural design over their lifetime as any software system) and the superiority, in my opinion, of intentional architecture over accidental architecture are also long-standing themes on this site.

My last post, “Architecture Corner: Good at innovation – Seven Deadly Sins of IT”, linked to a YouTube video produced by and starring Greger and Casimir Artmann. It’s worth watching, so I won’t give away the plot, but I will say that it demonstrates how these concepts interrelate.

Effectiveness requires reasoned, intentional action. I’ve used this Tom Graves quote many times before, but it still applies: “things work better when they work together, on purpose”.

The word “purpose” is critical to that sentence. The difference between intentional and accidental activity is the difference between being goal-directed and flailing blindly (n.b. experimenting, done right, is the former, not the latter). An understanding of purpose can allow a goal to be reached even when the initial route to that goal is closed off; merely completing a required set of tasks lacks that flexibility. This appreciation of the utility of purpose-oriented direction over micro-management is an old one that the military periodically revisits:

An understanding of the purpose aids the joint force in exercising disciplined initiative to facilitate the commander’s visualized end state. Moreover, the purpose itself not only drives why tasks must happen, but also how subordinate commanders choose to execute their assigned mission(s).

Purposes must be carefully crafted, nested, and organized not only to achieve unity of effort, but also the intended outcomes (selected tasks to execute, method of execution, and/or desired effects). They also must give subordinates the latitude to find better, innovative solutions to tactical and operational problems. Finally, the operational purpose must ultimately nest back to the strategic national interest in order to affect change in the human domain. Purposes for the subordinate operations must be well thought out, nested within the desired operational objectives, and be the correct purpose in order to achieve the desired operational end state. Therefore, it is incumbent upon commanders to develop purposes for subordinate operations first and subsequently build the tasks. The “why” trumps the “how” both in importance and in priority.

What to accomplish and why are more important than how to accomplish something. As the author of the article above noted, communicating purpose “…enables subordinates to take advantage of emergent opportunities that arise by enabling shared understanding of the commander’s purpose and end state.” It should also force those providing direction to examine their rationale for what they’re asking for. “Why” is the most important question that can be asked. Activity that is not tailored to achieving a particular aim will be ineffective. This includes chasing the latest silver bullet. A recent article in the International Business Times, “As a term of description, ‘digital’ is now an anachronism”, had this to say:

As a term of description, digital is an anachronism. It reflects an organisational mindset that views technological transformation itself as the aim. It’s a common mistake. At the height of the dotcom boom, suddenly everyone needed a website, but not everyone understood why.

Over the last few years, the drive to digitisation has intensified. Business models, brands, products and services, customer relationships and business processes are increasingly governed by digital elements such as data.

But much the same as building a website in 1999, it’s not a question of becoming “more digital”. It’s a question of what you want digital to do.

Confusing means and ends is both futile and expensive. No matter how many tools I buy, buying them won’t make me a carpenter (though my bank balance will shrink whether or not the purchases helped). Dropping tools and techniques into a culture that is unable or unprepared to use them accomplishes nothing. Likewise, becoming more “digital” (or, for that matter, more “agile”) will not help an organization that’s heading in the wrong direction. Efficiency and effectiveness are two different things that may well not go hand in hand. Just as important to understand: efficiency must take a subordinate position to effectiveness. You cannot do the wrong thing efficiently enough to turn it into the right thing.

You need to understand what you want to do and what the constraints, if any, are. That understanding will allow you to figure out how you’re going to try to do it and to determine whether the tools and techniques at hand will get you there. The alternative is delay (waiting for new instructions) caused by the bottleneck of over-centralized decision-making, with a high probability of something getting lost in translation.

Work together purposefully so things work better.

Square Pegs, Round Holes, and Silver Bullets

Werewolf

People like easy answers.

Why spend time analyzing and evaluating when you can just grab some tool or technique that someone else has already put to use and be done with it?

Why indeed?

I mean, “me too” is a valid strategy, right?

And we don’t want people to get off message, right?


And we can always find a low cost, minimal disruption way of dealing with issues, right?


I mean, after all, we’ve got data and algorithms, and stuff.

The thing is, actions need to make sense in context. Striking a match is probably a good idea in the dark, but less so in daylight. In the presence of gasoline fumes, it’s a bad idea regardless of ambient light.

A recent post on Medium, “Design Sprints Are Snake Oil”, is a good example. Erika Hall’s title was a bit click-baitish, but as she responded to one commenter:

The point is that the original snake oil was legitimate and effective. It ended up with a bad reputation from copycats who over-promised results under the same name while missing the essential ingredients.

Sprints are legitimate and effective. And now there is a lot of follow-up hype treating them as a panacea and a replacement for other types of work.

Good things (techniques, technologies, strategies, etc.) are “good”, not because they are innately right, but because they fit the context of the situation at hand. Those that don’t fit cease being “good” for that very reason. Form absent function is just a facade. Whether it’s business strategy, management technique, innovation efforts, or process, there is no recipe. The hard work of matching the action to the context has to be done.

Imitation might be the sincerest form of flattery, but it’s a really poor substitute for strategy.

Form Follows Function on SPaMCast 415

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 415, features Tom’s essay on recognizing risk and risk tolerance, Kim Pries on change models, a Form Follows Function installment based on my post “All Aboard the Innovation Band Wagon?”, and Jon Quigley on requirements management.

Innovation is the topic of our discussion this time – what it is, whether it’s something that every organization can achieve, and whether there’s a recipe for it. Everyone wants to be an innovator, but there are a lot of questions around just what that entails.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

All Aboard the Innovation Band Wagon?

Bandwagon

 

It seems like everyone wants to be an innovator nowadays. Being “digital” is in – never mind what it means, you’ve just got to be “digital”. Being innovative, however, is more than being buzzword-compliant. Being innovative, particularly in a digital sense, means solving problems (for customers, not yourself) in a new way with technology. Being innovative means meeting a need in a sustainable way (eventually you have to make money). Being innovative means understanding your strategy, not just following the latest thing.

Casimir Artmann published a post this week, “Digital is not enough”, outlining Kodak’s failures in the digital photography space. As digital cameras entered the market, Kodak introduced ways to turn film into digital images. Kodak’s move into digital photography (which, ironically, they invented in 1975) coincided with the rise of camera phones. By concentrating more on perpetuating their film product line than on their customers’ needs, Kodak wound up chasing the trend and losing out.

Customers’ cash follows products that meet customer needs (even needs that they didn’t know they had).

Sometimes a product or service can meet a need and still fail. A Business Insider article yesterday morning discussed the weakness of the peer-to-peer foreign exchange business model, saying it only works in “fair weather”. In the article, Richard Kimber, CEO of the foreign exchange company OFX Group, observes:

When you’ve got currency moving dramatically one way or the other, what you can have happen is it encourages asymmetric activity. As we saw in Brexit, you had lots and lots of sellers and very few buyers. That can lead to an inability to transact because you simply have all these sellers lined up and no buyers. That’s one of the reasons why the peer-to-peer players opted out of their model during this period of volatility because it wouldn’t have been sustainable.
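To make the mechanics concrete, here is a minimal sketch (hypothetical numbers and naive matching logic, not any real exchange’s engine) of why one-sided flow stalls a purely peer-to-peer market:

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class PeerToPeerMatching {
        public static void main(String[] args) {
            Queue<String> sellers = new ArrayDeque<>();
            Queue<String> buyers = new ArrayDeque<>();

            // Brexit-style asymmetry: lots and lots of sellers, very few buyers.
            for (int i = 1; i <= 100; i++) sellers.add("seller-" + i);
            buyers.add("buyer-1");
            buyers.add("buyer-2");

            // Pure peer-to-peer: a trade happens only when both sides show up.
            int trades = 0;
            while (!sellers.isEmpty() && !buyers.isEmpty()) {
                sellers.poll();
                buyers.poll();
                trades++;
            }

            System.out.println("Trades executed:  " + trades);         // 2
            System.out.println("Sellers stranded: " + sellers.size()); // 98
            // A market maker would take the other side of the stranded orders
            // (for a price), which is exactly the role peer-to-peer removes.
        }
    }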

While Brexit might be the latest event to expose the weakness of the peer-to-peer model, it’s not the first. The Business Insider article referenced another article from January on The Memo that made the same point. Small wonder: the concept of a market maker is a well-established component of financial markets.

Disintermediation, cutting out the “middleman”, is only innovative when the “middleman” is, or can be made, superfluous.

Blindly following a trend can be another innovation anti-pattern. In an article for the Wharton Business School, “Rethinking Retail: When Location Is a Liability”, the authors discussed the pressures on brick-and-mortar retailers and the need to be “Digital-first”. The following was recommended:

  1. Identify some of your common habits and perspectives about how the retail sector should function, including guiding principles, time and capital allocation patterns, primary skills and capabilities, and the key metrics and outcomes that you track.
  2. Uncover the core beliefs about retailing that motivate your behaviors, and are the priorities of your firm and board. This step usually takes some ongoing reflection and added perspective from your peers. Industry best practices likely influence your thinking greatly.
  3. Invert your core beliefs about retailing and consider the implications for your firm and board. There are many possible inversions in each instance. For example, all retailers should ask themselves, ‘Is digital our first priority? How about our customer network — do we put them in front of merchandise and do we have an entire department dedicated to mobilizing them?’
  4. Extrapolate what implications these new core beliefs, and the various ripple effects, would have for your organization and board. Observe what is happening in your industry and, more broadly, how different core beliefs might help you get ahead of digital disruption by companies like Amazon.
  5. Act on your new retail core beliefs (preferably with digital as the center) by sharing them broadly with your customers, employees, suppliers and investors. Purposely changing your business actions, particularly when it comes to time and capital allocation, is an important part of the process and helps reinforce the changes in mental models you are trying to achieve.

Note the generous usage of “your” (retailer) instead of “their” (customer). Sharing “…your new retail core beliefs (preferably with digital as the center)…” with your customers will only be fruitful if those new beliefs align with ones the customers hold or can be convinced to adopt. Retail is a very broad segment, and a very large part of it needs to be digital. That being said, over-focusing on digital carries risk as well. Convenience stores, for example, catering to a “we’re out and need it now” market, are unlikely to benefit from a digital-first strategy in the same way big-box retailers might. The fact that there is no one-size-fits-all strategy is why Amazon, of all companies, is opening physical stores.

We don’t drive customer behavior. We provide opportunities that, hopefully, make it more likely that customers will choose us.

Innovation doesn’t come from a recipe. Digital isn’t the magic secret sauce for everything. Change occurs, but at different speeds in different areas. The future is not evenly distributed. As Joanna Young observed in “Obsolescence: Take With Grain Of Salt”:

I recall clearly in the mid-1990s hearing an executive say “by the year 2000, we will be paperless.” I signed, with a pen, four approval forms just today. Has technology failed us? No. The technology exists to make mailboxes obsolete and signatures purely ceremonial. However the willingness to change behavior and ergo retire old methods is up to humans, not technology.

Innovation is significant positive change, an improvement in our customers’ lives, not a recipe.

Cause and Effect – Cargo Cults and Carts Before Horses

Sometimes our love of shortcuts can make us really stupid. Take, for example, the idea of “Fail Fast”. As Jeff Sussna observed in his post “Rethinking Failure”, “Suddenly failure is all the rage.” He also noted:

By itself, failure is anything but good. Making the same mistake over and over again doesn’t help anyone. Failure leads to success when I learn from it by changing my behavior or understanding in response to it. Even then, it’s impossible to guarantee that my response will in fact lead to success. The validity of any given response can only be evaluated in hindsight.

Dan McClure, in “Why the “Fail Fast” Philosophy Doesn’t Work”, agreed:

If your only strategy for exploring the unknown is to pick up rocks and look underneath, then the more rocks you turn over the better. The problem is that for real world innovations, test and reject doesn’t scale well. For disruptive ideas with the potential to make a difference in the market there are lots and lots of rocks.

The value in “Fail Fast” lies in the “Fast” part; there’s no magic in the “Fail”. If you’re going to fail, finding out about it sooner, rather than later, is less costly. Less costly, however, is a far cry from best. Succeeding obviously works much better than failing fast, meaning that methods which allow you to evaluate without incurring the time, pain, and expense of a failure are a better choice when available.

Another example of this phenomenon is what Matt Ballantine recently referred to as “investor-centric” development.


That, of course, creates an interesting rabbit hole – investors chasing products that will be “hot” and products designed to appeal to investors rather than customers (which would result in the product becoming “hot”). Matt’s comment from his post “What if the answer isn’t software?” applies: “I’ve no doubt that we are seeing real issues and opportunities being ignored in the pursuit of the rainbow-pooping unicorns.”

Yet another example of magical thinking is belief in the “Great Man Theory”. People like Steve Jobs and Elon Musk have achieved great things, but as a result of what they did, not who they are. It’s a hard sell to argue that, divorced from their context, they would have been equally successful.

Effectiveness is more likely to come from systems thinking than magical thinking. Understanding cause and effect as well as interrelationships and context makes the difference between rational decision-making and superstition.

Cargo Cult Architecture

According to Mark Little, Red Hat VP of Engineering, the microservice backlash has arrived, coming from “people who were really pushing it at the beginning and who are now just starting to realise it’s not all sunshine and roses, or people who never felt the need for it at all”. The Twitterverse seems to agree.

This post, however, is less about microservices and more about what their rise and fall (and, no doubt, recovery as we violently discover equilibrium) says about software development as a discipline.

As Sander Mak observed in his post “On monoliths, microservices and critical thinking” (h/t Paul Bakker):

What does it mean if public software engineering opinion flips 180 degrees in a matter of weeks? It’s too easy to chalk it all up to people needing authority figures. Yes, I know: not everybody was all over microservices. But you have to admit there’s something fundamentally unsound going on here.

This is hardly a new problem. The same Mark Little mentioned in the opening wrote an article for InfoQ almost three years ago titled “IT Values Technologies Over Thought” where he stated “If the people delivering the implementations that are supposed to be solutions to business problems aren’t looking beyond the hype and considering alternatives, especially when those alternatives may have been tried and tested for many years, then we are in for some very interesting times ahead”.

It’s a known problem. We even laugh at articles that trade on our tendency to jump from silver bullet to silver bullet (although I’m not sure if that laughter is based on sangfroid or fatalism).

It’s not even a problem that’s exclusively ours. An article in Forbes, “Why So Many Management Strategies Become Fads That Fade Away”, refers to it as “idea surfing”. When complexity, unrealistic expectations, cultural resistance, or poor fit lead to management souring on the current strategy du jour, there’s always a shinier object just down the road that promises to be the recipe for success.

According to “Rats Can Be Smarter Than People” in January’s Harvard Business Review, our predilection for easy answers is deeply rooted:

Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.

In spite of how much it’s part of our nature, we have to overcome the desire for easy answers. No matter how many jumps we make, the magic recipe will not be found.


Institutional Amnesia, Cargo Cults and Software Development

When George Santayana stated that “Those who cannot remember the past are condemned to repeat it,” he wasn’t talking about technology. When Brenda Michelson and Ed Featherston said much the same thing recently, they were.


It’s a sad fact of life that today’s silver bullet is likely to be yesterday’s junk, which was probably the day before yesterday’s silver bullet.

Poor design choices are made for a variety of reasons. Sometimes it’s a matter of ego. Sometimes inadequate analysis is the culprit. Focusing on technology rather than problem-solving can be another pitfall. Even attempts at post-hoc justification of a prior bad decision can drive new mistakes.

An uncritical acceptance of tradition is a significant source of problem designs. Eberhard Wolff recently took a swipe at one old standard, the distributed, multi-tier application.

The stock reason for a tiered/distributed design is scalability. However, it’s not a given that splitting an application’s horizontal layers across separate machines will yield better results than deploying all of the layers together on each machine and scaling those machines out. The context in which this sort of distribution makes sense is far from universal. Even when the costs of distribution are outweighed by the benefits, traditional monolithic horizontal layers will likely be less efficient than vertical slices. One of the purported benefits of microservices is the ability to scale independently according to business concerns (vertical slices organized around bounded contexts) rather than technology concerns (horizontal layers).
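As a back-of-the-envelope illustration (the latency figures are assumptions for the sake of the sketch, not measurements), consider what every request pays when each layer crossing becomes a network hop:

    public class TieringCost {
        public static void main(String[] args) {
            double inProcessCallMs = 0.001; // local method call on one machine
            double remoteCallMs = 1.0;      // LAN round trip plus serialization
            int layerCrossings = 2;         // presentation -> business -> persistence

            double colocated = layerCrossings * inProcessCallMs;
            double distributed = layerCrossings * remoteCallMs;

            System.out.printf("Co-located layers:  %.3f ms per request%n", colocated);
            System.out.printf("Distributed layers: %.3f ms per request%n", distributed);
            // Roughly three orders of magnitude per crossing; the ability to
            // scale tiers independently has to be worth this overhead before
            // the distribution pays for itself.
        }
    }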

The mention of microservices brings to mind the problem of jumping on bandwagons. How many applications currently under development are being designed using this architectural style because it’s the “next big thing” rather than because the style fits the problem? Sam Newman, author of O’Reilly’s Building Microservices, even states in “Microservices for Greenfield?” that he considers the style more suitable for evolving an existing system than for building from scratch:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

This same over-eagerness is present in front-end development as much as back-end development. Stefan Tilkov recently tweeted regarding the trend of jumping straight into complex JavaScript framework applications rather than evolving into them based on need.

In my opinion, the key to effective design is being able to give a good answer when asked “why”. Being able to articulate the reasons behind the choices made is critical to justifying them. By reasons, I mean logical explanations of how the techniques chosen contribute to the desired ends. Neither “X recommends this” nor “This is what everybody’s doing” counts. Designing, developing, and evolving software systems is not a game of following a recipe.

Quick Fixes That Last a Lifetime

Move Fast and Break Things on xkcd

“Move fast and break things.”

“Fail fast.”

“YAGNI.”

“Go with the simplest thing that can possibly work.”

I’ve written previously about my dislike for simplistic sound-bite slogans. Ideas that have real merit under the appropriate circumstances can be deadly when stripped of context and touted as universal truths. As Tom Graves noted in his recent post “Fail, to learn”, it’s not about failing, it’s about learning. We can’t laugh at cargo cultists building faux airports to lure the planes back while we latch on to naive formulas for success in complex undertakings without a clue as to how they’re supposed to work.

The concepts of emergent design and emergent architecture are cases in point. Some people contend that if you do the simplest thing that could possibly work, “The architecture follows immediately from that: the architecture is just the accumulation of these small steps”. It is trivially true that an architecture will emerge under those circumstances. What is unclear (and unexplained) is how a coherent architecture is supposed to emerge without any consideration for the higher levels of scope. Perhaps the intent is to replicate Darwinian evolution. If so, that would seem to ignore the fact that Darwinian evolution occurs over very long time periods and leaves a multitude of bodies in its wake. While the species (at least those that survive) ultimately benefit, individuals may find the process harsh. If the fittest (most adaptable, actually) survive, that leaves a bleaker future for those that are less so. Tipping the scales by designing for more than the moment seems prudent.

Distributed systems find it even more difficult to evolve. Within the boundary of a single application, moving fast and breaking things may not be fatal (systems dealing with health, safety, or finance are likely to be less tolerant than social networks and games). With enough agility, unfavorable mutations within an application can be responded to and remediated relatively quickly. Ill-considered design decisions that cross system boundaries, however, can become permanent problems when the cost and complexity of fixing them outweigh the benefits. There is a great deal of speculation that the naming of Windows 10 was driven by the number of potential issues that naming it Windows 9 would have created. Allegedly, Microsoft chose the name to avoid tripping up third-party code that made short-sighted assumptions about version names. As John Cook noted:

Many think this is stupid. They say that Microsoft should call the next version Windows 9, and if somebody’s dumb code breaks, it’s their own fault.

People who think that way aren’t billionaires. Microsoft got where it is, in part, because they have enough business savvy to take responsibility for problems that are not their fault but that would be perceived as being their fault.
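The alleged culprit is version-detection code along these lines (a hypothetical reconstruction of a widely reported pattern; no specific offending code was ever confirmed):

    public class LegacyWindowsCheck {
        // Intended to match "Windows 95" and "Windows 98" via their shared prefix.
        static boolean isLegacyWindows(String osName) {
            return osName.startsWith("Windows 9");
        }

        public static void main(String[] args) {
            System.out.println(isLegacyWindows("Windows 95")); // true, as intended
            System.out.println(isLegacyWindows("Windows 98")); // true, as intended
            System.out.println(isLegacyWindows("Windows 9"));  // true, and wrong:
            // a brand-new OS would be lumped in with twenty-year-old ones
        }
    }

Every copy of a check like that, shipped in code Microsoft doesn’t control, became an external constraint on what Microsoft could name its own product.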

It is naive, particularly with distributed applications, to act as if there are no constraints. Refactoring is not free, and consumers of published interfaces create inertia. While it would be both expensive and ultimately futile to design for every circumstance, no matter how improbable, it is foolish to ignore foreseeable issues and allow a weakness to become a “standard”. There is a wide variance between over-engineering/gold-plating (e.g. planting land mines in my front yard just in case I get attacked by terrorists) and slavish adherence to a slogan (e.g. waiting to install locks on my front door until I’ve had something stolen because YAGNI).

I can move fast and break things by wearing a blindfold while driving, but that’s not going to get me anywhere, is it?