Systems of Social Systems and the Software Systems They Create

I’ve mentioned before that the idea of looking at organizations as systems is one that I’ve been focusing on for quite a while now. From a top-down perspective, this makes sense – an organization is a system that works better when its component parts (both machine and human) intentionally work together.

It also works from the bottom up. For example, from a purely technical perspective, we have a system:

Generic System

However, without considering those who use the system, we have a limited picture of the context the system operates within. The better we understand that context, the better we can shape the system to fit it; otherwise we risk a square-peg-in-a-round-hole situation:

Generic System with Users

Of course, the users who own the system are also only a part of the context. We have to consider the customers as well:

Generic System with Users and customers

Likewise, we need to consider that the customers of some systems can be internal to the organization while others are external. Some of the “customers” may not even be human. For that matter, sometimes the customer’s interface might be a human (user) rather than software. Things get complicated when we begin adding in the social systems:

Generic EITA with Users and customers

The situation is even more complicated than what’s seen above. We need to account for the team developing and operating the automated system:

Generic System with Users, customers, and IT team

And if that team is not a unified whole, then the picture gets a whole lot more interesting:

Generic System with Users, customers, and IT teams

Zoomed out to the enterprise level, that’s a lot of social systems. When multiplied by the number of automated systems involved, the number easily becomes staggering. What’s even more sobering is reflecting on whether those interactions have been intentionally structured or have grown organically over time. The interrelationship of social and software systems is under-appreciated. A series of tweets from Gregory Brown last week makes the same case:

https://twitter.com/jetpack/status/829366709370908673
https://twitter.com/jetpack/status/829366901356781569
https://twitter.com/jetpack/status/829367017761292288
https://twitter.com/jetpack/status/829367397899440128
https://twitter.com/jetpack/status/829367860128538624
https://twitter.com/jetpack/status/829368126454296576
https://twitter.com/jetpack/status/829368276467728385
https://twitter.com/jetpack/status/829369755136094210
https://twitter.com/jetpack/status/829369991111782401
https://twitter.com/jetpack/status/829370136666767360

A number of questions come to mind:

  • Is anyone aware of all the systems (social and software) in play?
  • Is anyone aware of all the interactions between these systems?
  • Are the relationships and interactions a result of intentional design or have they “just happened”?
  • Are you comfortable with the answers to the first three questions above?

Organizations as Systems and Innovation

Portrait of Gustavus Adolphus of Sweden

Over the last year or so, the concept of looking at organizations as systems has been a major theme for me. Enterprises, organizations and their ecosystems (context) are social systems composed of a fractal set of social and software systems. As such, enterprises have an architecture.

Another long-term theme for this site has been my conversation with Greger Wikstrand regarding innovation. This post is the thirty-fifth entry in that series.

So where do these two intersect? And why is there a picture of a Swedish king from four hundred years ago up there?

Innovation, by its very nature (“…significant positive change”), does not happen in a vacuum. Greger’s last post, “Innovation arenas and outsourcing”, illustrates one aspect of this. Shepherding ideas into innovations is a deliberate activity requiring structural support. Being intentional doesn’t turn bad ideas into innovations, but lack of a system can cause an otherwise good idea to wither on the vine.

Another intersection, the one I’m focusing on here, can be found in the nature of innovation itself. It’s common to think of technological innovation, but innovation can also be found in changes to organizational structure and processes (e.g. Henry Ford and the assembly line). Organization, process, and technology are not only areas for innovation; coupled with people, they form the primary elements of an enterprise architecture. The more these elements are intentionally coordinated toward a specific goal, the more cohesive the effort will be.

This brings us to Gustavus Adolphus of Sweden. In his twenty years on the throne, he converted Sweden into a major power in Europe. Militarily, he upended the European status quo in a very short time (after intervening in the Thirty Years’ War in 1630, he was killed in battle in 1632) by marshaling organizational, procedural, and technological innovations:

The Swedish army stood apart from its’ contemporaries through five characteristics. Its’ soldiers wore uniform and had a nucleus of native Swedes, raised from a surprisingly diplomatic system of conscription, at its’ core. The Swedish regiments were small in comparison to their opponents and were lightly equipped for speed. Each regiment had its’ own light and mobile field artillery guns called ‘leathern guns’ that were easy to handle and could be easily manoeuvred to meet sudden changes on the battlefield. The muskets carried by these soldiers were of a type superior to that in general use and allowed for much faster rates of fire. Swedish cavalry, instead of galloping up to the enemy, discharging their pistols and then turning around and galloping back to reload, ruthlessly charged with close quarter weapons once their initial shot had been expended. By analysing this paradigm it becomes apparent that the army under Gustavus emphasized speed and manoeuvrability above all – this greatly set him apart from his opponents.

By themselves, none of the innovations were original to Gustavus. Combining them, however, was original, and European military practice was irrevocably changed. Inflection points can be dependent on multiple technologies catching up with one another (since the future is “…not very evenly distributed”), but in this case the pieces were all in place. The catalyst was someone with the vision to combine them, not random chance.

Emergence will be a factor in any complex system. That being said, the inevitability of those emergent events does not invalidate intentional design and planning. If anything, design and planning are more necessary to deal with the mundane, foreseeable things in order to leave more cognitive capacity for that which can’t be foreseen.

Learning Organizations: When Wrens Take Down Wolfpacks

A Women's Royal Naval Service plotter at work in the Operations Room at Derby House in Liverpool, the headquarters of the Commander-in-Chief Western Approaches, September 1944.

What does the World War II naval campaign known as the Battle of the Atlantic have to do with learning and innovation?

Quite a lot, as it turns out. Early in the war, Britain found itself in a precarious position. While being an island nation provided defensive advantages, it also came with logistical challenges. Food, armaments, and other vital supplies, as well as reinforcements, had to come to it by sea. The shipping lanes were heavily threatened, primarily by the German U-boat (submarine) fleet. With Britain needing more than a million tons of imports per week, maintaining the flow of goods was a matter of survival.

Businesses may not have to worry about literal torpedoes severing their lifelines, but they are at risk due to a number of factors. Whether it’s changing technology or tastes, competitive pressures, or even criminal activity, organizations cannot afford to sit idle. In his post “Heraclitus was wrong about innovation”, Greger Wikstrand talked about the mismatch between the speed of change (high) and the rate of innovation (not fast enough). This is a recurrent theme in our ongoing discussion of innovation (we’ve been trading posts on the subject for over a year now).

The British response to the threat involved many facets, but an article I saw yesterday about one response in particular struck a chord. “The Wargaming “Wrens” of the Western Approaches Tactical Unit” told the story of a group of officers and ratings of the Women’s Royal Naval Service (nicknamed “Wrens”) who, under the command of a naval officer, Captain Gilbert Roberts, revolutionized British anti-submarine warfare (ASW). Their mandate was to “…explore and evaluate new tactics and then to pass them on to escort captains in a dedicated ASW course”.

Using simulation (wargaming) to develop and improve tactics was an unorthodox proposition, particularly in the eyes of Admiral Percy Noble, who was responsible for Britain’s shipping lifeline. However, Admiral Noble was capable of appreciating the value of unorthodox methods:

A sceptical Sir Percy Noble arrived with his staff the next day and watched as the team worked through a series of attacks on convoy HG.76. As Roberts described the logic behind their assumptions about the tactics being used by the U-Boats and demonstrated the counter move, one that Wren Officer Laidlaw had mischievously named Raspberry, Sir Percy changed his view of the unit. From now on the WATU would be regular visitors to the Operations Room and all escort officers were expected to attend the course.

Each of the courses looked at ASW and surface attacks on a convoy and the students were encouraged to take part in the wargames that evaluated potential new tactics. Raspbery was soon followed by Strawberry, Goosebery and Pineapple and as the RN went over to the offensive, the tactical priority shifted to hunting and killing U Boats. Roberts continued as Director of WATU but was also appointed as Assistant Chief of Staff Intelligence at Western Approaches Command.

This type of learning culture, such as I described in “Learning to Deal with the Inevitable”, was key to winning the naval war. Clinging to tradition would have led to a fatal inertia.

One aspect of the WATU approach that I find particularly interesting is the use of simulation to limit risk during learning. Experiments involving real ships cost real lives when they don’t pan out. Simulation (assuming sufficient validity of the theoretical underpinnings of the model used) is a technique that can be used to explore more without sending costs through the roof.
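To make that concrete, below is a minimal sketch (in Python, with entirely invented numbers and tactic names – it is not a model of actual ASW tactics) of the underlying idea: run many cheap simulated trials of competing tactics before betting real ships, and real lives, on any of them.

```python
import random

# All numbers here are invented for illustration; this is a sketch of the
# simulate-before-you-experiment idea, not a model of real ASW tactics.
TACTICS = {
    "old doctrine": 0.15,  # assumed chance a given engagement succeeds
    "raspberry":    0.35,  # assumed improvement from the new counter-move
}

def crossing(success_probability, engagements=4):
    """Simulate one convoy crossing; count successful engagements."""
    return sum(random.random() < success_probability
               for _ in range(engagements))

def evaluate(tactic, trials=10_000):
    """Estimate a tactic's average effectiveness over many simulated crossings."""
    p = TACTICS[tactic]
    return sum(crossing(p) for _ in range(trials)) / trials

for tactic in TACTICS:
    print(f"{tactic}: {evaluate(tactic):.2f} successful engagements per crossing")
```

Ten thousand simulated crossings cost nothing but compute; the equivalent experiment at sea would have been paid for in ships.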

I fought the law (of unintended consequences) and the law won

https://twitter.com/jetpack/status/714972957269839874

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, a lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses to issues that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on the Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing, but obsolete, field in the database. Using a field labeled “Fax” for an entirely different purpose is far from “best practice”, but following the rules would lead to being seen as responsible for delaying a release. This is an example of a moral hazard, such as Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, potential for abuse abounds. The abuse becomes particularly likely when the person taking shortcuts can claim a “moral” rationale for doing so (such as “getting it done” for the customer).
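As a minimal sketch of why that kind of “reuse” is a debt someone eventually pays (hypothetical names, not the actual code from the article):

```python
from dataclasses import dataclass

# A hypothetical sketch of the anti-pattern described above (all names
# invented): an obsolete column is quietly repurposed to dodge the change
# process, and the schema now lies about its own contents.
@dataclass
class Customer:
    name: str
    fax: str  # obsolete field, "reused" to hold the loyalty tier

customer = Customer(name="ACME Corp", fax="GOLD")  # the shortcut works today

def send_fax(customer: Customer) -> None:
    # ...until someone believes the field is what its name says it is
    print(f"Dialing {customer.fax}...")  # dials "GOLD"
```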

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence because it’s so easy to conflate argument over means with disagreement regarding the ends. If you’re not in favor of being strip-searched on arrival and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will likely resonate for many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of methods, then that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, with the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their incidence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or even just clumsy design like the change management process described above is irrelevant. In all of those cases, the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.

Monolithic Applications and Enterprise Gravel

Pebbles

It’s been almost a year since I’ve written anything about microservices, and while a lot has been said on that subject, it’s one I still monitor to see what new developments pop up. The opening of a blog post that I read last week caught my attention:

Coined by Melvin Conway in 1968, Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” In software development terms, Conway’s Law suggests that a given team will build apps that mirror the team’s organizational structure. Siloed functional teams produce siloed application architectures.

The result is a monolith: A massive application whose functionality is crammed into a few crowded parts. Scaling a simple pattern to the enterprise level often results in a monolith.

None of this is wrong, per se, but in reading it, one could come to a wrong conclusion. Siloed functional teams (particularly where the culture of the organization encourages siloed business units) produce siloed application architectures that are most likely monoliths. From an enterprise IT architecture aspect, though, the result is not monolithic. Googling the definition of “monolithic”, we get this:

mon·o·lith·ic
/ˌmänəˈliTHik/
adjective
  1. formed of a single large block of stone.
  2. (of an organization or system) large, powerful, and intractably indivisible and uniform.
    “rejecting any move toward a monolithic European superstate”
    synonyms: inflexible, rigid, unbending, unchanging, fossilized
    “a monolithic organization”

Rather than “a single large block of stone”, we get gravel. The architecture of the enterprise’s IT isn’t “large, powerful, and intractably indivisible and uniform”. It may well be large, but its power in relation to its size will be lacking. Too much effort is wasted reinventing wheels and maintaining redundant data (most likely with no real sense of which set of data is authoritative). Likewise, while “intractably indivisible” isn’t a virtue, being intractable while also lacking cohesion is worse. Such an IT architecture is a foundation built on shifting sand. Lastly, whether the EITA is uniform or not (and I would give good odds that it’s not) is irrelevant given the other negative aspects. Under the circumstances, worrying about uniformity would be like worrying about whether the superstructure of the Titanic had a fresh paint job.

Does this mean that microservices are the answer to having an effective EITA? Hardly.

There are prerequisites for being able to support a microservice architecture; table stakes, if you will. However, the service-oriented mindset can be of value whether it’s applied as far down as the intra-application level (i.e. microservices – it is an application architecture pattern) or inter-application (the more traditional SOA). Where the line is drawn depends on the context of the application(s) and their ecosystem. What can be afforded and supported are critical aspects of the equation at all levels.

What is necessary for an effective EITA is a full-stack approach. Governance and data architecture in particular are important aspects to consider. The goal is consistent, intentional alignment across all levels (enterprise, EITA, solution, and application), promoting a cohesive architecture throughout, not a top-down dictatorship.

Large edifices that last are built from smaller pieces that fit together on purpose.

Designing Communication, Communicating Design

The Simplest metamodel in the world ever!

We work in a communications industry.

We create and maintain systems to move information around in order to get things done. That information moves between people and systems in combinations and configurations too numerous to count. In spite of that, we don’t do that great a job of communicating what should be, for us, extremely important information. We tend to be really bad at communicating the architecture of our systems – structure, behavior, and most importantly, the reasons for the decisions made. It’s bad enough when we fail to adequately communicate that information to others; it’s worse when we fail to communicate it to ourselves. I know I’ve let myself down more than once (“What was I thinking here?!”).

Over the past few days, I’ve been privileged to follow (and even contribute a bit to) a set of conversations on Twitter. Grady Booch, Ivar Jacobson, Ruth Malan, Simon Brown, and others have been discussing the need for architectural awareness and the state of communicating architecture.

This exchange between Simon, Chris Carroll and Eoin Woods sums it up well:

First and foremost, an understanding of what the role of a software architect is and why it’s important is needed. Any organization where the role is seen as either just a senior developer or (heaven help us!) some sort of Taylorist “thinker” who designs everything for the “worker bee” coders to implement is almost guaranteed to be challenged in terms of application architecture. Resting on that foundation of shifting sand, the organization’s enterprise IT architecture (EITA) is likewise almost guaranteed to be challenged, barring a remarkable series of “happy accidents”. The role (not necessarily position) of software architect is required, because software architecture is a distinct set of concerns that can either be addressed intentionally or left to emerge haphazardly out of the construction of the system.

Before we can communicate the architecture of a system, it’s necessary to understand what that is. In “Software Architecture: Central Concerns, Key Decisions”, Ruth Malan and Dana Bredemeyer defined it as high impact, systemic decisions involving (at a minimum):

  • system priority setting
  • system decomposition and composition
  • system properties, especially cross-cutting concerns
  • system fit to context
  • system integrity

I don’t think it’s possible to over-emphasize the use of “system” and “systemic” in the preceding paragraphs. That being said, it’s important to understand that architectural concerns do not exist in a void. There is a cyclic relationship between the architectural concerns of a system and the system’s code. The architectural concerns guide the implementation, while the implementation defines the current state of the architecture and constrains the evolution of its future state. Code is a necessary, but insufficient, source of architectural knowledge. As Ruth Malan noted in the Visual Design portion (part II) of her presentation at the Software Architect Conference in London a year ago:

Slide from Ruth Malan's presentation on Visual Design

While the code serves as a foundation of the system, it’s also important to realize that the system exists within a larger context. There is a fractal set of systems within systems within ecosystems. Ruth illustrated this in the Intention and Reflection portion (part III) of the presentation referenced above:

Slide from Ruth Malan's presentation on Intention and Reflection

[Note: Take the time to view the entirety of the Intention and Reflection presentation. It’s an excellent overview of how to design the architecture of a system.]

The fractal nature of systems within systems within ecosystems is illustrated by the image at the top of the post (h/t to Ric Phillips for the reblog of it). Richard Sage‘s humorous (though only partly, I’m sure) suggestion of it as a meta-model goes a long way towards portraying the problem of finding a language to communicate architecture.

Not only are we dealing with a nested set of “things”, but the understanding of those things differ according to the stakeholder. For example, while the business owner might see a “web site” as one monolithic thing, the architect might see an application made up of code components depending on other applications and services running on a collection of servers. Maintaining a coherent, normalized object model of the system yet being able to present it in multiple ways (some of which might be difficult to relate) is not a trivial exercise.
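As a rough sketch of what that means in practice (the names and structure are invented for illustration), consider a single normalized model projected into two very different stakeholder views:

```python
from dataclasses import dataclass, field

# A minimal sketch (invented names, not a real metamodel) of one normalized
# model rendered as two stakeholder views: single source, multiple projections.
@dataclass
class Element:
    name: str
    kind: str                     # e.g. "system", "application", "server"
    children: list = field(default_factory=list)

site = Element("Web Site", "system", [
    Element("Storefront App", "application", [
        Element("web-01", "server"),
        Element("web-02", "server"),
    ]),
    Element("Payment Service", "application", [Element("pay-01", "server")]),
])

def owner_view(element):
    """The business owner sees one monolithic thing."""
    return element.name

def architect_view(element, depth=0):
    """The architect sees the nested structure, down to the servers."""
    lines = ["  " * depth + f"{element.name} ({element.kind})"]
    for child in element.children:
        lines.extend(architect_view(child, depth + 1))
    return lines

print(owner_view(site))                 # "Web Site"
print("\n".join(architect_view(site)))  # the full hierarchy
```

The model is one structure; the views are projections of it. Keeping those projections coherent as the model grows is exactly the non-trivial part.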

Lower-level aspects of design lend themselves to automated solutions, which can increase the reliability of the model by avoiding “documentation rot”. Another aspect that can be automated, and an interesting one in my opinion, is tracking the evolution of code over time. What can’t be parsed from the code, however, is intention and reasoning.
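For example, here is a minimal sketch of the kind of structural fact a tool can pull from a codebase automatically – in this case, module dependencies extracted with Python’s standard ast module. No parser, however, will recover the reasons those dependencies exist:

```python
import ast

# A sketch of what *can* be recovered automatically: module dependencies
# parsed straight from Python source. What no parser recovers is why those
# dependencies exist -- the intention and reasoning behind the design.
def dependencies(source: str) -> set:
    """Return the set of top-level modules a piece of source imports."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps

print(dependencies("import os\nfrom collections import Counter"))
# {'os', 'collections'} (set order may vary)
```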

Another barrier to communication is the need to be both expressive and flexible (also well illustrated by Richard’s meta-model) while also being simple enough to use. UML works well on the former, but (rightly or wrongly) is perceived to fail on the latter. Simon Brown’s C4 model aims to achieve a better balance in that aspect.

At present, I don’t think we have one tool that does it all. I suspect that even with a suite of tools, narrative documents will still be the way some aspects are captured and communicated. Having a centralized store for the non-code bits (with a way to relate them back to the code) would be a great thing.

All in all, it is encouraging to see people talking about the need for architectural design and the need to communicate the aspects of that design.

Design for Life

Soundview, Bronx, NY


The underlying theme of my last post, “Babies, Bathwater, and Software Architects”, was that it’s necessary to understand the role of a software architect in order to understand the need for that role. If our understanding of the role is flawed, not just missing aspects of what the role should be focusing on, but also largely consisting of things the role should not be concerned with, then we can’t really effectively determine whether the role is needed or not. If a person drowning is told that an anvil is a life-preserver, that doesn’t mean that they don’t need a life-preserver. It does mean they need a better definition of “life-preserver”.

Ruth Malan, answering the question “Do we still need architects?”, captures the essence of what is unique about the concerns of the software architect role:

There’s paying attention to the structural integrity of the code when that competes with the urge to deliver value and all we’ve been able to accomplish in terms of the responsiveness of continuous integration and deployment. We don’t intend to let the code devolve as we respond to unfolding demands, but we have to intend not to, and that takes time — possibly time away from the next increment of user-perceived value. There’s watching for architecturally impactful, structurally and strategically significant decisions, and making sure they get the attention, reflection, expertise they require. Even when the team periodically takes stock, and spends time reflecting learning back into the design/code, the architect’s role is to facilitate, to nurture, the design integrity of the system as a system – as something coherent. Where coherence is about not just fit and function, but system properties. Non-trivial, mutually interacting properties that entail tradeoffs. Moreover, this coherence must be achieved across (microservice, or whatever, focused) teams. This takes technical know-how and know-when, and know-who to work with, to bring needed expertise to bear.

These system properties are crucial, because without a cohesive set of system properties (aka quality of service requirements), the quality of the system suffers:

Even if the parts are perfectly implemented and fly in perfect formation, the quality is still lacking.

A tweet from Charles T. Betz points out the missing ingredient:

Now, even though Frederick Brooks was writing about the architecture of hardware, and the nature of users has drastically evolved over the last fifty-four years, his point remains: “Architecture must include engineering considerations, so that the design will be economical and feasible; but the emphasis in architecture is upon the needs of the user, whereas in engineering the emphasis is upon the needs of the fabricator.”

In other words, habitability, the quality of being “livable”, is a critical condition for an architecture. Two systems providing the exact same functionality may not be equivalent. The system providing the “better” (from the user’s perspective) quality of service will most likely be seen as the superior system. In some cases, quality of service concerns can even outweigh functional concerns.

Who, if not the software architect, is looking out for the livability of your system as a whole?

Babies, Bathwater, and Software Architects

'Fixing Problems' - XKCD 1739

I try to be disciplined about my writing (picking themes, creating a backlog, collecting notes and links on those topics, etc.), but it seems like serendipity won’t be denied, no matter what I do. On the same day that XKCD published this cartoon, Erik Dietrich published “Software Architect as a Developer Pension Plan”. While I agree with a lot that Erik says in the post, I think he also gets into a “throwing out the baby with the bathwater” situation by dismissing the need for the software architect role, part of which the cartoon illustrates.

Before I go any further, I do need to point out that I’ve been following and interacting with Erik for about four years now. It’s a friendly disagreement, meant to be more debate than argument.

In the post, Erik makes the point (which I largely agree with) that companies see the role of software architect through a Taylorist lens. They see the software architect as a “thinker”, meant to drive a herd of “doers”:

The modern corporation, like Taylor before it, loosely divides into three categories of humans: owners/executives, managers, and grunts. Owners own and charge executives with executing their will. Executives delegate to managers. Managers think and assemble the workers into org charts designed to optimize efficiency. Workers keep their simple heads down and do as they’re told.

Theoretically, proponents would say, this applies to any domain and type of work. Corporations form themselves into these pyramids without considering any other options, whether they’re private military outfits, gigantic manufacturing facilities, or companies designing and selling software. Of course, the reality turns out to be that not all operations do well splitting thinking and labor. Let’s pick one at random. Oh, I dunno, say, writing software.

We try it. We hire junior developers to be directed by senior ones. And all of them fall in line under the watchful guise of a “tech lead,” who defers to a council of architects. And somewhere at the center of the organizational maelstrom stands The Lead Architect, directing elaborate bucket marches, water flows, and all sorts of magic, like Mickey Mouse in Fantasia. It’s a beautiful, cascading symphony of work flowing from the cleverest down to the simplest. Or… at least, that’s how it goes on paper.

Erik rightly argues that this model is fatally flawed. This is doubly so in the case of the example he gives, an individual offered a position as a software architect doing the thinking for a set of offshore “commodity” developers. Offshore development can be done well (that’s a post I’ll save for another day), but not using that model. Where Erik goes wrong, in my opinion, is in adopting this flawed definition of the software architect role.

Developers should be perfectly competent to do their own thinking. If they’re not, no software architect will be able to help. Even if that person were capable of handling the load of providing detailed directions for several developers, doing so would prevent them from attending to their own areas of concern. It would require being able to anticipate and make all the functional design decisions up front, as well as communicating them to multiple “typists” quicker than they could mindlessly hammer out the code, while still having time to consider all the requisite cross-cutting, quality of service considerations for an application. Do we really need to bother with experiments to determine that this is beyond improbable?

There is a separation between the developer role and software architect role, but it needs to be clearly understood as a difference in focus. A developer’s focus is more vertical, concentrating on implementing functionality while maintaining/improving/not harming the quality of service (aka the horrendously misnamed “non-functional”) aspects of the system. A software architect’s focus should be more horizontal, concentrating on the architecturally significant quality of service aspects that will enable the system to meet the needs of its stakeholders (or, at least, not unreasonably prevent the system from doing so).

The difference between the roles is a matter of thinking at different levels of abstraction, not of one being superior to the other. The two mindsets are both necessary and complementary. A high-quality architectural design without quality implementation cannot produce a high-quality system. Likewise, where there is implementation without architectural coordination, the quality of the resulting system is a matter of chance. For very small systems maintained by small, permanent teams, it may not be necessary for these roles to be distinct. In my experience, however, dealing with both the broadly general and the very specific at the same time scales poorly. Non-trivial systems quickly come to require architectural leadership, otherwise people are left fixing problems caused by problems fixed previously.

Leadership has been the main theme of almost all of my posts this month and has figured prominently in much of what I’ve written this year. Many of these posts are assigned to the “Architectural Practice” category. This is because I absolutely believe that the software architect role is a leadership role. That being said, I don’t consider the Taylorist model to be effective leadership practice. In fact, I consider it an anti-pattern.

I’ve long been an advocate of a more collaborative model of practice. Rather than dictating, the software architect’s role should be one of consensus building and communication. Significant architectural disagreement indicates an issue with one or more members of the team. Significant uncertainty about the architecture indicates an issue with the software architect role.

Effective systems require both breadth and depth of effectiveness. This holds true for software systems as much as social systems. For the social systems producing software systems for use by other social systems (i.e. software development teams), it is imperative.

Form Follows Function on SPaMCast 403

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 403, features Tom’s essay on Agile practices at scale, Kim Pries on transformations, and a Form Follows Function installment based on my post “NPM, Tay, and the Need for Design”.

Although the specific controversies have died down since we recorded the segment, the issues remain. Designs that fail to account for foreseeable issues are born vulnerable. Fixing problems after they “emerge” can be much more expensive than a little defensive design.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

When Will We Learn?

Plato's Academy mosaic from Pompeii

We’ve all heard the sayings about history repeating. Did we pay attention? Did we actually hear what was said, or were we just in the room when it was mentioned? Did we learn anything?

Greger Wikstrand and I have been trading posts on innovation for more than seven months. His last post, “Black hat innovation”, touched on the dark side of innovation:

I think the following are good examples of black hat innovation in the digital space: credit card fraud, ransomware and identity theft. There are many other black hat innovations that does not rely on tech such as chain letters, counterfeit money and even weighted dice.

Greger noted “Sometimes, it might seem as if black hat innovation out paces white hat innovation.” Certainly, a black hat innovator faces fewer barriers to innovation. Following rules is less of a consideration when breaking rules. We also have to be aware of the advantage we give them when we fail to learn from the past. None of the abuse cases that Greger mentioned are black swans. They’re merely new ways to commit old crimes. Waiting to react to a foreseeable issue means we start from behind because we failed to learn.

Failure to learn can manifest in a variety of ways. In his post, “Enterprise challenges”, Peter Murchland noted:

All enterprises face challenges (of varying magnitude and complexity). These challenges are either problems to be solved or opportunities to be pursued. One way of considering these challenges is through the following three lenses – those arising due to:

  • external change
  • internal change
  • growth and development

Peter asserts that the health of an enterprise depends on integrating coherent responses to these challenges into its architecture. This is a position shared by Aaron Dignan in “How To Eliminate Organizational Debt”:

Organizational Debt: The interest companies pay when their structure and policies stay fixed and/or accumulate as the world changes.

Let’s unpack that. As time passes, companies create roles, structures, rules, policies, and other norms that become fixed, and often, difficult to change. This is by design. For example, a company’s travel budget may balloon one year, only to be restricted by a travel policy the next — a well intentioned control designed to reduce expense. If that policy starts costing more than it’s saving (e.g. by reducing commercial success due to a lack of face time, frustrating top talent, etc.), it becomes an unacknowledged debt. The “interest” comes in the form of reduced speed, capacity, engagement, flexibility, and innovation that ultimately undermine the macro objectives of the firm: to survive, thrive, and achieve its purpose.

In other words, failure to learn leads to failure to adapt to a changing context, internal and/or external. This mismatch between the enterprise and its context represents a destructive friction. Since an enterprise is a social system and the components of a social system are people, the effects of this friction erode morale. Disengaged, demotivated employees will hinder an organization’s ability to deliver effectively. The only question is how much of a hindrance it will be.

In my post “Learning to Deal with the Inevitable”, I talked about the value of a learning culture for an organization. Effective execution requires effective decision-making, which requires learning. If we haven’t learned from the past, then there’s no rational basis for our future actions. We’re winging it solely on the basis of hope.

An organization is unlikely to develop a learning culture by accident. Just as a tornado hitting an auto parts store is unlikely to spit out a sports car, it’s unlikely that a system of sensing and adjusting to an ever-changing environment will emerge without intentional action. The social system that is the enterprise has a design and requires design, much the way an automated system has a design and requires design. Part of that design should incorporate learning.

Different organizations will have different needs and different styles of learning. Getting groups of people together to play with technology (as described by Matt Ballantine in his post “The Art of Play”) may not work for every organization. However, it is shortsighted to make no provision for learning whatsoever. Even the British Army in World War I, a conservative institution with an aristocratic officer corps in a conflict that can be described as 19th century warfare with 20th century weapons, “…developed a number of different methods to disseminate knowledge, catering for a variety of different circumstances and needs”. With a hundred years’ worth of extra experience and technology, it would be hard to justify taking a less rigorous approach.