Organizations as Systems and Innovation

Portrait of Gustavus Adolphus of Sweden

Over the last year or so, the concept of looking at organizations as systems has been a major theme for me. Enterprises, organizations, and their ecosystems (their context) are social systems composed of a fractal set of social and software systems. As such, enterprises have an architecture.

Another long-term theme for this site has been my conversation with Greger Wikstrand regarding innovation. This post is the thirty-fifth entry in that series.

So where do these two intersect? And why is there a picture of a Swedish king from four hundred years ago up there?

Innovation, by its very nature (“…significant positive change”), does not happen in a vacuum. Greger’s last post, “Innovation arenas and outsourcing”, illustrates one aspect of this. Shepherding ideas into innovations is a deliberate activity requiring structural support. Being intentional doesn’t turn bad ideas into innovations, but lack of a system can cause an otherwise good idea to wither on the vine.

Another intersection, the one I’m focusing on here, can be found in the nature of innovation itself. It’s common to think of technological innovation, but innovation can also be found in changes to organizational structure and processes (e.g. Henry Ford and the assembly line). Organization, process, and technology are not only areas for innovation, but when coupled with people, form the primary elements of an enterprise architecture. It should be clear that the more these elements are intentionally coordinated toward a specific goal, the more cohesive the effort will be.

This brings us to Gustavus Adolphus of Sweden. In his twenty years on the throne, he converted Sweden into a major power in Europe. Militarily, he upended the European status quo in a very short time (after intervening in the Thirty Years’ War in 1630, he was killed in battle in 1632) by marshaling organizational, procedural, and technological innovations:

The Swedish army stood apart from its contemporaries through five characteristics. Its soldiers wore uniform and had a nucleus of native Swedes, raised from a surprisingly diplomatic system of conscription, at its core. The Swedish regiments were small in comparison to their opponents and were lightly equipped for speed. Each regiment had its own light and mobile field artillery guns called ‘leathern guns’ that were easy to handle and could be easily manoeuvred to meet sudden changes on the battlefield. The muskets carried by these soldiers were of a type superior to that in general use and allowed for much faster rates of fire. Swedish cavalry, instead of galloping up to the enemy, discharging their pistols and then turning around and galloping back to reload, ruthlessly charged with close quarter weapons once their initial shot had been expended. By analysing this paradigm it becomes apparent that the army under Gustavus emphasized speed and manoeuvrability above all – this greatly set him apart from his opponents.

By themselves, none of the innovations were original to Gustavus. Combining them, however, was original, and European military practice was irrevocably changed as a result. Inflection points can be dependent on multiple technologies catching up with one another (since the future is “…not very evenly distributed”), but in this case the pieces were all in place. The catalyst was someone with the vision to combine them, not random chance.

Emergence will be a factor in any complex system. That being said, the inevitability of those emergent events does not invalidate intentional design and planning. If anything, design and planning are all the more necessary to deal with the mundane, foreseeable things in order to leave more cognitive capacity for that which can’t be foreseen.

Monolithic Applications and Enterprise Gravel

Pebbles

It’s been almost a year since I’ve written anything about microservices, and while a lot has been said on that subject, it’s one I still monitor to see what new ideas pop up. The opening of a blog post that I read last week caught my attention:

Coined by Melvin Conway in 1968, Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” In software development terms, Conway’s Law suggests that a given team will build apps that mirror the team’s organizational structure. Siloed functional teams produce siloed application architectures.

The result is a monolith: A massive application whose functionality is crammed into a few crowded parts. Scaling a simple pattern to the enterprise level often results in a monolith.

None of this is wrong, per se, but in reading it, one could come to a wrong conclusion. Siloed functional teams (particularly where the culture of the organization encourages siloed business units) produce siloed application architectures that are most likely monoliths. From an enterprise IT architecture perspective, though, the result is not monolithic. Googling the definition of “monolithic”, we get this:

mon·o·lith·ic
/ˌmänəˈliTHik/
adjective
  1. formed of a single large block of stone.
  2. (of an organization or system) large, powerful, and intractably indivisible and uniform.
    “rejecting any move toward a monolithic European superstate”
    synonyms: inflexible, rigid, unbending, unchanging, fossilized
    “a monolithic organization”

Rather than “a single large block of stone”, we get gravel. The architecture of the enterprise’s IT isn’t “large, powerful, and intractably indivisible and uniform”. It may well be large, but its power in relation to its size will be lacking. Too much effort is wasted reinventing wheels and maintaining redundant data (most likely with no real sense of which set of data is authoritative). Likewise, while “intractably indivisible” isn’t a virtue, being intractable while also lacking cohesion is worse. Such an IT architecture is a foundation built on shifting sand. Lastly, whether the EITA is uniform or not (and I would give good odds that it’s not) is irrelevant given the other negative aspects. Under the circumstances, worrying about uniformity would be like worrying about whether the superstructure of the Titanic had a fresh paint job.

Does this mean that microservices are the answer to having an effective EITA? Hardly.

There are prerequisites for being able to support a microservice architecture; table stakes, if you will. However, the service-oriented mindset can be of value whether it’s applied as far down as the intra-application level (i.e. microservices – it is an application architecture pattern) or inter-application (the more traditional SOA). Where the line is drawn depends on the context of the application(s) and their ecosystem. What can be afforded and supported are critical aspects of the equation at all levels.
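
As a rough sketch of what that mindset looks like in code (the names and the HTTP shape are my own assumptions, not taken from any particular system), the same service contract can be satisfied in-process or across the network; the mindset is identical, but the operational costs are not:

```java
// A rough sketch (hypothetical names, not from any particular codebase) of the
// same service-oriented contract applied at two granularities: in-process
// (intra-application) and over the network (inter-application).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

interface QuoteService {
    String quoteFor(String customerId);
}

// Intra-application: a plain in-process implementation.
class LocalQuoteService implements QuoteService {
    @Override
    public String quoteFor(String customerId) {
        return "quote-for-" + customerId; // stand-in for real pricing logic
    }
}

// Inter-application: the same contract fulfilled over HTTP. The caller now
// inherits network failure modes, latency, and versioning concerns.
class RemoteQuoteService implements QuoteService {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl;

    RemoteQuoteService(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public String quoteFor(String customerId) {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/quotes/" + customerId))
                    .build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new IllegalStateException("remote quote service unavailable", e);
        }
    }
}

public class QuoteServiceDemo {
    public static void main(String[] args) {
        QuoteService quotes = new LocalQuoteService();
        System.out.println(quotes.quoteFor("42"));
        // Swapping in the remote variant changes the operational profile,
        // not the contract:
        // QuoteService remote = new RemoteQuoteService("http://pricing.example:8080");
    }
}
```

Consumers depend on QuoteService either way; what changes is the set of failure modes and the operational burden that must be affordable and supportable at the chosen granularity.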

What is necessary for an effective EITA is a full-stack approach. Governance and data architecture in particular are important aspects to consider. The goal is consistent, intentional alignment across all levels (enterprise, EITA, solution, and application), promoting a cohesive architecture throughout, not a top-down dictatorship.

Large edifices that last are built from smaller pieces that fit together on purpose.

Design for Life

Soundview, Bronx, NY

The underlying theme of my last post, “Babies, Bathwater, and Software Architects”, was that it’s necessary to understand the role of a software architect in order to understand the need for that role. If our understanding of the role is flawed, not just missing aspects of what the role should be focusing on, but also largely consisting of things the role should not be concerned with, then we can’t effectively determine whether the role is needed or not. If a person drowning is told that an anvil is a life-preserver, that doesn’t mean that they don’t need a life-preserver. It does mean they need a better definition of “life-preserver”.

Ruth Malan, answering the question “Do we still need architects?”, captures the essence of what is unique about the concerns of the software architect role:

There’s paying attention to the structural integrity of the code when that competes with the urge to deliver value and all we’ve been able to accomplish in terms of the responsiveness of continuous integration and deployment. We don’t intend to let the code devolve as we respond to unfolding demands, but we have to intend not to, and that takes time — possibly time away from the next increment of user-perceived value. There’s watching for architecturally impactful, structurally and strategically significant decisions, and making sure they get the attention, reflection, expertise they require. Even when the team periodically takes stock, and spends time reflecting learning back into the design/code, the architect’s role is to facilitate, to nurture, the design integrity of the system as a system – as something coherent. Where coherence is about not just fit and function, but system properties. Non-trivial, mutually interacting properties that entail tradeoffs. Moreover, this coherence must be achieved across (microservice, or whatever, focused) teams. This takes technical know-how and know-when, and know-who to work with, to bring needed expertise to bear.

These system properties are crucial, because without a cohesive set of system properties (aka quality of service requirements), the quality of the system suffers:

Even if the parts are perfectly implemented and fly in perfect formation, the quality is still lacking.

A tweet from Charles T. Betz points out the missing ingredient:

Now, even though Frederick Brooks was writing about the architecture of hardware, and the nature of users has drastically evolved over the last fifty-four years, his point remains: “Architecture must include engineering considerations, so that the design will be economical and feasible; but the emphasis in architecture is upon the needs of the user, whereas in engineering the emphasis is upon the needs of the fabricator.”

In other words, habitability, the quality of being “livable”, is a critical condition for an architecture. Two systems providing the exact same functionality may not be equivalent. The system providing the “better” (from the user’s perspective) quality of service will most likely be seen as the superior system. In some cases, quality of service concerns can even outweigh functional concerns.

Who, if not the software architect, is looking out for the livability of your system as a whole?

Accidental Innovation?

Hillside Slum

From my very first post, I’ve been writing on the subject of “accidental architecture”, which is also sometimes confused with “emergence”. From the picture on the right (which I used previously on a post titled “Accidental Architecture”), it should be easy to infer what my opinion is in regard to the idea that a coherent system can “emerge” via a Darwinian process (at least absent millions of years and a great many extinct evolutionary dead-ends).

This is the thirteenth installment of an ongoing conversation Greger Wikstrand and I have been having about architecture, innovation, and organizations as systems (a list of previous posts can be found at the bottom of the page). In his last post, “Worthless ideas and valuable innovation”, Greger made the point that having ideas is not valuable in and of itself, but being able to turn them into useful innovation is. Triage is vital:

So how do we find the innovation needle in the haystack of ideas? How do we avoid being overwhelmed by all the hay? How do we turn worthless ideas into valuable innovation? Sadly, today the answer is more often than not that we try to “eat all the hay”. We try to implement as many ideas as possible. Sooner or later, often in IT, there is a bottleneck and a huge queue of initiatives build up. “We’ll put that on the backlog”, is the new way of saying “that’ll never happen”.

The answer is to rely on empiricism, short feedback cycles and making small bets. Lean portfolio management has many of the answers, but just as with any idea it is worthless until it is implemented.

Many things can impact our ability to implement. Process, structure, and technology are all important, but people are the key ingredient. There is no silver bullet that we can buy or build. Without the people who provide the intuition, experience and judgement, we are lacking a critical component in the system. It’s no accident that my first post in the “Organizations as Systems” category (written before I ever had an “Innovation” tag on the site) quoted Tom Graves’ “Dotting the joins (the JEA version)”:

Every enterprise is a system – an ‘ecosystem with purpose’ – constrained mainly by its core vision, values and other drivers. Within that system, everything ultimately connects with everything else, and depends on everything else: if it’s in the system, it’s part of the system, and, by definition, the system can’t operate without it.

People provide that purpose (along with the judgement, intuition, experience, etc.). Process, structure, and technology can enhance their efforts, but can just as easily get in the way. The difference between enhancing and impeding seems too important to leave to chance. When the people involved are intentional about their purpose, the scales are tipped. Otherwise, we’re left hoping for a happy accident.

Previous posts in this series:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation; organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time where innovation yielded uneven results because effective innovation requires technology, understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued with the same theme, the social and technological aspects of innovation.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing on in the same vein, Greger points out that resistance to change is futile (though probably inevitable). He quotes a professor of his that asserted that you can’t change people or groups, thus you have to change the organization.
  9. “Changing Organizations Without Changing People” – I followed up on Greger’s post, agreeing that enterprise architectures must work “with the grain” of human nature and that culture is “walking the walk”, not just “talking the talk”.
  10. “Developing the ‘innovation habit’” – Greger talks about creating an intentional, collaborative innovation program.
  11. “Innovation on Tap” – I responded to Greger’s post by discussing the need for collaboration across an organization as a structural enabler of innovation. Without open lines of communication, decisions can be made without a feel for customer wants and needs.
  12. “Worthless ideas and valuable innovation” – Greger makes the point that ideas, by themselves, have little or no worth. It’s one thing to have an idea, quite another to be able to turn it into a valuable innovation.

[Shanty Town Image by Otsogey via Wikimedia Commons.]

Who Needs Architects? Well, Nobody Needs this Kind

The question above came up while recording SPaMCast 357 with Tom Cagley, and it’s an extremely important one. The post we were discussing, “Who Needs Architects? Because YAGNI Doesn’t Scale”, is one of many I’ve written on the need for architectural design in software development. While I’m firmly convinced that the need is real, it should also be realized that there is a real danger in unilaterally imposing the design on a team.

Tom’s question about an “aristocracy of architects” was taken from his post “Re-Read Saturday: The Mythical Man-Month, Part 4 Aristocracy, Democracy and System Design”, part of a series in which he is reviewing Frederick Brooks’ The Mythical Man-Month. In the essay reviewed in this post, “Aristocracy, Democracy and System Design”, Brooks discussed the importance and value of conceptual integrity (i.e. a cohesive, unified design) to software systems. While I agree wholeheartedly that both architectural design and someone (or more than one someone) responsible for that design are necessary, I disagree that establishing an aristocracy is beneficial or even necessary. In fact, the portion labeled “Reality” on the graphic in Kelly Abuelsaad’s tweet below, although talking about imposter syndrome, also illustrates why dictating design can be a bad idea.

One can certainly influence, even control the architecture of a system via a mandate. The problem with being given control is that no one can give effectiveness to go with it. As such, it’s brittle, subject to the limitations of the person given the authority and the compliance of those implementing the system. This brittleness exists even when the architect stays within their level of detail. Combining a dictatorial style with Big Design Up Front all but ensures failure.

In my experience, a participative, collaborative style of design yields better designs. In addition to benefiting from a variety of skills and experience, it also engenders greater understanding and ownership across the team. Arrogance, on the other hand, can be costly.

I firmly believe that a product will benefit from having someone whose focus is the cross-cutting, architecturally significant concerns. I also believe that part of that job is teaching and mentoring as well as listening to the rest of the team so that architectural awareness permeates the entire team. There are many aspects to being an architect, but being a dictator should not be one of them.

Updated 4/8/2016 to fix a broken link.

Microservices, Monoliths, and Conflicts to Resolve

Two tweets, opposites in position, and both have merit. Welcome to the wonderful world of architecture, where the only rule is that there is no rule that survives contact with reality.

Enhancing resilience via redundancy is a technique with a long pedigree. While microservices are a relatively recent and extreme example of this, they’re hardly groundbreaking in that respect. Caching, mirroring, load-balancing, and the like have been with us a long, long time. Redundancy is a path to high availability.

Centralization (as exemplified by monolithic systems) can be a useful technique for simplification, increasing data and behavioral integrity, and promoting cohesion. Like redundancy, it’s a system design technique older than automation. There was a reason that “all roads lead to Rome”. Centralization provides authoritativeness and, frequently, economies of scale.

The problem with both techniques is that neither comes without costs. Redundancy introduces complexity in order to support distributing changes between multiple points and reconciling conflicts. Centralization constrains access and can introduce a single point of failure. Getting the benefits without incurring the costs remains an open problem.
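
To make that reconciliation cost concrete, here is a minimal, entirely hypothetical sketch (assuming a last-write-wins policy) of two replicas reconciling conflicting updates to the same record:

```java
// A minimal, hypothetical sketch of the reconciliation cost that redundancy
// introduces: two replicas of the same record diverge, and some policy
// (here, last-write-wins) must decide which update survives.
import java.time.Instant;

record VersionedValue(String value, Instant writtenAt) {}

class Replica {
    private VersionedValue current;

    void write(String value) {
        current = new VersionedValue(value, Instant.now());
    }

    VersionedValue read() {
        return current;
    }

    // Merge state from a peer. Last-write-wins is simple, but it can silently
    // discard an update; that loss is the hidden cost of the redundancy.
    void mergeFrom(Replica peer) {
        VersionedValue theirs = peer.read();
        if (theirs != null &&
                (current == null || theirs.writtenAt().isAfter(current.writtenAt()))) {
            current = theirs;
        }
    }
}

public class RedundancyDemo {
    public static void main(String[] args) throws InterruptedException {
        Replica a = new Replica();
        Replica b = new Replica();

        a.write("shipped");   // one update lands on replica A
        Thread.sleep(5);      // ensure distinct timestamps
        b.write("cancelled"); // a conflicting update lands on replica B

        a.mergeFrom(b);       // reconciliation: replica A's write is lost
        System.out.println(a.read().value()); // prints "cancelled"
    }
}
```

Last-write-wins silently discards replica A’s update; more faithful policies (vector clocks, CRDTs, manual merge) buy correctness at the price of still more complexity. Centralization avoids the reconciliation entirely, at the cost of the single point of failure noted above.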

The essence of architectural design is decision-making. Given that those decisions will involve costs as well as benefits, both must be taken into account to ensure that, on balance, the decision achieves its aims. Additionally, decisions must be evaluated in the greater context rather than in isolation. As Tom Graves is fond of saying, “things work better when they work together, on purpose”.

This need for designs to not only be internally optimal, but also optimized for their ecosystem means that these, as well as other principles, transcend the boundaries between application architecture, enterprise IT architecture, and even enterprise architecture. The effectiveness of this fractal architecture of systems of systems (both automated and human) is a direct result of the appropriateness of the decisions made across the full range of the organization to the contexts in play.

Since there is no one context, no rule can suffice. The answer we’re looking for is neither “microservice” nor “monolith” (or any other one tactic or technique), but fit to purpose for our context.

Who Needs Architects? Because Complexity Emerges

Why would you want to constrain creativity by controlling (Note: controlling does not necessarily imply dictating) the architecture of a system?

As Roger Sessions recently tweeted:

Another reason came out in an exchange between Roger and me:

Simplicity (“…as simple as possible, but not simpler”) is certainly a desirable system quality. Unnecessary complexity directly impairs maintainability and can indirectly affect qualities such as testability, security, extensibility, availability, and ease of deployment just to name a few. The coherent structure necessary to create and maintain simplicity when multiple people are involved is unlikely to happen accidentally. When it’s lacking, the result is often what Brian Foote and Joseph Yoder described in their 1999 classic “Big Ball of Mud”:

A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition.

Sixteen years later, this tendency to entropy via unnecessary complexity remains an issue. As Roger Sessions recently noted on his blog:

As IT systems increase in size, three things happen. Systems get more vulnerable to security breaches. Systems suffer more from reliability issues. And it becomes more expensive and time consuming to try to make modifications to those systems. This should not be a surprise. If you are like most IT professionals, you have seen this many times.

What you have probably not noticed is an underlying pattern. These three undesirable features invariably come in threes. Insecure systems are invariably unreliable and difficult to modify. Secure systems, on the other hand, are also reliable and easy to modify.

This tells us something important. Vulnerability, unreliability, and inflexibility are not independent issues; they are all symptoms of one common disease. It is the disease that is the problem, not the symptoms.

Kent Beck, in a post on Facebook, “Taming Complexity with Reversibility”, recently noted the same:

As a system scales, whether it is a manufacturing plant or a service like ours, the enemy is complexity. If you don’t confront complexity in some way, it will eat you. However, complexity isn’t a blob monster, it has four distinct heads.

  • States. When there are many elements in the system and each can be in one of a large number of states, then figuring out what is going on and what you should do about it grows impossible.
  • Interdependencies. When each element in the system can affect each other element in unpredictable ways, it’s easy to induce harmonics and other non-linear responses, driving the system out of control.
  • Uncertainty. When outside stresses on the system are unpredictable, the system never settles down to an equilibrium.
  • Irreversibility. When the effects of decisions can’t be predicted and they can’t be easily undone, decisions grow prohibitively expensive.

If you have big ambitions but don’t address any of these factors, scale will wreck your system at some point.

Kent’s conclusion, “What changes–technical, organizational, or business–would you have to make to identify such decisions earlier and make reversing them routine?”, implies that addressing these factors requires a systemic rather than a localized response.
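
As one small, hypothetical illustration of attacking the irreversibility head (a common tactic, not Beck’s specific example), a feature flag turns a risky decision into one that can be reversed at runtime rather than via an expensive rollback:

```java
// A small, hypothetical illustration of one reversibility tactic: gating a
// risky change behind a runtime flag so the decision can be undone cheaply.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class FeatureFlags {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    void set(String name, boolean enabled) {
        flags.put(name, enabled);
    }

    boolean isEnabled(String name) {
        return flags.getOrDefault(name, false);
    }
}

public class CheckoutService {
    private final FeatureFlags flags;

    public CheckoutService(FeatureFlags flags) {
        this.flags = flags;
    }

    public String price(String sku) {
        // The new pricing engine is the "decision"; the flag makes it reversible.
        if (flags.isEnabled("new-pricing-engine")) {
            return newPricing(sku);
        }
        return legacyPricing(sku);
    }

    private String newPricing(String sku) { return "new-price:" + sku; }
    private String legacyPricing(String sku) { return "old-price:" + sku; }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags();
        CheckoutService checkout = new CheckoutService(flags);

        flags.set("new-pricing-engine", true);    // roll the decision forward
        System.out.println(checkout.price("A1")); // new-price:A1

        flags.set("new-pricing-engine", false);   // reverse it at runtime
        System.out.println(checkout.price("A1")); // old-price:A1
    }
}
```

The point isn’t the flag mechanism itself, but that the system was structured so that reversing the decision is routine rather than prohibitively expensive.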

The counter-argument that’s frequently made is that the architecture can “emerge” from the implementation by doing the “simplest thing that can possibly work” and building on that. I touched briefly on this in my last post. To that, I’d add that not all changes are the same. Trying to bolt on fundamental, cross-cutting concerns (e.g. security, scalability, etc.) after the fact risks major (i.e. expensive) architectural refactoring. This type of refactoring is generally a hard sell and understandably so. That difficulty makes the Big Ball of Mud that much more likely.
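
As a minimal sketch of the difference a little foresight makes (the names here are mine, invented for illustration), a cross-cutting concern like authorization can be layered onto an anticipated seam without touching existing consumers:

```java
// A minimal sketch (hypothetical names) of why anticipating a cross-cutting
// concern is cheaper than bolting it on: because callers depend on the
// OrderRepository interface, an authorization check can be layered in as a
// decorator without touching existing call sites.
interface OrderRepository {
    String findOrder(String orderId);
}

class DatabaseOrderRepository implements OrderRepository {
    @Override
    public String findOrder(String orderId) {
        return "order-" + orderId; // stand-in for a real lookup
    }
}

// The cross-cutting concern, added later without refactoring consumers.
class AuthorizingOrderRepository implements OrderRepository {
    private final OrderRepository inner;
    private final String currentUserRole;

    AuthorizingOrderRepository(OrderRepository inner, String currentUserRole) {
        this.inner = inner;
        this.currentUserRole = currentUserRole;
    }

    @Override
    public String findOrder(String orderId) {
        if (!"agent".equals(currentUserRole)) {
            throw new SecurityException("not authorized to read orders");
        }
        return inner.findOrder(orderId);
    }
}

public class SeamDemo {
    public static void main(String[] args) {
        OrderRepository repo =
                new AuthorizingOrderRepository(new DatabaseOrderRepository(), "agent");
        System.out.println(repo.findOrder("42"));
    }
}
```

Had consumers been coded directly against the concrete repository, adding the same check would mean changes at every call site: exactly the expensive, after-the-fact refactoring described above.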

This does not mean that the concept of emergence has no place in software architecture. As the various contexts making up the architecture of the problem are identified, challenges will emerge and need to be reconciled. With this naturally occurring emergence, it makes little sense to artificially generate challenges by refusing to “peek” at what’s ahead. As Ruth Malan has observed, architecture should be both “intentional and emergent”:

This need to contend with emergent issues continues for the entire lifetime of the system:

I’ve noted in the past that both planning and design share similarities. Regardless of the task, an appropriate design/plan provides a coherent direction that enhances the likelihood of success. Without it, Joe Dager’s question below is directly on point: