This is not a project

[Image: Gantt chart]

My apologies to René Magritte, as I appropriate his point, if not his iconic painting.

After I posted “Storming on Design”, it sparked a discussion with theslowdiyer around context and change. In that discussion, theslowdiyer commented:

‘you don’t adhere to a plan for any longer than it makes sense to.’
Heh, agree. I wonder if the “plan as a tool” vs. “plan as a goal in itself” discussion isn’t deserving of a post of its own 🙂

Indeed it is, even if it did take me nearly four months to get to it.

The key concept to understand is that the plan is not the goal, merely a stated intention of how to achieve the goal (if this causes you to suspect that the words “plan” and “design” could be substituted for each other without changing the point, move to the head of the class). Magritte’s painting stated that the picture is not the thing. The map is not the territory (and if that concept seems a bit self-evident, consider that Wikipedia deems it significant enough to devote over 1,700 words, not counting footnotes and links, to the topic).

Conflating plan and goal is a common problem. To illustrate the difference, consider undergoing an operation. Is it your desire that the surgeon perform the procedure as planned or that your problem gets fixed? In the former scenario, your survival is optional.

This is not, however, to say that planning (or design) is useless. The output of an effective planning/design process is critical. As Joanna Young noted in her “Four Signs of Readiness – Or Not”:

I’m all for consigning the traditional 50+ pages of adminis-trivia on scope, schedule, budget, risks that requires signing in blood to the dustbin. However no organization should forego the thoughtful and hard work on determining what needs to be done, why, how, by whom, for how much – and how this will all be governed and measured as it proceeds through sprints and/or waterfalls to delivery.

The information derived from the process (not the form, not the presentation, but the information) is a critical tool for moving forward intelligently. If you have no idea of what to do, how to do it, who can do it, when, and for how much, you are adrift. You’re starting a trip with no idea of whether the gas tank has anything in it. Conversely, attempting to achieve 100% certainty from the outset is a fool’s errand. For any endeavor, more will be known nearer the destination. Plans without “wiggle room” are of limited usefulness, as you will drift outside the cone of uncertainty from the start and never get back inside.

Having a reasonable idea of what the acceptable variance is helps determine when it’s time to abandon the current plan and go with a revised one. Planning and design are processes, not events or even phases. It’s a matter of continually monitoring context and whether our intentions are still in accordance with reality. Where they differ, reality wins. Always.

Execution isn’t blindly marching forward according to plan. It’s surfing the wave of context.

Square Pegs, Round Holes, and Silver Bullets

[Image: Werewolf]

People like easy answers.

Why spend time analyzing and evaluating when you can just take some thing or some technique that someone else has already put to use and be done with it?

Why indeed?

I mean, “me too” is a valid strategy, right?

And we don’t want people to get off message, right?

And we can always find a low cost, minimal disruption way of dealing with issues, right?

I mean, after all, we’ve got data and algorithms, and stuff:

The thing is, actions need to make sense in context. Striking a match is probably a good idea in the dark, but it’s probably less so in daylight. In the presence of gasoline fumes, it’s a bad idea regardless of ambient light.

A recent post on Medium, “Design Sprints Are Snake Oil”, is a good example. Erika Hall’s title was a bit click-baitish, but as she responded to one commenter:

The point is that the original snake oil was legitimate and effective. It ended up with a bad reputation from copycats who over-promised results under the same name while missing the essential ingredients.

Sprints are legitimate and effective. And now there is a lot of follow-up hype treating them as a panacea and a replacement for other types of work.

Good things (techniques, technologies, strategies, etc.) are “good”, not because they are innately right, but because they fit the context of the situation at hand. Those that don’t fit cease being “good” for that very reason. Form absent function is just a facade. Whether it’s business strategy, management technique, innovation efforts, or process, there is no recipe. The hard work to match the action with the context has to be done.

Imitation might be the sincerest form of flattery, but it’s a really poor substitute for strategy.

Emergence: Babies and Bathwater, Plans and Planning

[Image: Blueprints]

 

“Emergent” is a word that I run into from time to time. When I do run into it, I’m reminded of an exchange from the movie Gallipoli:

Archy Hamilton: I’ll see you when I see you.
Frank Dunne: Yeah. Not if I see you first.

The reason for my ambivalent relationship with the word is that it’s frequently used in a sense that doesn’t actually fit its definition. Dictionary.com defines it like this:

adjective

1. coming into view or notice; issuing.
2. emerging; rising from a liquid or other surrounding medium.
3. coming into existence, especially with political independence: the emergent nations of Africa.
4. arising casually or unexpectedly.
5. calling for immediate action; urgent.
6. Evolution. displaying emergence.

noun

7. Ecology. an aquatic plant having its stem, leaves, etc., extending above the surface of the water.

Most of the adjective definitions apply to planning and design (which I consider to be a specialized form of planning). Number 3 is somewhat tenuous for that sense, and 5 only applies sometimes, but 6 is dead on.

My problem, however, starts when it’s used as a euphemism for a directionless approach. The idea that a cohesive, coherent result will “emerge” from responding tactically (whether in software development or in managing a business) is, in my opinion, a dangerous one. I’ve never heard an explanation of how strategic success emerges from uncoordinated tactical excellence that doesn’t eventually come down to faith. It’s why I started tagging posts on the subject “Intentional vs Accidental Architecture”. Success that arises from lack of coordination is accidental rather than by design (not to mention ironic when the lack of intentional coordination or planning/design is itself intentional):

If you don’t know where you are going, any road will get you there.

 

The problem, of course, is whether you want to be at the “there” you wind up at. There’s also the issue of the cost associated with a meandering path when a more direct route was available.

None of this, however, should be taken as a rejection of emergence. In fact, a dogmatic attachment to a plan in the face of emergent facts is as problematic as pursuing an accidental approach. Placing your faith in a plan that has been invalidated by circumstances is as blinkered an approach as refusing to plan at all. Neither extreme makes much sense.

We lack the ability to foresee everything that can occur, but that limit does not mean we should ignore what we can foresee. A purely tactical focus can lead us down obvious blind alleys that will be more costly to back out of in the long run. Experience is an excellent teacher, but the tuition is expensive. In other words, learning from our mistakes is good, but learning from others’ mistakes is better.

Darwinian evolution can lead to some amazing things, provided you can spare millions of years and lots of failed attempts. An intentional approach allows you to tip the scales in your favor.


Many thanks to Andrew Campbell and Adrian Campbell for the multi-day twitter conversation that spawned this post. Normally, I unplug from almost all social media on the weekends, but I enjoyed the discussion so much I bent the rules. Cheers gentlemen!

Storming on Design

[Image from Wikimedia: VORTEX2 field command vehicle with tornado in sight. LaGrange, Wyoming.]

My youngest son has recently fallen in love with the idea of being a storm chaser when he gets older. Tweet storms are more my speed. There was an interesting one last week from Sarah Mei regarding the contextual nature of assessing design quality:

Context is a recurring theme for me. While the oldest post with that tag is just under three years old, a search on the term finds hits going all the way back to my second post in October 2011. Sarah’s tweets resonated with me because, in my opinion, ignoring context is a fool’s game.

Both encapsulation and silos are forms of separation of concerns. What differentiates the two is the context that makes the one a good idea and the other a bad idea. Without the context, you can come up with two mutually exclusive “universal” principles.
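
To make that concrete, here’s a minimal sketch (the pricing scenario and names are invented for illustration): the same act of separating concerns reads as encapsulation when one module owns a decision behind an interface, and as a silo when each team keeps its own disconnected copy of it.

```python
# Hypothetical sketch: the pricing scenario and names are invented for illustration.

# Encapsulation: one module owns the discount rule and exposes a narrow interface.
class PricingPolicy:
    """Callers see only quote(); how the discount is applied can change freely."""
    def __init__(self, discount_rate: float):
        self._discount_rate = discount_rate  # internal detail, hidden from callers

    def quote(self, list_price: float) -> float:
        return round(list_price * (1 - self._discount_rate), 2)


# Silo: two teams each keep a private copy of the "same" rule.
# The separation is just as real, but nothing keeps the copies consistent.
def sales_team_quote(list_price: float) -> float:
    return round(list_price * 0.90, 2)   # sales assumes a 10% discount

def web_team_quote(list_price: float) -> float:
    return round(list_price * 0.85, 2)   # the web store has drifted to 15%


if __name__ == "__main__":
    policy = PricingPolicy(0.10)
    print(policy.quote(100.0))        # 90.0 - one answer, one owner
    print(sales_team_quote(100.0))    # 90.0
    print(web_team_quote(100.0))      # 85.0 - same question, different answer
```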

A key component of architectural design is navigating the fractals that make up the contexts in which a system exists. Ruth Malan has had this to say regarding the importance of designing “outside the box”:

Russell Ackoff urged that to design a system, it must be seen in the context of the larger system of which it is part. Any system functions in a larger system (various larger systems, for that matter), and the boundaries of the system — its interaction surfaces and the capabilities it offers — are design negotiations. That is, they entail making decisions with trade-off spaces, with implications and consequences for the system and its containing system of systems. We must architect across the boundaries, not just up to the boundaries. If we don’t, we naively accept some conception of the system boundary and the consequent constraints both on the system and on its containing systems (of systems) will become clear later. But by then much of the cast will have set. Relationships and expectations, dependencies and interdependencies will create inertia. Costs of change will be higher, perhaps too high.

This interrelationship can be seen from the diagram taken from the same post:

[Image: System Context illustration, Ruth Malan]

It’s important to bear in mind that contexts are multi-dimensional. All but the very simplest of systems will likely have multiple types of stakeholders, leading to multiple, potentially conflicting contexts. Accounting for these contexts while defining the problem and while designing a solution appropriate to the problem space is critical to avoiding the high costs Ruth referred to above.

Another takeaway is that context can (and likely will) change over time. Whether it’s changes in terms of staffing (as Sarah noted) or changes in the needs of users or changes in technology, a design that was fit for yesterday’s context can become unfit for today’s and a disaster for tomorrow’s.

Pragmatic Application Architecture

I saw a tweet on Friday about a SlideShare deck that looked interesting, so I bookmarked it to read later. As I was reading it this morning, I found myself agreeing with the points being made. When I got to the next to the last slide, I found myself (or at least, this blog) listed alongside some very distinguished company under “Reading Material”.

Thanks, Bart Blommaerts and nice job!

Designing Communication, Communicating Design

[Image: The Simplest metamodel in the world ever!]

We work in a communications industry.

We create and maintain systems to move information around in order to get things done. That information moves between people and systems in combinations and configurations too numerous to count. In spite of that, we don’t do that great a job of communicating what should be, for us, extremely important information. We tend to be really bad at communicating the architecture of our systems – structure, behavior, and most importantly, the reasons for the decisions made. It’s bad enough when we fail to adequately communicate that information to others; it’s really bad when we fail to communicate it to ourselves. I know I’ve let myself down more than once (“What was I thinking here?!”).

Over the past few days, I’ve been privileged to follow (and even contribute a bit to) a set of conversations on Twitter. Grady Booch, Ivar Jacobson, Ruth Malan, Simon Brown, and others have been discussing the need for architectural awareness and the state of communicating architecture.

This exchange between Simon, Chris Carroll and Eoin Woods sums it up well:

First and foremost, an understanding of what the role of a software architect is and why it’s important is needed. Any organization where the role is seen as either just a senior developer or (heaven help us!) some sort of Taylorist “thinker” who designs everything for the “worker bee” coders to implement, is almost guaranteed to be challenged in terms of application architecture. Resting on that foundation of shifting sand, the organization’s enterprise IT architecture (EITA) is likewise almost guaranteed to be challenged barring a remarkable series of “happy accidents”. The role (not necessarily position) of software architect is required, because software architecture is a distinct set of concerns that can either be addressed intentionally or left to emerge haphazardly out of the construction of the system.

Before we can communicate the architecture of a system, it’s necessary to understand what that is. In “Software Architecture: Central Concerns, Key Decisions”, Ruth Malan and Dana Bredemeyer defined it as high impact, systemic decisions involving (at a minimum):

  • system priority setting
  • system decomposition and composition
  • system properties, especially cross-cutting concerns
  • system fit to context
  • system integrity

I don’t think it’s possible to over-emphasize the use of “system” and “systemic” in the preceding paragraphs. That being said, it’s important to understand that architectural concerns do not exist in a void. There is a cyclic relationship between the architectural concerns of a system and the system’s code. The architectural concerns guide the implementation, while the implementation defines the current state of the architecture and constrains the evolution of its future state. Code is a necessary, but insufficient, source of architectural knowledge. As Ruth Malan noted in the Visual Design portion (part II) of her presentation at the Software Architect Conference in London a year ago:

[Image: Slide from Ruth Malan's presentation on Visual Design]

While the code serves as the foundation of the system, it’s also important to realize that the system exists within a larger context. There is a fractal set of systems within systems within ecosystems. Ruth illustrated this in the Intention and Reflection portion (part III) of the presentation referenced above:

[Image: Slide from Ruth Malan's presentation on Intention and Reflection]

[Note: Take the time to view the entirety of the Intention and Reflection presentation. It’s an excellent overview of how to design the architecture of a system.]

The fractal nature of systems within systems within ecosystems is illustrated by the image at the top of the post (h/t to Ric Phillips for the reblog of it). Richard Sage’s humorous (though only partly, I’m sure) suggestion of it as a meta-model goes a long way towards portraying the problem of finding a language to communicate architecture.

Not only are we dealing with a nested set of “things”, but the understanding of those things differs according to the stakeholder. For example, while the business owner might see a “web site” as one monolithic thing, the architect might see an application made up of code components depending on other applications and services running on a collection of servers. Maintaining a coherent, normalized object model of the system while being able to present it in multiple ways (some of which might be difficult to relate) is not a trivial exercise.

Lower-level aspects of design lend themselves to automated solutions, which can increase the reliability of the model by avoiding “documentation rot”. An interesting (in my opinion) aspect that can also be automated is the evolution of code over time. What can’t be parsed from the code, however, is intention and reasoning.
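
As a rough illustration of the kind of automation that’s possible (a hypothetical sketch that assumes git is installed and the script is run inside a repository), summing per-file churn from the commit history gives a view of how the code has been evolving, though not why:

```python
# Hypothetical sketch (assumes git is installed and this runs inside a repository):
# sum per-file churn (lines added + deleted) across the commit history.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--numstat", "--format="],  # numstat lines only, no commit headers
    capture_output=True, text=True, check=True,
).stdout

churn = Counter()
for line in log.splitlines():
    parts = line.split("\t")
    if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
        added, deleted, path = int(parts[0]), int(parts[1]), parts[2]
        churn[path] += added + deleted

# The five most-churned files - a hint at where the design has been evolving.
for path, total in churn.most_common(5):
    print(f"{total:6d}  {path}")
```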

Another barrier to communication is the need to be both expressive and flexible (also well illustrated by Richard’s meta-model) while also being simple enough to use. UML works well on the former, but (rightly or wrongly) is perceived to fail on the latter. Simon Brown’s C4 model aims to achieve a better balance in that aspect.
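
To make the idea of one underlying model feeding multiple views a little more concrete, here’s a toy sketch loosely inspired by C4’s levels of abstraction (system context, containers, components). The names and structure are invented for illustration; this is not Simon Brown’s notation or tooling.

```python
# Illustrative only: a toy model loosely inspired by C4's levels of abstraction
# (system context, containers, components); not Simon Brown's notation or tooling.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    responsibility: str

@dataclass
class Container:
    name: str          # e.g. a web app, an API, a database
    technology: str
    components: List[Component] = field(default_factory=list)

@dataclass
class SoftwareSystem:
    name: str
    containers: List[Container] = field(default_factory=list)

@dataclass
class SystemContext:
    system: SoftwareSystem
    users: List[str] = field(default_factory=list)
    external_systems: List[str] = field(default_factory=list)

# One underlying model; different audiences get different views of it.
site = SoftwareSystem("Web Site", [
    Container("Storefront", "browser SPA",
              [Component("Catalog UI", "browse and search products")]),
    Container("Order API", "REST service",
              [Component("OrderService", "accept and validate orders")]),
    Container("Orders DB", "relational database"),
])
context = SystemContext(site, users=["Customer"], external_systems=["Payment Gateway"])

# Business-owner view: one named system. Architect view: its containers.
print(context.system.name)
print([c.name for c in site.containers])
```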

At present, I don’t think we have one tool that does it all. I suspect that even with a suite of tools, narrative documents will still be the way some aspects are captured and communicated. Having a centralized store for the non-code bits (with a way to relate them back to the code) would be a great thing.

All in all, it is encouraging to see people talking about the need for architectural design and the need to communicate the aspects of that design.

Design for Life

[Image: Soundview, Bronx, NY]

 

The underlying theme of my last post, “Babies, Bathwater, and Software Architects”, was that it’s necessary to understand the role of a software architect in order to understand the need for that role. If our understanding of the role is flawed, not just missing aspects of what the role should be focusing on, but also largely consisting of things the role should not be concerned with, then we can’t really effectively determine whether the role is needed or not. If a drowning person is told that an anvil is a life-preserver, that doesn’t mean that they don’t need a life-preserver. It does mean they need a better definition of “life-preserver”.

Ruth Malan, answering the question “Do we still need architects?”, captures the essence of what is unique about the concerns of the software architect role:

There’s paying attention to the structural integrity of the code when that competes with the urge to deliver value and all we’ve been able to accomplish in terms of the responsiveness of continuous integration and deployment. We don’t intend to let the code devolve as we respond to unfolding demands, but we have to intend not to, and that takes time — possibly time away from the next increment of user-perceived value. There’s watching for architecturally impactful, structurally and strategically significant decisions, and making sure they get the attention, reflection, expertise they require. Even when the team periodically takes stock, and spends time reflecting learning back into the design/code, the architect’s role is to facilitate, to nurture, the design integrity of the system as a system – as something coherent. Where coherence is about not just fit and function, but system properties. Non-trivial, mutually interacting properties that entail tradeoffs. Moreover, this coherence must be achieved across (microservice, or whatever, focused) teams. This takes technical know-how and know-when, and know-who to work with, to bring needed expertise to bear.

These system properties are crucial, because without a cohesive set of system properties (aka quality of service requirements), the quality of the system suffers:

Even if the parts are perfectly implemented and fly in perfect formation, the quality is still lacking.

A tweet from Charles T. Betz points out the missing ingredient:

Now, even though Frederick Brooks was writing about the architecture of hardware, and the nature of users has drastically evolved over the last fifty-four years, his point remains: “Architecture must include engineering considerations, so that the design will be economical and feasible; but the emphasis in architecture is upon the needs of the user, whereas in engineering the emphasis is upon the needs of the fabricator.”

In other words, habitability, the quality of being “livable”, is a critical condition for an architecture. Two systems providing the exact same functionality may not be equivalent. The system providing the “better” (from the user’s perspective) quality of service will most likely be seen as the superior system. In some cases, quality of service concerns can even outweigh functional concerns.

Who, if not the software architect, is looking out for the livability of your system as a whole?

Form Follows Function on SPaMCast 403

[Image: SPaMCast logo]

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 403, features Tom’s essay on Agile practices at scale, Kim Pries on transformations, and a Form Follows Function installment based on my post “NPM, Tay, and the Need for Design”.

Although the specific controversies have died down since we recorded the segment, the issues remain. Designs that fail to account for foreseeable issues are born vulnerable. Fixing problems after they “emerge” can be much more expensive than a little defensive design.
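
As a trivial, hypothetical illustration of what “a little defensive design” can look like (the function and rules below are invented for the example), rejecting bad input at the boundary is far cheaper than chasing the failures it causes later:

```python
# Hypothetical sketch of "a little defensive design": the rules below are invented
# for the example, but the idea is to reject bad input at the boundary rather than
# letting it surface as a failure (or an exploit) deep inside the system.
def set_display_name(name: str) -> str:
    if not isinstance(name, str):
        raise TypeError("display name must be a string")
    cleaned = name.strip()
    if not (1 <= len(cleaned) <= 40):
        raise ValueError("display name must be 1-40 characters")
    if any(ch in cleaned for ch in "<>\"'"):
        raise ValueError("display name may not contain markup characters")
    return cleaned

print(set_display_name("  Ada Lovelace  "))   # "Ada Lovelace"
```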

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Back to the OODA – Making Design Decisions

[Image: OODA loop diagram]

A few weeks back, in my post “Enterprise Architecture and the Business of IT”, I mentioned that I was finding myself drawn more and more toward Enterprise Architecture (EA) as a discipline, given its impact on my work as a software architect. Rather than a top-down approach, seeking to design an enterprise as a whole, I find myself backing into it from the bottom-up, seeking to ensure that the systems I design fit the context in which they will live. This involves understanding not only the technology, but also how it interacts, or not, with multiple social systems in order to (ideally) carry out the purpose of the enterprise.

Tom Graves is currently in the middle of a series on whole-enterprise architecture on his Tetradian blog. The third post of the series, “Towards a whole-enterprise architecture standard – 3: Method”, focuses on the need for a flexible design method:

But as we move towards whole-enterprise architecture, we need architecture-methods that can self-adapt for any scope, any scale, any level, any domains, any forms of implementation – and that’s a whole new ball-game…

He further states:

the methods we need for the kind of architecture-work we’re facing now and, even more, into the future, will need not only to work well with any scope, any scale and so on, but must have deep support for non-linearity built right into the core – yet in a way that fully supports formal rigour and discipline as well.

To begin answering the question “But where do we start?”, Tom looks at the Plan-Do-Check-Act (PDCA) cycle, which he finds wanting because:

… the reality is that PDCA doesn’t go anything like far enough for what we need. In particular, it doesn’t really touch all that much on the human aspects of change

This is a weakness that I mentioned in “OODA vs PDCA – What’s the Difference?”. PDCA, by starting with “Plan” and without any reference to context, is flawed in my opinion. Even if one argues that assessing the context is implied, leaving it to an implication fails to give it the prominence it deserves. In his post, Tom refers to ‘the squiggle’, a visualization of the design process:

[Image: Damien Newman's squiggle diagram]

In an environment of uncertainty (which pretty much includes anything humans are even peripherally involved with), exploration of the context space will be required to understand the gross architecture of the problem. In reconciling the multiple contexts involved, challenges will emerge and will need to be integrated into the architecture of the solution as well. This fractal and iterative exploratory process is well represented by ‘the squiggle’ and, in my opinion, well described by the OODA (Observe, Orient, Decide, and Act) loop.

In “Architecture and OODA Loops – Fast is not Enough”, I discussed how the OODA loop maps to this kind of messy, multi-level process of sense-making and decision-making. The “and” is extremely important in that while decision-making with little or no sense-making may be quick, it’s less likely to be effective due to the disconnect from reality. On the other hand, filtering and prioritizing (parts of the Orient component of the loop) is also needed to prevent analysis paralysis.

In my opinion, its recognition and handling of the tension between informed decision-making and quick decision-making makes OODA an excellent candidate for a design meta-method. It is heavily subjective, relying on context to guide appropriate response. That being said, an objective method that’s divorced from context imposes a false sense of simplicity with no real benefit (and very real risks).
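
As a rough sketch of one pass through such a loop (the signals, weights, and options below are invented placeholders, not a prescribed method), note that the Orient step filters and prioritizes before any decision gets made:

```python
# A rough, hypothetical sketch of one pass through an OODA-style loop; the signals,
# weights, and options are invented placeholders, not a prescribed method.
def ooda_step(signals, weigh, options):
    # Observe: raw signals from the context have already been gathered.
    # Orient: filter what matters and prioritize it - the sense-making step.
    oriented = sorted((s for s in signals if s["relevant"]), key=weigh, reverse=True)
    # Decide: pick the option that best fits what orientation revealed.
    choice = max(options, key=lambda opt: opt["fit"](oriented))
    # Act: the chosen action changes the context the next Observe will see.
    return choice["action"]

if __name__ == "__main__":
    signals = [
        {"name": "latency complaints", "relevant": True, "severity": 3},
        {"name": "office gossip", "relevant": False, "severity": 1},
        {"name": "new compliance rule", "relevant": True, "severity": 5},
    ]
    options = [
        {"action": "add caching",
         "fit": lambda ctx: sum(s["severity"] for s in ctx if "latency" in s["name"])},
        {"action": "redesign data handling",
         "fit": lambda ctx: sum(s["severity"] for s in ctx if "compliance" in s["name"])},
    ]
    print(ooda_step(signals, weigh=lambda s: s["severity"], options=options))
```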

Reality is messy; our design methodology should work with, not against, that.

[OODA Loop diagram by Patrick Edwin Moran via Wikimedia Commons]

Abstract Dangers – When ‘And’ Meets ‘Or’

There’s an old saying that if you put one foot in a bucket of ice and the other in a bucket of boiling water, on average you’re comfortable. Sometimes analyzing information in the aggregate obscures rather than enlightens.
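
To make the buckets example concrete, here’s a small sketch (with invented numbers) of two temperature series that share the same average while describing very different situations:

```python
# Invented numbers, for illustration only: two series with the same average
# that describe very different situations.
from statistics import mean, stdev

comfortable = [49, 50, 50, 51, 50]   # degrees - genuinely around 50
two_buckets = [0, 100, 0, 100, 50]   # one foot in ice, the other in boiling water

for label, temps in [("comfortable", comfortable), ("two buckets", two_buckets)]:
    print(f"{label:12} mean={mean(temps):5.1f}  stdev={stdev(temps):5.1f}")

# comfortable  mean= 50.0  stdev=  0.7
# two buckets  mean= 50.0  stdev= 50.0
```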

A statistician named Francis Anscombe pointed out this same principle in a more visual (though less colorful) manner more than forty years ago:

It’s an idea that I’ve been meaning to write about for a while, but was brought back to mind last week while reading an article on the Austrian school of economic theory posted on a site about medical practice and health care in the U.S. (diversity of interests and a very broad reading list is something I find useful, but that’s a topic for another day). The relevant passage:

When Ludwig von Mises began to establish a systematic theory of economics, he insisted on what he called the principle of methodological dualism: the scientific methods of the hard sciences are great to study rocks, stars, atoms, and molecules, but they should not be applied to the study of human beings. In stating this principle, he was voicing opposition to the introduction into economics of concepts such as “market equilibrium,” which were largely inspired by the physical sciences, and were perhaps motivated by a desire on the part of some economists to establish their field as a science on par with physics.

Mises remarked that human beings distinguish themselves from other natural things by making intentional (and usually rational) choices when they act, which is not the case for stones falling to the ground or animals acting on instinct. The sciences of human affairs therefore deserve their own methods and should not be tempted to apply the tools of the physical sciences willy-nilly. In that respect, Mises agreed with Aristotle’s famous dictum that “It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits.”

I find myself agreeing and disagreeing with this at the same time. Human behavior is far from being as predictable as gravity, and I agree with that for exactly the reason I disagree (at least in part) with the second paragraph: it is a mistake to characterize human action as uniformly intentional and rational. That’s not to say that all our choices are irrational and reactionary, but that there is a blend. Not only will different people respond in different ways to the same circumstance for different reasons, but the same person may react differently with different motivations on another occasion. Human nature isn’t rigidly deterministic, and we consider it so at our peril.

Tom Graves’ post “Control, complex, chaotic” makes the same observation:

Attempting to ‘control’ complexity just doesn’t work: we need to treat the complex as complex, not as a ‘controllable problem for which we don’t quite know all the rules (but will know them all Real Soon Now, honest…)’.

Yet I’m also noticing another deeper problem: misguided attempts to apply complexity-theory to things that are neither rule-bound control nor pattern-based complexity, but are inherently ‘chaotic’ – a ‘market-of-one’. Although we can identify definite patterns in health and health-care – that’s the whole basis of epidemiology, for example – neither rules nor statistics can help us deal with the blunt fact that everyone is different. The kind of patterns that we’d use in a complexity-model – probabilities, Bell-curve distributions, outliers, all that kind of thing – can all too easily mask the real underlying fact of uniqueness, from which that supposed ‘pattern’ will actually arise: somewhat like the barely-visible deep-randomness that underlies the visible patterns of Brownian-motion.

Trying to force something into a mold which it doesn’t fit is unlikely to work well.

Abstraction can be useful in understanding the contexts that influence the architecture of the problem. Designing an effective solution, however, will involve not just integrating the concerns of those contexts, but also dealing with any emergent challenges. The variability of human nature (in other words, sometimes the members of those contexts will not all think and act alike) can be one such emergent challenge.

Tom Graves again, this time from his “On mass-uniqueness”:

In practice, the scope of every system will comprise a mix of sameness and uniqueness – of predictable and unpredictable, certain and uncertain. If we design only on an assumption of sameness – as IT-systems often are – we set ourselves up for guaranteed failure. The same applies if – as is all too common – we say that our IT-system will handle all of the ‘sameness’ part of the context, and that the ‘not-sameness’ will be Somebody Else’s Problem – without giving any means for that supposed ‘somebody else’ to be able to address the rest of the problem, or to link it up with the parts of the context that our system does handle.

The first requirement to make something that works in the real-world is to design for uniqueness, not against it.

In other words, a solution based on a poorly understood problem is unlikely to be a good fit. Abstraction is one tool for understanding the problem, but it doesn’t provide the whole picture. Shades of gray (black and white) are more likely than black or white.