Systems Thinking Complicates Things

[4th UK Rock Paper Scissors Championships photo by James Bamber via Wikimedia.]

I’ve had the honor and pleasure of appearing as a regular on Tom Cagley’s SPaMCast podcast for almost three years now. Before I write one of my “Form Follows Function on SPaMCast x” posts, I always listen to the podcast to make sure that the summary is right (the implication being, relying purely on my memory won’t be). I got a bonus while writing up last week’s appearance, because Tom asked an excellent question that deserved its own post: does thinking about a problem (legacy systems, in the case of last week’s discussion) holistically/systemically complicate things?

Abso-freakin’-lutely.

It is much easier to avoid all the twists and turns and possibilities inherent in systems thinking. A simpler approach, picking one lever to pull/one button to push, makes it much easier to come up with a solution.

It just doesn’t work very well at coming up with solutions that actually work.

When there is a mismatch in complexity between problem and solution architectures, this mismatch will be an additional problem to deal with. This will apply when the solution is more complex than the problem space warrants and when the opposite is the case. Solutions that fail to account for the context they will encounter are vulnerable. This is the idea behind the quote attributed to Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

Human nature can push us to fix problems quickly, and quick will generally equate to simple. It takes time to analyze the angles and consider the alternatives. How often have you seen people ask for “the best” way to do something absent any context? How often have you seen people ask “why would someone ever do that?”

I’ll answer that by asking three questions:

  • since Rock beats Scissors, why would anyone ever choose Scissors?
  • since Paper beats Rock, why would anyone ever choose Rock?
  • since Scissors beats Paper, why would anyone ever choose Paper?

Reality isn’t binary. It’s not what’s “best”, it’s what’s fit for purpose in a given context and there are lots and lots of contexts out there.

This isn’t to say that all quick, simple interventions are wrong. If you find yourself in a house fire, more action and less comprehensive deliberation may well be in order. The key is matching the cost (largely in terms of time) of defining the problem space with the cost (in terms of both effort and risk that the intervention adds to the problem) of crafting the solution.

Rock, Paper, Scissors, Lizard, Spock rules diagram
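
For the programmatically inclined, a minimal sketch (my own illustration, not part of the diagram or the original post) makes the point concrete: in the expanded game, every throw beats exactly two others and loses to exactly two others, so there is no context-free “best” choice.

```python
# Sketch of the Rock, Paper, Scissors, Lizard, Spock dominance relationships.
BEATS = {
    "Rock":     {"Scissors", "Lizard"},   # Rock crushes Scissors and Lizard
    "Paper":    {"Rock", "Spock"},        # Paper covers Rock, disproves Spock
    "Scissors": {"Paper", "Lizard"},      # Scissors cuts Paper, decapitates Lizard
    "Lizard":   {"Spock", "Paper"},       # Lizard poisons Spock, eats Paper
    "Spock":    {"Scissors", "Rock"},     # Spock smashes Scissors, vaporizes Rock
}

for choice, victims in BEATS.items():
    losses = {other for other, beaten in BEATS.items() if choice in beaten}
    print(f"{choice:8} beats {sorted(victims)}, loses to {sorted(losses)}")

# Every throw wins twice and loses twice: which one is "fit for purpose"
# depends entirely on the context, i.e., what the opponent throws.
```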

It’s almost guaranteed that the system contexts we deal with (both technical and social) will evolve toward more and more complexity. Surprises will emerge as a matter of course. We don’t need to make more by failing to take a more holistic view when we have the time to do so.


I fought the law (of unintended consequences) and the law won

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, a lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses to issues that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on the Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing, but obsolete field in the database. Using a field labeled “Fax” for an entirely different purpose is far from “best practice”, but following the rules would lead to being seen as responsible for delaying a release. This is an example of a moral hazard, such as Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, potential for abuse abounds. This risk becomes particularly likely when the person taking shortcuts can claim a “moral” rationale for doing so (such as “getting it done” for the customer).
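
As a purely hypothetical sketch of the anti-pattern (the table and column names below are invented, not taken from the article), consider what the “reused” field looks like to the next consumer of the schema:

```python
import sqlite3

# An obsolete "fax" column gets quietly repurposed to hold a customer tier,
# because adding a proper column would mean weeks of change-control paperwork.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, fax TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Acme', 'GOLD')")  # a tier, not a fax number

# Elsewhere, code written against the schema's *stated* meaning:
fax = conn.execute("SELECT fax FROM customer WHERE id = 1").fetchone()[0]
if not fax.replace("-", "").isdigit():
    print(f"Cannot dial {fax!r}: the schema lied to us.")
```

The shortcut “works” on release day; the bill arrives later, payable by whoever trusted the data dictionary.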

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence because it’s so easy to conflate argument over means with disagreement regarding the ends. If you’re not in favor of being strip-searched on arrival and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will likely resonate for many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of methods, then that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, with the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their incidence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or even just clumsy design like the change management process described above is irrelevant. In all of those cases, the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.

Babies, Bathwater, and Software Architects

'Fixing Problems' - XKCD 1739

I try to be disciplined about my writing (picking themes, creating a backlog, collecting notes and links on those topics, etc.), but it seems like serendipity won’t be denied, no matter what I do. On the same day that XKCD published this cartoon, Erik Dietrich published “Software Architect as a Developer Pension Plan”. While I agree with a lot that Erik says in the post, I think he also gets into a “throwing out the baby with the bathwater” situation by dismissing the need for the software architect role, part of which the cartoon illustrates.

Before I go any further, I do need to point out that I’ve been following and interacting with Erik for about four years now. This is a friendly disagreement, meant to be more debate than argument.

In the post, Erik makes the point (which I largely agree with) that companies see the role of software architect through a Taylorist lens. They see the software architect as a “thinker”, meant to drive a herd of “doers”:

The modern corporation, like Taylor before it, loosely divides into three categories of humans: owners/executives, managers, and grunts. Owners own and charge executives with executing their will. Executives delegate to managers. Managers think and assemble the workers into org charts designed to optimize efficiency. Workers keep their simple heads down and do as they’re told.

Theoretically, proponents would say, this applies to any domain and type of work. Corporations form themselves into these pyramids without considering any other options, whether they’re private military outfits, gigantic manufacturing facilities, or companies designing and selling software. Of course, the reality turns out to be that not all operations do well splitting thinking and labor. Let’s pick one at random. Oh, I dunno, say, writing software.

We try it. We hire junior developers to be directed by senior ones. And all of them fall in line under the watchful guise of a “tech lead,” who defers to a council of architects. And somewhere at the center of the organizational maelstrom stands The Lead Architect, directing elaborate bucket marches, water flows, and all sorts of magic, like Mickey Mouse in Fantasia. It’s a beautiful, cascading symphony of work flowing from the cleverest down to the simplest. Or… at least, that’s how it goes on paper.

Erik rightly argues that this model is fatally flawed. This is doubly so in the case of the example he gives, an individual offered a position as a software architect doing the thinking for a set of offshore “commodity” developers. Offshore development can be done well (that’s a post I’ll save for another day), but not using that model. Where Erik goes wrong, in my opinion, is in adopting this flawed definition of the software architect role.

Developers should be perfectly competent to do their own thinking. If they’re not, no software architect will be able to help. Even if that person were capable of handling the load of providing detailed directions for several developers, doing so would prevent them from attending to their own areas of concern. It would require being able to anticipate and make all the functional design decisions up front, as well as communicating them to multiple “typists” quicker than they could mindlessly hammer out the code, while still having time to consider all the requisite cross-cutting, quality of service considerations for an application. Do we really need to bother with experiments to determine that this is beyond improbable?

There is a separation between the developer role and software architect role, but it needs to be clearly understood as a difference in focus. A developer’s focus is more vertical, concentrating on implementing functionality while maintaining/improving/not harming the quality of service (aka the horrendously misnamed “non-functional”) aspects of the system. A software architect’s focus should be more horizontal, concentrating on the architecturally significant quality of service aspects that will enable the system to meet the needs of its stakeholders (or, at least, not unreasonably prevent the system from doing so).

The difference between the roles is a matter of thinking at different levels of abstraction, not of one being superior to the other. The two mindsets are both necessary and complementary. A high-quality architectural design without quality implementation cannot produce a high-quality system. Likewise, where there is implementation without architectural coordination, the quality of the resulting system is a matter of chance. For very small systems maintained by small, permanent teams, it may not be necessary for these roles to be distinct. In my experience, however, dealing with both the broadly general and the very specific at the same time scales poorly. Non-trivial systems quickly come to require architectural leadership, otherwise people are left fixing problems caused by problems fixed previously.

Leadership has been the main theme of almost all of my posts this month and has figured prominently in several of this year’s posts. Many of these posts are assigned to the “Architectural Practice” category. This is because I absolutely believe that the software architect role is a leadership role. That being said, I don’t consider the Taylorist model to be effective leadership practice. In fact, I consider it an anti-pattern.

I’ve long been an advocate of a more collaborative model of practice. Rather than dictating, the software architect’s role should be one of consensus building and communication. Significant architectural disagreement indicates an issue with one or more members of the team. Significant uncertainty about the architecture indicates an issue with the software architect role.

Effective systems require both breadth and depth of effectiveness. This holds true for software systems as much as social systems. For the social systems producing software systems for use by other social systems (i.e. software development teams), it is imperative.

All Aboard the Innovation Band Wagon?

Bandwagon

It seems like everyone wants to be an innovator nowadays. Being “digital” is in – never mind what it means, you’ve just got to be “digital”. Being innovative, however, is more than being buzzword-compliant. Being innovative, particularly in a digital sense, means solving problems (for customers, not yourself) in a new way with technology. Being innovative means meeting a need in a sustainable way (eventually you have to make money). Being innovative means understanding your strategy, not just following the latest thing.

Casimir Artmann published a post this week, “Digital is not enough”, outlining Kodak’s failures in the digital photography space. As digital cameras entered the market, Kodak introduced ways to turn film into digital images. Kodak’s move into digital photography (which, ironically, they invented in 1975), coincided with the rise of camera phones. By concentrating more on perpetuating their film product line than their customers’ needs, Kodak wound up chasing the trend and losing out.

Customers’ cash follows products that meet customer needs (even needs that they didn’t know they had).

Sometimes a product or service can meet a need and still fail. A Business Insider article yesterday morning discussed the weakness of the peer-to-peer foreign exchange business model, saying it only works in “fair weather”. In the article, Richard Kimber, CEO of the foreign exchange company OFX Group, observes:

When you’ve got currency moving dramatically one way or the other, what you can have happen is it encourages asymmetric activity. As we saw in Brexit, you had lots and lots of sellers and very few buyers. That can lead to an inability to transact because you simply have all these sellers lined up and no buyers. That’s one of the reasons why the peer-to-peer players opted out of their model during this period of volatility because it wouldn’t have been sustainable.

While Brexit might be the latest event to expose the weakness of the peer-to-peer model, it’s not the first. The Business Insider article referenced another article from January on The Memo that made the same point. Small wonder: the concept of a market maker is a well-established component of financial markets.
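
A toy model (entirely mine, with made-up numbers, not anything from either article) illustrates why pure peer-to-peer matching stalls when flow turns one-sided, while a market maker keeps trades flowing by absorbing the imbalance onto its own book:

```python
def p2p_match(buys: int, sells: int) -> int:
    """Peer-to-peer: an order only executes if an offsetting peer exists."""
    return min(buys, sells)

def with_market_maker(buys: int, sells: int) -> int:
    """A market maker takes the other side of every order (carrying the risk)."""
    return buys + sells

# "Fair weather": roughly balanced flow, both models work.
print(p2p_match(100, 95), with_market_maker(100, 95))   # -> 95 195
# Brexit-style flow: everyone sells, almost no one buys.
print(p2p_match(3, 500), with_market_maker(3, 500))     # -> 3 503
```

In the balanced case the peer-to-peer model executes nearly everything; in the one-sided case, 497 would-be sellers are simply stuck.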

Disintermediation, cutting out the “middleman”, is only innovative when the “middleman” is, or can be made, superfluous.

Blindly following a trend can be another innovation anti-pattern. In an article for the Wharton Business School, “Rethinking Retail: When Location Is a Liability”, the authors discussed the pressures on brick and mortar retailers and the need to be “Digital-first”. The following was recommended:

  1. Identify some of your common habits and perspectives about how the retail sector should function, including guiding principles, time and capital allocation patterns, primary skills and capabilities, and the key metrics and outcomes that you track.
  2. Uncover the core beliefs about retailing that motivate your behaviors, and are the priorities of your firm and board. This step usually takes some ongoing reflection and added perspective from your peers. Industry best practices likely influence your thinking greatly.
  3. Invert your core beliefs about retailing and consider the implications for your firm and board. There are many possible inversions in each instance. For example, all retailers should ask themselves, ‘Is digital our first priority? How about our customer network — do we put them in front of merchandise and do we have an entire department dedicated to mobilizing them?’
  4. Extrapolate what implications these new core beliefs, and the various ripple effects, would have for your organization and board. Observe what is happening in your industry and, more broadly, how different core beliefs might help you get ahead of digital disruption by companies like Amazon.
  5. Act on your new retail core beliefs (preferably with digital as the center) by sharing them broadly with your customers, employees, suppliers and investors. Purposely changing your business actions, particularly when it comes to time and capital allocation, is an important part of the process and helps reinforce the changes in mental models you are trying to achieve.

Note the generous usage of “your” (retailer) instead of “their” (customer). Sharing “…your new retail core beliefs (preferably with digital as the center)…” with your customers will only be fruitful if those new beliefs align with those the customer has or can be convinced to adopt. Retail is a very broad segment and a very large part of it needs to be digital. That being said, over-focusing on it carries risk as well. Convenience stores, for example, catering to a “we’re out and need it now” market, are unlikely to benefit from a digital-first strategy in the same way big-box retailers might. Not having a one-size-fits-all strategy is why Amazon is opening physical stores.

We don’t drive customer behavior. We provide opportunities that hopefully make it more likely for them to choose us.

Innovation doesn’t come from a recipe. Digital isn’t the magic secret sauce for everything. Change occurs, but at different speeds in different areas. The future is not evenly distributed. As Joanna Young observed in “Obsolescence: Take With Grain Of Salt”:

I recall clearly in the mid-1990s hearing an executive say “by the year 2000, we will be paperless.” I signed, with a pen, four approval forms just today. Has technology failed us? No. The technology exists to make mailboxes obsolete and signatures purely ceremonial. However the willingness to change behavior and ergo retire old methods is up to humans, not technology.

Innovation is significant positive change, an improvement in our customers’ lives, not a recipe.

Abstract Dangers – When ‘And’ Meets ‘Or’

There’s an old saying that if you put one foot in a bucket of ice and the other in a bucket of boiling water, on average you’re comfortable. Sometimes analyzing information in the aggregate obscures rather than enlightens.

A statistician named Francis Anscombe pointed out this same principle in a more visual (though less colorful) manner more than forty years ago, with what’s now known as Anscombe’s quartet: four datasets with nearly identical summary statistics but radically different shapes when plotted.
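
A few lines of code make the point even without the graphs (the values are Anscombe’s published quartet; note that `statistics.correlation` requires Python 3.10+):

```python
from statistics import mean, variance, correlation

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (xs, ys) in quartet.items():
    print(f"{name:>3}: mean(x)={mean(xs):.2f}  var(x)={variance(xs):.2f}  "
          f"mean(y)={mean(ys):.2f}  var(y)={variance(ys):.2f}  "
          f"r={correlation(xs, ys):.3f}")

# All four datasets print essentially identical aggregates (means of 9.00 and
# ~7.50, r of ~0.816), yet plotted they look nothing alike: a tidy line, a
# curve, an outlier-skewed line, and a vertical stack. The aggregate obscures.
```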

It’s an idea that I’ve been meaning to write about for a while, but was brought back to mind last week while reading an article on the Austrian school of economic theory posted on a site about medical practice and health care in the U.S. (diversity of interests and a very broad reading list is something I find useful, but that’s a topic for another day). The relevant passage:

When Ludwig von Mises began to establish a systematic theory of economics, he insisted on what he called the principle of methodological dualism: the scientific methods of the hard sciences are great to study rocks, stars, atoms, and molecules, but they should not be applied to the study of human beings. In stating this principle, he was voicing opposition to the introduction into economics of concepts such as “market equilibrium,” which were largely inspired by the physical sciences, and were perhaps motivated by a desire on the part of some economists to establish their field as a science on par with physics.

Mises remarked that human beings distinguish themselves from other natural things by making intentional (and usually rational) choices when they act, which is not the case for stones falling to the ground or animals acting on instinct. The sciences of human affairs therefore deserve their own methods and should not be tempted to apply the tools of the physical sciences willy-nilly. In that respect, Mises agreed with Aristotle’s famous dictum that “It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits.”

I find myself agreeing and disagreeing with this at the same time. Human behavior is far from being as predictable as gravity, and I agree with the first paragraph for exactly the reason I disagree (at least in part) with the second: it is a mistake to characterize human action as uniformly intentional and rational. That’s not to say that all our choices are irrational and reactionary, but that there is a blend. Not only will different people respond in different ways to the same circumstance for different reasons, but the same person may react differently, with different motivations, on another occasion. Human nature isn’t rigidly deterministic, and we consider it so at our peril.

Tom Graves’ post “Control, complex, chaotic” makes the same observation:

Attempting to ‘control’ complexity just doesn’t work: we need to treat the complex as complex, not as a ‘controllable problem for which we don’t quite know all the rules (but will know them all Real Soon Now, honest…)’.

Yet I’m also noticing another deeper problem: misguided attempts to apply complexity-theory to things that are neither rule-bound control nor pattern-based complexity, but are inherently ‘chaotic’ – a ‘market-of-one’. Although we can identify definite patterns in health and health-care – that’s the whole basis of epidemiology, for example – neither rules nor statistics can help us deal with the blunt fact that everyone is different. The kind of patterns that we’d use in a complexity-model – probabilities, Bell-curve distributions, outliers, all that kind of thing – can all too easily mask the real underlying fact of uniqueness, from which that supposed ‘pattern’ will actually arise: somewhat like the barely-visible deep-randomness that underlies the visible patterns of Brownian-motion.

Trying to force something into a mold which it doesn’t fit is unlikely to work well.

Abstraction can be useful in understanding the contexts that influence the architecture of the problem. Designing an effective solution, however, will involve not just integrating the concerns of those contexts, but also dealing with any emergent challenges. The variability of human nature (in other words, sometimes the members of those contexts will not all think and act alike) can be one such emergent challenge.

Tom Graves again, this time from his “On mass-uniqueness”:

In practice, the scope of every system will comprise a mix of sameness and uniqueness – of predictable and unpredictable, certain and uncertain. If we design only on an assumption of sameness – as IT-systems often are – we set ourselves up for guaranteed failure. The same applies if – as is all too common – we say that our IT-system will handle all of the ‘sameness’ part of the context, and that the ‘not-sameness’ will be Somebody Else’s Problem – without giving any means for that supposed ‘somebody else’ to be able to address the rest of the problem, or to link it up with the parts of the context that our system does handle.

The first requirement to make something that works in the real-world is to design for uniqueness, not against it.

In other words, a solution based on a poorly understood problem is unlikely to be a good fit. Abstraction is one tool for understanding the problem, but it doesn’t provide the whole picture. Shades of gray (black and white) are more likely than black or white.

Innovation – What’s Old can be New Again

Roman Road Ruins

There’s an old rhyme about what a bride should wear for luck on her wedding day: “Something old, something new, something borrowed, something blue…”. While reading an article on the origins of the US highway system, I thought about this rhyme in relation to the concept of innovation. Part of that article related the US Army’s interest in highways as a means of moving troops, etc.:

When the war ended in 1919, the Army sent an expedition from Camp Meade, Maryland, to San Francisco to study the feasibility of moving troops and equipment by truck, and it was a disaster. At an average speed of seven miles per hour, the trip took two months, and many of the vehicles broke down along the way. The rough journey convinced the officer in charge—a young Dwight D. Eisenhower—that impassable roads were a risk to national security and that building good roads should be the highest priority for the nation.

During World War II, Eisenhower’s views were confirmed by the Allies’ use of the autobahns in the invasion of Germany (ironically, the German military had been much less enthusiastic about their military potential, preferring the older, more established railroad system). Eisenhower would go on to spearhead the construction of the Interstate Highway System when elected President after the war.

While the construction and use of high-quality roads for military purposes was an innovation, it certainly wasn’t a new development. The Romans had “been there, done that” long before. It became innovative again due to the fact that the social and technological context had shifted, making it more effective than the previous transport innovation, railroads.

There’s an old saying that “history does not repeat itself, but it rhymes”. Recognizing when something old has regained its relevance can be a path to innovation. In my post “Innovation on Tap”, I talked about Amazon’s plan to open physical bookstores and embrace the concept of “showrooming”, something that traditional retailers have been struggling with for years. Other companies are jumping aboard. What I find interesting is that this concept was the business model of a company headquartered in my home town. They operated from 1957 to 1997, when they went bankrupt. Now, less than twenty years later, that model is seen as innovative. Once again, it’s the shift in social and technological contexts that has changed it into an effective solution. It’s the value that makes it innovative.

This is part 17 of an ongoing conversation with Greger Wikstrand on the topic of innovation.

[Roman Road image by Phr61 via Wikimedia Commons.]

Ignorance Isn’t Bliss, Just Good Tactics

Donkey

There’s an old saying about what happens when you assume.

The fast lane to asininity seems to run through the land of hubris. Anshu Sharma’s Tech Crunch article, “Why Big Companies Keep Failing: The Stack Fallacy”, illustrates this:

Stack fallacy has caused many companies to attempt to capture new markets and fail spectacularly. When you see a database company thinking apps are easy, or a VM company thinking big data is easy — they are suffering from stack fallacy.

Stack fallacy is the mistaken belief that it is trivial to build the layer above yours.

Why do people fall prey to this fallacy?

The stack fallacy is a result of human nature — we (over)value what we know. In real terms, imagine you work for a large database company and the CEO asks, “Can we compete with Intel or SAP?” Very few people will imagine they can build a computer chip just because they can build relational database software, but because of our familiarity with the building blocks of the layer up, it is easy to believe you can build the ERP app. After all, we know tables and workflows.

The bottleneck for success often is not knowledge of the tools, but lack of understanding of the customer needs. Database engineers know almost nothing about what supply chain software customers want or need.

This kind of assumption can cost an ISV a significant amount of money and a lot of good will on the part of the customer(s) they attempt to disrupt. Assumptions about the needs of the customer (rather than the customer’s customer) can be even more expensive. The smaller your pool of customers, the more damage that’s likely to result. Absent a captive customer circumstance, incorrect assumptions in the world of bespoke software can be particularly costly (even if only in terms of good will). Even comprehensive requirements are of little benefit without the knowledge necessary to interpret them:

But, that being said:

This would seem to pose a dichotomy: domain knowledge as both something vital and an impediment. In reality, there’s no contradiction. As the old saying goes, “a little knowledge is a dangerous thing”. When we couple that with another cliche, “familiarity breeds contempt”, we wind up with Sharma’s stack fallacy, or as xkcd expressed it:

'Purity' on xkcd.com

In order to create and evolve effective systems, we obviously have a need for domain knowledge. We also have a need to understand that what we possess is not domain knowledge per se, but domain knowledge filtered through (and likely adulterated by) our own experiences and biases. Without that understanding, we risk what Richard Martin described in “The myopia of expertise”:

In the world of hyperspecialism, there is always a danger that we get stuck in the furrows we have ploughed. Digging ever deeper, we fail to pause to scan the skies or peer over the ridge of the trench. We lose context, forgetting the overall geography of the field in which we stand. Our connection to the surrounding region therefore breaks down. We construct our own localised, closed system. Until entropy inevitably has its way. Our system then fails, our specialism suddenly rendered redundant. The expertise we valued so highly has served to narrow and shorten our vision. It has blinded us to potential and opportunity.

The Clean Room pattern on CivicPatterns.org puts it this way:

Most people hate dealing with bureaucracies. You have to jump through lots of seemingly pointless hoops, just for the sake of the system. But the more you’re exposed to it, the more sense it starts to make, and the harder it is to see things through a beginner’s eyes.

So, how do we get those beginner’s eyes? Or, at least, how do we get closer to having a beginner’s eyes?

The first step is to reject the assumption that we already understand the problem space. Lacking innate understanding, we must then do the hard work of determining the architecture of the problem, our context. As Paul Preiss noted, this doesn’t happen at a desk:

Architecture happens in the field, the operating room, the sales floor. Architecture is business technology innovation turned to strategy and then executed in reality. Architecture is reducing the time it takes to produce a barrel of oil, decreasing mortality rates in the hospital, increasing product margin.

Being willing to ask “dumb” questions is just as important. Perception without validation may be just an assumption. Seeing isn’t believing. Seeing, and validating what you’ve seen, is believing.

It’s equally important to understand that validating our assumptions goes beyond just asking for requirements. Stakeholders can be subject to biases and myopic viewpoints as well. While it’s true that Henry Ford’s customers would probably have asked for faster horses, it’s also true that, in a way, that’s exactly what he delivered.

We earn our money best when we learn what’s needed and synthesize those needs into an effective solution. That learning is dependent on communication unimpeded by our pride or prejudice:

The Seductive Myth of Greenfield Development

Greger Wikstrand’s tweet from earlier this week packed a wealth of inspiration into one image:

The second statement particularly resonated with me: “The present is built on the past.”

How often do we, or those around us, long for a chance to do things “from scratch”? The idea being that, without the constraints of “legacy” code, we could do things “right”. While it’s a nice idea, it has no basis in reality.

Rewrites, of course, will involve dealing with existing data. I’ve yet to encounter a system where no one was interested in the data when it was replaced. I’ve shut down a few where there was no interest, but that’s a different story. The need for that existing data will serve as a potent influence on what can or cannot be done with the replacement system. Likewise, its structure. It’s not reasonable to assume that the data will be any less “legacy” than the code.
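
A hypothetical sketch of how that plays out (the conventions below are invented for illustration): the “greenfield” rewrite still has to understand every quirk the legacy data carries with it.

```python
from datetime import date

# Legacy conventions the old system baked into its data: "no end date" was
# stored as 9999-12-31, and a region code was packed into the account number.
LEGACY_OPEN_ENDED = date(9999, 12, 31)

def migrate_contract(legacy_row: dict) -> dict:
    """Map a legacy record into the new model without corrupting its meaning."""
    acct = legacy_row["acct_no"]
    end = legacy_row["end_date"]
    return {
        "region": acct[:2],    # the prefix still matters to someone downstream
        "account": acct[3:],   # strip the "NE-" style prefix for the new model
        "end_date": None if end == LEGACY_OPEN_ENDED else end,
    }

print(migrate_contract({"acct_no": "NE-10042", "end_date": date(9999, 12, 31)}))
# -> {'region': 'NE', 'account': '10042', 'end_date': None}
```

Miss either convention and the new system silently treats open-ended contracts as expiring on New Year’s Eve 9999, which is exactly the kind of “legacy” the rewrite was supposed to escape.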

We might be tempted to believe that brand new systems escape this pitfall. In doing so, we fail to consider that new systems still must deal with the wants, needs, and attitudes of their stakeholders. People, processes, and organization form the ecosystem that new systems must fit into as surely as replacement systems must.

A crucial part of problem solving is having an adequate understanding of the problem. Everything has a backstory. Understanding the backstory is dependent on understanding the ecosystem the thing fits into. This is what Sullivan was talking about when he said “…form ever follows function”.

Nothing’s Ex Nihilo.

We Deliver Decisions (Who Needs Architects?)

Broken Window

What do medicine, situational awareness, economics, confirmation bias, and value all have to do with the architectural design of software systems?

Quite a lot, actually. To connect the dots, we need to start from the point of view that the architecture is essentially a set of design decisions intended to solve a problem. The architecture of that problem consists of a set of contexts. The fitness of a solution architecture will depend on how well it addresses the problem architecture. While challenges will emerge in the course of resolving a set of contexts, understanding up front what can be known provides reserves to deal with what cannot.

About two weeks ago, during a Twitter discussion with Greger Wikstrand, I mentioned that the topic (learning via observational studies rather than controlled experiment) coincided with a post I was publishing that day, “First Do No Harm – the Practice of Software Development” (comparing software development to medicine). That triggered the following exchange:

A few days later, I stumbled across a reference to Frédéric Bastiat’s classic essay on economics, “What Is Seen and What Is Not Seen”. For those that aren’t motivated to read the work of 19th century French economists, it deals with the concepts of opportunity costs and the law of unintended consequences via a parable that attacks the notion that broken windows benefit the economy by putting glaziers to work.

A couple more days went by and Greger posted “Confirmation bias in software engineering” on the pitfalls of being too willing to believe information that conforms to our own preferences. That same day, I posted “Let’s Talk Value (Who Needs Architects?)”, discussing the effects of perception on determining value. Matt Ballantine made a comment on that post, and coincidentally, “confirmation bias” came up again:

I think it’s always going to be a balance of expediency and pragmatism when it comes to architecture. And to an extent it relates back to your initial point about value – I’m not sure that value is *anything* but perception, no matter what logic might tell us. Think about the things that you truly value in your life outside of work, and I’d wager that few of them would fit neatly into the equation…

So why should we expect the world of work to be any different? The reality is that it isn’t, and we just have a fashion these days in business for everything to be attributable to numbers that masks what is otherwise a bunch of activities happening under the cognitive process of confirmation bias.

So when it comes to arguing the case for architecture, despite the logic of the long-term gain, short term expedience will always win out. I’d argue that architectural approaches need to flex to accommodate that… http://mmitii.mattballantine.com/2012/11/27/the-joy-of-hindsight/

The common thread through all this is cognition. Perceiving and making sense of circumstances, then deciding how best to respond. The quality of the decision(s) will be directly related to the quality of the cognition. Failing to take a holistic view (big picture and details, not either-or) will impair our perception of the problem, sabotaging our ability to design effective solutions. Our biases can lead to embracing fallacies like the one in Bastiat’s parable, but stakeholders will likely be sensitive to the opportunity costs of avoidable architectural refactoring (the unintended consequence of applying YAGNI at the architectural level). That sensitivity will color their perception of the value of the solution, and their perception is the one that counts.

Making the argument that you did well by costing someone money is a lot easier in the abstract than it is in reality.

One Weird Trick to Design Perfect Applications (?)

What’s the right way to design an application?

Scanning the web, one might be tempted to believe that all sites must use AngularJS… Backbone.js… Ember.js… some sort of JavaScript framework, that all services must be RESTful, and that if those services aren’t of the “micro” variety, well…

Is that really the case, though?

Is there a right way, or is it case of choosing the way which most closely matches our current context? We can say rules are rules, but experience will teach that rules lose meaning when divorced from the context they were formed in response to. There is no universal best practice. Without understanding the “why” behind a rule, you can’t determine whether it applies.

A conversation with John Evdemon captured this principle in regards to the latest application architecture “sensation” (AKA microservices):



A Kindle highlight shared by Tony DaSilva extends this principle to enterprise architecture:

There is no evidence that structure in and of itself affects the profitability of a company; different structures work best for different companies.

Meeting a need involves two architectures: the architecture of the problem and the architecture of the solution. Understanding the architecture of the problem is a necessary prerequisite to designing the architecture of the solution. Without that context providing definition of the desired end, imperfect though it may be, any proposed solution becomes a shot in the dark. Attempting to extend a technology or technique outside its range of utility risks harming its credibility (see SOA).

Thinking isn’t an option, it’s a requirement.