This is not a project

[Image: Gantt chart]

My apologies to René Magritte, as I appropriate his point, if not his iconic painting.

After I posted “Storming on Design”, it sparked a discussion with theslowdiyer around context and change. In that discussion, theslowdiyer commented:

‘you don’t adhere to a plan for any longer than it makes sense to.’
Heh, agree. I wonder if the “plan as a tool” vs. “plan as a goal in itself” discussion isn’t deserving of a post of its own 🙂

Indeed it is, even if it did take me nearly four months to get to it.

The key concept to understand is that the plan is not the goal, merely a stated intention of how to achieve the goal (if this causes you to suspect that the words “plan” and “design” could be substituted for each other without changing the point, move to the head of the class). Magritte’s painting stated that the picture is not the thing. The map is not the territory (and if that concept seems a bit self-evident, consider that Wikipedia considers it significant enough to devote over 1,700 words, not counting footnotes and links, to the topic).

Conflating plan and goal is a common problem. To illustrate the difference, consider undergoing an operation. Is it your desire that the surgeon perform the procedure as planned or that your problem be fixed? In the former scenario, your survival is optional.

This is not, however, to say that planning (or design) is useless. The output of an effective planning/design process is critical. As Joanna Young noted in her “Four Signs of Readiness – Or Not”:

I’m all for consigning the traditional 50+ pages of adminis-trivia on scope, schedule, budget, risks that requires signing in blood to the dustbin. However no organization should forego the thoughtful and hard work on determining what needs to be done, why, how, by whom, for how much – and how this will all be governed and measured as it proceeds through sprints and/or waterfalls to delivery.

The information derived from the process (not the form, not the presentation, but the information) is a critical tool for moving forward intelligently. If you have no idea of what to do, how to do it, who can do it, when, and for how much, you are adrift. You’re starting a trip with no idea of whether the gas tank has anything in it. Conversely, attempting to achieve 100% certainty from the outset is a fool’s errand. For any endeavor, more will be known nearer the destination. Plans without “wiggle room” are of limited usefulness, as you will drift outside the cone of uncertainty from the start and never get back inside.

Having a reasonable idea of the acceptable variance helps determine when it’s time to abandon the current plan and go with a revised one. Planning and design are processes, not events or even phases. It’s a matter of continually monitoring context and whether our intentions are still in accordance with reality. Where they differ, reality wins. Always.

Execution isn’t blindly marching forward according to plan. It’s surfing the wave of context.

Designing Communication, Communicating Design

[Image: “The Simplest metamodel in the world ever!”]

We work in a communications industry.

We create and maintain systems to move information around in order to get things done. That information moves between people and systems in combinations and configurations too numerous to count. In spite of that, we don’t do that great a job of communicating what should be, for us, extremely important information. We tend to be really bad at communicating the architecture of our systems – structure, behavior, and most importantly, the reasons for the decisions made. It’s bad enough when we fail to adequately communicate that information to others; it’s really bad when we fail to communicate it to ourselves. I know I’ve let myself down more than once (“What was I thinking here?!”).

Over the past few days, I’ve been privileged to follow (and even contribute a bit to) a set of conversations on Twitter. Grady Booch, Ivar Jacobson, Ruth Malan, Simon Brown, and others have been discussing the need for architectural awareness and the state of communicating architecture.

An exchange between Simon, Chris Carroll, and Eoin Woods summed it up well.

First and foremost, an understanding of what the role of a software architect is and why it’s important is needed. Any organization where the role is seen either as just a senior developer or (heaven help us!) as some sort of Taylorist “thinker” who designs everything for the “worker bee” coders to implement is almost guaranteed to be challenged in terms of application architecture. Resting on that foundation of shifting sand, the organization’s enterprise IT architecture (EITA) is likewise almost guaranteed to be challenged, barring a remarkable series of “happy accidents”. The role (not necessarily position) of software architect is required because software architecture is a distinct set of concerns that can either be addressed intentionally or left to emerge haphazardly out of the construction of the system.

Before we can communicate the architecture of a system, it’s necessary to understand what that is. In “Software Architecture: Central Concerns, Key Decisions”, Ruth Malan and Dana Bredemeyer defined it as high-impact, systemic decisions involving (at a minimum):

  • system priority setting
  • system decomposition and composition
  • system properties, especially cross-cutting concerns
  • system fit to context
  • system integrity

I don’t think it’s possible to over-emphasize the use of “system” and “systemic” in the preceding paragraphs. That being said, it’s important to understand that architectural concerns do not exist in a void. There is a cyclic relationship between the architectural concerns of a system and the system’s code. The architectural concerns guide the implementation, while the implementation defines the current state of the architecture and constrains its future evolution. Code is a necessary, but insufficient, source of architectural knowledge. As Ruth Malan noted in the Visual Design portion (part II) of her presentation at the Software Architect Conference in London a year ago:

[Slide from Ruth Malan’s presentation on Visual Design]

While the code serves as a foundation of the system, it’s also important to realize that the system exists within a larger context. There is a fractal set of systems within systems within ecosystems. Ruth illustrated this in the Intention and Reflection portion (part III) of the presentation referenced above:

[Slide from Ruth Malan’s presentation on Intention and Reflection]

[Note: Take the time to view the entirety of the Intention and Reflection presentation. It’s an excellent overview of how to design the architecture of a system.]

The fractal nature of systems within systems within ecosystems is illustrated by the image at the top of the post (h/t to Ric Phillips for the reblog of it). Richard Sage‘s humorous (though only partly, I’m sure) suggestion of it as a meta-model goes a long way towards illustrating the difficulty of finding a language to communicate architecture.

Not only are we dealing with a nested set of “things”, but the understanding of those things differs according to the stakeholder. For example, while the business owner might see a “web site” as one monolithic thing, the architect might see an application made up of code components depending on other applications and services running on a collection of servers. Maintaining a coherent, normalized object model of the system yet being able to present it in multiple ways (some of which might be difficult to relate) is not a trivial exercise.
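As a rough sketch of what that might look like (a minimal illustration; the names and structure are mine, not from any real modeling tool), consider a single normalized model of nested elements from which stakeholder-specific views are projected:

```java
import java.util.ArrayList;
import java.util.List;

// One normalized model of the system; every view is a projection of it.
// All names here are invented for illustration.
class Element {
    final String name;
    final String kind; // e.g. "web site", "application", "component"
    final List<Element> children = new ArrayList<>();

    Element(String name, String kind) {
        this.name = name;
        this.kind = kind;
    }

    Element add(Element child) {
        children.add(child);
        return this;
    }

    // Business-owner view: collapse the hierarchy to one "thing".
    String businessView() {
        return name + " (" + kind + ")";
    }

    // Architect's view: walk the nested structure, systems within systems.
    void architectView(String indent) {
        System.out.println(indent + name + " [" + kind + "]");
        for (Element child : children) {
            child.architectView(indent + "  ");
        }
    }
}

public class Views {
    public static void main(String[] args) {
        Element site = new Element("Storefront", "web site")
            .add(new Element("Catalog UI", "application")
                .add(new Element("Pricing client", "component")))
            .add(new Element("Order Service", "service"));

        System.out.println(site.businessView()); // one monolithic thing
        site.architectView("");                  // the nested reality
    }
}
```

The hard part, of course, is not the projection itself but keeping one model coherent while serving views that may not map cleanly onto each other.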

Lower-level aspects of design lend themselves to automated solutions, which can increase the reliability of the model by avoiding “documentation rot”. Another aspect that can be automated, and an interesting one in my opinion, is tracking the evolution of the code over time. What can’t be parsed from the code, however, is intention and reasoning.

Another barrier to communication is the need to be both expressive and flexible (also well illustrated by Richard’s meta-model) while also being simple enough to use. UML works well on the former, but (rightly or wrongly) is perceived to fail on the latter. Simon Brown’s C4 model aims to achieve a better balance in that regard.

At present, I don’t think we have one tool that does it all. I suspect that even with a suite of tools, narrative documents will still be the way some aspects are captured and communicated. Having a centralized store for the non-code bits (with a way to relate them back to the code) would be a great thing.

All in all, it is encouraging to see people talking about the need for architectural design and the need to communicate the aspects of that design.

Technical Debt and Rolling Re-writes (Who Needs Architects?)

If you think building a system is challenging, try maintaining one.

Tom Cagley‘s recent post “Plan to Throw One Away Re-Read Saturday: The Mythical Man-Month, Part 11” was a good reminder that while “technical debt” may be something currently on the radar for many, it’s far from a new phenomenon. The concept of instant legacy applications was already in place forty years ago when Frederick Brooks wrote his masterpiece, even if they weren’t called that. As Tom observed in the post:

Rarely is the first attempt useful to the end consumer, and the usefulness of that first attempt is less in the code than in the feedback it generates. Software development is no different. The initial conceptual design and anticipated technical architecture of a large project rarely stands up to the rigors of the discovery process, and those designs should be learned from and then thrown away.

The faulty assumptions and design flaws accumulate not only from sprint to sprint leading up to the initial release, but also from release to release. In spite of the fact that a product can be so seriously flawed, throwing it away and starting over is easier said than done. While sunk costs cannot be recovered, too sanguine an attitude towards them may not enhance your credibility with the customer. Having to pay for the same thing over and over can make them grumpy.

This sets up a dilemma, one that frequently leads to living with technical debt and attempting to incrementally patch it up. There are limits, however, to the number of band-aids that can be applied. This might make it tempting to propose a rewrite, but as Erik Dietrich stated in “The Myth of the Software Rewrite”:

Sure, they know things now that they didn’t know when they started on this code 3 years ago. But won’t the same thing be true in 3 years? Won’t the developers then be looking at the code and saying, “this is a mess — if only we knew in 2015 what we now know in 2018!” And, beyond that, what makes you think that giving the same group of people the same marching orders won’t result in the same kind of code?

The “big rewrite from scratch because this is a mess” is a losing strategy.

Fortunately, there is an alternative. Quoting Tom Cagley again from the same post as above:

If change is both inevitable and good (within limits), then both systems and organizations (a type of system) need to be engineered to support and facilitate change. Architecturally, techniques such as modularization, object-oriented design and other processes that foster simplification and incremental change create an environment in which change isn’t avoided, but rather encouraged.

While we may laugh at the image of changing a tire while the vehicle is in motion, it is an accurate metaphor. Customers expect flexibility and change on the go; waiting equals lost business. The keys to evolving in place are having an intentionally designed, modular architecture and an understanding of where the weaknesses lie. Both of these are concerns that reside squarely on the architect’s plate.

Modularity not only makes an application more easily maintainable via separation of concerns, but it also embraces change by making components replaceable. This is one of the qualities that has made microservices such a hot topic, although it would be a mistake to think that microservices are the only way (or best way in all cases) to achieve modularity.
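A minimal sketch of that replaceability (the interface and class names are mine, purely illustrative): as long as callers depend only on the contract, an in-process module can later be swapped for a client of a remote service without touching them.

```java
// Callers depend on this contract, not on any implementation.
interface InventoryLookup {
    int unitsOnHand(String sku);
}

// Today: an in-process module backed by the application's own data.
class LocalInventory implements InventoryLookup {
    @Override
    public int unitsOnHand(String sku) {
        // ... query the local database ...
        return 42; // placeholder
    }
}

// Tomorrow: the same contract fronting a remote microservice.
class RemoteInventory implements InventoryLookup {
    @Override
    public int unitsOnHand(String sku) {
        // ... call the inventory service over the network ...
        return 42; // placeholder
    }
}
```

The seam is the interface; whether the implementation behind it is local or remote becomes a deployment decision rather than a redesign.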

Modularity brings benefits beyond the purely technical as well. Rewrites of a fraction of an application are more easily sold than big-bang efforts. Demonstrating forethought (while you can’t predict what the change will be, predicting the need for change is more of a sure thing) shows concern for the customer’s welfare, which should make for a better relationship.

Being able to throw a system away a little at a time allows us to keep the car on the road while it changes and adapts to changing conditions.

“Laziness as a Virtue in Software Architecture” on Iasa Global

Laziness may be one of the Seven Deadly Sins, but it can be a virtue in software development, as Matt Osbun has observed and as Robert Heinlein noted before him.

See the full post on the Iasa Global Site (a re-post, originally published here).

Microservices – Sharpening the Focus

[Image: motion-blurred London bus]

While it was not the genesis of the architectural style known as microservices, the March 2014 post by James Lewis and Martin Fowler certainly put it on the software development community’s radar. Although the level of interest generated has been considerable, the article was far from an unqualified endorsement:

Despite these positive experiences, however, we aren’t arguing that we are certain that microservices are the future direction for software architectures. While our experiences so far are positive compared to monolithic applications, we’re conscious of the fact that not enough time has passed for us to make a full judgement.

One reasonable argument we’ve heard is that you shouldn’t start with a microservices architecture. Instead begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem. (Although this advice isn’t ideal, since a good in-process interface is usually not a good service interface.)

So we write this with cautious optimism. So far, we’ve seen enough about the microservice style to feel that it can be a worthwhile road to tread. We can’t say for sure where we’ll end up, but one of the challenges of software development is that you can only make decisions based on the imperfect information that you currently have to hand.

In the course of roughly fourteen months, Fowler’s opinion has gelled around the “reasonable argument”:

So my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.

This mirrors what Sam Newman stated in “Microservices For Greenfield?”:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

In short, the application architectural style known as microservice architecture (MSA) is unlikely to be an appropriate choice for the early stages of an application. Rather, it is a style most likely to be migrated to from a more monolithic beginning. Some subset of applications may benefit from that form of distributed componentization at some point, but distribution, at any degree of granularity, should be based on need. Separation of concerns and modularity do not imply a need for distribution. In fact, poorly planned distribution may actually increase complexity and coupling while destroying encapsulation. Dependencies must be managed whether local or remote.
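Fowler’s parenthetical point above – that a good in-process interface is usually not a good service interface – is worth a concrete (if hypothetical) sketch. In process, fine-grained, chatty calls are cheap; across a network, each round trip has real cost, so the same responsibilities want a coarser-grained shape:

```java
// A perfectly good in-process interface: fine-grained calls are cheap.
interface CustomerInProcess {
    String name(long id);
    String email(long id);
    boolean isActive(long id);
}

// The same responsibilities reshaped as a service interface: one
// coarse-grained call returns everything at once, because every remote
// round trip adds latency and a new failure mode.
record CustomerSummary(String name, String email, boolean active) {}

interface CustomerService {
    CustomerSummary summary(long id);
}
```

Splitting a monolith along an interface designed for in-process use, without reshaping it, tends to produce exactly the chatty, tightly coupled distribution warned about above.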

This is probably a good point to note that there is a great deal of room between a purely monolithic approach and a full-blown MSA. Rather than a binary choice, there is a wide range of options between the two. The fractal nature of the environment we inhabit means that responsibilities can be described as singular and separate without their being required to share the same granularity. Monoliths can be carved up and the resulting component parts still be considered monolithic compared to an extremely fine-grained sub-application microservice, and that’s okay. The granularity of the partitioning (and the associated complexity) can be tailored to the desired outcome (such as making components reusable across multiple applications or more easily replaceable).

The moral of the story, at least in my opinion, is that intentional design concentrating on separation of concerns, loose coupling, and high cohesion is beneficial from the very start. Vertical (functional) slices, perhaps combined with layers (what I call “dicing”), can be used to achieve these ends. Regardless of whether the components are to be distributed at first, designing them with that in mind from the start will ease any transition that comes in the future without ill effects for the present. Neglecting these issues risks hampering, if not outright preventing, breaking components out at a later date without resorting to a re-write.
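One hypothetical way to express that “dicing” in code (the package names are invented for illustration) is to organize by feature slice first and by layer second, so that each slice owns its layers and can be broken out along an existing seam:

```java
// Slice by feature first, layer second. Each slice owns its own layers,
// so "orders" could later be extracted as a service along this seam
// without untangling it from "catalog".
//
//   com.example.app.orders.api          // endpoints / presentation
//   com.example.app.orders.domain       // business logic
//   com.example.app.orders.persistence  // data access
//
//   com.example.app.catalog.api
//   com.example.app.catalog.domain
//   com.example.app.catalog.persistence
```

The inverse layout (layer first, feature second) makes every feature a tenant of every layer, which is precisely what makes later extraction painful.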

These same concerns apply at higher levels of abstraction as well. Rather than blindly growing a monolith that is all things to all people, adding new features should be treated as an opportunity to evaluate whether that functionality coheres with the existing application or is better suited to being a service from an external provider. Just as the application architecture should aim for modularity, so too should the solution architecture.

A modular design is a flexible design. While we cannot know up front the extent of change an application will undergo over its lifetime, we can be sure that there will be change. Designing with flexibility in mind means that change, when it comes, is less likely to be an existential crisis. As Hayim Makabee noted in his write-up of Rotem Hermon’s talk, “Change Driven Design”: “Change should entail extending the system rather than refactoring.”

A full-blown MSA is one possible outcome for an application. It is, however, not the most likely outcome for most applications. What is important is to avoid unnecessary constraints and retain sufficient flexibility to deal with the needs that arise.

[London Bus Image by E01 via Wikimedia Commons.]

Laziness as a Virtue in Software Architecture

Laziness may be one of the Seven Deadly Sins, but it can be a virtue in software development, as Matt Osbun has observed. Robert Heinlein, too, noted the benefits of laziness.

Even in the military, laziness carries potential greatness:

I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent — their place is the General Staff. The next lot are stupid and lazy — they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.

Generaloberst Kurt von Hammerstein-Equord

The lazy architect will ignore slogans like YAGNI and the Rule of Three when experience and/or information tells them that it’s far more likely than not that the need will arise. As Matt stated in “Foreseeable and Imaginary Design”, architects must ask “What changes are possible and which of those changes are foreseeable”. The slogans point out that up-front engineering costs, but so does the re-work resulting from deferred decisions. Avoiding that extra work (laziness) avoids the cost associated with it (frugality).

Likewise, lazy architects are less likely to cave when confronted with the sayings of some notable person. Rather than resign themselves to the extra work, they’re more likely to examine the statement, as Kevlin Henney did.

It’s far cheaper to design and build a system according to its context than to re-build a system to fix issues that were foreseeable. The re-work born of being too focused on the tactical to the detriment of the strategic is as much a form of technical debt as cutting corners is. The lazy architect knows that time spent identifying and reconciling contexts can allow them to avoid the extra work caused by blind incremental design.

No Structure Services

[Image: amoeba sketch]

Some people seem to think that flexibility is universally a virtue. Flexibility, in their opinion, is key to interoperability. Postel’s Principle, “…be conservative in what you do, be liberal in what you accept from others”, is often used to justify this belief. While this sounds wonderful in theory, in practice it’s problematic. As Tom Stuart pointed out in “Postel’s Principle is a Bad Idea”:

Postel’s Principle is wrong, or perhaps wrongly applied. The problem is that although implementations will handle well formed messages consistently, they all handle errors differently. If some data means two different things to different parts of your program or network, it can be exploited – interoperability is achieved at the expense of security.

These problems exist in TCP, the poster child for Postel’s principle. It is possible to make different machines see different input, by building packets that one machine accepts and the other rejects. In Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection, the authors use features like IP fragmentation, corrupt packets, and other ambiguous bits of the standard to smuggle attacks through firewalls and early warning systems.

In his defense, the environment in which Postel proposed this principle is far different from what we have now. Eric Allman, writing for the ACM Queue, noted in “The Robustness Principle Reconsidered”:

The Robustness Principle was formulated in an Internet of cooperators. The world has changed a lot since then. Everything, even services that you may think you control, is suspect.

Flexibility, often sold as extensibility, too often introduces ambiguity and uncertainty. Ambiguity and uncertainty are antithetical to APIs. This is why two of John Sonmez’s “3 Simple Techniques to Make APIs Easier to Use and Understand” are “Using enumerations to limit choices” and “Using default values to reduce required parameters”. Constraints provide structure, and structure simplifies.
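A small illustration of both techniques (a hypothetical API of my own; note that Java supplies “defaults” via overloads rather than default parameter values):

```java
// An enumeration limits the caller to valid choices -- no magic strings.
enum ExportFormat { CSV, JSON, XML }

class ReportExporter {
    // Full signature for callers who need control over every option...
    byte[] export(String reportId, ExportFormat format, boolean includeHeaders) {
        // ... render the report in the requested format ...
        return new byte[0]; // placeholder
    }

    // ...and an overload supplying sensible defaults, so the common case
    // requires only one parameter.
    byte[] export(String reportId) {
        return export(reportId, ExportFormat.CSV, true);
    }
}
```

Both techniques trade away a little flexibility to buy a lot of clarity, which is exactly the point.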

Taken to the extreme, I’ve seen flexibility used to justify “string in, string out” service method signatures. “Send us a string containing XML and we’ll send you one back”. There’s no need to worry about versioning, etc., because all the versions for all the clients are handled by a single endpoint. Of course, behind the scenes there’s a lot of conditional logic and “hope for the best” parsing. For the client, there’s no automated generation of messages, nor even a guarantee of structure. Validation of the structure can only occur at runtime.

Does this really sound robust?
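To make the contrast concrete (a hypothetical sketch, not any particular service):

```java
// "Flexible": nothing is promised, so nothing can be checked before runtime.
interface AnythingService {
    String process(String xmlIn); // send us a string, we'll send one back
}

// A defined contract: the structure is explicit, tooling can generate
// messages from it, and a breaking change surfaces at compile time
// instead of in production.
record OrderRequest(String customerId, String sku, int quantity) {}
record OrderResponse(String orderId, String status) {}

interface OrderService {
    OrderResponse placeOrder(OrderRequest request);
}
```

Everything the first signature leaves implicit has to be re-invented downstream as parsing, guessing, and runtime validation.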

I often suspect the reluctance to tie endpoints to defined contracts is due to excessive coupling between the code exposing the service and the code performing the function of the service. If domain logic is intermingled with presentation logic (and a service endpoint is presentation logic), then a strict versioning scheme – an application of the Open/Closed Principle to services – violates Don’t Repeat Yourself (DRY). If, however, the two concerns are kept separate within the application, multiple endpoints can be handled without duplicating business logic. This provides flexibility for both divergent client needs and client migrations from one message format to another, with less complexity and ambiguity.
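A rough sketch of that separation (all names invented for illustration): each versioned endpoint is a thin adapter over a single domain service, so adding an endpoint doesn’t mean duplicating the logic behind it.

```java
// Domain logic lives in one place, independent of any endpoint.
class OrderProcessor {
    String place(String customerId, String sku, int quantity) {
        // ... the actual business logic, written exactly once ...
        return "ORD-1"; // placeholder order id
    }
}

// v1 endpoint: the original message shape.
class OrderEndpointV1 {
    private final OrderProcessor processor = new OrderProcessor();

    String placeOrder(String customerId, String sku) {
        return processor.place(customerId, sku, 1); // v1 had no quantity
    }
}

// v2 endpoint: a new message shape, same business logic underneath.
class OrderEndpointV2 {
    private final OrderProcessor processor = new OrderProcessor();

    String placeOrder(String customerId, String sku, int quantity) {
        return processor.place(customerId, sku, quantity);
    }
}
```

Closing v1 to modification while opening v2 for new clients no longer costs a copy of the business rules.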

Stable interfaces don’t buy you much when they’re achieved by unsustainable complexity on the back end. The effect of ambiguity on ease of use doesn’t help either.