Who Needs Architects? – Monoliths as Systems of Stuff

[Image: Platypus]

In my experience, IT is not a “one size fits all” operation. Whether in their latest two-speed vision or their older three-speed one, Gartner’s opinion is the same: there is no one process that works for every system across the enterprise (for what it’s worth, I agree with Simon Wardley that Bimodal IT is still too restrictive and that three modes come closer to reflecting the types of systems in use). Process and governance that is appropriate to one system may be too strict for another and too loose for a third. In this light, attempting to find one compromise ensures that all are poorly served. Consequently, more than one mode of governance just makes sense.

The problem is more complex, however, than just picking trimodal or bimodal and dividing applications up according to whether they are systems of record, systems of differentiation, or systems of innovation (or digital versus traditional). Just as “accidental architecture” can result in a “Big Ball of Mud” at the application level, it can also do so in terms of enterprise IT architecture. Monoliths that have grown organically may cross the boundaries of the multimodal framework’s taxonomy, essentially becoming incoherent systems of “stuff”. This complicates their assignment to a process that fits their nature. When an application fits more than one category, do you force it into the most restrictive category or the least restrictive one? Either way, the answer will be problematic.

Given the fractal nature of IT, it should not be a surprise that design decisions made at the level of individual applications can bubble up to affect the IT architecture of the enterprise as a whole. Separation of concerns (logical) and modularity (physical) remain important from the lowest level to the top. Without a strategic direction, tactical excellence can lead to waste from lack of focus.

Monolithic architectures trade modularity for simplicity at the application architecture level, which may be a valid trade at that level. If, however, a monolith crosses framework category boundaries, major architectural refactoring may be required to avoid ugly compromises. Separation of concerns within the monolith can ease the pain of that kind of refactoring, but avoiding the need for refactoring altogether is less painful still. Achieving that avoidance requires attention to cohesion at every level of granularity and designing with extra-application as well as intra-application concerns in mind.

Knowing the issues and being able to say why you made the choices you did is key.



Microservices – Sharpening the Focus

[Image: Motion-blurred London bus]

While it was not the genesis of the architectural style known as microservices, the March 2014 post by James Lewis and Martin Fowler certainly put it on the software development community’s radar. Although the level of interest generated has been considerable, the article was far from an unqualified endorsement:

Despite these positive experiences, however, we aren’t arguing that we are certain that microservices are the future direction for software architectures. While our experiences so far are positive compared to monolithic applications, we’re conscious of the fact that not enough time has passed for us to make a full judgement.

One reasonable argument we’ve heard is that you shouldn’t start with a microservices architecture. Instead begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem. (Although this advice isn’t ideal, since a good in-process interface is usually not a good service interface.)

So we write this with cautious optimism. So far, we’ve seen enough about the microservice style to feel that it can be a worthwhile road to tread. We can’t say for sure where we’ll end up, but one of the challenges of software development is that you can only make decisions based on the imperfect information that you currently have to hand.

In the course of roughly fourteen months, Fowler’s opinion has gelled around the “reasonable argument”:

So my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.

This mirrors what Sam Newman stated in “Microservices For Greenfield?”:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

In short, the application architectural style known as microservice architecture (MSA) is unlikely to be an appropriate choice for the early stages of an application. Rather, it is a style most likely to be migrated to from a more monolithic beginning. Some subset of applications may benefit from that form of distributed componentization at some point, but distribution, at any degree of granularity, should be based on need. Separation of concerns and modularity do not imply a need for distribution. In fact, poorly planned distribution may actually increase complexity and coupling while destroying encapsulation. Dependencies must be managed whether local or remote.
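To make that concrete, here is a minimal sketch in Java (the names and types are hypothetical, mine rather than anything from the posts quoted above). The consumer depends on an abstraction, so whether the implementation runs in-process or behind a remote call becomes a wiring decision rather than a redesign:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical contract: callers depend on this interface, not on
// whether the implementation is local or remote.
interface InventoryService {
    int quantityOnHand(String sku);
}

// In-process implementation: no network hop, no serialization.
class LocalInventoryService implements InventoryService {
    private final Map<String, Integer> stock = new HashMap<>();

    public int quantityOnHand(String sku) {
        return stock.getOrDefault(sku, 0);
    }
}

// If (and only if) the need arises, the same contract can be satisfied
// by an adapter that calls an extracted microservice. Consumers don't
// change; only the wiring does.
class RemoteInventoryService implements InventoryService {
    public int quantityOnHand(String sku) {
        // An HTTP call to the extracted service would go here.
        throw new UnsupportedOperationException("illustrative stub");
    }
}
```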

This is probably a good point to note that the choice is not binary; there is a great deal of room between a purely monolithic approach and a full-blown MSA. The fractal nature of the environment we inhabit means that responsibilities can be described as singular and separate without being required to share the same granularity. Monoliths can be carved up, and the resulting component parts can still be considered monolithic compared to an extremely fine-grained, sub-application microservice, and that’s okay. The granularity of the partitioning (and the associated complexity) can be tailored to the desired outcome, such as making components reusable across multiple applications or more easily replaceable.

The moral of the story, at least in my opinion, is that intentional design concentrating on separation of concerns, loose coupling, and high cohesion is beneficial from the very start. Vertical (functional) slices, perhaps combined with layers (what I call “dicing”), can be used to achieve these ends. Regardless of whether the components are to be distributed at first, designing them with distribution in mind from the start will ease any transition that comes in the future, without ill effects for the present. Neglecting these issues risks hampering, if not outright preventing, breaking components out at a later date without resorting to a re-write.
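As an illustration of what such “dicing” might look like (the package names are hypothetical), a diced monolith organizes each vertical slice with its own layered stack and exposes only a narrow contract to its peers:

```java
// Hypothetical layout for a "diced" monolith: vertical slices (billing,
// ordering) crossed with horizontal layers (api, domain, data):
//
//   com.example.app.billing.api      -- the slice's public contract
//   com.example.app.billing.domain   -- business rules
//   com.example.app.billing.data     -- persistence details
//   com.example.app.ordering.api
//   com.example.app.ordering.domain
//   com.example.app.ordering.data
//
// The rule: one slice reaches another only through the other slice's
// api package. Enforced consistently, each api package marks a seam
// that could later become a service boundary with minimal surgery.
```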

These same concerns apply at higher levels of abstraction as well. Rather than blindly growing a monolith that is all things to all people, adding new features should be treated as an opportunity to evaluate whether that functionality coheres with the existing application or is better suited to being a service from an external provider. Just as the application architecture should aim for modularity, so too should the solution architecture.

A modular design is a flexible design. While we cannot know up front the extent of change an application will undergo over its lifetime, we can be sure that there will be change. Designing with flexibility in mind means that change, when it comes, is less likely to be an existential crisis. As Hayim Makabee noted in his write-up of Rotem Hermon’s talk, “Change Driven Design”: “Change should entail extending the system rather than refactoring.”
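A minimal sketch of what “extending rather than refactoring” can look like in code (the pricing example is mine and hypothetical, not Hermon’s): a new requirement arrives as a new implementation of an existing contract, leaving working code untouched.

```java
import java.util.List;

// The variable part of pricing sits behind a stable contract.
interface DiscountRule {
    double apply(double subtotal);
}

class SeasonalDiscount implements DiscountRule {
    public double apply(double subtotal) {
        return subtotal * 0.90; // 10% off
    }
}

// A later request ("add a loyalty discount") is met by extension:
// one new class, no edits to Checkout or SeasonalDiscount.
class LoyaltyDiscount implements DiscountRule {
    public double apply(double subtotal) {
        return Math.max(0.0, subtotal - 5.00);
    }
}

class Checkout {
    private final List<DiscountRule> rules;

    Checkout(List<DiscountRule> rules) {
        this.rules = rules;
    }

    double total(double subtotal) {
        double t = subtotal;
        for (DiscountRule rule : rules) {
            t = rule.apply(t);
        }
        return t;
    }
}
```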

A full-blown MSA is one possible outcome for an application. It is, however, not the most likely outcome for most applications. What is important is to avoid unnecessary constraints and retain sufficient flexibility to deal with the needs that arise.

[London Bus Image by E01 via Wikimedia Commons.]



Fixing IT – Remembering What They’re Paying For

Dan Creswell‘s tweet said it well:

Max Roser‘s tweet, however, illustrated it in unmistakable fashion:

Success looks like that.

Not everyone gets to create the kind of magic that lets a blind woman “see” her unborn child. Nearly everyone in IT, however, has the ability to influence customer satisfaction. Development, infrastructure, and support all play a part in making someone’s life better or worse. It’s not just what we produce, but also the process, management, and governance that determine how we produce. The product is irrelevant; it’s the service that counts. The woman in the picture above isn’t happy about 3D printing, she’s overjoyed at the experience it enabled.

Customer service isn’t just a concern for software vendors. It’s remarkably easy to destroy a relationship and remarkably hard to repair one. The most important alignment is the alignment of concerns between those providing the service and those consuming it. Mistrust between the two is a common source of “Shadow IT” problems, and far from helping, cracking down may well make things worse.

Building trust by meeting needs creates a virtuous circle. Satisfaction breeds appreciation. Appreciation breeds motivation. Motivation, in turn, yields more satisfaction.

As the saying goes, you catch more flies with honey.



Law of Unintended Consequences – Security Edition

[Image: Bank vault]

More isn’t always better. When it comes to security, more can even be worse.

As the use of encryption has increased, management of encryption keys has emerged as a pain point for many organizations. The amount of encrypted data passing through corporate firewalls, which has doubled over the last year, poses a severe challenge to security professionals responsible for protecting corporate data. The mechanism that’s intended to protect information in transit does so regardless of whether the transmission is legitimate or not.

Greater complexity means greater inconvenience, which can lead to decreased security; usability increases security by increasing compliance. Alarm fatigue means that as the number of warnings increases, so does the likelihood of their being ignored.

Like any design issue, security should be approached from a systems thinking viewpoint (at least in my opinion). Rather than a one-dimensional, naive approach, a holistic one that recognizes and deals with the interrelationships is more likely to get it right. Thinking solely in terms of actions while ignoring the reactions that result from them hampers effective decision-making.

To be effective, security should be comprehensive, coordinated, collaborative, and contextual.

Comprehensive security is security that involves the entire range of security concerns: application, network, platform (OS, etc.), and physical. Strength in one or more of these areas means little if even one of the others is fatally compromised. Coordination of the efforts of those responsible for these aspects is essential to ensure that the various security measures enhance rather than hinder one another. This coordination is better achieved via a collaborative process that reconciles costs and benefits systemically than via a prescriptive one imposed without regard to those factors. Lastly, practices should be tailored to the context of the problem at hand. Value at risk and amount of exposure are two factors that should help determine the effort expended. Putting a bank vault door on the garden shed not only wastes money, but also hinders security by taking resources away from an area of greater need.

As with most quality of service concerns, security is not a binary toggle but a continuum. Matching the response to the need is a good way to stay on the right side of the law of unintended consequences.



Who Needs Architects? Who’s Minding the Architecture?

[Image: Shearing layers diagram]

Shearing layers are an important concept in building architecture. Essentially, the idea is that a building is not a unitary thing with a single lifecycle, but a composition of several layers comprising elements with different concerns (site, structure, skin, services, space plan, and “stuff”, i.e. contents) and with varying lifecycles based on their amenability to change. This same concept can be applied to software systems. Between platform components and the fractal nature of systems and solutions, groupings of concerns with diverse lifecycles are readily apparent.

Richard Veryard, in his post “Agile and Wilful Blindness”, used the concept of shearing layers to help illustrate a weakness of development styles that stress emergence over deliberate design:

Some things are easier to change than others. The architect Frank Duffy proposed a theory of Shearing Layers, which was further developed and popularized by Stuart Brand. In this theory, the site and structure of a building are the most difficult to change, while skin and services are easier.

Let’s suppose Agile developers know how to optimize some of the aspects of a system, perhaps including skin and services. So it doesn’t matter if they get the skin and services wrong, because these can be changed later. This is the basis for @swardley’s point that you don’t need to know beforehand exactly what you are building.

But if they get the fundamentals wrong, such as site and structure, these are more difficult to change later. This is the basis for John Seddon’s point that Agile may simply build the wrong things faster.

And this is where @ruthmalan takes the argument to the next level. Because Agile developers are paying attention to the things they know how to change (skin, services), they may fail to pay attention to the things they don’t know how to change (site, structure). So they can throw themselves into refining and improving a system until it looks satisfactory (in their eyes), without ever seeing that it’s the wrong system in the wrong place.

Although I disagree that all agile developers fall into this category, Richard’s and Ruth Malan’s point is an important one. We must be aware of a need in order to attend to it. As Seth Godin noted in “I didn’t see it because I wasn’t looking”: “This is one reason we feel the need to yell ‘surprise’ at a surprise party.”

Even when we’re aware of something, failing to understand the true nature of it can hamper our ability to deal with it effectively. Too many consider the practice of architectural design to be just a matter of drawing pictures and creating documents. The important aspect, however, is the decisions and the rationales for those decisions that these artifacts should capture. As Ipek Ozkaya tweeted from Gregor Hohpe‘s SATURN 2015 keynote:

Architectural design is essentially about making (albeit collaboratively) and communicating decisions. The decisions that are architecturally significant are the decisions that affect the longer-lived, harder to change aspects of a system. Intentionally addressing these concerns is a better strategy than hoping that something coherent just “emerges”.
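One lightweight way to capture those decisions and their rationales is an architecture decision record (ADR), along the lines of Michael Nygard’s format (title, status, context, decision, consequences). A hypothetical example:

```
# 7. Use a message queue between ordering and fulfillment

Status: Accepted

Context: Fulfillment outages currently block order acceptance.
Order volume is spiky; fulfillment throughput is flat.

Decision: Orders will be handed off via a durable queue rather
than a synchronous call.

Consequences: Order acceptance survives fulfillment downtime and
spikes are smoothed, at the cost of eventual (rather than
immediate) fulfillment status and one more piece of
infrastructure to operate.
```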



Form Follows Function on SPaMCast 339

[Image: SPaMCast logo]

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on demonstrations and a Form Follows Function installment on microservices, SOA, and Enterprise IT Architecture.

For SPaMCast 339, Tom and I discuss my “Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?” post.



Laziness as a Virtue in Software Architecture

Laziness may be one of the Seven Deadly Sins, but it can be a virtue in software development. As Matt Osbun observed:

Robert Heinlein noted the benefits of laziness:

Even in the military, laziness carries potential greatness:

I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent — their place is the General Staff. The next lot are stupid and lazy — they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.

Generaloberst Kurt von Hammerstein-Equord

The lazy architect will ignore slogans like YAGNI and the Rule of Three when experience and/or information tells them that it’s far more likely than not that the need will arise. As Matt stated in “Foreseeable and Imaginary Design”, architects must ask “What changes are possible and which of those changes are foreseeable”. The slogans point out that up-front engineering costs, but the reality is that so does the re-work resulting from deferred decisions. Avoiding that extra work (laziness) avoids the cost associated with it (frugality).

Likewise, lazy architects are less likely to cave when confronted with the sayings of some notable person. Rather than resign themselves to the extra work, they’re more likely to examine the statement as Kevlin Henney did:

It’s far cheaper to design and build a system according to its context than to re-build it to fix issues that were foreseeable. The re-work born of being too focused on the tactical to the detriment of the strategic is as much a form of technical debt as cutting corners. The lazy architect knows that time spent identifying and reconciling contexts can spare them the extra work caused by blind incremental design.


