When Silos Make Sense

Separation of Concerns in the real world

Separation of Concerns is a well-known concept in application architecture. Over the years, application structures have evolved from monolithic to modular, using techniques such as encapsulation and abstraction to reduce coupling and increase cohesion. The purpose of doing so is quite simple – it yields software systems that are easier to understand, change, and enhance.

Within a monolith, all changes must be made in the context of the whole. As that whole increases in size, the ability to understand the effect of a given change decreases and risk increases, even in the absence of complex logic. Complex processes, obviously, compound these effects.

With all this in mind, competent and responsible architects encapsulate, modularize, and partition code. Likewise, databases shared across multiple applications are seen as the anti-pattern that they are. Unfortunately, some seem to forget the principles and benefits of separation of concerns when it comes to infrastructure.

Without a doubt, server consolidation can save on licensing, administration, and hardware costs. However, the concepts of coupling and cohesion apply to the platform as much as the applications it supports. All applications sharing the same hardware environment are subject to a form of coupling. Whether that coupling is logically consistent (cohesive) or not can impact operations and customer satisfaction. It might be uncomfortable explaining why a critical line-of-business application is unavailable (if only briefly) because of a bad release of the intranet suggestion box.

Maintaining application cohesion on the platform also enhances your ability to keep the platform current. An environment hosting a family of applications cared for by the same team should be easier to maintain than a grab bag that’s home to a wide variety of systems. As with any chore, the more painful it is, the less likely it is to get done. Failing to keep the platform up to date is a fast track to legacy status.

Paying attention to the following factors can assist in keeping platforms cohesive, maintainable, and healthy (a rough sketch of how these checks might be encoded follows the list):

  • Organizational Concerns: As noted above, systems sharing a homogeneous audience as well as the same (or at least close on the org chart) support team will make for better roommates.
  • Technology: Some applications (particularly platform applications such as database software) don’t play as well with others, preferring to control the lion’s share of resources. These are poor candidates to be deployed to the same host/cluster as other applications. Likewise, dependency conflicts will affect hosting decisions. Stability is also an important factor – less stable systems should be isolated.
  • Security: Not hosting applications with different security profiles on the same machine(s) should be self-evident.
  • Criticality: Those responsible for disaster recovery will be much happier if the high-priority systems are not intermingled with the low-priority ones.
  • Resource Utilization: Usage of CPU, memory, storage, and network bandwidth should all be accounted for to avoid overloading the infrastructure. The fact that this utilization isn’t a steady state should also be borne in mind. Trends may edge up or down over time and some applications may have periods of higher than normal use.
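
To make these factors concrete, here is a minimal sketch of how such a placement check might be encoded. The attributes, names, and thresholds are hypothetical; a real policy would weigh far more than this:

    from dataclasses import dataclass

    @dataclass
    class App:
        # Hypothetical attributes mirroring the factors above
        team: str                 # Organizational Concerns
        security_profile: str     # Security
        criticality: str          # Criticality, e.g. "high" or "low"
        peak_cpu_cores: float     # Resource Utilization: budget for
        peak_memory_gb: float     # peaks, not averages

    def can_share_host(a: App, b: App,
                       host_cores: float, host_memory_gb: float) -> bool:
        """Rough check of whether two applications make good roommates."""
        if a.security_profile != b.security_profile:
            return False
        if a.criticality != b.criticality:
            return False
        if a.team != b.team:
            return False
        # Reject placements whose combined peak demand exceeds the host
        if a.peak_cpu_cores + b.peak_cpu_cores > host_cores:
            return False
        return a.peak_memory_gb + b.peak_memory_gb <= host_memory_gb

The point is not the code itself, but that each factor above can be translated into an explicit, reviewable rule rather than an ad hoc judgment call.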

While it doesn’t help with licensing and administration costs, virtualization can provide isolation without driving up hardware costs. Where appropriate, cloud offerings may also help with managing licensing, hardware, and (depending on the solution) administration costs. In all cases, savings must be evaluated against risks in order to get the full picture.

Who’s Your Predator?

Just for the howl of it

Have you ever worked with someone with a talent for asking inconvenient questions? They’re the kind of person who asks “Why?” when you really don’t have a good reason or “How?” when you’re really not sure. Even worse, they’re always capable of finding those scenarios where your otherwise foolproof plan unravels.

Do you have someone like that?

If so, treasure them!

In a Twitter conversation with Charlie Alfred, he pointed out that you need a predator, “…an entity that seeks the weakest of the designs that evolve from a base to ensure survival of fittest”. You need this predator, because “…without a predator or three, there’s no limit to the number of unfit designs that can evolve”.

Preach it, brother. Sometimes the best friend you can have is the person who tells you what you don’t want to hear. It’s easier (and far cheaper) to deal with problems early rather than late.

I’ve posted previously regarding the benefits of designing collaboratively and the pitfalls of too much self-reliance, but it’s one of those topics that merits an occasional reminder. As Richard Feynman noted, “The first principle is that you must not fool yourself – and you are the easiest person to fool.”

Self-confidence is a natural and desirable trait. Obviously, if you’re not confident in your decision, you should probably defer committing to it. That confidence, however, can blind us to potential flaws. Just as innovators overestimate consumer interest in their product, we can place too much faith in our own decisions and beliefs. Our lack of emotional distance can make it easy to fool ourselves. In that case, having someone to challenge our assumptions can be invaluable.

Working collaboratively increases the odds that flaws will be found, but does not guarantee it. Encouraging questions, even challenges, is a good start – you don’t want to cause a failure because people were afraid to question the design. However, groups can be as subject to cognitive biases as individuals (for a great overview, see Thomas Cagley Jr.’s excellent series on the subject: July 8th, July 9th, July 10th, July 11th, and July 12th). An absence of bad news is not necessarily good news.

Sometimes you have to be your own predator.

Getting ideas out of your head helps provide the distance you need to evaluate them more objectively. Likewise, writing and/or diagramming forces a bit of rigor and organization, making it easier to spot gaps in the design. The more scrutiny a design can withstand, the more likely it is to survive in the wild.

When the wolf’s at the door, you’ll want to rely on something that’s been proven, not pampered.

“How Black & Decker Questioned Success and Discovered a New Market” on CitizenTekk

Opportunity

In my previous post, “Faster Horses – Henry Ford and Customer Development”, I pointed out the importance of understanding the problem space in terms of customer needs and wants. While listening to customers is a valuable technique, Black & Decker learned that observing them can be just as valuable, if not more so, nearly seventy years before the term “Big Data” was coined.

Read more at CitizenTekk

Emergence versus Evolution

You lookin' at me?

Hayim Makabee’s recent post, “The Myth of Emergent Design and the Big Ball of Mud”, encountered a relatively critical reception on two of the LinkedIn groups we’re both members of. Much of that resistance seemed to stem from a belief that the choice was between Big Design Up Front (BDUF) and Emergent Design. Hayim’s position, with which I agree, is that there is a continuum of design with BDUF and Emergent Design representing the extremes. He further argues, and I also agree, that both extremes are unlikely to produce good results, and that the answer lies in between.

The Wikipedia definition of Emergent Design cited by Hayim, taken nearly word for word from the Agile Sherpa site, outlines a No Design Up Front (NDUF) philosophy:

With Emergent Design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile or scrum release cycle, Development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a smaller code base, which naturally has less room for defects and a lower cost of maintenance.
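
In code, the process the definition describes might look something like this minimal sketch (the order-handling domain and names are hypothetical):

    # Feature A and feature B were each built independently; once both
    # existed, the shared validation was refactored out, letting that
    # bit of the design "emerge".

    def _validate_order(order: dict) -> None:
        """Commonality extracted after feature B was built."""
        if not order.get("id"):
            raise ValueError("order must have an id")

    def ship_order(order: dict) -> str:      # feature A
        _validate_order(order)
        return f"shipped {order['id']}"

    def cancel_order(order: dict) -> str:    # feature B
        _validate_order(order)
        return f"cancelled {order['id']}"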

Rather than being an unrealistically extreme statement, this definition meshes with ideas that people hold and even advocate:

“You need an overarching vision, a “big picture” design or architecture. TDD won’t give you that.” Wrong. TDD will give you precisely that: when you’re working on a large project, TDD allows you to build the code in small steps, where each step is the simplest thing that can possibly work. The architecture follows immediately from that: the architecture is just the accumulation of these small steps. The architecture is a product of TDD, not a pre-designed constraint.

Portion of a comment on Dan North’s “Published: The Art of Misdirection”

Aspects of a design will undoubtedly emerge as it evolves. Differing interpretations of requirements, information deficits between the various parties, and changing circumstances all conspire to make it so. However, that does not mean the act of design is wholly emergent. Design connotes activity, whereas emergence implies passivity. A passive approach to design is, in my opinion, unlikely to succeed in resolving the conflicts inherent in software development, and it is the resolution of those conflicts that allows a system to adapt and evolve.

I’ve previously posted on the concept of expecting a coherent architecture to emerge from this type of blinkered approach. Both BDUF and NDUF carry tremendous risk of wasted effort. It is as naive to expect good results from ignoring information (NDUF) as it is to think you possess all the information (BDUF). Even in a relatively simple system, ignoring obvious commonality and an obvious need for flexibility in order to do the “simplest thing that could possibly work, then refactor” guarantees needless rework. As the scale grows, so does the likelihood of conflicting requirements. Resolving those conflicts after code for one or more features is in place is more likely to yield unsatisfactory compromises.

The biggest weakness of relying on refactoring is that there are well-documented limits to what people can process. As the level of abstraction goes down, the number of concerns goes up. The same limit that dooms BDUF to failure also limits the ability to refactor large systems into a coherent whole.

Quality of service issues are yet another problem area for the “simplest thing that could possibly work” method. By definition, that approach concentrates on functionality to the exclusion of non-functional concerns. Security and scalability are just two concerns that typically fare poorly when bolted on after the fact. Premature optimization is to be avoided, but being aware of the expected performance environment can help you avoid blind alleys.

One area where I do agree with the TDD advocate quoted above is that active design imposes constraints. The act of design involves defining structure. As Ruth Malan has said, “negative space is telling; as is what it places emphasis on”. Too little structure poses as much risk as too much.

An evolutionary design process, such as Hayim’s Adaptable Design Up Front (ADUF), recognizes the futility of predicting the future in minute detail (BDUF) without surrendering to formlessness (NDUF). Experience regarding which parts of a system are most likely to change is invaluable. Coupled with reasonable planning based on what is known about the big picture of the current release and about follow-up releases, that experience can drive a design that strikes the right balance – flexible, without being over-engineered.
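
As a rough illustration of that balance (the domain and names here are hypothetical), an ADUF-style design might abstract only the part judged most likely to change and keep everything else simple:

    from abc import ABC, abstractmethod

    class TaxCalculator(ABC):
        """Deliberate seam: tax rules are the part expected to change."""
        @abstractmethod
        def tax(self, amount: float) -> float: ...

    class FlatTax(TaxCalculator):
        def __init__(self, rate: float) -> None:
            self.rate = rate

        def tax(self, amount: float) -> float:
            return amount * self.rate

    def invoice_total(subtotal: float, calc: TaxCalculator) -> float:
        # The rest of the code stays plain; only the volatile part is abstracted
        return subtotal + calc.tax(subtotal)

    print(invoice_total(100.0, FlatTax(0.07)))  # 107.0

The seam costs little up front, and swapping in new tax rules later requires no rework of the surrounding code.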

[Photograph by Jose Luis Martinez Alvarez via Wikimedia Commons.]