Why would you want to constrain creativity by controlling (though not necessarily dictating) the architecture of a system?
As Roger Sessions recently tweeted:
Simplicity requires structure. — Roger Sessions (@RSessions) July 28, 2015
Another reason came out in an exchange between Roger and me:
Simplicity: You can't buy it, you can't rent it, you can't outsource it. You must create it. — Roger Sessions (@RSessions) July 14, 2015
And then maintain it twitter.com/RSessions/stat… — Gene Hughson (@GeneHughson) July 14, 2015
@GeneHughson Yes. Complexity is like entropy. It requires constant energy to keep it at bay. — Roger Sessions (@RSessions) July 14, 2015
Simplicity (“…as simple as possible, but not simpler”) is certainly a desirable system quality. Unnecessary complexity directly impairs maintainability and can indirectly degrade qualities such as testability, security, extensibility, availability, and ease of deployment, to name just a few. The coherent structure needed to create and maintain simplicity when multiple people are involved is unlikely to arise by accident. When it’s lacking, the result is often what Brian Foote and Joseph Yoder described in their 1999 classic, “Big Ball of Mud”:
A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition.
Sixteen years later, this tendency to entropy via unnecessary complexity remains an issue. As Roger Sessions recently noted on his blog:
As IT systems increase in size, three things happen. Systems get more vulnerable to security breaches. Systems suffer more from reliability issues. And it becomes more expensive and time consuming to try to make modifications to those systems. This should not be a surprise. If you are like most IT professionals, you have seen this many times.
What you have probably not noticed is an underlying pattern. These three undesirable features invariably come in threes. Insecure systems are invariably unreliable and difficult to modify. Secure systems, on the other hand, are also reliable and easy to modify.
This tells us something important. Vulnerability, unreliability, and inflexibility are not independent issues; they are all symptoms of one common disease. It is the disease that is the problem, not the symptoms.
Kent Beck recently noted the same in a post on Facebook, “Taming Complexity with Reversibility”:
As a system scales, whether it is a manufacturing plant or a service like ours, the enemy is complexity. If you don’t confront complexity in some way, it will eat you. However, complexity isn’t a blob monster, it has four distinct heads.
- States. When there are many elements in the system and each can be in one of a large number of states, then figuring out what is going on and what you should do about it grows impossible.
- Interdependencies. When each element in the system can affect each other element in unpredictable ways, it’s easy to induce harmonics and other non-linear responses, driving the system out of control.
- Uncertainty. When outside stresses on the system are unpredictable, the system never settles down to an equilibrium.
- Irreversibility. When the effects of decisions can’t be predicted and they can’t be easily undone, decisions grow prohibitively expensive.
If you have big ambitions but don’t address any of these factors, scale will wreck your system at some point.
Kent’s conclusion, “What changes–technical, organizational, or business–would you have to make to identify such decisions earlier and make reversing them routine?”, implies that addressing these factors requires a systemic rather than a localized response.
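Beck’s first head, states, is easy to quantify: when elements vary independently, the combined state space grows exponentially with the number of elements. A minimal sketch (the function name and the numbers are illustrative, not from Beck’s post):

```python
# Illustrative only: how quickly combined system state grows.
# Assumes n independent elements, each with k possible states.

def state_space(n_elements: int, states_per_element: int) -> int:
    """Total distinct system states when every element varies independently."""
    return states_per_element ** n_elements

# Ten elements with four states each already yields over a million combinations.
print(state_space(10, 4))  # 1048576
```

The arithmetic is trivial, but it shows why “figuring out what is going on” stops scaling long before the system itself does.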
The counter-argument frequently made is that the architecture can “emerge” from the implementation by doing the “simplest thing that could possibly work” and building on that. I touched briefly on this in my last post. To that, I’d add that not all changes are the same. Trying to bolt on fundamental, cross-cutting concerns (e.g. security, scalability, etc.) after the fact risks major (i.e. expensive) architectural refactoring. This type of refactoring is generally a hard sell, and understandably so. That difficulty makes the Big Ball of Mud that much more likely.
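One hedged sketch (all of the names below are hypothetical, not from this post) of why a cross-cutting concern is cheap when a seam exists and expensive when it doesn’t: if callers depend on an interface rather than a concrete class, authorization can be layered in by wrapping that interface, without touching a single call site; without the seam, every caller changes.

```python
# Hypothetical example: adding security behind an existing architectural seam.
from typing import Protocol


class OrderService(Protocol):
    """The seam: callers depend on this interface, not on a concrete class."""
    def place_order(self, user: str, item: str) -> str: ...


class BasicOrderService:
    def place_order(self, user: str, item: str) -> str:
        return f"order placed: {item} for {user}"


class AuthorizingOrderService:
    """Wraps any OrderService, adding an authorization check.

    Because callers hold an OrderService, this concern is layered in
    without modifying them; absent the seam, every call site would change.
    """
    def __init__(self, inner: OrderService, allowed_users: set) -> None:
        self._inner = inner
        self._allowed = allowed_users

    def place_order(self, user: str, item: str) -> str:
        if user not in self._allowed:
            raise PermissionError(f"{user} is not authorized")
        return self._inner.place_order(user, item)


service: OrderService = AuthorizingOrderService(BasicOrderService(), {"alice"})
print(service.place_order("alice", "book"))  # order placed: book for alice
```

The point is not the decorator itself but where the seam came from: it had to be intended before the security requirement arrived, or retrofitting it means the expensive refactoring described above.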
This does not mean that the concept of emergence has no place in software architecture. As the various contexts making up the architecture of the problem are identified, challenges will emerge and need to be reconciled. Given that emergence occurs naturally, it makes little sense to artificially generate challenges by refusing to “peek” at what’s ahead. As Ruth Malan has observed, architecture should be both “intentional and emergent”:
As if http://t.co/CCHicMs1cw — ruth malan (@ruthmalan) July 28, 2015
This need to contend with emergent issues continues for the entire lifetime of the system:
"No engineered structure is designed to be built and then neglected or ignored." -- Henry Petroski — Scientific Python (@SciPyTip) July 29, 2015
I’ve noted in the past that planning and design are similar activities. Regardless of the task, an appropriate design or plan provides a coherent direction that improves the likelihood of success. Without one, Joe Dager’s question below is directly on point:
Someone explain; If we skip the Plan Stage, what is the difference between iterative development and winging it? #PDCA — Joe Dager (@business901) August 03, 2015