“Bumper Sticker Philosophy” on Iasa Global Blog

[Image: Yeah! Wait, what?]

YAGNI: You Ain’t Gonna Need It.

Sound bites are great – short, sweet, clear, and simple. Just like real life, right?

Seductive simple certainty is what makes slogans so problematic. Uncertainty and ambiguity occur far more frequently in the real world. Context and nuance add complexity, but not all complexity can be avoided. In fact, removing essential complexity risks misapplication of otherwise valid principles.

See the full post on the Iasa Global Blog (a re-post, originally published here).

Accidental Architecture

[Image: Hillside Slum]

I’m not sure if it’s ironic or fitting that my very first post on Form Follows Function, “Like it or not, you have an architecture (in fact, you may have several)”, dealt with the concept of accidental architecture. A blog dedicated to software and solution architecture starts off by discussing the fact that architecture exists even in the absence of intentional design? It is, however, a theme that seems to recur.

The latest recurrence was a Twitter exchange with Ruth Malan, in which she stated:

Design is the act and the outcome. We design a system. The system has a design.

This prompted Arnon Rotem-Gal-Oz to observe that architecture need not be intentional and “…even areas you neglect well [sic] have design and then you’d have to deal with its implications”. To this I added “accidental architecture is still architecture – whether it’s good architecture or not is another thing”.

Ruth closed with a reference to a passage by Grady Booch:

Every software-intensive system has an architecture. In some cases that architecture is intentional, while in others it is accidental. Most of the time it is both, born of the consequences of a myriad of design decisions made by its architects and its developers over the lifetime of a system, from its inception through its evolution.

The idea that an architecture can “emerge” out of skillful construction rather than as a result of purposeful design is trivially true. The “Big Ball of Mud”, an ad hoc arrangement of code that grows organically, remains a popular design pattern (yes, it’s a pattern rather than an anti-pattern – see the Introduction of “Big Ball of Mud” for an explanation of why). What remains in question is how effective an architecture that largely or even entirely “emerges” can be.

Even the current architectural style of the day, microservices, can fall prey to the Big Ball of Mud syndrome. A plethora of small service applications developed without a unifying vision of how they will make up a coherent whole can easily turn muddy (if not already born muddy). The tagline of Simon Brown’s “Distributed big balls of mud” sums it up: “If you can’t build a monolith, what makes you think microservices are the answer?”.

Someone building a house using this theory might purchase the finest of building materials and fixtures. They might construct and finish each room with the greatest of care. If, however, the bathroom is built opening into the dining room and kitchen, some might question the design. Software, solution, and even enterprise IT architectures exist as systems of systems. The execution of a system’s components is extremely important, but you cannot ignore the context of the larger ecosystem in which those components will exist.

Too much design up front – architects attempting to make decisions below the level of granularity for which they have sufficient information – is obviously wrong. It’s like attempting to drive while blindfolded, using only a GPS. By the same token, jumping in the car and driving without any idea of a destination beyond what’s at the end of your hood is unlikely to be successful either. Finding a workable balance between the two extremes is the optimal approach.

[Shanty Town Image by Otsogey via Wikimedia Commons.]

“When Silos Make Sense” on Iasa Global Blog

[Image: silos]

Separation of Concerns is a well-known concept in application architecture. Over the years, application structures have evolved from monolithic to modular, using techniques such as encapsulation and abstraction to reduce coupling and increase cohesion. The purpose of doing so is quite simple – it yields software systems that are easier to understand, change, and enhance.

See the full post on the Iasa Global Blog (a re-post, originally published here).

“Finding the Balance” on Iasa Global Blog

[Image: Evening it out]

One of my earliest posts on Form Follows Function, “There is no right way (though there are plenty of wrong ones)”, dealt with the subject of trade-offs. Whether dealing with the architecture of a solution or the architecture of an enterprise, there will be competing forces at work. Resolving these conflicts in an optimal manner involves finding the balance between individual forces and the system as whole (consistent with the priorities of the stakeholders).

See the full post on the Iasa Global Blog (a re-post, originally published here).

There is No “Best”

[Image: You're the best]

What is the best architectural style/process/language/platform/framework/etc.?

A question posed that way can ignite a war as easily as Helen of Troy. The problem is, however, that it’s impossible to answer in that form. It’s a bit like asking which fastener (nail, screw, bolt, glue) is best. Without knowing the context to which it will be applied, we cannot possibly form a rational answer. “Best” without context is nonsense; like “right”, it’s a word that triggers much heat, but very little light.

People tend to like rules as there is a level of comfort in the certainty associated with them. The problem is that this certainty can be both deceptive and dangerous. Rules, patterns, and practices have underlying principles and contexts which give them value (or not). Understanding these is key to effective application. Without this understanding, usage becomes an act of faith rather than a rational choice.

Best practices and design patterns are two examples of useful techniques that have come to be regarded as silver bullets by some. Design patterns are useful for categorizing and communicating elements of design. Employing design patterns, however, is no guarantee of effective design. Likewise, understanding the principles that lie beneath a given practice is key to successfully applying that practice in another situation. Context is king.

Prior to applying a technique, it’s useful to ask why. Why this technique? Why do we think it will be effective? Rather than suggest a hard and fast number of times to ask (say…5 maybe?), I’d recommend asking until you’re comfortable that the decision is based on reason rather than hope or tradition. Designing the architecture of systems requires evaluation and deliberation. Leave the following of recipes to the cooks.

Architecture – Finding Simple Solutions Over a Lifetime of Problems

On Roger Sessions’ LinkedIn group, Simpler IT, the discussion “What do I mean by Simplifying” talks about finding simple solutions to problems. Roger’s premise is that every problem has its own inherent complexity:

Let’s say P is some problem that we need to solve. For example, P could be the earthquake in Tom’s example or P could be the need of a bank to process credit cards or P could be my car that needs its oil changed. P may range in complexity from low (my car needs its oil changed) to high (a devastating earthquake has occurred.)

For a given P, the complexity of P is a constant. There is no strategy that will change the complexity of P.

Complexity and Effectiveness

Roger goes on to say that for any given problem, there will be a set of solutions to that problem. He further states “…if P is non-trivial, then the cardinality of the solution set of P is very very large”. Each solution can be characterized by how well it solves the problem at hand and by how complex it is. These attributes can be graphed, effectiveness on the Y axis and complexity on the X axis, yielding quadrants that range from most effective and least complex (best) to least effective and most complex (worst). Thus, simplifying means:

The best possible s in the solution set is the one that lives in the upper left corner of the graph, as high on the Y axis as possible and as low on the X axis as possible.

When I talk about simplifying, I am talking about finding that one specific s out of all the possible solutions in the solution set.
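
To make the quadrant idea concrete, here is a minimal sketch in Python (the candidate names and scores are mine, purely illustrative, not Roger’s): effectiveness on the Y axis, complexity on the X axis, with “simplifying” meaning ranking toward the upper-left corner.

    # Illustrative only: candidate solutions for a fixed problem P,
    # each rated for effectiveness (higher is better, Y axis) and
    # complexity (lower is better, X axis). All values are invented.
    candidates = {
        "hand-rolled batch job": {"effectiveness": 0.6, "complexity": 0.3},
        "off-the-shelf suite":   {"effectiveness": 0.8, "complexity": 0.7},
        "targeted service":      {"effectiveness": 0.9, "complexity": 0.4},
    }

    def upper_left_score(solution):
        # "Best" lives in the upper-left quadrant: as high on the
        # Y axis and as low on the X axis as possible.
        return solution["effectiveness"] - solution["complexity"]

    best = max(candidates, key=lambda name: upper_left_score(candidates[name]))
    print(best)  # -> targeted service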

Simplification, as a strategy, makes a great deal of sense in my opinion. There is, however, another aspect to be considered. While the complexity of a given problem P is constant, P represents the problem space of a system at a given time, not the entire lifecycle. The lifecycle of a system will consist of a set of problem spaces over time, from first release to decommissioning. An architect must take this lifecycle into consideration or risk introducing an ill-considered constraint on the future direction of the product. This is complicated by the fact that there will be uncertainty in how the problem space evolves over time, with the uncertainty being the greatest at the point furthest from the present (as represented by the image below).

[Image: product timeline]

Some information regarding the transition from one problem space to the next will be available. Product roadmaps and deferred issues provide some insight into what will be needed next. That being said, emergent circumstances (everything from unforeseen changes in business direction to unexpected increases in traffic) will conspire to prevent the trajectory of the set of problem spaces from being completely predictable.

Excessive complexity will certainly constrain the options for evolving a system. Flexibility, however, comes with a certain amount of complexity of its own, and the simplest solution for the problem space of the moment may complicate the evolution of the system just as surely.

Not All Gold Glitters

[Image: ooh, shiny]

After two back-to-back posts, I thought I was done with YAGNI, simplicity, and economy of design – at least for a while. But then Jef Claes published “But I already wrote it”.

Jef’s post dealt with how a colleague had implemented a new feature in a much richer manner than anticipated. When the analyst confirmed that the implementation was more than what was needed, Jef recommended trimming out the extra, while his colleague argued that since it was done, it should be left as is. After Jef pointed out the risks and costs of the additional complexity, his colleague came around (which is, in my opinion, the correct way to do YAGNI – a consideration of the costs and benefits rather than a reflex). Then came the comments.

One commenter took exception to Jef’s statement that “code is just a means to an end; the side product of creating a solution or learning about a problem”. For that commenter, that attitude would “inevitably” lead to writing bad code. “The way you write good code is by loving good code”.

Another suggested that the situation had taught Jef’s colleague never to take initiative and had ruined their job satisfaction. “From now on, he should consider himself to be a code monkey whose job is to accept the designer’s vision, regardless of how short-sighted or limited it is, and produce a working program.” This commenter stated that Jef should have waited to see whether additional maintenance costs materialized before deciding.

Needless to say, I disagree with both.

The first commenter above needs to understand that the application belongs to the customer. Where functionality is concerned, substituting your judgment for the customer’s is unprofessional. If I ask for a garage and you build a mansion while my back’s turned, you don’t get paid. Taking pride in how you deliver value is a virtue, provided you remember that the customer is the one who determines what they value.

The second commenter assumes that Jef’s colleague will be discouraged because his initiative wasn’t accepted. If that’s the case, perhaps another line of work would be appropriate. As noted above, the customer determines what is needed and we should be taking pride in fulfilling those needs. They also assume that the design was short-sighted and limited, but the basis for that is never provided.

The second commenter’s suggestion that the code should have been left as is and only changed if a problem emerged is even more problematic. Taking on risk and potential expense on the customer’s behalf is not responsible behavior. Additionally, decisions are not made in a vacuum – each choice builds on earlier choices to enable or constrain (often both). Making those decisions without a rational basis is equally irresponsible.

On a personal level, I can sympathize that someone has expended effort and is proud of what they’ve accomplished. However, putting our own wants above the needs of our customers does not advance the profession. Delivering requested value, without surprises, does.

Bumper Sticker Philosophy

[Image: Yeah! Wait, what?]

YAGNI: You Ain’t Gonna Need It.

Sound bites are great – short, sweet, clear, and simple. Just like real life, right?

Seductive simple certainty is what makes slogans so problematic. Uncertainty and ambiguity occur far more frequently in the real world. Context and nuance add complexity, but not all complexity can be avoided. In fact, removing essential complexity risks misapplication of otherwise valid principles.

Under the right circumstances, YAGNI makes perfect sense. Features added on the basis of speculation (“we might want to do this someday down the road”) carry costs and risks. Flexibility typically comes at the cost of complexity, which brings with it the risk of increased defects and more difficult maintenance. Even when perfectly implemented, this complexity poses the risk of making your code harder for its consumers to use, due to the proliferation of options. Where the work meets a real need, as opposed to just a potential one, the costs and benefits can be assessed in a rational manner.

Where YAGNI runs into trouble is when it’s taken purely at face value. A naive application of the principle leads to design that ignores known needs falling outside the immediate work at hand, trusting that a coherent architecture will “emerge” from implementing the simplest thing that could possibly work and then refactoring the results. As the scale of the application increases, doing this across the entire application becomes less and less feasible. One reason for employing abstraction is the difficulty of reasoning in detail about a large number of things, which is exactly what refactoring across an entire codebase requires. Another weakness of taking the principle beyond its realm of relevance is the cost and difficulty of attempting to inject or modify cross-cutting architectural concerns (e.g. security, scalability, auditability) on an ad hoc basis. Snap decisions about pervasive aspects of a system ratchet up the risk level.
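
As a toy illustration (my example, not from any of the posts cited) of why cross-cutting concerns reward deliberate treatment: giving a concern like auditing a single planned seam leaves one place to change, while an ad hoc retrofit means touching every feature.

    import functools

    # Hypothetical sketch: auditing as a cross-cutting concern.
    # A deliberate seam (here, a decorator) localizes the decision;
    # bolting audit calls into each feature after the fact does not.

    def audited(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            print(f"AUDIT: {fn.__name__}{args}")  # stand-in for a real audit log
            return fn(*args, **kwargs)
        return wrapper

    @audited
    def transfer_funds(source, target, amount):
        return f"moved {amount} from {source} to {target}"

    print(transfer_funds("A", "B", 100))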

One argument for YAGNI (per the Wikipedia article) is that each new feature imposes constraints on the system, a position I agree with. When starting from scratch, form can follow function. However, as a system evolves, strategy tends to follow structure. Obviously, unnecessary constraints are undesirable. That being said, constraints which provide stability and consistency serve a purpose. The key is to be able to determine which is which and commit when appropriate.

In “Simplicity in Software Design: KISS, YAGNI and Occam’s Razor”, Hayim Makabee captures an important point – simple is not the same as simplistic. Adding unnecessary complexity adds risk without any attendant benefits. Among the ways he lists to avoid the unnecessary: “…as possible avoid basing our design on assumptions”. At the same time, he cautions that we should avoid focusing only on the present.

It should be obvious at this point that I dislike the term YAGNI, even as I agree with it in principle. This has everything to do with the way some misuse it. My philosophy of design can be summed up as “the most important question for an architect is ‘why?’”. Relying on slogans gets in the way of a deliberate, well-considered design.

When Silos Make Sense

[Image: Separation of Concerns in the real world]

Separation of Concerns is a well-known concept in application architecture. Over the years, application structures have evolved from monolithic to modular, using techniques such as encapsulation and abstraction to reduce coupling and increase cohesion. The purpose of doing so is quite simple – it yields software systems that are easier to understand, change, and enhance.
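
As a minimal sketch of the idea (the names below are invented for illustration): the ordering logic depends on a narrow notification abstraction rather than on any concrete delivery mechanism, so either side can change without disturbing the other.

    from abc import ABC, abstractmethod

    # Invented example: the ordering concern knows only the
    # Notifier abstraction (low coupling); delivery details live
    # in one place (high cohesion).

    class Notifier(ABC):
        @abstractmethod
        def send(self, message: str) -> None: ...

    class EmailNotifier(Notifier):
        def send(self, message: str) -> None:
            print(f"email: {message}")

    def place_order(order_id: str, notifier: Notifier) -> None:
        # Swapping EmailNotifier for SMS, a queue, etc. requires
        # no change here.
        notifier.send(f"order {order_id} placed")

    place_order("42", EmailNotifier())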

Within a monolith, all changes must be made in context of the whole. As that whole increases in size, the ability to understand the effect of a given change decreases and risk increases, even in the absence of complex logic. Complex processes, obviously, compound these effects.

With all this in mind, competent and responsible architects encapsulate, modularize, and partition code. Likewise, databases shared across multiple applications are seen as the anti-pattern that they are. Unfortunately, some seem to forget the principles and benefits of separation of concerns when it comes to infrastructure.

Without a doubt, server consolidation can save on licensing, administration, and hardware costs. However, the concepts of coupling and cohesion apply to the platform as much as the applications it supports. All applications sharing the same hardware environment are subject to a form of coupling. Whether that coupling is logically consistent (cohesive) or not can impact operations and customer satisfaction. It might be uncomfortable explaining why a critical line of business application is unavailable (if only briefly) because of a bad release of the intranet suggestion box.

Maintaining application cohesion on the platform also enhances your ability to keep the platform current. An environment hosting a family of applications owned by the same team should be easier to maintain than a grab bag that’s home to a wide variety of systems. As with any chore, the more painful it is, the less likely it is to be done. Failing to keep the platform up to date is a fast track to legacy status.

Paying attention to the following factors can assist in keeping platforms cohesive, maintainable, and healthy (a rough sketch of how they might combine follows the list):

  • Organizational Concerns: As noted above, systems sharing a homogeneous audience as well as the same (or close on the org chart) support team will make for better roommates.
  • Technology: Some applications (particularly platform applications such as database software) don’t play as well with others, preferring to control the lion’s share of resources. These are poor candidates to be deployed to the same host/cluster as other applications. Likewise, dependency conflicts will affect hosting decisions. Stability is also an important factor – less stable systems should be isolated.
  • Security: Not hosting applications with different security profiles on the same machine(s) should be self-evident.
  • Criticality: Those responsible for disaster recovery will be much happier if the high-priority systems are not intermingled with the low-priority ones.
  • Resource Utilization: Usage of CPU, memory, storage, and network bandwidth should all be accounted for to avoid overloading the infrastructure. The fact that this utilization isn’t a steady state should also be borne in mind. Trends may edge up or down over time and some applications may have periods of higher than normal use.
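
A rough sketch of how those checks might combine (all names, fields, and rules are invented for illustration, not a recommended policy):

    from dataclasses import dataclass

    @dataclass
    class App:
        name: str
        team: str            # organizational concerns
        greedy: bool         # technology: wants the lion's share of resources
        security_zone: str   # security profile
        criticality: str     # e.g. "high" or "low"
        peak_load: float     # resource utilization, as a fraction of a host

    def can_cohost(a: App, b: App, capacity: float = 1.0) -> bool:
        # Two applications are candidate roommates only if every
        # factor above lines up and their peak loads fit the host.
        return (a.team == b.team
                and not (a.greedy or b.greedy)
                and a.security_zone == b.security_zone
                and a.criticality == b.criticality
                and a.peak_load + b.peak_load <= capacity)

    lob_app = App("order-entry", "sales-it", False, "internal", "high", 0.4)
    suggestion_box = App("suggestion-box", "intranet", False, "internal", "low", 0.1)
    print(can_cohost(lob_app, suggestion_box))  # -> False: different teams and criticality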

While it doesn’t help with licensing and administration costs, virtualization can provide isolation without driving up hardware costs. Where appropriate, cloud offerings may also help with managing licensing, hardware, and (depending on the solution) administration costs. In all cases, savings must be evaluated against risks in order to get the full picture.

Emergence versus Evolution

[Image: You lookin' at me?]

Hayim Makabee’s recent post, “The Myth of Emergent Design and the Big Ball of Mud”, encountered a relatively critical reception on two of the LinkedIn groups we’re both members of. Much of that resistance seemed to stem from a belief that the choice was between Big Design Up Front (BDUF) and Emergent Design. Hayim’s position, with which I agree, is that there is a continuum of design, with BDUF and Emergent Design representing the extremes. His position, with which I also agree, is that both extremes are unlikely to produce good results, and that the answer lies in between.

The Wikipedia definition of Emergent Design cited by Hayim, taken nearly word for word from the Agile Sherpa site, outlines a No Design Up Front (NDUF) philosophy:

With Emergent Design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile or scrum release cycle, Development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a smaller code base, which naturally has less room for defects and a lower cost of maintenance.
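
In miniature, the process that definition describes might look like the following (a toy example of my own, not taken from any of the cited sources): build A, build B, then notice and extract what they share.

    # Toy illustration of "refactor out the commonality": features
    # A and B turned out to validate identifiers the same way, so
    # the shared rule is extracted after B is built.

    def _valid_id(value: str) -> bool:
        return value.isalnum() and len(value) <= 8

    def feature_a(customer_id: str) -> str:
        if not _valid_id(customer_id):
            raise ValueError("bad customer id")
        return f"A handled {customer_id}"

    def feature_b(order_id: str) -> str:
        if not _valid_id(order_id):
            raise ValueError("bad order id")
        return f"B handled {order_id}"

    print(feature_a("cust01"), feature_b("ord01"))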

Rather than being an unrealistically extreme statement, this definition meshes with ideas that people hold and even advocate:

“You need an overarching vision, a “big picture” design or architecture. TDD won’t give you that.” Wrong. TDD will give you precisely that: when you’re working on a large project, TDD allows you to build the code in small steps, where each step is the simplest thing that can possibly work. The architecture follows immediately from that: the architecture is just the accumulation of these small steps. The architecture is a product of TDD, not a pre-designed constraint.

Portion of a comment on Dan North’s “The Art of Misdirection”

Aspects of a design will undoubtedly emerge as it evolves. Differing interpretations of requirements, information deficits between the various parties, and changing circumstances all conspire to make it so. That does not mean, however, that the act of design is wholly emergent. Design connotes activity, whereas emergence implies passivity. A passive approach to design is, in my opinion, unlikely to succeed in resolving the conflicts inherent in software development, and it is the resolution of those conflicts that allows a system to adapt and evolve.

I’ve previously posted on the concept of expecting a coherent architecture to emerge from this type of blinkered approach. Both BDUF and NDUF carry tremendous risk of wasted effort. It is as naive to expect good results from ignoring information (NDUF) as it is to think you possess all the information (BDUF). Even for a relatively simple system, ignoring obvious commonality and an obvious need for flexibility in order to do the “simplest thing that could possibly work”, then refactor, guarantees needless rework. As the scale grows, the likelihood of conflicting requirements grows with it. Resolving those conflicts after code for one or more features is in place is more likely to yield unsatisfactory compromises.

The biggest weakness of relying on refactoring is that there are well-documented limits to what people can process. As the level of abstraction goes down, the number of concerns goes up. The same limit that dooms BDUF to failure also caps the ability to refactor large systems into a coherent whole.

Quality of service issues are yet another problem area for the “simplest thing that could possibly work” method. By definition, that concentrates on functionality to the exclusion of non-functional concerns. Security and scalability are just two concerns that typically fare poorly when bolted on after the fact. Premature optimization is to be avoided, but being aware of the expected performance environment can help you avoid blind alleys.

One area where I do agree with the TDD advocate quoted above is that active design imposes constraints. The act of design involves defining structure. As Ruth Malan has said, “negative space is telling; as is what it places emphasis on”. Too little structure poses as much risk as too much.

An evolutionary design process, such as Hayim’s Adaptable Design Up Front (ADUF), recognizes the futility of predicting the future in minute detail (BDUF) without surrendering to formlessness (NDUF). Experience about which parts of a system are most likely to change is invaluable. Coupled with reasonable planning based on what’s known about the big picture of the current release and about follow-up releases, that experience can drive a design that strikes the right balance – flexible, without being over-engineered.

[Photograph by Jose Luis Martinez Alvarez via Wikimedia Commons.]