“When Silos Make Sense” on Iasa Global Blog


Separation of Concerns is a well-known concept in application architecture. Over the years, application structures have evolved from monolithic to modular, using techniques such as encapsulation and abstraction to reduce coupling and increase cohesion. The purpose of doing so is quite simple – it yields software systems that are easier to understand, change, and enhance.
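To make the idea concrete, here is a minimal sketch (hypothetical names, not tied to any particular system) in which data access and presentation are separated so that each can change independently:

```java
// Illustrative sketch only: separating the data-access concern from the
// presentation concern so each can change without touching the other.
// All names are hypothetical.
import java.util.List;

interface OrderSource {                        // abstraction: where orders come from
    List<String> openOrders(String customerId);
}

class SqlOrderSource implements OrderSource {  // one concrete source; could be swapped out
    public List<String> openOrders(String customerId) {
        // a real implementation would query a database
        return List.of("ORD-1", "ORD-2");
    }
}

class OrderReport {                            // presentation concern only
    private final OrderSource source;

    OrderReport(OrderSource source) { this.source = source; }

    String render(String customerId) {
        return String.join("\n", source.openOrders(customerId));
    }
}
```

The report class depends on an abstraction rather than a concrete data source, which is one way coupling is reduced while each class keeps a single, cohesive responsibility.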

See the full post on the Iasa Global Blog (a re-post, originally published here).

There is No “Best”


What is the best architectural style/process/language/platform/framework/etc.?

A question posed that way can ignite a war as easily as Helen of Troy. The problem is, however, that it’s impossible to answer in that form. It’s a bit like asking which fastener (nail, screw, bolt, glue) is best. Without knowing the context to which it will be applied, we cannot possibly form a rational answer. “Best” without context is nonsense; like “right”, it’s a word that triggers much heat, but very little light.

People tend to like rules as there is a level of comfort in the certainty associated with them. The problem is that this certainty can be both deceptive and dangerous. Rules, patterns, and practices have underlying principles and contexts which give them value (or not). Understanding these is key to effective application. Without this understanding, usage becomes an act of faith rather than a rational choice.

Best practices and design patterns are two examples of useful techniques that have come to be regarded as silver bullets by some. Design patterns are useful for categorizing and communicating elements of design. Employing design patterns, however, is no guarantee of effective design. Likewise, understanding the principles that lie beneath a given practice is key to successfully applying that practice in another situation. Context is king.

Prior to applying a technique, it’s useful to ask “why”. Why this technique? Why do we think it will be effective? Rather than suggest a hard and fast number of times to ask (say…5, maybe?), I’d recommend asking until you’re comfortable that the decision is based on reason rather than hope or tradition. Designing the architecture of systems requires evaluation and deliberation. Leave the following of recipes to the cooks.

Architecture – Finding Simple Solutions Over a Lifetime of Problems

On Roger Sessions’ LinkedIn group, Simpler IT, the discussion “What do I mean by Simplifying” talks about finding simple solutions to problems. Roger’s premise is that every problem has its own inherent complexity:

Let’s say P is some problem that we need to solve. For example, P could be the earthquake in Tom’s example or P could be the need of a bank to process credit cards or P could be my car that needs its oil changed. P may range in complexity from low (my car needs its oil changed) to high (a devastating earthquake has occurred.)

For a given P, the complexity of P is a constant. There is no strategy that will change the complexity of P.

Complexity and Effectiveness

Roger goes on to say that for any given problem, there will be a set of solutions to that problem. He further states “…if P is non-trivial, then the cardinality of the solution set of P is very very large”. Each solution can be characterized by how well it solves the problem at hand and how complex the solution is. These attributes can be graphed, as in the image to the right, yielding quadrants that range from the most effective and least complex (best) to least effective and most complex (worst). Thus, simplifying means:

The best possible s in the solution set is the one that lives in the upper left corner of the graph, as high on the Y axis as possible and as low on the X axis as possible.

When I talk about simplifying, I am talking about finding that one specific s out of all the possible solutions in the solution set.
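As a purely illustrative sketch (the candidates and scores below are hypothetical, not Roger’s), picking the “best” s amounts to finding the candidate closest to that corner:

```java
// Illustrative only: ranking hypothetical candidate solutions by how close they sit
// to the "best" corner (high effectiveness, low complexity), both on a 0-1 scale.
import java.util.Comparator;
import java.util.List;

record Candidate(String name, double effectiveness, double complexity) {
    double distanceFromIdeal() {
        // ideal corner of the graph: effectiveness = 1.0, complexity = 0.0
        return Math.hypot(1.0 - effectiveness, complexity);
    }
}

class SolutionPicker {
    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
            new Candidate("hand-rolled framework", 0.9, 0.9),
            new Candidate("off-the-shelf package", 0.8, 0.3),
            new Candidate("manual workaround", 0.4, 0.1));

        candidates.stream()
            .min(Comparator.comparingDouble(Candidate::distanceFromIdeal))
            .ifPresent(best -> System.out.println("Closest to the 'best' quadrant: " + best.name()));
    }
}
```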

Simplification, as a strategy, makes a great deal of sense in my opinion. There is, however, another aspect to be considered. While the complexity of a given problem P is constant, P represents the problem space of a system at a given time, not the entire lifecycle. The lifecycle of a system will consist of a set of problem spaces over time, from first release to decommissioning. An architect must take this lifecycle into consideration or risk introducing an ill-considered constraint on the future direction of the product. This is complicated by the fact that there will be uncertainty in how the problem space evolves over time, with the uncertainty being the greatest at the point furthest from the present (as represented by the image below).

[Image: product timeline]

Some information regarding the transition from one problem space to the next will be available. Product roadmaps and deferred issues provide some insight into what will be needed next. That being said, emergent circumstances (everything from unforeseen changes in business direction to unexpected increases in traffic) will conspire to prevent the trajectory of the set of problem spaces from being completely predictable.

Excessive complexity will certainly constrain the options for evolving a system. However, flexibility carries a certain amount of complexity of its own, and the solution that is simplest for today’s problem space may complicate the evolution of the system just as surely.

Bumper Sticker Philosophy


YAGNI: You Ain’t Gonna Need It.

Sound bites are great – short, sweet, clear, and simple. Just like real life, right?

That seductive, simple certainty is exactly what makes slogans so problematic; in the real world, uncertainty and ambiguity are far more common. Context and nuance add complexity, but not all complexity can be avoided. Stripping away essential complexity risks misapplying otherwise valid principles.

Under the right circumstances, YAGNI makes perfect sense. Features added on the basis of speculation (“we might want to do this someday down the road”) carry costs and risks. Flexibility typically comes at the cost of complexity, which brings with it the risk of increased defects and more difficult maintenance. Even when perfectly implemented, this complexity poses the risk of making your code harder for its consumers to use due to the proliferation of options. Where the work meets a real need, as opposed to just a potential one, the costs and benefits can be assessed in a rational manner.
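A small, hypothetical example of that trade-off: the speculative version below is harder for a consumer to call correctly than the one that does only what is currently needed (all names invented for illustration):

```java
// Illustrative sketch: speculative flexibility vs. what is actually needed today.
// All names are hypothetical.
import java.math.BigDecimal;

class SpeculativeInvoiceService {
    // Options added "just in case" - every caller must now understand all of them,
    // and every combination is another path to test and document.
    BigDecimal total(String invoiceId, String currency, boolean applyLegacyRounding,
                     String taxRegimeOverride, boolean includeDraftLines) {
        return BigDecimal.ZERO; // real calculation elided
    }
}

class InvoiceService {
    // The need that actually exists: one currency, one rounding rule, posted lines only.
    BigDecimal total(String invoiceId) {
        return BigDecimal.ZERO; // real calculation elided
    }
}
```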

Where YAGNI runs into trouble is when it’s taken purely at face value. A naive application of the principle leads to design that ignores known needs that are not part of the immediate work at hand, trusting that a coherent architecture will “emerge” from implementing the simplest thing that could possibly work and then refactoring the results. As the scale of the application increases, doing this successfully across the entire application becomes less and less likely. One reason for employing abstraction is the difficulty of reasoning in detail about a large number of things, which is exactly what you would need to do in order to refactor across an entire codebase. Another weakness of taking the principle beyond its realm of relevance is the cost and difficulty of attempting to inject and/or modify cross-cutting architectural concerns (security, scalability, auditability, etc.) on an ad hoc basis. Snap decisions about pervasive aspects of a system ratchet up the risk level.
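For contrast, a hedged sketch (hypothetical names) of handling one such pervasive concern, auditing, deliberately rather than ad hoc, so the policy lives in one place instead of being re-decided feature by feature:

```java
// Illustrative sketch: isolating a cross-cutting concern (auditing) behind a wrapper
// instead of scattering it through every feature as it is written. Names are hypothetical.
interface AccountRepository {
    void credit(String accountId, long amountInCents);
}

class JdbcAccountRepository implements AccountRepository {
    public void credit(String accountId, long amountInCents) {
        // persistence logic elided
    }
}

class AuditingAccountRepository implements AccountRepository {
    private final AccountRepository inner;

    AuditingAccountRepository(AccountRepository inner) { this.inner = inner; }

    public void credit(String accountId, long amountInCents) {
        // one auditing policy, applied consistently to every caller
        System.out.printf("AUDIT credit %s %d%n", accountId, amountInCents);
        inner.credit(accountId, amountInCents);
    }
}
```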

One argument for YAGNI (per the Wikipedia article) is that each new feature imposes constraints on the system, a position I agree with. When starting from scratch, form can follow function. However, as a system evolves, strategy tends to follow structure. Obviously, unnecessary constraints are undesirable. That being said, constraints which provide stability and consistency serve a purpose. The key is to be able to determine which is which and commit when appropriate.

In “Simplicity in Software Design: KISS, YAGNI and Occam’s Razor”, Hayim Makabee captures an important point – simple is not the same as simplistic. Adding unnecessary complexity adds risk without any attendant benefits. One good way he lists to avoid the unnecessary is to “…as possible avoid basing our design on assumptions”. At the same time, he cautions that we should avoid focusing only on the present.

It should be obvious at this point that I dislike the term YAGNI, even as I agree with it in principle. This has everything to do with the way some misuse it. My philosophy of design can be summed up as “the most important question for an architect is ‘why?’”. Relying on slogans gets in the way of a deliberate, well-considered design.

Emergence versus Evolution


Hayim Makabee’s recent post, “The Myth of Emergent Design and the Big Ball of Mud”, encountered a relatively critical reception on two of the LinkedIn groups we’re both members of. Much of that resistance seemed to stem from a belief that the choice was between Big Design Up Front (BDUF) and Emergent Design. Hayim’s position, with which I agree, is that there is a continuum of design, with BDUF and Emergent Design representing the extremes. He further holds, and again I agree, that both extremes are unlikely to produce good results and that the answer lies in between.

The Wikipedia definition of Emergent Design cited by Hayim, taken nearly word for word from the Agile Sherpa site, outlines a No Design Up Front (NDUF) philosophy:

With Emergent Design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile or scrum release cycle, Development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a smaller code base, which naturally has less room for defects and a lower cost of maintenance.

Rather than being an unrealistically extreme statement, this definition meshes with ideas that people hold and even advocate:

“You need an overarching vision, a “big picture” design or architecture. TDD won’t give you that.” Wrong. TDD will give you precisely that: when you’re working on a large project, TDD allows you to build the code in small steps, where each step is the simplest thing that can possibly work. The architecture follows immediately from that: the architecture is just the accumulation of these small steps. The architecture is a product of TDD, not a pre-designed constraint.

Portion of a comment on Dan North’s “The Art of Misdirection”

Aspects of a design will undoubtedly emerge as it evolves. Differing interpretations of requirements, information deficits between the various parties, and changing circumstances all conspire to make it so. That does not mean, however, that the act of design is wholly emergent. Design connotes activity whereas emergence implies passivity. A passive approach to design is, in my opinion, unlikely to succeed in resolving the conflicts inherent in software development, and it is the resolution of those conflicts which allows a system to adapt and evolve.

I’ve previously posted on the concept of expecting a coherent architecture to emerge from this type of blinkered approach. Both BDUF and NDUF hold out tremendous risk of wasted effort. It is as naive to expect good results from ignoring information (NDUF) as it is to think you possess all the information (BDUF). Assuming a relatively simple system, ignoring obvious commonality and obvious need for flexibility in order to do the “simplest thing that could possibly work, then refactor” guarantees needless rework. As the scale grows, the likelihood of conflicting requirements will grow. Resolving those conflicts after code for one or more features is in place will be more likely to yield unsatisfactory compromises.
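To make that rework concrete, consider a deliberately simplified, hypothetical example: two features built independently, each “as simply as possible”, end up with their own slightly divergent copy of the same rule, and someone now has to work out which one is correct:

```java
// Illustrative sketch of needless rework: two features, built independently,
// each re-implement the same eligibility rule with an accidental difference.
// Reconciling them after the fact is the rework that recognizing the obvious
// commonality up front would have avoided. Names are hypothetical.
class CheckoutFeature {
    boolean eligibleForDiscount(int itemCount, double orderTotal) {
        return itemCount >= 5 && orderTotal > 100.0;   // strictly greater than
    }
}

class LoyaltyFeature {
    boolean eligibleForDiscount(int itemCount, double orderTotal) {
        return itemCount >= 5 && orderTotal >= 100.0;  // greater than or equal - which is right?
    }
}

// The single shared rule that acknowledging the commonality would have produced:
class DiscountPolicy {
    boolean eligible(int itemCount, double orderTotal) {
        return itemCount >= 5 && orderTotal >= 100.0;
    }
}
```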

The biggest weakness of relying on refactoring is that there are well-documented limits to what people can process. As the level of abstraction goes down, the number of concerns goes up. The same limit that dooms BDUF to failure also limits the ability to refactor large systems into a coherent whole.

Quality of service issues are yet another problem area for the “simplest thing that could possibly work” method. By definition, that approach concentrates on functionality to the exclusion of non-functional concerns. Security and scalability are just two concerns that typically fare poorly when bolted on after the fact. Premature optimization is to be avoided, but being aware of the expected performance environment can help you avoid blind alleys.

One area where I do agree with the TDD advocate quoted above is that active design imposes constraints. The act of design involves defining structure. As Ruth Malan has said, “negative space is telling; as is what it places emphasis on”. Too little structure poses as much risk as too much.

An evolutionary design process, such as Hayim’s Adaptable Design Up Front (ADUF), recognizes the futility of predicting the future in minute detail (BDUF) without surrendering to formlessness (NDUF). Experience about what parts of a system are most likely to change is invaluable. That experience, coupled with reasonable planning based on what is known about the big picture of the current release and about follow-up releases, can be used to drive a design that strikes the right balance – flexible, without being over-engineered.

[Photograph by Jose Luis Martinez Alvarez via Wikimedia Commons.]

Finding the Balance


One of my earliest posts on Form Follows Function, “There is no right way (though there are plenty of wrong ones)”, dealt with the subject of trade-offs. Whether dealing with the architecture of a solution or the architecture of an enterprise, there will be competing forces at work. Resolving these conflicts in an optimal manner involves finding the balance between individual forces and the system as a whole (consistent with the priorities of the stakeholders).

At the scale of an individual solution, concerns such as performance, scalability, simplicity, and technical elegance (to mention only a few) can all serve as conflicting forces. Gold-plating, whether in the form of piling on features or technical excess, affects both budget and schedule. Squeezing the last drop of performance out of a system will most likely increase complexity, making the system more difficult to change in the future and possibly impacting reliability in the present. As noted by Jimmy Bogard, “performance optimization without a clear definition of success just leads down the path of obfuscation and unmaintainability”.

At the scale of an enterprise’s IT operations, the same principles apply. Here the competing forces are the various business units as well as the enterprise as a whole. Governance is needed to ensure that some units are not over-served while others are under-served. Likewise, enterprise-level needs (e.g. network security, compliance, business continuity, etc.) must be accommodated. Too little governance could lead to security breaches, legal liability, and/or runaway costs, while too much can stifle innovation or encourage shadow IT initiatives.

A fundamental role of an architect is to identify and understand the forces in play. In doing so, the architect is then positioned to present the available options and the consequences of those options. Additionally, this allows for contingency planning for when priorities shift and a re-balance is required. In a highly variable environment, having a fragile balance is almost as bad as having no balance at all.

Reduce, Reuse, Recycle


Reuse is one of those concepts that periodically rears up to sing its seductive siren song. Like the song in the legend, it is exceedingly attractive, whether in the form of object-orientation, design patterns, or services. Unfortunately, it also shares the quality of tempting the unwary onto the rocks to have their hopes (if not their ships) dashed.

The idea of lowering costs via writing something once, the “right way”, then reusing it everywhere, is a powerful one. The simplicity inherent in it is breathtaking. We even have a saying that illustrates the wisdom of reuse – “don’t reinvent the wheel”. And yet, as James Muren pointed out in a discussion on LinkedIn, we do just that every day. The wheels on rollerblades differ greatly from those on a bus. Each reuses the concept, yet it would be ludicrous to suggest that either could make do with the other’s implementation of that concept. This is not to say that reusable implementations (i.e. code reuse) are not possible, only that they are more tightly constrained than we might imagine at first thought.
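In code, the wheel example might look something like the following hedged sketch (hypothetical names): the concept is reused as an abstraction, while each context supplies its own, very different implementation:

```java
// Illustrative sketch: reusing the *concept* of a wheel (the abstraction) while each
// context provides its own implementation. Figures are rough and purely illustrative.
interface Wheel {
    double diameterInMillimeters();
    double loadRatingInKilograms();
}

class RollerbladeWheel implements Wheel {
    public double diameterInMillimeters() { return 80; }
    public double loadRatingInKilograms() { return 50; }
}

class BusWheel implements Wheel {
    public double diameterInMillimeters() { return 1_050; }
    public double loadRatingInKilograms() { return 3_500; }
}
```

Neither implementation could stand in for the other, but both satisfy the shared concept.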

Working within a given system, code reuse is merely the Don’t Repeat Yourself (DRY) principle in action. The use cases for the shared code are known. Breaking changes can be made with relatively limited consequences, given that the clients are under the control of the same team as the shared component(s). Once components move outside of the team, much more in the way of planning and control is necessary, and agility becomes increasingly constrained.

Reusable code needs to possess a certain level of flexibility in order to be broadly useful. The more widely shared, the more flexible it must be. By the same token, the more widely used the code is, the more stability is required of the interface so as to maintain compatibility across versions. The price of flexibility is technical complexity. The price of stability is overhead and governance – administrative complexity. This administrative complexity affects not only the developing team, but also the consuming ones, in the form of another dependency to manage.

Last week, Tony DaSilva published a collection of quotes about code reuse from various big names (Steve McConnell, Larry Constantine, etc.), all of which stated the need for governance, planning, and control in order to achieve reuse. In the post, he noted: “Planned? Top-down? Upfront? In this age of ‘agile’, these words border on blasphemy.” If blasphemy, it’s blasphemy with distinguished credentials.

In a blog post (the subject of the LinkedIn discussion I mentioned above) named “The Misuse of Reuse”, Roger Sessions touches on many of the problems noted above. Additionally, he notes security issues, infrastructure overhead, and the potential for a single point of failure that can come from poorly planned reuse. His most important point, however, is this (emphasis mine):

Complexity trumps reuse. Reuse is not our goal, it is a possible path to our goal. And more often than not, it isn’t even a path, it is a distraction. Our real goal is not more reusable IT systems, it is simpler IT systems. Simpler systems are cheaper to build, easier to maintain, more secure, and more reliable. That is something you can bank on. Unlike reuse.

While I disagree that simplicity is our goal (value, in my opinion, is the goal; simplicity is just another tool to achieve that value), the highlighted portion is key. Reuse is not an end in itself, merely a technique. Where the technique does not achieve the goal, it should not be used. Rather than naively assuming that code reuse always lowers costs, we must evaluate it, taking the costs and risks noted above into account. Reuse should only be pursued where the actual costs are outweighed by the benefits.

Following this to its logical conclusion, two categories emerge as the best candidates for code reuse:

  • Components with a static feature set that are relatively generic (e.g. Java/.Net Framework classes, 3rd party UI controls)
  • Complex, uniform and specific processes, particularly where redundant implementations could be harmful (e.g. pricing services, application integrations)

It’s not an accident that the two examples given for generic components are commercially developed code intended for a wide audience. Designing and developing these types of components is more typical of a software vendor than an in-house development team. Corporate development teams would tend to have better results (subject to a context-specific evaluation) with the second category.

Code reuse, however, is not the only type of reuse available. Participants in the LinkedIn discussion above identified design patterns, models, business rules, requirements, processes, and standards as potentially reusable artifacts. Remy Fannader has written extensively about the use of models as reusable artifacts. Two of his posts in particular, “The Cases for Reuse” and “The Economics of Reuse”, provide valuable insight into reuse of models and model elements, as well as knowledge reuse across different architectural layers. As the example of the wheel points out, reuse at higher levels of abstraction may be more feasible.

Reuse of a concept as opposed to an implementation may allow you to avoid technical complexity. It definitely allows you to avoid administrative complexity. In an environment where a component’s signature is in flux, it makes little sense to try to reuse a concrete implementation. In this circumstance, DRY at the organizational level may be less of a virtue, in that it will impede multiple teams’ ability to respond to change.

Reuse at a higher level of abstraction also allows for recycling instead of reuse. Breaking the concept into parts and transforming its implementation to fit new or different contexts may well yield better results than attempting to make one size fit all.

It would be a mistake to assume that reuse is either unattainable or completely without merit. The key question is whether the technique yields the value desired. As with any other architecturally significant decision, the most important question to ask yourself is “why”.

Search Engine Serendipity

One of the nice features of WordPress is the “Site Stats” page. In addition to presenting information about hit counts, it also shows search word combinations used to find pages on your site. The combination that was displayed the other day piqued my interest: “form follows function and structure follows strategy”. If you’ve read the Why “Form Follows Function” page, you know the provenance of the phrase “form follows function” and why I felt it made a fitting title for the blog. The second phrase, “structure follows strategy”, was unfamiliar, but apropos. A quick session with Google provided the background.

It turns out that the phrase was a quote from Alfred D. Chandler’s Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Wikipedia led to the site ProvenModels, which summarized Chandler’s thesis as follows:

He described strategy as the determination of long-term goals and objectives, the adoption of courses of action and associated allocation of resources required to achieve goals; he defined structure as the design of the organisation through which strategy is administered. Changes in an organisation’s strategy led to new administrative problems which, in turn, required a new or refashioned structure for the successful implementation of the new strategy.

The same search yielded dissenting views as well. One counterpoint was titled “Strategy Follows Structure”. The thrust of this paper was that the existing structure of an enterprise would restrict its strategic options.

In my opinion, both viewpoints are correct. Strategy drives structure at the outset: both creation and major change imply extensive structural work. Once established, however, that architecture will constrain future changes, and strategic shifts will require considerable justification for the effort and cost involved.

Although the context of both Chandler’s work and the opposing view is in the realm of enterprise architecture, these same principles apply to solution and application architecture as well. Investment yields inertia. The takeaway from this is that flexibility is critical. The more agility that an architecture can provide without extensive, expensive, and disruptive re-work, the better that architecture serves the needs of its users.

There is no right way (though there are plenty of wrong ones)

The idea of functional perfection is an intriguing concept. Would it not be wonderful if the things we use functioned perfectly? Well … it certainly would … but, then, what do we actually mean by perfectly? On the one hand we tend to consider functional perfection to be an obvious, if difficult, aim of the engineer’s and designer’s effort. But on the other hand we have a creeping suspicion that such an aim is hopelessly out of reach.

Jan Michl, “On the Rumor of Functional Perfection”

One of my pet peeves is being asked for the “right” way to do something. The word “right” only has meaning relative to your perception of the current state of your needs. Zero defects sounds like a great deal until the time and cost are accounted for. Ditto for “five nines” high availability. And eking out every little bit of performance will likely leave you with a system that is difficult to maintain. There is a reason that some drive Volvos, some Lamborghinis, and others Fords.

A major aspect of an architect’s job is to understand the trade-offs that go into design decisions and then, with the customer’s wants and needs in mind, to create a solution that best meets those requirements. Where requirements are contradictory, resolutions must be negotiated. It is almost assured that what suffices for today will not be good enough tomorrow. It is equally likely that, despite your best efforts to anticipate future needs, something unexpected will come up. Therefore, the aim should not be for everlasting perfection, but for flexibility.

Aiming for some sort of illusory technical perfection loses sight of the customer’s needs. As difficult as it may be to process, the customer is not interested in your skill. The customer is interested in what your skill can do to improve their business. They are not interested in paying extra for work that does not address that goal. As Dan North noted:

There is a difference between the mindset of a master stonemason sculpting the expression on the face of a gargoyle and someone using the commodity blocks that make up a multi-storey car park. In the latter case the last thing I want is someone’s “personality” causing some of the blocks to be different sizes and no longer interchangeable, never mind the added expense of having someone manually hew the stone rather than using machine tools. In the former the stonemason’s attitude is indulgent. He is putting his signature (and his ego, and his reputation) into this magnificent representation of Hell’s best. If you just wanted an oil-pouring spout you could get one from a DIY store.

This does not, however, mean that the architect’s role is passive. Understanding the costs and benefits associated with design decisions places the architect in a position of responsibility for communicating the trade-offs to the customer. As stated by Rebecca Parsons:

From a business context, the business has a debt that it may or may not have made a conscious decision to take on. It is the responsibility of the development team to make sure the business understands if they are compromising the long-term costs and value of the code by taking a particular decision in the short term.

It is the architect’s job to help guide the customer in choosing a path that provides the optimum system for their needs now and in the future. This is where the architect provides value.

After all, at the end of the day, if there were only one right way to design a system, would anyone need an architect?