“Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?” on Iasa Global

In my part of the world, it’s not uncommon for people to say that someone wouldn’t recognize something if it “bit them in the [rude rump reference]”. For many organizations, that seems to be the explanation for the state of their enterprise IT architecture. For while we might claim to understand terms like “design”, “encapsulation”, and “separation of concerns”, the facts on the ground fail to show it. Just as software systems can degenerate into a “Big Ball of Mud”, so too can the systems of systems comprising our enterprise IT architectures. If we look at organizations as the systems they are, it should be obvious that this same entropy can infect the organization as well.

See the full post on the Iasa Global site (a re-post, originally published here).

Who Needs Architects? – Monoliths as Systems of Stuff

[Image: Platypus]

In my experience, IT is not a “one size fits all” operation. In both their latest two-speed vision and their older three-speed one, Gartner’s opinion is the same – there is no one process that works for every system across the enterprise (for what it’s worth, I agree with Simon Wardley that Bimodal IT is still too restrictive and that three modes come closer to reflecting the types of systems in use). Process and governance that is appropriate to one system may be too strict for another and too loose for a third. In this light, attempting to find one compromise ensures that all are poorly served. Consequently, more than one mode of governance just makes sense.

The problem is more complex, however, than just picking trimodal or bimodal and dividing applications up according to whether they are systems of record, systems of differentiation, or systems of innovation (or digital versus traditional). Just as “accidental architecture” can result in a “Big Ball of Mud” at the application level, it can also do so in terms of enterprise IT architecture. Monoliths that have grown organically may cross boundaries of the multimodal framework taxonomy, essentially becoming incoherent systems of “stuff”. This complicates their assignment to a process that fits their nature. When the application fits more than one category, do you force it into the most restrictive category or the least restrictive? No matter which way you choose, the answer will be problematic.

Given the fractal nature of IT, it should not be a surprise that design decisions made at the level of individual applications can bubble up to affect the IT architecture of the enterprise as a whole. Separation of concerns (logical) and modularity (physical) remain important from the lowest level to the top. Without a strategic direction, tactical excellence can lead to waste from lack of focus.

Monolithic architectures trade modularity for simplicity at the application architecture level, which may be a valid trade at that level. If, however, a monolith crosses framework category boundaries, then major architectural refactoring may be required to avoid making ugly compromises. Separation of concerns within a monolith can ease the pain of this kind of refactoring, but avoiding the need for refactoring altogether is less painful still. Achieving that avoidance requires paying attention to cohesion at every level of granularity and designing with extra-application as well as intra-application concerns in mind.

Knowing the issues and being able to say why you made the choices you did is key.

Form Follows Function on SPaMCast 339

[Image: SPaMCAST logo]

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast features Tom’s essay on demonstrations and a Form Follows Function installment on microservices, SOA, and Enterprise IT Architecture.

For SPaMCast 339, Tom and I discuss my “Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?” post.

Microservice Principles, Technical Debt, and Legacy Systems

Is there a circumstance where the answer to Architect Clippy‘s question is “yes”? In “Microservice Architectures aren’t for Everyone” I used this tweet to underscore the observation that a team that can’t produce a well-modularized monolith is unlikely to be helped by trying to distribute the problem. On the other hand, a team (or teams) tasked with rehabilitating a “Big Ball of Mud” might well find some value in the principles behind microservice architectures.

Some of the relevant principles are cohesion and replaceability. As Dan North noted in “Microservices: software that fits in your head”:

One way to manage the mess is to maximise the likelihood that everyone knows what’s going on in the codebase. This requires two things: consistency and replaceability. Consistency implies you can make reasonable assumptions about unfamiliar parts of the application. Replaceability means you can kill code easily and replace it with something better.

Without achieving separation of concerns, any architectural refactoring effort will be an exercise in chasing fires across the codebase. A divide and conquer strategy that applies the single responsibility principle at a macro level will be more likely to facilitate identification and remediation of lower-level technical debt. Monoliths can benefit from being carved up, not because small is inherently better, but because they reach a point where independence of their components becomes beneficial, even crucial. Components that share fewer dependencies (such as a shared data store) and have independent release cycles offer a great deal of flexibility in structuring an application and the team(s) that develop it.

In “Microservices allow for localized tech debt”, Jim Plush stated: “It’s much easier mentally to tackle $10,000 of debt across 4 credit cards at $2500 each than 1 card at the full $10,000.” Even more to the point, it’s much easier to tackle that debt when you split it with three other people (teams) each working independently.

Re-writes have a well-deserved bad reputation. Shared platforms and shared data stores will often mean that the transition from the legacy system to the re-written one will be a high-risk “big bang” affair. As Edmond Lau observed in “How to Avoid One of the Costliest Mistakes in Software Engineering”, you want to “…get as quickly as possible to a state where you’re again making incremental improvements”. Getting to this state may well happen quicker when the parts are separated.

Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?

[Image: Surveying and drafting instruments and examples]

In my part of the world, it’s not uncommon for people to say that someone wouldn’t recognize something if it “bit them in the [rude rump reference]”. For many organizations, that seems to be the explanation for the state of their enterprise IT architecture. For while we might claim to understand terms like “design”, “encapsulation”, and “separation of concerns”, the facts on the ground fail to show it. Just as software systems can degenerate into a “Big Ball of Mud”, so too can the systems of systems comprising our enterprise IT architectures. If we look at organizations as the systems they are, it should be obvious that this same entropy can infect the organization as well.

One symptom of this entropy is when the dividing lines blur, weakening or even removing constraints. While the term “constraint” commonly has a negative connotation, constraints provide the structure and definition of a system. Separation of concerns, encapsulation, and DRY are all constraints that are intended to provide benefit. We accept limits on the concerns a component addresses, on the accessibility of its internals, and on duplicate instances of code or data in order to reduce complexity, not just to check off a philosophical box. If we remove or even just relax these types of constraints too much, we incur risk.

This blurring of lines can occur at any level and on multiple levels. Additionally, architectural weakness at a higher level of abstraction can negate strengths at lower levels. A collection of well-designed systems will not ensure a coherent enterprise IT architecture if there is overlap and redundancy without a clear understanding of which ones are authoritative. Accidental architecture is no more likely to work at higher levels of abstraction than lower ones.

Architectural design, at each level of granularity, should be intentional and appropriate to that level. The ideal is not to over-regulate, but to strike a balance. Micromanaging internals wastes effort better spent on something beneficial; abdicating design responsibility practically guarantees chaos. An additional consideration is the fit between the human and technological aspects. Conway’s law is more than just an observation; it can be used as a tool to align applications to specific business concerns and development teams to specific applications or application components.

Just as a carver takes note of the grain of the wood being shaped, so should an architect work with rather than against the grain of the organization.

Jessica Kerr’s post, “Microservices, Microbusinesses”, captures these concepts from the viewpoint of microservice architectures. Partitioning application concerns into microservices allows for internal flexibility at the cost of external conformance to necessary governance. As Kerr puts it, “…everybody is a responsible adult”:

That’s a lot of overhead and glue code. Every service has to do translation from input to its internal format, and then to whatever output format someone else requires. Error handling, caching or throttling, failover and load balancing and monitoring, contract testing, maintaining multiple interface versions, database interaction details. Most of the code is glue, layers of glue protecting a small core of business logic. These strong boundaries allow healthy relationships with other services, including new interactions that weren’t designed into the architecture from the beginning. For all this work, we get freedom on the inside.

Kerr also recognizes the applicability of this trade-off to the architecture of the organization:

Still, a team exists as a citizen inside a larger organization. There are interfaces to fulfill. Management really does need to know about progress. Outward collaboration is essential. We can do this the same way our code does: with glue. Glue made of people. One team member, taking the responsibility of translating cards on the wall into JIRA, can free the team to optimize communication while filling management’s strongest needs.

Management defines an API. Encapsulate the inner workings of the team, and expose an interface that makes sense outside. By all means, provide a reference implementation: “Other teams use Target Process to track work.” Have default recommendations: “We use Clojure here unless there’s some better solution. SQL Server is our first choice database for these reasons.” Give teams a strong process and technology stack to start from, and let them innovate from there.

“Good fences make good neighbors” not by keeping out, but by channeling traffic into commonly understood and commonly accepted directions. We recognize lines so as to influence those aspects we truly need to influence. More importantly, we recognize lines to prevent needless conflict and waste. The key is to draw the lines so that they work for us, not against us.

Innovation, Agility, and the Big Ball of Mud in Meatspace

[Image: French infantry in a trench, Verdun 1916]

Although the main focus of my blog is application and solution architecture, I sometimes write about process and management issues as well. Conway’s law dictates that the organizational environment strongly influences software systems. While talking with a colleague recently, I stated that I see organizations as systems – social systems operating on “hardware” that’s more complex and less predictable than that which hosts software systems (i.e. people, hence the use of “Meatspace” in the title). The entangling of social and software systems means that we should be aware of the architecture of the enterprise, at least insofar as it will affect the IT architecture of the enterprise.

Innovation and agility are hot topics. Large corporations, by virtue of their very size, are at a disadvantage in this respect. In a recent article for Harvard Business Review, “The Core Incompetencies of the Corporation”, Gary Hamel discussed this issue. Describing corporations as “inertial”, “incremental” and “insipid”, he notes that “As the winds of creative destruction continue to strengthen, these infirmities will become even more debilitating”.

The wonderful thing (at least in my mind) about Twitter is that it makes it very easy for two people in the UK and two people in the US to hold an impromptu discussion of enterprise architecture in general and leadership and management issues in particular (on a Saturday morning, no less). Dan Cresswell started the ball rolling with a quote from Hamel’s article: “most leaders still over-value alignment and conformance and under-value heterodoxy and heresy”. Tom Graves replied that he would suggest that “…heresy is a _necessary_ element of ‘working together’…”. My contribution was that I suspect that “together” is part of the problem; they don’t know how to integrate rebels and followers, therefore the heretics are relegated to a skunkworks or given the sack. Ruth Malan cautioned about limits, noting that “A perpetual devil’s advocate can hold team in perpetual churn; that judgment thing…by which I simply mean, sometimes dissent can be pugnacious, sometimes respectful and sometimes playful; so it depends”.

Ruth’s points re: “…that judgment thing…” and “…it depends” are, in my opinion, extremely important to understanding the issue. I noted that “that judgment thing” was a critical part of management and leadership. This is not in the sense that managers and leaders should be the only ones to exercise judgment, but that they should use their judgment to integrate, rather than eliminate, the heresies so that the organization does not stagnate. There is a need for a “predator”, someone to challenge assumptions, in the management realm as much as there is a need for one in the design and development realm. Likewise, an understanding of “it depends” is key. Neither software systems nor social systems are created and maintained by following a recipe.

While management practices are part of the problem, it’s naive to concentrate on that to the exclusion of all else. Tom Graves is fond of saying, “Things work better when they work together, on purpose”. This is a fundamental point. As he observed in “Dotting the joins (the JEA version)”:

Every enterprise is a system – an ‘ecosystem with purpose’ – constrained mainly by its core vision, values and other drivers. Within that system, everything ultimately connects with everything else, and depends on everything else: if it’s in the system, it’s part of the system, and, by definition, the system can’t operate without it.

The system must be structured to manage, not ignore, complexity. Without an intentional design, things fall through the cracks. Tom again, from the same post:

To do something – to do anything, really – we need to know enough to get it to work right down in the detail of real-world practice. When there’s a lot of detail to learn, or a lot of complexity, we specialise: we choose one part of the problem, one part of the context, and concentrate on that. We get better at doing that one thing; and then better; and better again. And everyone can be a specialist in something – hence, given enough specialists, it seems that between us we should be able to do anything. In that sense, specialisation seems to be the way to get things done – the right way, the only way.

Yet there’s a catch. What specialisation really does is that it clusters all of its attention in one small area, and all but ignores the rest as Somebody Else’s Problem. It makes a dot, somewhere within what was previously a joined-up whole. And then someone else makes their own dot, and someone else carves out a space to claim to make their dot. But there’s nothing to link those dots together, to link between the dots – that’s the problem here.

Hamel’s use of the word “incremental” points the way to diagnosing the problem – enterprises have grown organically, rather than springing to life fully formed. Like a software system that has grown by sticking on bits and pieces without refactoring, social systems can become an example of Foote and Yoder’s “Big Ball of Mud” as well. Uncoordinated changes made without considering the larger system lead to a sclerotic mess, regardless of whether the system in question is social or software. My very first post on this blog, “Like it or not, you have an architecture (in fact, you may have several)”, sums it up. The question is whether that architecture is intentional or not.

Accidental Architecture

[Image: Hillside Slum]

I’m not sure if it’s ironic or fitting that my very first post on Form Follows Function, “Like it or not, you have an architecture (in fact, you may have several)”, dealt with the concept of accidental architecture. A blog dedicated to software and solution architecture starts off by discussing the fact that architecture exists even in the absence of intentional design? It is, however, a theme that seems to recur.

The latest recurrence was a Twitter exchange with Ruth Malan, in which she stated:

Design is the act and the outcome. We design a system. The system has a design.

This prompted Arnon Rotem-Gal-Oz to observe that architecture need not be intentional and “…even areas you neglect well [sic] have design and then you’d have to deal with its implications”. To this I added “accidental architecture is still architecture – whether it’s good architecture or not is another thing”.

Ruth closed with a reference to a passage by Grady Booch:

Every software-intensive system has an architecture. In some cases that architecture is intentional, while in others it is accidental. Most of the time it is both, born of the consequences of a myriad of design decisions made by its architects and its developers over the lifetime of a system, from its inception through its evolution.

The idea that an architecture can “emerge” out of skillful construction rather than as a result of purposeful design is trivially true. The “Big Ball of Mud”, an ad hoc arrangement of code that grows organically, remains a popular design pattern (yes, it’s a pattern rather than an anti-pattern – see the Introduction of “Big Ball of Mud” for an explanation of why). What remains in question is how effective an architecture that largely or even entirely “emerges” can be.

Even the current architectural style of the day, microservices, can fall prey to the Big Ball of Mud syndrome. A plethora of small service applications developed without a unifying vision of how they will make up a coherent whole can easily turn muddy (if not already born muddy). The tagline of Simon Brown’s “Distributed big balls of mud” sums it up: “If you can’t build a monolith, what makes you think microservices are the answer?”.

Someone building a house using this theory might purchase the finest of building materials and fixtures. They might construct and finish each room with the greatest of care. If, however, the bathroom is built opening into the dining room and kitchen, some might question the design. Software, solution, and even enterprise IT architectures exist as systems of systems. The execution of a system’s components is extremely important, but you cannot ignore the context of the larger ecosystem in which those components will exist.

Too much design up front, architects attempting to make decisions below the level of granularity for which they have sufficient information, is obviously wrong. It’s like attempting to drive while blindfolded using only a GPS. By the same token, jumping in the car and driving without any idea of a destination beyond what’s at the end of your hood is unlikely to be successful either. Finding a workable balance between the two seems to be the optimal solution.

[Shanty Town Image by Otsogey via Wikimedia Commons.]

Design by Committee

[Image: Who’s driving?]

Can a team of experienced, empowered developers successfully design the architecture of a product? Sure.

Can a team of experienced, empowered developers successfully design the architecture of a product without a unified vision of how the architecture should be structured to accomplish its purpose? Probably not.

Can a team of experienced, empowered developers successfully design the architecture of a product while staying within the bounds of their role as developers? Absolutely not.

A lot of ink, both physical and virtual, has been spilled arguing over the utility of architecture and architects. A common misunderstanding among the anti-architect faction is that an application is no more than the sum of its parts. This is particularly prevalent with those espousing the view that if the team does the “simplest thing that could possibly work”, the architecture will just emerge. To be honest, an architecture will emerge from this method. Whether that architecture is coherent, performant, scalable, secure, etc. is quite another matter.

In order to have an informed opinion on whether or not a particular role is needed, it is necessary to understand the nature of that role. For example, while the ability to code is a critical qualification for both application and solution architects, it is not the sole qualification. While architects are concerned with the structure of the code, that structure is a means to meet the needs of a customer, not an end in itself.

The purpose of an architect is to understand the needs of the stakeholders and meld his/her knowledge of code, platform (OS, database and web server, etc.), and environment (machines and network) into a cohesive whole that accomplishes the mission. This means the architect’s focus must be on the product as a whole, not just a particular project or release, and also not on just one aspect (code) of the product. The evolution of both code and platform (hardware, supporting software, and network) needs to be managed across the lifecycle of the product in order for it to remain useful.

For smaller products and teams, it may well be possible for the team to undertake the tasks listed above in addition to their duties as developers. As the scale of the product grows, however, dealing with multiple levels of abstraction and achieving timely consensus across multiple feature teams becomes problematic. At this point, coordination and specialization become more important. The architect role then becomes responsible for collaboratively developing and maintaining a unified high-level vision, providing sufficient guidance to keep the product coherent as it evolves while avoiding the trap of Big Design Up Front.

Whether the role is fulfilled by an individual, multiple individuals, or the entire team as a whole is less important than whether the role is handled effectively. Both “Whiteboard Architecture” and the classic “Big Ball of Mud” need to be avoided.

Managing Dependencies

In “Layered Architectures – Sculpting the Big Ball of Mud”, I mentioned in passing the topics of dependency injection and inversion of control. These topics roll up to the larger consideration of managing dependencies, which, as a key architectural concern, deserves further attention here.

Dependencies are, for all practical purposes, unavoidable. This is a good thing, unless you are in favor of having to embed operating system functionality into your code just to be able to write “hello world”. As stated by Microsoft MVP Steve Smith, in “Insidious Dependencies”:

Let’s assume you’re working with a .NET application. Guess what? You’re dependent on the .NET Framework, a particular CLR version, and most likely the Windows platform unless you’ve tested your application on Mono. Are you working with a database? If so, then you likely have a dependency on your database platform, and depending on how flexibly you architect your application, you’ll either be tightly or loosely coupled to your database implementation.

In that same article, Steve identified other common dependencies:

  • The file system (System.IO namespace)
  • Email (System.Net.Mail namespace)
  • Web Service and Service References
  • Dates
  • Configuration

The list above is also far from exhaustive. I’d add authentication, authorization, and caching as additional examples that leap to mind. Steve himself added logging (System.Diagnostics) in another post, “Avoiding Dependencies”. All of these represent basic application services that are useful (if not indispensable). In most cases, it makes little or no sense to try to reproduce the functionality when a ready-made (and tested) alternative is available.

By the same token, dependencies represent a vulnerability. They are things you rely on, and in many cases you have no control over their evolution. For example, there have been changes from one version of the .NET Framework to another in both email and configuration. Providers of third party components can introduce breaking changes or even go out of business, leaving you without support. Since we can’t live without dependencies, yet living with them is problematic, the answer is to manage them.

Managing dependencies can (and should) take place on both the macro and micro level. At the lowest level, Steve Smith recommends two common patterns: the Facade Pattern (where the dependency is wrapped within another class, which serves as the unchanging point of reference to consuming code, allowing the underlying implementation dependency to be changed in one location) and the Strategy Pattern (which combines the Facade Pattern with a common interface, allowing for multiple implementations selected at runtime). The Strategy Pattern would be used to enable dependency injection, which is useful for plug-in functionality scenarios as well as for automated unit testing. There are limits, however, to how far this should be taken. In “Avoid Entrenched Dependencies”, Steve notes:

There are costs associated with abstracting dependencies. In the extreme, every “new” in your code is a dependency on a particular implementation, and thus could be replaced with an interface and injected in. In practice, the main things you want to worry about abstracting are the ones that are most likely to change, and over which you have the least control. Things that need to change in order to test your code count as things that are likely to change, and thus should always be abstracted if you plan on writing good unit tests! Otherwise, don’t go overboard. Wait until your dependence on a particular implementation causes pain before you start extracting interfaces and DI-ing them into all of your classes. All things in moderation.
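With that caveat in mind, here is a minimal C# sketch of the two micro-level patterns described above: a facade wrapping the system clock and a strategy interface for email notification with swappable implementations. The names (SystemClock, INotifier, OrderProcessor, and so on) are my own illustrative assumptions, not taken from Steve’s posts.

```csharp
using System;
using System.Net.Mail;

// Facade: consuming code asks SystemClock for the current time instead of
// calling DateTime.Now directly, so the source of "now" can be redirected
// (for example, frozen to a fixed value in a unit test) in one place.
public static class SystemClock
{
    public static Func<DateTime> Now { get; set; } = () => DateTime.Now;
}

// Strategy: the notification dependency is expressed as an interface...
public interface INotifier
{
    void Notify(string recipient, string message);
}

// ...with one implementation built on System.Net.Mail...
public class SmtpNotifier : INotifier
{
    private readonly SmtpClient _client = new SmtpClient("localhost");

    public void Notify(string recipient, string message)
    {
        _client.Send("noreply@example.com", recipient, "Notification", message);
    }
}

// ...and another that does nothing, suitable for tests or local runs.
public class NullNotifier : INotifier
{
    public void Notify(string recipient, string message) { }
}

// The consumer receives its dependency rather than constructing it,
// so the implementation can be selected at runtime (or by a test).
public class OrderProcessor
{
    private readonly INotifier _notifier;

    public OrderProcessor(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void Process(string customerEmail)
    {
        // ...business logic would go here...
        _notifier.Notify(customerEmail, "Order processed at " + SystemClock.Now());
    }
}
```

In a unit test, SystemClock.Now can be set to return a fixed DateTime and a NullNotifier (or a mock) handed to OrderProcessor, exercising the business logic without touching the clock or the network – the benefit Steve describes, without going overboard on abstraction.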

Managing dependencies at the macro level ties in with layered architectures. Dependencies can be broken down into those that have an affinity to a particular layer and those that are more cross-cutting in usage. Those that can be segregated to a particular layer should be. In “Dependency Management – Anti-Patterns as Art from The Daily WTF”, I highlighted three articles on that site describing self-inflicted dependency nightmares. While comic, they also serve to underscore an important point: some of the dependencies your code relies on are your code as well. A poorly organized system where concerns are intermingled quickly turns into a maintenance nightmare. Partitioning classes by responsibility and enforcing communication between layers in a controlled manner will lead to a more understandable and maintainable system.

Another macro level concern lies in determining what external dependencies to allow. Standard buy versus build considerations will apply here: time to develop, test, and maintain as opposed to license and maintenance fees, stability of the vendor, level of control over the component that’s required, etc. The type of license can also be critically important. Open Source components can be great additions to a system, but make sure you understand your obligations under the license. If you have access to a legal adviser, their input will be valuable.

Eric N. Bush, in the { End Bracket } column of the August 2007 MSDN Magazine, laid out the following guidelines for dependency management:

  • Authority – Who makes what decisions? Does anyone have the final say? What is the escalation process?
  • Roles – Clarify the expectations and determine who is responsible for what.
  • Goals/Value – Define what success looks like. Drive for alignment. Enumerate and explain the risks.
  • Communication – How and what will be communicated? Establish meeting schedules, e-mail protocols, and Web site access.
  • Accountability – How will you track progress? What is the process for fixing mistakes?
  • Engineering System – Create an issue database, source code control, build, setup, drop points, and quality measures.

The level of formality involved will vary depending on the size of your organization, but the principles remain the same. Properly understood and managed dependencies contribute to the success of an application. Left to their own devices, however, your dependencies can manage you.

Layered Architectures – Sculpting the Big Ball of Mud

The notion of SHEARING LAYERS is one of the centerpieces of Brand’s How Buildings Learn [Brand 1994]. Brand, in turn synthesized his ideas from a variety of sources, including British designer Frank Duffy, and ecologist R. V. O’Neill.

Brand quotes Duffy as saying: “Our basic argument is that there isn’t any such thing as a building. A building properly conceived is several layers of longevity of built components”.

Brand distilled Duffy’s proposed layers into these six: Site, Structure, Skin, Services, Space Plan, and Stuff. Site is geographical setting. Structure is the load bearing elements, such as the foundation and skeleton. Skin is the exterior surface, such as siding and windows. Services are the circulatory and nervous systems of a building, such as its heating plant, wiring, and plumbing. The Space Plan includes walls, flooring, and ceilings. Stuff includes lamps, chairs, appliances, bulletin boards, and paintings.

These layers change at different rates. Site, they say, is eternal. Structure may last from 30 to 300 years. Skin lasts for around 20 years, as it responds to the elements, and to the whims of fashion. Services succumb to wear and technical obsolescence more quickly, in 7 to 15 years. Commercial Space Plans may turn over every 3 years. Stuff is, of course, subject to unrelenting flux [Brand 1994].

One of the first treatments of application architecture and design patterns that I ever read was the Foote and Yoder classic “Big Ball of Mud”, from which the quote above comes. It, along with some unsavory characters on the VISBAS-L mailing list, fed my growing interest in the discipline of software architecture. Like many, I at first thought of it as just an anti-pattern. However, as the authors noted:

Some of these patterns might appear at first to be antipatterns [Brown et al. 1998] or straw men, but they are not, at least in the customary sense. Instead, they seek to examine the gap between what we preach and what we practice.

This was an eye-opener. In order to build (and maintain) good systems, one needed to understand how dysfunctional systems evolved. A poorly designed system obviously leads to trouble. Failure to manage change is just as dangerous. Quoting again from “Big Ball of Mud”:

Even systems with well-defined architectures are prone to structural erosion. The relentless onslaught of changing requirements that any successful system attracts can gradually undermine its structure. Systems that were once tidy become overgrown as PIECEMEAL GROWTH gradually allows elements of the system to sprawl in an uncontrolled fashion.

If such sprawl continues unabated, the structure of the system can become so badly compromised that it must be abandoned. As with a decaying neighborhood, a downward spiral ensues. Since the system becomes harder and harder to understand, maintenance becomes more expensive, and more difficult. Good programmers refuse to work there.

This understanding of the danger of chaotic system evolution, coupled with Brand’s notion of Shearing Layers, points the way to avoiding the Big Ball of Mud pattern. Structuring system designs using the principle of separation of concerns into cohesive layers that communicate in a disciplined manner, AKA Layered Architecture, can be used to prevent and/or correct system decay. Ideally, the initial design of a system would incorporate these principles, but it is even more important that they be used to manage change as the system evolves.

There are many variations on the theme, such as Alistair Cockburn’s Hexagonal Architecture, Jeffrey Palermo’s Onion Architecture, and Microsoft’s Layered Application Guidelines, to name just a few. My own style (to be covered in a future post) is similar to the Microsoft model, with some differences. All share the common features of separating presentation, business (process), and data access logic.

Using a layered approach has become increasingly important over the years as the scope of applications has expanded. Applications that were once just a web site fronting a database have expanded to include a variety of additional front ends such as smart clients, web parts, mobile sites, and apps (often for multiple OSs), as well as services for use by third parties. Additionally, many applications not only depend on their own data store, but also integrate with other systems. A layered approach allows for managing this component proliferation while minimizing redundancy.

Some additional advantages to structuring an application in this manner are:

  • Promoting flexibility in deployment: Components grouped in logical layers can be composed in different physical tiers based on the needs of the situation. For example, a web application may combine all layers on one tier (data persistence, of course, still residing on a separate physical tier). A SharePoint web part, smart client, or the mobile apps may be distributed across multiple tiers (business and data access, exposed via a service facade, sharing one physical tier).
  • Enabling dependency injection: Layers provide an excellent point for making use of inversion of control. For example, the business layer could make use of any one of multiple data access layer implementations (based on the underlying database product) in a manner transparent to it, so long as those implementations shared the same interfaces (see the sketch following this list).
  • Enhancing unit testing: The ability to unit test a layer is similarly improved, since mock objects can be injected to replace the dependencies of the layer being tested.
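As a minimal illustration of those last two points, the C# sketch below shows a business-layer class that depends only on a data access interface. The names (ICustomerRepository, CustomerService) are hypothetical, chosen for illustration rather than drawn from any of the guidance cited above.

```csharp
using System.Collections.Generic;
using System.Linq;

// Data access contract: the business layer depends on this interface,
// not on any particular database product or persistence technology.
public interface ICustomerRepository
{
    IEnumerable<string> GetActiveCustomerNames();
}

// One possible implementation; others (SQL Server, Oracle, a service
// facade) could be substituted without changing the business layer.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<string> _names = new List<string> { "Acme", "Globex" };

    public IEnumerable<string> GetActiveCustomerNames()
    {
        return _names;
    }
}

// Business layer: the data access dependency is injected through the
// constructor, so it can be composed differently per deployment and
// replaced with a mock or fake in unit tests.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public int CountActiveCustomers()
    {
        return _repository.GetActiveCustomerNames().Count();
    }
}
```

A presentation-layer component (web page, web part, or mobile app back end) would consume CustomerService in the same way, and a unit test would hand it a fake repository, keeping the test independent of the database.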

Just as you wouldn’t buy a house that needed to be demolished in order to change the furniture, you shouldn’t be stuck with a system that must be virtually re-written in order to make relatively modest changes. Change happens; architecture should facilitate that.