Managing Dependencies

In Layered Architectures – Sculpting the Big Ball of Mud, I mentioned in passing the topics of dependency injection and inversion of control. These topics roll up to the larger consideration of managing dependencies, which, as a key architectural concern, deserves further attention here.

Dependencies are, for all practical purposes, unavoidable. This is a good thing, unless you are in favor of having to embed operating system functionality into your code just to be able to write “hello world”. As stated by Microsoft MVP Steve Smith, in Insidious Dependencies:

Let’s assume you’re working with a .NET application. Guess what? You’re dependent on the .NET Framework, a particular CLR version, and most likely the Windows platform unless you’ve tested your application on Mono. Are you working with a database? If so, then you likely have a dependency on your database platform, and depending on how flexibly you architect your application, you’ll either be tightly or loosely coupled to your database implementation.

In that same article, Steve identified other common dependencies:

  • The file system (System.IO namespace)
  • Email (System.Net.Mail namespace)
  • Web Service and Service References
  • Dates
  • Configuration

The list above is also far from exhaustive. I’d add authentication, authorization, and caching as additional examples that leap to mind. Steve himself added logging (System.Diagnostics) in another post, Avoiding Dependencies. All of these represent basic application services that are useful (if not indispensable). In most cases, it makes little or no sense to try to reproduce the functionality when a ready-made (and tested) alternative is available.
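Dates are a good example of how innocuous these dependencies can look. A direct call to the system clock buries a hidden dependency in any time-sensitive logic; wrapping it behind an abstraction restores control. Here is a minimal sketch (in Python, since the pattern is language-agnostic; the `Clock` and `is_expired` names are invented for illustration):

```python
from datetime import datetime, timedelta

class Clock:
    """Abstraction over the system clock; the one place that touches 'now'."""
    def now(self) -> datetime:
        return datetime.now()

class FixedClock(Clock):
    """Test double that always reports the same instant."""
    def __init__(self, instant: datetime):
        self._instant = instant
    def now(self) -> datetime:
        return self._instant

def is_expired(issued: datetime, ttl: timedelta, clock: Clock) -> bool:
    # Depends on the Clock abstraction, not on datetime.now() directly
    return clock.now() - issued > ttl

# Production code would pass Clock(); a unit test injects FixedClock to pin "now":
clock = FixedClock(datetime(2013, 1, 1, 12, 0, 0))
print(is_expired(datetime(2013, 1, 1, 9, 0, 0), timedelta(hours=2), clock))  # True
```

The same test, run against a direct `datetime.now()` call, would give different answers depending on when it happened to execute.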

By the same token, dependencies represent a vulnerability. They are things you rely on, and in many cases you have no control over their evolution. For example, there have been changes from one version of the .NET Framework to another in both email and configuration. Providers of third-party components can introduce breaking changes or even go out of business, leaving you without support. Since we can’t live without dependencies, but living with them is problematic, the answer is to manage them.

Managing dependencies can (and should) take place at both the macro and micro levels. At the micro level, Steve Smith recommends two common patterns: the Facade Pattern (the dependency is wrapped within another class, which serves as an unchanging point of reference for consuming code, allowing the underlying implementation to be changed in one location) and the Strategy Pattern (which combines the Facade Pattern with a common interface, allowing for multiple implementations selected at runtime). The Strategy Pattern enables dependency injection, which is useful for plug-in scenarios as well as for automated unit testing. There are limits, however, to how far this should be taken. In Avoid Entrenched Dependencies, Steve notes:

There are costs associated with abstracting dependencies. In the extreme, every “new” in your code is a dependency on a particular implementation, and thus could be replaced with an interface and injected in. In practice, the main things you want to worry about abstracting are the ones that are most likely to change, and over which you have the least control. Things that need to change in order to test your code count as things that are likely to change, and thus should always be abstracted if you plan on writing good unit tests! Otherwise, don’t go overboard. Wait until your dependence on a particular implementation causes pain before you start extracting interfaces and DI-ing them into all of your classes. All things in moderation.
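To make the Facade and Strategy patterns described above concrete, here is a minimal sketch (in Python for brevity; the notifier classes and OrderService are hypothetical, and the real mail/SMS calls are elided):

```python
from abc import ABC, abstractmethod

class NotificationStrategy(ABC):
    """Strategy: a common interface allowing multiple implementations
    to be selected at runtime (or injected for testing)."""
    @abstractmethod
    def send(self, recipient: str, message: str) -> str: ...

class EmailNotifier(NotificationStrategy):
    """Also a facade: the wrapped mail dependency (e.g. System.Net.Mail
    or smtplib) would live behind this one seam."""
    def send(self, recipient, message):
        return f"emailed {recipient}: {message}"  # real SMTP call elided

class SmsNotifier(NotificationStrategy):
    def send(self, recipient, message):
        return f"texted {recipient}: {message}"  # real gateway call elided

class OrderService:
    """Consumer depends only on the interface; the concrete strategy is
    injected, enabling plug-in behavior and unit testing with fakes."""
    def __init__(self, notifier: NotificationStrategy):
        self._notifier = notifier

    def confirm(self, customer: str) -> str:
        return self._notifier.send(customer, "order confirmed")

print(OrderService(EmailNotifier()).confirm("pat"))  # emailed pat: order confirmed
print(OrderService(SmsNotifier()).confirm("pat"))    # texted pat: order confirmed
```

Note that `OrderService` never names a concrete notifier; swapping the implementation is a change at the composition point, not in the consuming code.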

Managing dependencies at the macro level ties in with layered architectures. Dependencies can be broken down into those that have an affinity to a particular layer and those that are more cross-cutting in usage. Those that can be segregated to a particular layer should be. In Dependency Management – Anti-Patterns as Art from The Daily WTF, I highlighted three articles on that site describing self-inflicted dependency nightmares. While comic, they also serve to underscore an important point: some of the dependencies your code relies on are your code as well. A poorly organized system where concerns are intermingled quickly turns into a maintenance nightmare. Partitioning classes by responsibility and enforcing communication between layers in a controlled manner will lead to a more understandable and maintainable system.

Another macro level concern lies in determining what external dependencies to allow. Standard buy-versus-build considerations will apply here: time to develop, test, and maintain as opposed to license and maintenance fees, stability of the vendor, level of control over the component that’s required, etc. The type of license can also be critically important. Open Source components can be great additions to a system, but make sure you understand your obligations under the license. If you have access to a legal adviser, their input will be valuable.

Eric N. Bush, in the { End Bracket } column of the August 2007 MSDN Magazine laid out the following guidelines for dependency management:

  • Authority: Who makes what decisions? Does anyone have the final say? What is the escalation process?
  • Roles: Clarify the expectations and determine who is responsible for what.
  • Goals/Value: Define what success looks like. Drive for alignment. Enumerate and explain the risks.
  • Communication: How and what will be communicated? Establish meeting schedules, e-mail protocols, and Web site access.
  • Accountability: How will you track progress? What is the process for fixing mistakes?
  • Engineering System: Create an issue database, source code control, build, setup, drop points, and quality measures.

The level of formality involved will vary depending on the size of your organization, but the principles remain the same. Properly understood and managed dependencies contribute to the success of an application. Left to their own devices, however, your dependencies can manage you.


It’s a system, not a family member

A funny thing happens to some people when they become involved with software: the system becomes something more than just work, and an emotional bond forms. Like many loved ones, it generates pride at times and makes you cringe at others. Some relationships are nurturing, some dysfunctional, and others abusive. Regardless, it is your system.

There is a definite upside to this phenomenon. Those responsible for the care and feeding benefit by having a committed user base. Likewise, users benefit when those maintaining the system are invested. After all, who wants their livelihood in the hands of someone who is indifferent to the outcome?

The downside, however, is that people forget that lifecycles have an end. Technology changes. Business processes change. Better things come along. Unfortunately, too many companies are overrun with vampire systems that refuse to die, sucking up resources that could be put to better use.

Capgemini’s 2011 Edition of their Application Landscape Report contained some very bad news: 85% of the IT executives surveyed reported that they had redundant applications and 17% felt that ten percent or less of their applications were mission critical. Maintenance of obsolete and/or redundant systems represents time and money lost to those systems that do provide business value.

The Application Landscape Report identified four culprits:

  1. Mergers and acquisitions result in many redundant systems with duplicate functionality.
  2. Custom legacy applications are becoming obsolete and are difficult to maintain, support, and integrate into the new, modern IT infrastructure.
  3. Companies continue to support applications that no longer deliver full business value and do not support current business processes.
  4. Most organizations have a data-retention and archiving policy, but in reality the majority of companies are not willing to archive application data for fear of violating industry and government retention requirements.

Mergers and acquisitions will, of course, lead to duplicate systems (if only in common back office functions like accounting). How long that situation is allowed to last is key. Identifying redundancy and planning for its remediation should be accomplished as soon as possible (ideally as part of the acquisition planning). It should be noted that the acquired system need not always be the victim. A post from November advocated periodic re-evaluation of processes. This is an example of when that should take place, with an eye toward keeping the system that provides the best value and best matches the current business process.

The Application Landscape Report also identified reasons why nothing is done about applications accumulating:

  • Cost of retirement projects. IT budgets are typically allocated based on the costs of maintaining existing applications or continuity costs and new projects. Finding additional funding for application retirement can be challenging – especially in difficult economic times.
  • Lack of immediate ROI. It is often difficult to get buy-in for application initiatives because time horizons for investment decisions tend to be short term and rarely over one year. Many businesses expect to see ROI on projects in six months or less.  With rationalization initiatives, it is not easy to show quick ROI – especially if rationalization projects involve different types of applications. “In the last two years, we have reduced the number of data centers from 20 to five,” says an IT executive from a global French telecom company. “Data center rationalization takes much longer than changing small applications.”
  • Company culture and behavior. Employees are often resistant to change. People become comfortable using certain technology and processes, and as a result, become reluctant to any changes to the familiar and consistent environment.
  • Differences between regions. Different regions, subsidiaries and even groups within an organization may have different opinions regarding application retirement. Without their buy-in, any retirement initiative is likely to fall short, as IT would still have to support redundant and de-centralized applications.
  • Lack of qualified developers to migrate retired application data and functions. This is especially true for custom-designed systems. The people who were involved in building them are no longer with the company, and nobody fully understands the underlying processes and complex relationships between application components to successfully migrate the data and safely decommission an application.
  • Some companies do not consider retirement a priority. As we mentioned earlier, some IT leaders simply do not see the value in application retirement and therefore choose to focus efforts on other areas.

The bullet points on cost, ROI, and priority are inter-related. In order to get a true picture of the costs of maintaining versus retiring, you need to go beyond the obvious costs of hardware and application maintenance hours. Other costs associated with redundant and/or obsolete systems include:

  • Platform maintenance:  Hours spent maintaining the OS, database and other plumbing for the system must be added to the cost of the hardware.
  • Backups:  Time and storage costs add up as well.
  • Reporting:  Whether you have scheduled ETL  jobs feeding a data warehouse or run ad-hoc data query scripts when questions arise, those costs need to be assessed.
  • Licensing:  Not just for the system itself, but also for the OS, database, and other components and utilities needed to keep it running.
  • Training and Experience:  Each system supported represents a need for experienced personnel (with appropriate backstop for vacation, illness, etc).  Developers are the obvious source of cost here.  Additionally, help desk staff and infrastructure support staff have to be accounted for.
  • Productivity:  How many inefficient processes exist to work around the limitations of the current application landscape?

In addition to cost, risk needs to be taken into account. Parts for legacy hardware can be difficult to find, regardless of what you’re willing to pay. If a company is resorting to acquiring boxes via auction sites in order to cannibalize them, there is a significant risk to their ability to maintain operations. Likewise, as technologies age, those with experience become harder to find and more expensive to engage. Just as parts shortages pose a risk, so do shortages of experience.

Differences between regions and other groups within an enterprise can lead to redundancy. Careful analysis must take place to determine how significant those differences are and whether consolidation can make sense. Function, cost, and risk all must be taken into account.

Resistance to change is the hardest factor to deal with in that quantifiable factors like cost and risk aren’t the issue. Emotional, cultural, and political factors are more likely the problem. Additionally, you need to be aware of your own investment in the systems under examination. A healthy dose of soft skills, as much engagement as education, will be needed to overcome others’ attachment to the old system. Self-awareness and objectivity are needed on your part.

After all, it’s a system, not a family member.

Layered Architectures – Sculpting the Big Ball of Mud

The notion of SHEARING LAYERS is one of the centerpieces of Brand’s How Buildings Learn [Brand 1994]. Brand, in turn, synthesized his ideas from a variety of sources, including British designer Frank Duffy and ecologist R. V. O’Neill.

Brand quotes Duffy as saying: “Our basic argument is that there isn’t any such thing as a building. A building properly conceived is several layers of longevity of built components”.

Brand distilled Duffy’s proposed layers into these six: Site, Structure, Skin, Services, Space Plan, and Stuff. Site is geographical setting. Structure is the load bearing elements, such as the foundation and skeleton. Skin is the exterior surface, such as siding and windows. Services are the circulatory and nervous systems of a building, such as its heating plant, wiring, and plumbing. The Space Plan includes walls, flooring, and ceilings. Stuff includes lamps, chairs, appliances, bulletin boards, and paintings.

These layers change at different rates. Site, they say, is eternal. Structure may last from 30 to 300 years. Skin lasts for around 20 years, as it responds to the elements, and to the whims of fashion. Services succumb to wear and technical obsolescence more quickly, in 7 to 15 years. Commercial Space Plans may turn over every 3 years. Stuff, is, of course, subject to unrelenting flux [Brand 1994].

One of the first treatments of application architecture and design patterns that I ever read was the Foote and Yoder classic “Big Ball of Mud”, from which the quote above comes. It, along with some unsavory characters on the VISBAS-L mailing list, fed my growing interest in the discipline of software architecture. Like many, I at first thought of it as just an anti-pattern. However, as the authors noted:

Some of these patterns might appear at first to be antipatterns [Brown et al. 1998] or straw men, but they are not, at least in the customary sense. Instead, they seek to examine the gap between what we preach and what we practice.

This was an eye-opener. In order to build (and maintain) good systems, one needed to understand how dysfunctional systems evolved. A poorly designed system obviously leads to trouble. Failure to manage change is just as dangerous. Quoting again from “Big Ball of Mud”:

Even systems with well-defined architectures are prone to structural erosion. The relentless onslaught of changing requirements that any successful system attracts can gradually undermine its structure. Systems that were once tidy become overgrown as PIECEMEAL GROWTH gradually allows elements of the system to sprawl in an uncontrolled fashion.

If such sprawl continues unabated, the structure of the system can become so badly compromised that it must be abandoned. As with a decaying neighborhood, a downward spiral ensues. Since the system becomes harder and harder to understand, maintenance becomes more expensive, and more difficult. Good programmers refuse to work there.

This understanding of the danger of chaotic system evolution, coupled with Brand’s notion of Shearing Layers, points the way to avoiding the Big Ball of Mud pattern. Structuring system designs using the principle of separation of concerns into cohesive layers that communicate in a disciplined manner, AKA Layered Architecture, can be used to prevent and/or correct system decay. Ideally, the initial design of a system would incorporate these principles, but it is even more important that they be used to manage change as the system evolves.

There are many variations on the theme, such as Alistair Cockburn’s Hexagonal Architecture, Jeffrey Palermo’s Onion Architecture, and Microsoft’s Layered Application Guidelines, to name just a few. My own style (to be covered in a future post) is similar to the Microsoft model, with some differences. All share the common features of separating presentation, business (process), and data access logic.

Using a layered approach has become increasingly important over the years as the scope of applications has expanded. Applications that were once just a web site fronting a database have expanded to include a variety of additional front ends such as smart clients, web parts, mobile sites, and apps (often for multiple OSs), as well as services for use by third parties. Additionally, many applications not only depend on their own data store, but integrate with other systems as well. A layered approach allows for managing this component proliferation while minimizing redundancy.

Some additional advantages to structuring an application in this manner are:

  • Promoting flexibility in deployment: Components grouped in logical layers can be composed in different physical tiers based on the needs of the situation. For example, a web application may combine all layers on one tier (data persistence, of course, still residing on a separate physical tier). A SharePoint web part, smart client, or the mobile apps may be distributed across multiple tiers (business and data access, exposed via a service facade, sharing one physical tier).
  • Enabling dependency injection: Layers provide an excellent point for making use of inversion of control. For example, the business layer could make use of any one of multiple data access layer implementations (based on the underlying database product) in a manner transparent to it, so long as those implementations shared the same interfaces.
  • Enhancing unit testing: The ability to unit test layers is similarly enhanced in that mock objects can be injected to replace dependencies of the layer being tested.
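The last two points can be sketched together (Python used for brevity; the repository and service names are invented for this example): the business layer depends on a data access interface, with the production implementation injected at runtime and an in-memory fake injected under test.

```python
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Contract exposed by the data access layer."""
    @abstractmethod
    def find_name(self, customer_id: int) -> str: ...

class SqlCustomerRepository(CustomerRepository):
    """Production implementation; actual database access elided."""
    def find_name(self, customer_id):
        raise NotImplementedError("would query the database here")

class InMemoryCustomerRepository(CustomerRepository):
    """Fake injected when unit testing the business layer."""
    def __init__(self, data):
        self._data = data
    def find_name(self, customer_id):
        return self._data[customer_id]

class GreetingService:
    """Business layer: depends on the repository interface, not on any
    particular database product."""
    def __init__(self, repository: CustomerRepository):
        self._repository = repository
    def greet(self, customer_id: int) -> str:
        return f"Hello, {self._repository.find_name(customer_id)}!"

# Under test, no database is needed:
service = GreetingService(InMemoryCustomerRepository({42: "Ada"}))
print(service.greet(42))  # Hello, Ada!
```

Switching database products, in this scheme, means supplying a different `CustomerRepository` implementation; `GreetingService` is untouched.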

Just as you wouldn’t buy a house that needed to be demolished in order to change the furniture, you shouldn’t be stuck with a system that must be virtually re-written in order to make relatively modest changes. Change happens; architecture should facilitate that.