Application Lifecycle Management Revisited

Although I just touched on the subject of ALM two weeks ago, an article published last week in Dr. Dobb’s Journal highlighted an important aspect that I had not covered. “Pace-Layered Application Strategies, Fact or Fiction?” discusses Gartner’s Pace-Layered Application Strategy, which is described as a “methodology for categorizing applications and developing a differentiated management and governance process that reflects how they are used and their rate of change.”

Gartner identifies three classes of application in order to align governance with business priorities:

Systems of Record — Established packaged applications or legacy home-grown systems that support core transaction processing and manage the organization’s critical master data. The rate of change is low, because the processes are well-established, common to most organizations, and often are subject to regulatory requirements.
Systems of Differentiation — Applications that enable unique company processes or industry-specific capabilities. They have a medium lifecycle (one to three years), but need to be reconfigured frequently to accommodate changing business practices or customer requirements.
Systems of Innovation — New applications that are built on an ad hoc basis to address new business requirements or opportunities. These are typically short lifecycle projects (zero to 12 months) using departmental or outside resources and consumer-grade technologies.

I could quibble with the name System of Record, as that term already has a well-established meaning unrelated to Gartner’s usage. For example, a System of Differentiation can be a system of record in the commonly accepted sense. Likewise, I believe the range given for the medium lifecycle is too long; in my experience, Systems of Differentiation can have release cycles as short as three months. Aside from those minor issues, the underlying message is solid.

The nature of these different types of applications clearly precludes a “one size fits all” approach. Attempting to govern all systems with too strict a process will stifle innovation. By the same token, if a publicly traded corporation’s financial systems are managed in an ad hoc manner (mismanaged, really), the fallout will be significant. As with any process, selecting the right tool for the job at hand is key.


Search Engine Serendipity

One of the nice features of WordPress is the “Site Stats” page. In addition to presenting information about hit counts, it also shows search word combinations used to find pages on your site. The combination that was displayed the other day piqued my interest: “form follows function and structure follows strategy”. If you’ve read the Why “Form Follows Function” page, you know the provenance of the phrase “form follows function” and why I felt it made a fitting title for the blog. The second phrase, “structure follows strategy”, was unfamiliar, but apropos. A quick session with Google provided the background.

It turns out that the phrase was a quote from Alfred D. Chandler’s Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Wikipedia led to the site ProvenModels, which summarized Chandler’s thesis as follows:

He described strategy as the determination of long-term goals and objectives, the adoption of courses of action and associated allocation of resources required to achieve goals; he defined structure as the design of the organisation through which strategy is administered. Changes in an organisation’s strategy led to new administrative problems which, in turn, required a new or refashioned structure for the successful implementation of the new strategy.

The same search yielded dissenting views as well. One counterpoint was titled “Strategy Follows Structure”. The thrust of this paper was that the existing structure of an enterprise would restrict its strategic options.

In my opinion, both viewpoints are correct. Both creation and major change imply extensive structural work. Once established, architecture will then constrain future changes. Strategic shifts will require considerable justification for the effort and cost involved.

Although the context of both Chandler’s work and the opposing view is in the realm of enterprise architecture, these same principles apply to solution and application architecture as well. Investment yields inertia. The takeaway from this is that flexibility is critical. The more agility that an architecture can provide without extensive, expensive, and disruptive re-work, the better that architecture serves the needs of its users.

Fluent Interfaces

A few weeks back, a friend asked my opinion of fluent interfaces. My impression, based on what I’d read here and there, was not favorable. Expending that kind of design effort to avoid a few keystrokes while coding and to make code read more like natural language never struck me as a worthwhile trade in most cases. I should note here that this is in the context of .Net development; other languages have characteristics that make this style less of an additional burden.

However, in spite of that negative initial impression, I decided it was time to explore the subject in a more disciplined manner in order to evaluate the technique fairly. In my research, it was interesting that I found far more about how to implement fluent interfaces than about why to use them in my designs. Step one was to go to the source for a good definition. Martin Fowler, one of the originators of the term, provided an example that illustrates it:

I’ll continue with the common example of making out an order for a customer. The order has line-items, with quantities and products. A line item can be skippable, meaning I’d prefer to deliver without this line item rather than delay the whole order. I can give the whole order a rush status.

The most common way I see this kind of thing built up is like this:

private void makeNormal(Customer customer) {
    Order o1 = new Order();
    customer.addOrder(o1);
    OrderLine line1 = new OrderLine(6, Product.find("TAL"));
    o1.addLine(line1);
    OrderLine line2 = new OrderLine(5, Product.find("HPK"));
    line2.setSkippable(true);
    o1.addLine(line2);
    OrderLine line3 = new OrderLine(3, Product.find("LGV"));
    o1.addLine(line3);
    o1.setRush(true);
}

In essence we create the various objects and wire them up together. If we can’t set up everything in the constructor, then we need to make temporary variables to help us complete the wiring – this is particularly the case where you’re adding items into collections.

Here’s the same assembly done in a fluent style:

private void makeFluent(Customer customer) {
    customer.newOrder()
        .with(6, "TAL")
        .with(5, "HPK").skippable()
        .with(3, "LGV")
        .priorityRush();
}

Probably the most important thing to notice about this style is that the intent is to do something along the lines of an internal DomainSpecificLanguage. Indeed this is why we chose the term ‘fluent’ to describe it, in many ways the two terms are synonyms.

Fowler himself admits that there is a cost involved:

The price of this fluency is more effort, both in thinking and in the API construction itself. The simple API of constructor, setter, and addition methods is much easier to write. Coming up with a nice fluent API requires a good bit of thought.

In addition to confirming that the technique injected extra work into the development, Fowler also points out another issue: “One of the problems of methods in a fluent interface is that they don’t make much sense on their own”. He admits that, in isolation, With() is a “badly named method that doesn’t communicate its intent at all well”. So far, not so good.
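To make that cost a bit more concrete, here is a minimal sketch of what the machinery behind such a fluent order API might look like. This is my own illustration, not Fowler’s implementation; the class and method names are assumptions, and it simplifies his example by building the Order directly rather than through a Customer.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical fluent order builder, for illustration only.
class OrderLine {
    final int quantity;
    final String productCode;
    boolean skippable = false;

    OrderLine(int quantity, String productCode) {
        this.quantity = quantity;
        this.productCode = productCode;
    }
}

class Order {
    private final List<OrderLine> lines = new ArrayList<>();
    private boolean rush = false;

    // Each chained method returns the Order so the next call can follow directly.
    Order with(int quantity, String productCode) {
        lines.add(new OrderLine(quantity, productCode));
        return this;
    }

    // Marks the most recently added line as skippable. On its own this method
    // communicates little, which is exactly Fowler's criticism of such names.
    Order skippable() {
        lines.get(lines.size() - 1).skippable = true;
        return this;
    }

    Order priorityRush() {
        rush = true;
        return this;
    }

    int lineCount() { return lines.size(); }
    boolean isRush() { return rush; }
}
```

Note how skippable() silently applies to the most recently added line, state that the builder must track internally. That hidden bookkeeping is part of the extra design effort Fowler describes.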

Martin Fowler’s article linked to another by Piers Cawley, which defined fluent interfaces as “essentially interfaces that do a good job of removing hoopage (James Duncan’s handy term for all the jumping through hoops you have to do in order to achieve something that ought to be a lot simpler)”. I would question whether the technique has simplified anything, however. I’ve seen no data showing that the effort expended in designing and coding the fluent interface is recaptured in the code that uses it. Additionally, we have Fowler’s admission that the methods aren’t always intuitive.

Further research continued to confirm my impressions. Paul Jones and Scott Hanselman both came to the conclusion that the technique works best in very specialized cases. Some of those cases seen in my research were test frameworks, mocking frameworks and entity factories. The fact that those first two cases involve public APIs is instructive. The investment for an internal API will likely not be worth it.

In most cases, the effort spent on designing and coding a fluent interface for internal use could be better directed to providing business value. While I am a proponent of investing in maintainability, the information above leads me to believe that the “bang for the buck” is lacking.

Application Lifecycle Management and Architecture

While discussing system retirement in the post It’s a system, not a family member, I touched on the subject of Application Lifecycle Management (ALM) from the end of life point of view. It’s an extremely important topic that deserves a fuller treatment, both in terms of its definition and the role of the architect in the process.

David Chappell, in his paper “What is Application Lifecycle Management?”, provides an excellent definition of ALM. He points out that it goes beyond the development process, to include both governance and operations. Chappell defines and scopes these components as follows:

Governance, which encompasses all of the decision making and project management for this application, extends over this entire time. Development, the process of actually creating the application, happens first between idea and deployment. For most applications, the development process reappears again several more times in the application’s lifetime, both for upgrades and for wholly new versions. Operations, the work required to run and manage the application, typically begins shortly before deployment, then runs continuously until the application is removed from service.

According to Chappell, the purpose of governance is to ensure that the application provides business value, and application portfolio management (APM) is a key tool for supplying the metrics to validate that value. Regular evaluation is likewise critical to ensure that the costs attendant to maintaining an application do not exceed the value provided. This is where the architect can influence the process.

One of the factors leading to IT landscape complexity in Capgemini’s Application Landscape Report 2011 Edition was “Custom legacy applications are becoming obsolete and are difficult to maintain, support, and integrate into the new, modern IT infrastructure”. This is particularly troubling when you consider that, per those surveyed, approximately half of their application portfolios consisted of custom applications. It does, however, identify some areas where architectural practices can help.

Many tend to think “mainframe” when they see or hear the word “legacy”. It’s naive to think that applications developed relatively recently can’t turn into legacy applications as well. Operating systems, database software, and operating environments (.Net Framework/JVM, web servers, middleware, etc.) are all constantly evolving. While it’s not a crime if your application isn’t on the bleeding edge (it may even be a virtue that it’s not), it is vital that the architect is aware of where those components of the solution stand and has a roadmap for moving the application forward in order to avoid obsolescence. The roadmap should include the costs, benefits, and risks of upgrading the various components, along with the risks of deferring the upgrade. This information allows stakeholders to make informed decisions about maintaining the application. Keeping the application technologically current allows an organization to avoid one of the key barriers to application retirement identified in the report:

Lack of qualified developers to migrate retired application data and functions. This is especially true for custom-designed systems. The people who were involved in building them are no longer with the company, and nobody fully understands the underlying processes and complex relationships between application components to successfully migrate the data and safely decommission an application.

When applications are allowed to deteriorate in this manner they are not only difficult to retire, but also nearly impossible to support or enhance. Essentially, they’re dead apps running.

Maintainability and supportability of an application are also areas where the architecture can contribute to or detract from the value provided. Stakeholders should be made aware that these aspects are not just for the convenience of the development and support teams, but are factors which affect the costs of these activities. Time (i.e. money) invested in maintainability should result in shorter development times for both enhancements and fixes. Likewise, investments in supportability should yield fewer and shorter outages as administrators are given the tools to predict and avoid issues and support staff is given more information to diagnose those that do occur.

The architect is also uniquely positioned to influence the interoperability of an application. Use of techniques such as layering and message orientation as well as tools such as messaging middleware can enhance the ability of an application to integrate with other applications while continuing to evolve internally in a controlled manner. Obviously this is most cost effective when built in from the ground up, but even refactoring an application to promote interoperability may be feasible for high value systems.
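As an illustration of the message-orientation point, here is a hedged sketch (all names invented for this example) of how an application might publish events through a neutral channel abstraction rather than calling other systems directly, so its internals can evolve without breaking integrations:

```java
import java.util.ArrayList;
import java.util.List;

// A neutral message contract: consumers depend on the topic and payload,
// not on this application's internal types or call signatures.
interface MessageChannel {
    void publish(String topic, String payload);
}

// Domain code depends only on the channel abstraction...
class OrderService {
    private final MessageChannel channel;

    OrderService(MessageChannel channel) {
        this.channel = channel;
    }

    void placeOrder(String orderId) {
        // ...internal processing can change freely without affecting consumers...
        channel.publish("orders.placed", "{\"orderId\":\"" + orderId + "\"}");
    }
}

// ...while the wiring to actual middleware (a queue, a broker) lives in one
// adapter. Here, a simple in-memory stand-in records what was published.
class InMemoryChannel implements MessageChannel {
    final List<String> sent = new ArrayList<>();

    public void publish(String topic, String payload) {
        sent.add(topic + ": " + payload);
    }
}
```

Swapping the in-memory adapter for real messaging middleware would not touch OrderService at all, which is precisely the decoupling that eases both integration and controlled internal evolution.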

The report noted that “Companies continue to support applications that no longer deliver full business value and do not support current business processes”. The goal of ALM is to prevent that from happening. One way to avoid supporting applications that no longer provide value is to ensure that all your applications continue evolving, avoiding both technical and functional obsolescence. Obviously this is a team effort, but maintaining architectural oversight throughout the application’s lifecycle ensures that the decision makers have the technical information needed to keep the system relevant and valuable for as long as possible.

Crossing the line

I ran across an interesting pair of magazine opinion pieces last week. “My websites will only support the latest browser versions”, by Aral Balkan, declares war on support for older browsers. His position:

Given that web technologies evolve constantly and relentlessly, it is of utmost importance that the gateways to the web – the applications that we use to access the web – be equally versionless and evolve in a consistent manner. Any browser that cannot (or will not) keep pace, becomes not a gateway but a *gatekeeper*, artificially limiting access to the latest and greatest (and only) web. Such browsers are Harmful™.

The last thing we, as developers, should do is reward such harmful behaviour on behalf of browser manufacturers. If we support such behaviour with our time, effort, and budgets, we are sending a dangerous message that the sort of business-as-usual attitude that resulted in the IE 6 fiasco is acceptable. We should instead be telling browser manufacturers, loudly and clearly, that such harmful behaviour is not acceptable and will not be tolerated.

So far, so good. Compliance with standards, quickly patching problems, and rolling out new features are all positive things for those who develop and those who use applications. Then Aral crosses the line:

Authoring web applications that are heavily behaviour-centric is a whole different ballgame and involve orders of magnitude more complexity. Instead of mostly non-interactive documents, we are talking about massively non-linear, heavily-interactive applications. Applications that must go beyond maintaining presentational consistency across browsers and platforms to guaranteeing behavioural consistency also. If you think that implementing responsive design – which currently focuses mainly on presentational issues – is hard, try maintaining behavioural consistency across platforms and devices for a heavily behaviour-centric web app.

Now multiply the complexity involved in that task with having to support quirks in previous versions of multiple browsers and it suddenly becomes clear that the web, with the combined complexities of multiple platforms and multiple runtime (browser) versions, puts developers at a disadvantage when compared to certain native platforms. If we can remove some of that complexity, we can give the web a competitive leg up. Making sure that browser manufacturers make updates seamless in an effort to eliminate version fragmentation is a big part of helping make this happen. And this is why browser manufacturers who – either through incompetence or malice – do not implement seamless upgrades in their browsers are harming the web.

This is also why my websites will support only the latest versions of the major web browsers and why yours should do so too.

Yes, it’s hard to develop complex applications. Yes, differences between platforms complicate an already complex task. Dropping support for a client platform, however, is as drastic a response to that complexity as refusing to implement functionality altogether. It’s laughable to think that this would provide “a competitive leg up”. Punishing customers is far from a viable business strategy.

If Aral is speaking solely about sites of which he is the business owner, then so be it; his choice. Otherwise, he has just made the decision to cast adrift users/customers under the pretext of making a technical decision. I don’t believe it’s possible to overstate how wrong that type of attitude can be.

John Allsopp provided the counterpoint article, “Develop for as many users as possible”. John notes the many classes of users that may not be able to run the latest versions, from those in the developing world to those with visual impairments. In aiming at browser manufacturers, Aral risks hitting those users instead. One paragraph from John sums it up perfectly:

At the heart of my concern about Aral’s manifesto is I don’t think we should think of our websites and applications as supporting browsers. We should think of them as supporting users.