Architecting the Process

Jimmy Bogard has moved on to Lean from Scrum. InfoQ asks “Is Kanban the new Scrum?”, while Jim Bird asks “What can you get out of Kanban?”. Dan North touched a raw nerve by suggesting that techniques like TDD should be used only when they add value rather than universally. A post titled “Seven Things I Hate About Agile” is answered by another named “7 things I hate about people that don’t know Agile”.

bettyfjord, commenting on Jimmy Bogard’s “Why I’m done with Scrum”, probably spoke for many with the observation:

As a dev with many years’ experience delivering and responding in very high-pressure environments, I still cannot believe how seriously this methodology stuff gets debated. Y’all clearly don’t have enough work to do.

Certainly the process wars have consumed countless hours and electrons that could otherwise have been spent producing software. That being said, if we don’t make time to talk about how we work, when does it get evaluated? If our processes aren’t evaluated, how do we know they’re optimal, or at least moving in that direction? If we’re taking it on faith that our process is the one true way to develop, how agile is that?

At first glance, certainty sounds like a good thing. By definition, to be certain is to be “free from doubt”. Doubt, however, can be a powerful driver of innovation. Over the centuries, people have from time to time held the opinion that progress has reached its end point. Their certainty has, without exception, been wrong and the doubters right.

Even when we find that our processes serve our needs, the evaluation still provides value in the form of validation. Unexamined processes risk becoming psychological experiments proving that groups will adhere to practices long after any need exists, just because “that’s the way we do it”.

Rather than expecting one methodology to suffice for all, it seems much more reasonable to accept that organizations, their systems and the focus of their development teams will differ. An independent software vendor (ISV) faces different circumstances and priorities than an in-house corporate development team, which in turn operates under different constraints than contractors. Within these categories, goals and priorities can differ greatly. A startup with the next great game will almost certainly operate differently from a company selling financial software. For either to use the other’s process would be a disaster.

The working environment can likewise vary within an organization. Teams working on innovative web initiatives for an enterprise would most likely be severely hampered by the restrictions the financial system teams work under. Likewise, allowing the financial systems teams the same latitude given the R&D groups could carry both professional and legal consequences. Even teams working on different aspects of the same product could benefit from different processes. Time-boxed methods may work better for those doing new development, particularly on products that can’t be continuously deployed. At the same time, those providing support for the same product may benefit from processes that maximize delivery.

Bogard’s situation illustrates another reason to favor a pragmatic approach. Technologies, people and organizations all change. Just because a particular process fits today’s situation does not mean that it will continue to work in the future. Constant evaluation and refinement is key to remaining relevant. As Jim Highsmith has noted:

There is no Agility for Dummies. Agility isn’t a silver bullet. You don’t achieve it in five easy steps. So what is it? For myself, I’ve characterized agility in two statements:

Agility is the ability to both create and respond to change in order to profit in a turbulent business environment.

Agility is the ability to balance flexibility and stability.

Part of responding to change and remaining flexible is avoiding letting processes calcify. From another Highsmith post:

The Agile Manifesto was carefully worded to promote adaptability—to steer people away from rigid interpretations of the principles. Take the delivery principle for example, “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.” It would have been much easier to write this principle as “Deliver working software every two weeks,” but that wording would lead people to rigidity (they get there on their own anyway).

Furthermore, what happens in too many organizations is that practices become static and then quietly elevated to the level of principle—something that can’t be violated. A good practice, “daily standups,” becomes bureaucratic when it is interpreted as “You must do daily standups, or else!” When questioned, the proponents of this newly crowned principle usually respond with “well, Joe Jones who wrote the book on Agile says so on page 48.” They lose track of why they are using the practice and it becomes a de facto standard rather than a guideline.

In the past, I’ve noted that “why” is the most important question architects should ask themselves when designing. Likewise, when designing a process, we should ask what we’re trying to accomplish and whether the processes we’re adopting will further those goals.


Reduce, Reuse, Recycle


Reuse is one of those concepts that periodically rears up to sing its seductive siren song. Like the song of legend, it is exceedingly attractive, whether in the form of object-orientation, design patterns, or services. Unfortunately, it also shares the quality of tempting the unwary onto the rocks to have their hopes (if not their ships) dashed.

The idea of lowering costs by writing something once, the “right way”, then reusing it everywhere, is a powerful one. The simplicity inherent in it is breathtaking. We even have a saying that illustrates the wisdom of reuse – “don’t reinvent the wheel”. And yet, as James Muren pointed out in a discussion on LinkedIn, we do just that every day. The wheels on rollerblades differ greatly from those on a bus. Each reuses the concept, yet it would be ludicrous to suggest that either could make do with the other’s implementation of that concept. This is not to say that reusable implementations (i.e. code reuse) are impossible, only that they are more tightly constrained than we might imagine at first thought.

Within a given system, code reuse is merely the Don’t Repeat Yourself (DRY) principle in action. The use cases for the shared code are known. Breaking changes can be made with relatively limited consequences, given that the clients are under the control of the same team as the shared component(s). Once components move outside of the team, much more in the way of planning and control is necessary, and agility becomes more and more constrained.
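A minimal sketch of DRY within a single codebase (all names here are invented for illustration, not drawn from the post): two totals share one implementation of a tax rule instead of duplicating it, so a change to that rule touches only clients the same team controls.

```python
# Hypothetical example of DRY within one system: the tax rule
# lives in exactly one place, and both known clients call it.

def taxed_total(amount: float, tax_rate: float = 0.07) -> float:
    """Single shared implementation of the (invented) tax rule."""
    return round(amount * (1 + tax_rate), 2)

def invoice_total(line_items: list[float]) -> float:
    return taxed_total(sum(line_items))

def quote_total(line_items: list[float]) -> float:
    # Same rule, reused rather than re-implemented; a breaking
    # change to taxed_total affects only these in-team callers.
    return taxed_total(sum(line_items))
```

Because every caller is inside the same system, changing `taxed_total`’s signature is a refactoring, not a versioning event; that distinction disappears the moment the component is shared outward.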

Reusable code needs to possess a certain level of flexibility in order to be broadly useful. The more widely shared, the more flexible it must be. By the same token, the more widely used the code is, the more stability is required of the interface so as to maintain compatibility across versions. The price of flexibility is technical complexity. The price of stability is overhead and governance – administrative complexity. This administrative complexity affects not only the developing team but also the consuming teams, in the form of another dependency to manage.
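The flexibility/stability tension can be sketched as a backward-compatible interface change (an invented example, assuming a widely consumed formatting helper): the new capability is added as a keyword argument with a compatible default, so every existing caller keeps working while new callers get the added flexibility.

```python
# Hypothetical shared component. Version 1 took only an amount;
# version 2 adds currency support without breaking v1 callers,
# because the new parameter is keyword-only with a safe default.

def format_price(amount: float, *, currency: str = "USD") -> str:
    """v2 signature; v1-era calls like format_price(9.99) still work."""
    symbols = {"USD": "$", "EUR": "€"}
    return f"{symbols.get(currency, currency)}{amount:.2f}"

# Existing (v1-era) caller, unchanged:
legacy = format_price(9.99)
# New caller exercising the added flexibility:
euro = format_price(9.99, currency="EUR")
```

The technical complexity (the currency table, the keyword-only constraint) and the administrative complexity (the promise never to change the meaning of the one-argument call) are both prices paid for wide sharing.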

Last week, Tony DaSilva published a collection of quotes about code reuse from various big names (Steve McConnell, Larry Constantine, etc.), all of which stated the need for governance, planning and control in order to achieve reuse. In the post, he noted: “Planned? Top-down? Upfront? In this age of ‘agile’, these words border on blasphemy.” If blasphemy, it’s blasphemy with distinguished credentials.

In a blog post (the subject of the LinkedIn discussion I mentioned above) named “The Misuse of Reuse”, Roger Sessions touches on many of the problems noted above. Additionally, he notes security issues, infrastructure overhead, and the potential for a single point of failure that can come from poorly planned reuse. His most important point, however, is this (emphasis mine):

Complexity trumps reuse. Reuse is not our goal, it is a possible path to our goal. And more often than not, it isn’t even a path, it is a distraction. Our real goal is not more reusable IT systems, it is simpler IT systems. Simpler systems are cheaper to build, easier to maintain, more secure, and more reliable. That is something you can bank on. Unlike reuse.

While I disagree that simplicity is our goal (value, in my opinion, is the goal; simplicity is just another tool to achieve that value), the highlighted portion is key. Reuse is not an end in itself, merely a technique. Where the technique does not achieve the goal, it should not be used. Rather than naively assuming that code reuse always lowers costs, it must be evaluated taking the costs and risks noted above into account. Reuse should only be pursued where the actual costs are outweighed by the benefits.

Following this to its logical conclusion, two categories emerge as best candidates for code reuse:

  • Components with a static feature set that are relatively generic (e.g. Java/.Net Framework classes, 3rd party UI controls)
  • Complex, uniform and specific processes, particularly where redundant implementations could be harmful (e.g. pricing services, application integrations)

It’s not an accident that the two examples given for generic components are commercially developed code intended for a wide audience. Designing and developing these types of components is more typical of a software vendor than an in-house development team. Corporate development teams would tend to have better results (subject to a context-specific evaluation) with the second category.
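The first category can be illustrated with a hypothetical helper built on a standard-library class rather than a hand-rolled equivalent. `collections.Counter` has exactly the profile described above: a static feature set, broadly generic, maintained by someone else for a wide audience, so reusing it carries little of the cost discussed earlier.

```python
# Reusing a generic, stable component (collections.Counter)
# instead of re-implementing frequency counting by hand.
# The function name and use case are invented for illustration.
from collections import Counter

def top_terms(words: list[str], n: int = 2) -> list[str]:
    """Return the n most frequent terms, most common first."""
    return [word for word, _count in Counter(words).most_common(n)]
```

The consuming team gains the component’s functionality without taking on its governance: the interface stability problem belongs to the platform vendor, not to them.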

Code reuse, however, is not the only type of reuse available. Participants in the LinkedIn discussion above identified design patterns, models, business rules, requirements, processes and standards as potentially reusable artifacts. Remy Fannader has written extensively about the use of models as reusable artifacts. Two of his posts in particular, “The Cases for Reuse” and “The Economics of Reuse”, provide valuable insight into reuse of models and model elements as well as knowledge reuse across different architectural layers. As the example of the wheel points out, reuse of higher levels of abstraction may be more feasible.

Reuse of a concept as opposed to an implementation may allow you to avoid technical complexity. It definitely allows you to avoid administrative complexity. In an environment where a component’s signature is in flux, it makes little sense to try to reuse a concrete implementation. In this circumstance, DRY at the organizational level may be less of a virtue, in that it will impede multiple teams’ ability to respond to change.

Reuse at a higher level of abstraction also allows for recycling instead of reuse. Breaking the concept into parts and transforming its implementation to fit new or different contexts may well yield better results than attempting to make one size fit all.
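The wheel example might be sketched, under invented names, as reuse of an abstract concept with context-specific implementations: both classes reuse the idea of a wheel, but neither could substitute for the other’s implementation.

```python
# Hypothetical illustration of concept reuse: the abstraction is
# shared, the implementations are tailored to their contexts.
from abc import ABC, abstractmethod

class Wheel(ABC):
    """The reusable *concept*, not a reusable implementation."""
    @abstractmethod
    def max_load_kg(self) -> float: ...

class RollerbladeWheel(Wheel):
    def max_load_kg(self) -> float:
        return 50.0       # illustrative figure

class BusWheel(Wheel):
    def max_load_kg(self) -> float:
        return 3000.0     # illustrative figure
```

Each implementation can evolve (or be recycled into a new context) independently, with none of the cross-team coordination that sharing a single concrete component would require.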

It would be a mistake to assume that reuse is either unattainable or completely without merit. The key question is whether the technique yields the value desired. As with any other architecturally significant decision, the most important question to ask yourself is “why”.