One Size Fits Somebody, But Probably Not You

Why One Process Can’t Work Everywhere

Gene Hughson and Charlie Alfred

In the software business, there’s a strong tendency to treat standardized processes like Better Homes and Gardens recipes. People pine for a standard way to make a team effective at dealing with complex problems. Agile methods, unit test strategies, and continuous integration are just a few examples. The hope is that we can copy a process that was successful somewhere else, perhaps with a few small alterations (like the tailor at Jos. A. Bank makes), and presto, we have a quantum leap in effectiveness. This post explores why the authors believe this model is closer to fantasy than reality.

The “Essence of Software”

“There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.”

– Fred Brooks

The above quote appeared in 1986 in an article titled “No Silver Bullet – Essence and Accidents of Software Engineering” [1]. The main premise of the article is that “the essence of software engineering is a construct of interlocking concepts among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations.” Brooks goes on to say that four things make software inherently different from other engineering disciplines:

  • Complexity: the vast number of different parts and the differences between them make software profoundly different from other large-scale engineering, like computers, buildings, or automobiles.
  • Conformity: Physics and chemistry have core unifying principles (General Relativity, Ohm’s Law) which drive behavior and organization. Software is more like law – designed by humans. Loose conformity exists, but with much variation and contradiction.
  • Changeability: Software is soft; firmware is firm; hardware is hard. The names accurately represent which is likely to be changed first. High rates of change combine with interdependencies between parts to increase complexity.
  • Invisibility: The reality of software is not inherently embedded in space. As much as we try, UML class, state, and sequence diagrams capture only the very highest levels of structure and behavior. Even the source code fails to tell the full story, as is evident in any complex multi-threaded program.

These same four aspects apply to the processes by which software is created. For any non-trivial system, the number of humans involved and their interactions will serve to increase complexity and decrease conformity. For good or ill, requirements will change. The more detailed the rules around how the various players interact, the less those rules will bear any resemblance to reality.

Software Development Processes

According to Wikipedia [2], the roots of agile methods can be traced back to 1957, to work done at IBM’s Service Bureau Corporation. The movement gathered momentum in the early 1970’s, and became a force in 2001, with the publication of the Agile Manifesto.

Agile is a collection of several methods, including Kanban, XP, Crystal Clear, Feature-Driven Development, and Scrum. In 2012, Scrum seems to have emerged as the most widely adopted agile method [3]. Agile methods strive to address the complexity and changeability obstacles cited above by Brooks.

Traditional waterfall methods spend significant up-front time trying to define a system’s requirements and architecture. For large complex systems, this process can take years of effort and involve many people.

Agile proponents argue:

  • Textual and/or diagrammatic representations of a system are too vague and incomplete
  • During the process of representing the system, requirements change too rapidly
  • The ability to assess changes is hampered by the absence of early iterations of partial systems
  • Refactoring, the ability to restructure a design to better handle change, is an essential capability

By contrast, Waterfall proponents argue:

  • Refactoring doesn’t scale well. It works best in smaller, more localized areas
  • Early design decisions constrain the solution space of downstream decisions. Errors here multiply
  • Regardless of how well you encapsulate things, large systems have many cross-dependencies. Many systemic issues are not evident until a “critical mass” of the system can be viewed as parts and whole.

Further complicating the debate are the inherent trade-offs between three strong drivers:

  1. Velocity – At what rate is the system being developed? How long will it take to be done?
  2. Quality – How good is the result? Features? Cost? Usability? Performance? Security? Reliability? etc.
  3. Adaptability – How fast can the system adapt? To new Requirements? Geographies? Technologies?

These three drivers expand the scope of the debate. The development project/process, the system being developed, and the deployment environment(s) are very tightly coupled.

One Size Does Not Fit All

If you accept the premises above, then one inescapable conclusion is that context matters, a lot. Brooks argued that there was “no silver bullet” in technology or management technique (and development process is, after all, a subclass of management technique). In the same way that the architecture of a system evolves, intentionally or not, to adapt to its context, so too must a process.

Evidence that this statement is true is that all methods get adapted. There are many Scrum projects, but no two practice Scrum in exactly the same way. In the same vein, there are many waterfall projects (especially in safety-critical regulated development, like aircraft and medical devices), and few, if any, of them practice in the same way.

Take a slight deviation, and you see a new picture. Product line (also called platform) development adds a different wrinkle. A traditional development project has a specific target in mind. An automobile is designed for transporting a few passengers along roads; a boat is designed for transporting a few passengers over water. If either product ends up trying to do the work of the other, it is considered a failure condition.

With product lines, the intent is to create a shared asset base from which many related products can be built. Google’s Android framework is the foundation for smartphones and tablet computers from Samsung, Motorola, HTC, and others [4]. Product lines create a new tension, related to but quite different from the “change over time” tension that motivates agile: product line development deals with change-over-time challenges and change-over-space (context) challenges at the same time. When a football player’s knee is forced to deal with concurrent changes in time and space, the result is often an ACL or MCL tear.

In December 2011, Mark Kennaley did an excellent podcast with Mike Gualtieri of Forrester Research [5]. The subject was that just because a development process works in one place does not mean it will succeed in another, much as a plant that thrives on sunshine, heat, and water will fare poorly if planted in the shade in a cooler, drier climate. Kennaley lists ten factors that must be considered, including the size of the development team, the complexity of the domain, technical complexity, whether the team is co-located or distributed, the division of labor within the organization, compliance, criticality, time-to-market pressures, and culture.

We share this view, but believe that other factors need to be added. In particular, important variations must be considered in the nature of:

  • The system being developed – Early life-cycle stages of innovative systems are different from next-gen systems with well understood markets/solutions
  • The context of use – SUVs are used by off-road enthusiasts and by suburban families. A vehicle suitable for the former proved deadly for hundreds of thousands of the latter
  • System deployment – in-house hosting is different from Software as a Service, and mobile is quite different from LAN connectivity

Some factors are also in need of expansion. In addition to its complexity, the variability and volatility of the domain will affect the fit of the process to its context. The team’s level of experience with the technology platform will likewise impact the suitability of the process as much as its size and dispersion.

Examples

In this section, we’d like to present two short case studies which illustrate how development, system, and deployment factors strongly influence process selection.

Example 1 – Connected Medical Device

In regulated development of embedded systems (medical and aerospace), human safety concerns dominate the process. However, the formal process only begins when development starts. At that point:

  • FMEA (Failure Mode and Effects Analysis) processes assess safety risks, impacts, and mitigations, and ensure that any unmitigated risk is acceptable
  • Product (feature) and system-level (architectural) requirements are clearly specified.
  • A formal life cycle process dictates reviews, approvals and traceability for designs, implementation and testing

As a result, requirements changes carry a burden (analyze safety impact, redo traceability, redo tests). Again, given the overriding safety concern, this rigor is justifiable.

The concept phase is critical to the process. The focus of this phase is hypothesis formulation, proof of concept, and risk reduction. Design controls are off during this phase. As a result, it behooves a product development shop to stay in the concept phase until the problem is well understood, the architecture is solid, and the requirements are well defined. A common mistake is to declare the concept phase over too soon and carry ambiguity and risk into development. Design controls make these much more expensive to address than they would have been during concept.

This leads directly to one of the fundamental tenets of agile – requirements volatility. As mentioned earlier, once in the development phase, safety mechanisms (including risk analysis, change management, reviews and traceability) put a big tax on requirements change. With embedded systems, the good news is that physics, chemistry and biology are stable, predictable sciences, and user interfaces are relatively task focused.

The interesting situation occurs when moving to medical applications and enterprise software that are also safety-critical. Regulatory design controls still apply because of the safety risks. However, physics, chemistry, and biology are less important drivers, and human users become a bigger factor. Now you have the embedded software problem with significantly more requirements volatility. The best approach here isn’t always obvious. One strategy is to use architecture to separate the system into central parts whose requirements are more stable, and to use agile methods on peripheral parts (e.g. reporting) whose requirements are more volatile.
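One way to picture that separation strategy is a dependency rule: the stable core exposes a narrow interface and is developed under the rigorous regulated process, while volatile peripheral modules depend on that interface and never the reverse, so they can be iterated on with agile methods. The sketch below is purely illustrative; the class and method names are our invention, not taken from any actual device:

```python
from abc import ABC, abstractmethod

# Stable core: requirements here change slowly, so this code lives
# under the full regulated process (reviews, traceability, etc.).
class PatientMonitor:
    """Hypothetical core device logic with stable requirements."""
    def __init__(self) -> None:
        self._readings: list[float] = []

    def record(self, value: float) -> None:
        self._readings.append(value)

    def readings(self) -> list[float]:
        # The core exposes data through a narrow interface, nothing more.
        return list(self._readings)

# Volatile periphery: reporting requirements shift often, so this
# layer can be developed with agile methods. It depends on the core's
# interface; the core knows nothing about it.
class Report(ABC):
    @abstractmethod
    def render(self, monitor: PatientMonitor) -> str: ...

class AverageReport(Report):
    def render(self, monitor: PatientMonitor) -> str:
        data = monitor.readings()
        avg = sum(data) / len(data) if data else 0.0
        return f"average reading: {avg:.1f}"

monitor = PatientMonitor()
for v in (98.6, 99.1, 98.9):
    monitor.record(v)
print(AverageReport().render(monitor))  # prints "average reading: 98.9"
```

The point of the one-way dependency is that a new or changed report never forces a change (and therefore a re-validation) of the core.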

Example 2 – Corporate Line of Business Application

Our second example involves an application used to provide title search services to the agency channel of a title insurance provider operating in multiple states. This application serves both external customers (title insurance agents) and internal users, and integrates with a variety of systems. The system has multiple compliance requirements because, in addition to Federal law, it must comply with the laws of each state served. The system is maintained by a small, technically experienced team that has become familiar with the business domain over a ten-year period. During that period, the same business owner has been in place, yielding a high degree of familiarity between the owner and the team. This gives us the following dominant process drivers:

  • Variability – The diversity of legal and regulatory requirements as well as operational workflow from one jurisdiction to another requires a high degree of flexibility from the system.
  • Complexity – Responding to the domain complexity noted above, as well as the integrations with other systems, yields a significant amount of technical complexity. This is compounded by the need to maintain acceptable performance of the system while the user base is growing.
  • Stability as a Priority – Users of the system value stability over new features.

These drivers have led to the development of a process that is agile without being Agile. “Just enough” is the watchword, and practices are constantly evaluated for relevance. Those that provide value are retained and those that do not are dropped.

Frequent internal releases are used to verify code and validate the product, but releases to production take place at three- to six-month intervals, according to the preference of the business owner. The release management practices of the group, described in the post “Do you have releases or escapes?” [6], are used to ensure the integrity of each release.

Rather than a time-boxed process, a negotiated method is used in which the effort to deliver the desired bundle of functionality drives the projected due date for the release. Estimates are given as ranges, with variances that diminish as more is known about the individual features to be delivered. Any changes are triaged and, if needed for the release in progress, the schedule is adjusted accordingly. Collaborative requirements elicitation, constant feedback, and transparency maintain the relationship between the business owner and the development team.
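A minimal sketch of how such ranged estimates might roll up into a projected release window follows. The feature names and numbers are invented for illustration; the real process is a negotiation, not a formula:

```python
# Each feature's effort is estimated as a (low, high) range in days.
# The variance (high minus low) narrows as a feature becomes better
# understood, so the projected window tightens over time.
features = {
    "state-specific fee schedule": (5, 12),  # still vague: wide range
    "agent portal login":          (3, 4),   # well understood: narrow range
    "title search integration":    (8, 15),
}

low_total = sum(low for low, _ in features.values())
high_total = sum(high for _, high in features.values())

print(f"projected effort: {low_total}-{high_total} days")
# prints "projected effort: 16-31 days"
```

Presenting the total as a range, rather than a single date, is what lets the schedule absorb triaged changes without pretending to a precision the estimates never had.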

Conclusion

The contextual factors that determine the appropriateness of a development process vary from industry to industry and from enterprise to enterprise. These drivers also vary from application to application within the same enterprise. A process that has achieved success with some groups in an organization may actually degrade the performance of other groups if it does not fit their context [7]. Standardization that ignores the fit of a set of practices to the target environment may well do more harm than good.

[1] http://faculty.salisbury.edu/~xswang/Research/Papers/SERelated/no-silver-bullet.pdf
[2] http://en.wikipedia.org/wiki/Agile_software_development
[3] http://www.versionone.com/state_of_agile_development_survey/10/page3.asp
[4] http://www.botskool.com/geeks/list-andriod-based-smart-pnhones
[5] http://blogs.forrester.com/mike_gualtieri/12-12-11-technopolitics_podcast_agile_software_is_not_the_cats_meow
[6] https://genehughson.wordpress.com/2011/12/16/releases-or-escapes/
[7] http://thecodist.com/article/i_fear_our_mobile_group_being_forced_to_follow_scrum

10 thoughts on “One Size Fits Somebody, But Probably Not You”

  1. Interestingly enough, these thoughts have been mirrored in the QA community, with sites like “http://context-driven-testing.com/”. Like the authors of the approaches outlined above, these thought leaders argue that QA practices must be selected and customized from one project to the next. The site even uses similar examples (an aircraft component versus an Internet-facing website). Interesting to see two different perspectives on similar problems reach similar solutions.


    • I hesitate to call it “axiomatic”, but I’ve yet to see a compelling argument that there could be one true method. There are so many different circumstances, that the idea of fitting the process to the context doesn’t seem that radical.


  2. Excellent post, Gene. Fred Brooks said so many pithy things; your reference to him here is perfect. In some ways you’ve articulated, perhaps more clearly than I managed, the truth that I often harp about: architecture, process, and people are all inextricably intertwined. A process that’s not adapted to the circumstances of culture and technical requirements is likely to fail; an architecture that doesn’t take its people and process into account is probably dead in the water; people that don’t understand and buy into their architecture and process are not effective employees.


  3. I agree with most of what’s written in the post, and would like to challenge one thing: like numerous leading experts, I relate to Agile concepts and frameworks, rather than methods. This inherently defines these frameworks as something you are supposed to adapt to your needs. Furthermore, all agile frameworks encourage you to keep adapting to changing needs, and to treat the process itself like an agile project.
    Sadly, I see several behaviours in the community which are detrimental to any agile project:
    – Seeing agile as a set of methods, or even one method. Frequently, this comes with the expectation of obtaining a clear set of rules for how to do agile.
    – Expecting that at some point, preferably early in the “agile project”, it will be possible to ‘tick it off’ and declare that we are now agile.
    – Treating agile as a fashion; like any other project management trend it will pass, so we might as well play along while we keep doing what we are used to.
    These are three (of the) signs of not being agile in so-called agile organisations.
    Referring to the latter point, agile, like any other framework, will be succeeded by something else. Nothing in this world, except taxes and death, is here to stay (with greater probability for the former). Until this next best thing arrives, what we know best are frameworks based on agility, if only because there is considerable empirical evidence suggesting that waterfallish project management is not suitable for the vast majority of software projects.
    With that said – and this could be my own fixation – I read this post as truly agile.


    • Thanks, Ilan. In my opinion, it’s important to look at the word “agile” and understand what it means – “nimble”. If you can respond to change, whether that’s requirements shifting over the course of a project or context shifting over the lifetime of an application, then that’s truly agile.

      Two things you mentioned that are definitely worth emphasizing: the need for rules and the need to ‘tick it off’. Rules can be a wonderful tool when they enhance your ability to get things done but a trap when they’ve outlived their usefulness. They provide comfort, but a comfortable prison is a prison nonetheless. Likewise, if someone thinks they can “freeze” a process, they’re deluded. Process that does not adapt to and evolve with the environment it lives in becomes a set of handcuffs.

      I agree that frameworks can come and go, but I do believe the principles of agility are going to endure. Individual practices and the mix of those practices that we bring to bear will change, but the idea of working thoughtfully should go on.


