Software Development Practice and Theory – A Chicken and Egg Story

which came first?

The state of software development as a profession looks a lot like the practice of medicine in the 19th century. We have our great ones in the tradition of Pasteur, Nightingale, Jenner, Lister, and Semmelweis. In spite of our advances, there’s a sense that the state of practice is still largely ad hoc.

A LinkedIn discussion started by Peter Murchland, “Does theory lead practice or practice lead theory?”, illustrates the analogy (although the thrust of his question was directed towards enterprise architecture, the circumstances appear the same). To answer his question honestly, we have to say “both”. Established theory underlies many of our practices, and yet we often still find ourselves exploring uncharted territory at the edges. When the map runs out, we don’t have the option of stopping, but are required to press on, creating new theory and testing it as we go.

Theory, where it exists, guides practice. Practice reinforces, refines, or refutes theory. When practice faces problems outside the domain of established theory, then the seeds of new theory are planted.

Like medicine, software development (and enterprise architecture) lacks the ability to conduct full-scale controlled experiments to falsify its theories. For reasons of both practicality and ethics, more indirect means are the best available recourse. Just as medicine is ultimately “proven” in practice, so too is software development. The fact that we are experimenting on our patients (customers) should give pause, particularly considering that the “first, do no harm” credo is not a fundamental part of our training.

Much has been written regarding whether software development is science or art, craft or engineering. In “Enterprise Architecture as Science”, Richard Veryard makes the claim that enterprise architecture is a discipline, though not a scientific one, because “many knowledge-claims within the EA world look more like religion or mediaeval scholastic philosophy than empirically verifiable science”. This same description can be applied equally to the current state of software development practice. It should be noted that it could also have been applied to the practice of medicine well into the 19th century.

Software development is a relatively young discipline (as is enterprise architecture, for that matter) with an immature, incomplete body of theory behind it. Building, testing, and refining that body should be a priority; absent that testing and refining, it will have no claim to a scientific foundation. Until that foundation is much more solid, the emphasis must be on questioning rather than trusting. Nullius in verba (“on the word of no one”), the motto of the Royal Society, should be our motto as well.

Not All Gold Glitters

ooh, shiny

After two back-to-back posts, I thought I was done with YAGNI, simplicity, and economy of design – at least for a while. But then Jef Claes published “But I already wrote it”.

Jef’s post dealt with how a colleague had implemented a new feature in a much richer manner than anticipated. When the analyst confirmed that the implementation was more than what was needed, Jef recommended trimming out the extra, while his colleague argued that since it was done, it should be left as is. After pointing out the risks and costs of the additional complexity, Jef’s colleague came around (which is, in my opinion, the correct way to do YAGNI – a consideration of the costs and benefits, rather than a reflex). Then came the comments.

One commenter took exception to Jef’s statement that “code is just a means to an end; the side product of creating a solution or learning about a problem”. For that commenter, that attitude would “inevitably” lead to writing bad code. “The way you write good code is by loving good code”.

Another suggested that the situation taught his colleague never to take initiative and had ruined his/her job satisfaction. “From now on, he should consider himself to be a code monkey whose job is to accept the designer’s vision, regardless of how short-sighted or limited it is, and produce a working program.” This commenter stated that Jef should have waited to see if additional maintenance costs had materialized before deciding.

Needless to say, I disagree with both.

The first commenter above needs to understand that the application belongs to the customer. For functionality, substituting your judgment for the customer’s is unprofessional. If I ask for a garage and you build a mansion while my back’s turned, you don’t get paid. Taking pride in how you deliver value is a virtue provided that you remember that the customer is the one who determines what they value.

The second commenter assumes that Jef’s colleague will be discouraged because his initiative wasn’t accepted. If that’s the case, perhaps another line of work would be appropriate. As noted above, the customer determines what is needed and we should be taking pride in fulfilling those needs. They also assume that the design was short-sighted and limited, but the basis for that is never provided.

The second commenter’s suggestion that the code should have been left as is and only changed if a problem emerged is even more problematic. Taking on risk and potential expense on the customer’s behalf is not responsible behavior. Additionally, decisions are not made in a vacuum – each choice builds on earlier choices to enable or constrain (often both). Making those decisions without a rational basis is equally irresponsible.

On a personal level, I can sympathize that someone has expended effort and is proud of what they’ve accomplished. However, putting our own wants above the needs of our customers does not advance the profession. Delivering requested value, without surprises, does.

Commitment

On the march

Napoleon nearly avoided his Waterloo.

Two days prior to the battle, Napoleon managed to divide the Prussian army from its Anglo-Dutch allies under Wellington. With the bulk of his forces, he attacked the Prussians while his subordinate, Marshal Michel Ney, screened Wellington’s forces dribbling in from Brussels. Napoleon’s main body had beaten the more numerous Prussians, and he called on Ney’s I Corps, nearly twenty thousand strong, to fall on the Prussian right and complete the victory. Ney, meanwhile, called for I Corps to continue on to him and sweep away Wellington’s forces.

The French I Corps marched back and forth between the two battles, taking part in neither. Had either allied army been decisively defeated, the outcome could have been much different. As it turned out, Wellington’s army was reinforced in the nick of time by the Prussians, and together they made Waterloo synonymous with downfall.

Uncertainty is an ever-present factor in the design and development of software. It is extremely important to recognize and acknowledge that uncertainty in order to mitigate it. Given that choices carry the risk of locking you into a particular approach, increasing flexibility by deferring commitment is a logical technique. One of the principles of Real Options is “Never commit early unless you know why”.

However, another principle of Real Options is “Options expire”. Decisions cannot and should not be deferred indefinitely. Failure to choose is a choice in itself and all the more dangerous if its nature isn’t recognized.
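
To make the trade-off concrete, here is a minimal sketch (in Python, with invented names such as OrderStore and InMemoryOrderStore, not drawn from any of the referenced material) of deferring a commitment behind a narrow interface: the database choice stays open while delivery continues, but the option to switch cheaply expires once real data piles up behind it.

```python
# Hypothetical sketch: deferring the persistence decision behind a narrow
# interface. The interface itself is a deliberate commitment; what it buys
# is the option to keep the database choice open a little longer.

from abc import ABC, abstractmethod


class OrderStore(ABC):
    """The seam that keeps the storage decision deferred."""

    @abstractmethod
    def save(self, order_id: str, order: dict) -> None:
        ...

    @abstractmethod
    def load(self, order_id: str) -> dict:
        ...


class InMemoryOrderStore(OrderStore):
    """Good enough to keep delivering while the real choice is evaluated.
    Once production data accumulates behind it, the option to switch
    cheaply has expired."""

    def __init__(self) -> None:
        self._orders: dict = {}

    def save(self, order_id: str, order: dict) -> None:
        self._orders[order_id] = order

    def load(self, order_id: str) -> dict:
        return self._orders[order_id]


if __name__ == "__main__":
    store: OrderStore = InMemoryOrderStore()
    store.save("42", {"item": "widget", "qty": 3})
    print(store.load("42"))  # {'item': 'widget', 'qty': 3}
```

The point of the sketch is not the pattern itself, but that the deferral is a conscious choice with a known expiry rather than an accident of indecision.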

Architectural choices are architectural constraints. However, it should be recognized that constraints provide structure. Just as the walls that bound a building define it, so too do design choices define systems. The key is that the definition of the system aligns with its goals and retains as much flexibility as possible.

Commitment can bring with it its own set of anxieties. Second-guessing is human nature and circumstances can be remarkably fluid. Once committed, however, radical change should be approached more carefully. Ignoring the cost of changing directions can be as much a source of analysis paralysis as failing to decide in the first place. It’s better to get into the battle with your best effort than to march around in circles hoping for something a little better.

Bumper Sticker Philosophy

Yeah!  Wait, what?

YAGNI: You Ain’t Gonna Need It.

Sound bites are great – short, sweet, clear, and simple. Just like real life, right?

Seductively simple certainty is what makes slogans so problematic; the real world deals far more in uncertainty and ambiguity. Context and nuance add complexity, but not all complexity can be avoided. In fact, stripping away essential complexity risks misapplication of otherwise valid principles.

Under the right circumstances, YAGNI makes perfect sense. Features added on the basis of speculation (“we might want to do this someday down the road”) carry costs and risks. Flexibility typically comes at the cost of complexity, which brings with it the risk of increased defects and more difficult maintenance. Even when perfectly implemented, this complexity risks making your code harder for its consumers to use due to the proliferation of options. Where the work meets a real need, as opposed to just a potential one, the costs and benefits can be assessed in a rational manner.
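
As a purely hypothetical illustration (none of this comes from Jef’s post, and all names are invented), the sketch below contrasts doing exactly what was asked with a speculative version that grows a format registry, transformation hooks, and configuration flags “just in case”; every extra knob is one more thing consumers must understand and one more surface for defects.

```python
# Hypothetical illustration: the only stated requirement is a CSV export,
# yet the "flexible" version speculates about formats, hooks, and options
# that nobody has asked for.

import csv
import io


def export_report(rows):
    """Does exactly what was asked: render rows as CSV."""
    buffer = io.StringIO()
    csv.writer(buffer).writerows(rows)
    return buffer.getvalue()


class SpeculativeReportExporter:
    """The 'just in case' version: a format registry, transformation hooks,
    and configuration flags. Each option adds complexity to understand,
    test, and maintain."""

    def __init__(self, default_format="csv", hooks=None, options=None):
        self._renderers = {"csv": self._to_csv}
        self._default_format = default_format
        self._hooks = list(hooks or [])
        self._options = dict(options or {})

    def register_format(self, name, renderer):
        self._renderers[name] = renderer

    def export(self, rows, fmt=None):
        for hook in self._hooks:
            rows = hook(rows)
        return self._renderers[fmt or self._default_format](rows)

    def _to_csv(self, rows):
        buffer = io.StringIO()
        csv.writer(buffer).writerows(rows)
        return buffer.getvalue()


print(export_report([["id", "name"], [1, "Ada"]]), end="")
```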

Where YAGNI runs into trouble is when it’s taken purely at face value. A naive application of the principle leads to design that ignores known needs that are not part of the immediate work at hand, trusting that a coherent architecture will “emerge” from implementing the simplest thing that could possibly work and then refactoring the results. As the scale of the application increases, the ability to do this across the entire application becomes less and less realistic. One reason for employing abstraction is the difficulty of reasoning in detail about a large number of things, which is exactly what you would need to do in order to refactor across an entire codebase. Another weakness of taking this principle beyond its realm of relevance is the cost and difficulty of attempting to inject and/or modify cross-cutting architectural concerns (e.g. security, scalability, auditability, etc.) on an ad hoc basis. Snap decisions about pervasive aspects of a system ratchet up the risk level.
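
By way of a hypothetical example, consider authorization, one of those pervasive aspects: deciding the policy once and applying it uniformly (sketched here with a Python decorator and invented names such as requires_role and approve_invoice) is far cheaper than re-deciding it ad hoc inside every handler after the fact.

```python
# Hypothetical illustration: an authorization rule decided once and applied
# uniformly via a decorator, instead of being re-decided ad hoc inside each
# handler. All names here are invented for the example.

from functools import wraps


def requires_role(role):
    """One place where the authorization policy lives."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(current_user, *args, **kwargs):
            if role not in current_user.get("roles", ()):
                raise PermissionError(f"'{role}' role required")
            return handler(current_user, *args, **kwargs)
        return wrapper
    return decorator


@requires_role("manager")
def approve_invoice(current_user, invoice_id):
    return f"invoice {invoice_id} approved by {current_user['name']}"


print(approve_invoice({"name": "Pat", "roles": ["manager"]}, 1001))
```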

One argument for YAGNI (per the Wikipedia article) is that each new feature imposes constraints on the system, a position I agree with. When starting from scratch, form can follow function. However, as a system evolves, strategy tends to follow structure. Obviously, unnecessary constraints are undesirable. That being said, constraints which provide stability and consistency serve a purpose. The key is to be able to determine which is which and commit when appropriate.

In “Simplicity in Software Design: KISS, YAGNI and Occam’s Razor”, Hayim Makabee captures an important point – simple is not the same as simplistic. Adding unnecessary complexity adds risk without any attendant benefits. One good way he lists to avoid the unnecessary is “…as possible avoid basing our design on assumptions”. At the same time, he cautions that we should avoid focusing only on the present.

It should be obvious at this point that I dislike the term YAGNI, even as I agree with it in principle. This has everything to do with the way some misuse it. My philosophy of design can be summed up as “the most important question for an architect is ‘why?’”. Relying on slogans gets in the way of a deliberate, well-considered design.