Let’s Talk Value (Who Needs Architects?)


Value is a term that’s heard often these days, but I wonder how well it’s understood. Too often, it seems, value is taken to mean raw benefit rather than its actual meaning, benefit after cost (i.e. “bang for the buck”). An even better understanding of the concept can be had from Tom Cagley’s “Breaking Down Value”: “Value = (Benefit + Perception) – (Cost + Perception)”.

The point being?

Change involves costs and, one would hope, benefits. Not only does the magnitude of a cost matter, but so does its perception. Where a cost is seen as unnecessary or incurred to benefit someone other than the one paying the bills, its perception will likely be unfavorable. Changes that come about due to unforeseen circumstances are more likely to be seen as necessary than those stemming from foreseeable ones. Changes to accommodate known needs are the least likely to be seen as reasonable. This is why I’ve always maintained that YAGNI doesn’t scale beyond low-level design into the realm of architectural decisions. Where the cost of change determines architectural significance, decision churn is problematic.
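
A minimal sketch of how Cagley’s formulation plays out (the numbers here are invented, purely for illustration): the same raw benefit and cost can yield very different value once the perception of the cost shifts.

```python
def value(benefit, perceived_benefit, cost, perceived_cost):
    """Cagley's framing: Value = (Benefit + Perception) - (Cost + Perception)."""
    return (benefit + perceived_benefit) - (cost + perceived_cost)

# Same raw numbers; only the perception of the cost differs.
# A change driven by unforeseen circumstances reads as necessary...
unforeseen = value(benefit=100, perceived_benefit=20, cost=60, perceived_cost=10)

# ...while rework to accommodate a need that was known all along reads as waste.
known_need = value(benefit=100, perceived_benefit=20, cost=60, perceived_cost=45)

print(unforeseen, known_need)  # 50 vs. 15 -- identical work, very different value
```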

After I posted “Who Needs Architects? Who’s Minding the Architecture?”, Charlie Alfred tweeted:

One way to be seen as an asset is to provide value. As Cesare Pautasso put it:

This is not to say that architectural refactoring is without value, but that refactoring will be seen as redundant work. When that work is for foreseeable needs, it will be perceived as costlier and less beneficial than strictly new functionality. Refactoring to accommodate known needs will suffer an even greater perception problem.

YAGNI presumes that the risk of building in flexibility that turns out to be unnecessary outweighs the risk of having to refactor later. In my opinion, that is far too simplistic a view. Some functionality and qualities may be speculative, but the need for others (e.g. security) will be more certain.

Studies have shown that the ability to modify an application is a prime quality concern for stakeholders. Flexible design enables easier and cheaper change. “Big bang” changes (expensive and painful) are more likely where coherent design is lacking. Holistic design based on context seems to provide more value (both tangible and perceived) than a dogma-driven process of stringing together tactical decisions and hoping for the best.


#NoEstimates – Questions, Answers, and Credibility


A recent episode of Tom Cagley’s SPaMCast featured Woody Zuill discussing #NoEstimates. During the episode, Woody talked about his doubts regarding the usefulness of estimating and about the value of questioning processes rather than accepting them blindly.

I’m definitely a fan of pragmatism. Rather than relying on dogma, I believe we should be constantly evaluating our practices and improving those found wanting. That being said, to effectively evaluate our practices, we need to be able to frame the evaluation. It’s not enough to pronounce something useful or not; we have to be able to say who is involved, what they are seeking, what the impact is on them, and to what extent the outcome matches up with what they are seeking. Additionally, being able to reason about why a particular practice tends to generate a particular outcome is critical to determining what corrective action will work. In short, if we don’t know what the destination looks like, we will have a hard time steering toward it.

In his blog post “Why do we need estimates?”, Woody (rightly, in my opinion) identifies estimates as a tool used for decision-making. He also lists a number of decisions that estimates can be used to make:

  • To decide if we can do a project affordably enough to make a profit.
  • To decide what work we think we can do during “a Sprint”. (A decision about amount)
  • To decide what work we should try to do during “a Sprint”. (A decision about priority or importance)
  • To decide which is more valuable to us: this story or that story.
  • To decide what project we should do: Project A or Project B.
  • To decide how many developers/people to hire and how fast to “ramp up”.
  • To decide how much money we’ll need to staff a team for a year.
  • To give a price, or an approximate cost so a customer can decide to hire us to do their project.
  • So we can determine the team’s velocity.
  • So marketing can do whatever it is they do that requires they know 6 months in advance exactly what will be in our product.
  • Someone told me to make an estimate, I don’t use them for anything.

What is missing, however, is an alternate way to make these decisions. It’s also missing from the follow-up post “My Customers Need Estimates, What Do I Do?”. If the customer has a need, does it seem wise to ask them to abandon (not amend, but abandon) a technique without proposing something else? Even “let’s try x instead of y” is insufficient if we can’t logically explain why we expect “x” to work better than “y”. The issue is one of credibility, a matter of trust.

In her post “What Creates Trust in Your Organization?”, Johanna Rothman related her technique for creating trust:

Since then, I asked my managers, “When do you want to know my project is in trouble? As soon as I think I’m not going to meet my date; after I do some experiments; or the last possible moment?” I create trust when I ask that question because it shows I’m taking their concerns seriously.

After that project, here is what I did to create trust:

  1. Created a first draft estimate.
  2. Tracked my work so I could show visible progress and what didn’t work.
  3. Delivered often. That is why I like inch-pebbles. Yes, after that project, I often had one- or two-day deliverables.
  4. If I thought I wasn’t going to make it, used the questions above to decide when to say, “I’m in trouble.”
  5. Delivered a working product.

While I can’t say that Johanna’s technique is the optimal one for all situations, I can at least explain why I can put some faith in it. In my experience, transparency, collaboration, and respect for my stakeholders’ needs tend to work well. Questions without answers? Not so much.

One Weird Trick to Design Perfect Applications (?)

What’s the right way to design an application?

Scanning the web, one might be tempted to believe that all sites must use AngularJS, or Backbone.js, or Ember.js, or whichever JavaScript framework is in fashion this week; that all services must be RESTful; and that if those services aren’t of the “micro” variety, well…

Is that really the case, though?

Is there a right way, or is it a case of choosing the way which most closely matches our current context? We can say rules are rules, but experience will teach that rules lose meaning when divorced from the context they were formed in response to. There is no universal best practice. Without understanding the “why” behind a rule, you can’t determine whether it applies.

A conversation with John Evdemon captured this principle in regard to the latest application architecture “sensation” (AKA microservices):



A Kindle highlight shared by Tony DaSilva extends this principle to enterprise architecture:

There is no evidence that structure in and of itself affects the profitability of a company; different structures work best for different companies.

Meeting a need involves two architectures: the architecture of the problem and the architecture of the solution. Understanding the architecture of the problem is a necessary prerequisite to designing the architecture of the solution. Without that context providing definition of the desired end, imperfect though it may be, any proposed solution becomes a shot in the dark. Attempting to extend a technology or technique outside its range of utility risks harming its credibility (see SOA).

Thinking isn’t an option, it’s a requirement.

#4U2U – Canned Competency, Values & Pragmatism


Not quite two years ago, I put up a quick post entitled “The Iron Law of Tools”, which in its essence was: “that which does for you, can do to you” (whence comes the #4U2U in the title of this post). That particular post focused on ORMs (Entity Framework to be specific), but the warning applies equally to libraries and frameworks for other technical issues, as well as to processes, methodologies, and techniques.

Libraries, frameworks, and processes (“tools” from this point forward) can make things easier by allowing you to concentrate on what to do rather than how to do it (via high-level abstractions and/or conventions). However, tools are not a substitute for understanding. Neither the Law of Unintended Consequences nor Murphy’s Law has been repealed. Without an adequate understanding of how something works, you cannot assess the costs of the trade-offs that are being made (and there are trade-offs involved; you can rely on that). Understanding is likewise necessary to recognize and fix those situations where the tool causes more problems than it solves. As Pawel Brodzinski observed in his post “A Fool With a Tool Is Still a Fool”:

Any time a discussion goes toward tools, any tools really, it’s a good idea to challenge the understanding of a tool itself and principles behind its successes. Without that shared success stories bear little value in other contexts, thus end result of applying the same tools will frequently result in yet another case of a cargo cult. While it may be good for training and consulting businesses (aka prophets) it won’t help to improve our organizations.

A fool with a tool will remain a fool, only more dangerous since now they’re armed.

Pawel’s point regarding cargo cults is particularly important. Lack of understanding of how a particular effect proceeds from a given cause often manifests as dogmatic assertions in defense of some “universal truth”. The closest thing I’ve found to a universal truth of software development is that it’s very unlikely that anything is universally applicable to every context.

It’s dangerous to conflate adherence to a tool with one’s core values, such that anyone who disagrees is “wrong” or “deluded” or “unprofessional”. That being said, values can provide a frame of reference in understanding someone’s position in regard to a tool. In “The TDD Divide: Everyone is Right”, Cory House addresses the current dispute over Test-Driven Development and notes (rightly, in my opinion):

The world is a messy place. Deadlines loom, team skills vary widely, and the impact of bugs varies greatly by industry. Ultimately, we write software to make money and solve problems. Tests are a tool that help us do both. Consider the context to determine which testing style fits for your project.

Uncle Bob is right. Quality matters. Separation of concerns and unit testing help assure the utmost quality, speed, and flexibility.

DHH is right. Sometimes the cost of unit tests exceeds their benefit. Some of the benefit of automated testing can be achieved through automated integration testing instead.

You need to understand what a tool offers, what it costs, and how that equation works out in relation to what’s important in your context. With that understanding, you can make a rational choice.

Technical Debt & Quality – Binary Thinking in an Analog World


I admit it, I’m a pragmatist.

Less than two weeks after starting this blog, I posted “There is no right way (though there are plenty of wrong ones)”, proclaiming my adherence to the belief that absolutes rarely stand the test of time. In design as well as development process, context is king.

Some, however, tend to take a more black and white approach to things. I recently saw an assertion that “Quality is not negotiable” and that “Only Technical Debt enthusiasts believe that”. By that logic, all but a tiny portion of working software professionals must be “Technical Debt enthusiasts”, because if you’re not the one paying for the work, then the decision about what’s negotiable is out of your hands. Likewise, there’s a difference between being an “enthusiast” and recognizing that trade-offs are sometimes required.

Seventeen years ago, Fast Company published “They Write the Right Stuff”, showcasing the quality efforts of the team working on the code that controlled the space shuttle. Their results were impressive:

Consider these stats: the last three versions of the program — each 420,000 lines long — had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.

Impressive results are certainly in order given the criticality of the software:

The group writes software this good because that’s how good it has to be. Every time it fires up the shuttle, their software is controlling a $4 billion piece of equipment, the lives of a half-dozen astronauts, and the dreams of the nation. Even the smallest error in space can have enormous consequences: the orbiting space shuttle travels at 17,500 miles per hour; a bug that causes a timing problem of just two-thirds of a second puts the space shuttle three miles off course.

It should be noted, however, that while the bug rate is infinitesimally small, it’s still greater than zero. With a defined hardware environment, highly trained users, and a process that consumed a budget of $35 million annually, perfection was still out of reach. Reality often tramples over ideals, particularly considering that technical debt can arise from changing business needs and changing technical environments as much as from sloppy practice. Recognizing that circumstances may make technical debt the better choice, and managing it accordingly, is more realistic than taking a dogmatic approach.

For most products, it’s common to find multiple varieties with different features and different levels of quality, with the choice left to the consumer as to which best suits his/her needs. It’s rare, and rightly so, for that value judgment to be taken out of the consumer’s hands. Taking the position that “quality is not negotiable” (with the implicit assertion that you are the authority on what constitutes quality) places you in just that position of dictating to your customer what is in their best interests. If you were that customer, what would your reaction be?

Welcome to a Dogma-Free Zone


I was thinking of the immortal words of Socrates, who said, “… I drank what?”
(Val Kilmer as Chris Knight in “Real Genius”)

More than 2400 years ago, Socrates was convicted of corrupting the youth and impiety, for which he was sentenced to death. It was a high price to pay for asking embarrassing questions. And yet, Athens gained little by it. Its prime was long past, and no matter how many critics it silenced, it could not regain its former glory.

So why the history lesson? Earlier this month, Dan North posted a brief notice that he had just returned from the Norwegian Developers Conference in Oslo and that they had published his article on opportunity cost in development in the lead-up to the conference. The premise of the article was summed up in the penultimate sentence: “So take nothing at face value, and instead look for the trade-offs in every decision you make, because those trade-offs are there whether or not you see them”. Encouraging people to evaluate their practices in light of the trade-offs involved did not strike me as a radical position, but it certainly attracted some heated comments. One in particular stated that Dan and all who agreed with him were “disingenuously misleading the ranks of up-and-coming programmers into wasting their time looking for better design methodologies than TDD when no such beast exists”.

That’s a bold statement. It assumes that x is universally applicable. It assumes that x represents perfection and that no further refinement is possible. Lastly, it assumes that questioning x is wrong. History has never been very kind to those holding these opinions, regardless of what we substitute for x.

There’s always an exception. There’s always something better down the road. Informed choice is superior to blind acceptance.

Those who question either prove the soundness of the current way or point the way to a better solution. Teaching the young to critically examine their methods lays the foundation for a stronger future. It certainly doesn’t corrupt. I know that I put more faith in anything strong enough to tolerate scrutiny than in anything that has to be shielded from it.

Ironically, while all this was playing out, I stumbled across a post by Alistair Cockburn promoting “a discussion about whether an idea (agile or plan-driven or impure or whatever) works well in the conditions of the moment”:

I signed it!

I’m tired of people from one school of thought dissing ideas from some other school of thought. I hunger for people who don’t care where the ideas come from, just what they mean and what they produce. So I came up with this “Oath of Non-Allegiance”.

I promise not to exclude from consideration any idea based on its source, but to consider ideas across schools and heritages in order to find the ones that best suit the current situation.

I think that covers it nicely.

Rules are rules…or are they?

Elizabeth: Wait! You have to take me to shore. According to the Code of the Order of the Brethren…

Barbossa: First, your return to shore was not part of our negotiations nor our agreement so I must do nothing. And secondly, you must be a pirate for the pirate’s code to apply and you’re not. And thirdly, the code is more what you’d call “guidelines” than actual rules. Welcome aboard the Black Pearl, Miss Turner.

(from “Pirates of the Caribbean: The Curse of the Black Pearl”)

Over the course of a career in technology, you collect “rules” that guide the way you work. However, as a blogger noted: “Good design principles are usually helpful. But, are they always applicable?”.

We know redundant data is bad, except for caching, when it’s good. Database normalization is a good thing until it affects performance; then it’s not so good… same with transactions. Redundant code, now there’s an absolute! Except we find that the Open/Closed Principle dictates that changes be made by adding new code rather than by modifying the existing code. What to do?

The Open/Closed Principle always bothered me. I agree with it philosophically–good designs make it possible to add functionality without disturbing existing features–but in my experience there are no permanently closed abstractions. Superclasses or APIs might be stable for a (relatively) long time, but eventually even the most fundamental classes and interfaces need updating to meet emerging needs.

(Kent Beck, “The Open/Closed/Open Principle”)

Even Bob Martin, who stated “In many ways this principle is at the heart of object oriented design”, recognized its limits:

It should be clear that no significant program can be 100% closed… In general, no matter how “closed” a module is, there will always be some kind of change against which it is not closed.

(from “The Open-Closed Principle”)
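
A minimal sketch of what both quotes are getting at (a hypothetical shapes example, not drawn from either post): extension handles the changes the abstraction anticipated, and nothing protects it from the ones it didn’t.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        ...

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        self.side = side

    def area(self) -> float:
        return self.side ** 2

def total_area(shapes: list[Shape]) -> float:
    # Open for extension: supporting a new shape means adding a subclass;
    # this function and the existing shapes stay untouched (closed).
    return sum(shape.area() for shape in shapes)

# But when a new requirement demands that every shape also report a perimeter,
# no new subclass helps -- the Shape abstraction itself must change, which is
# exactly the kind of change no module can be fully closed against.
```

In other words, the design is closed only against the changes it anticipated; against everything else, it is as open as any other code.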

The takeaway is that an architect should be pragmatic, not dogmatic. Understanding the “why” behind a “rule” allows you to know when it doesn’t apply. Knowing the trade-offs allows you to make informed, rational decisions that are consistent with the needs at hand. Blindly adhering to received wisdom is just magical thinking, and its value is more a function of chance than of reality. By the same token, understanding why you’re acting contrary to common practice marks the difference between boldness and gambling.

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day.

(Ralph Waldo Emerson, “Self Reliance”)