No, Uncle Bob, No – the Obligatory healthcare.gov Post

Good for what ails you?

I tried to avoid this one. First of all, I don’t do politics on this site and this topic has way too much political baggage. Second, a great many people have already written about it, so I didn’t think I really had anything to add.

Then, Uncle Bob Martin chimed in.

I agree with some of what he has to say. I have no doubt that this particular debacle has harmed the image of software development in the eyes of the general public. Then he falls over the edge, comparing the launch of healthcare.gov to the Challenger disaster on the grounds that, in both cases, political considerations overrode technical concerns. Even so, Bob lays the blame on those far down the ladder:

Perhaps you disagree. Perhaps you think this was a failure of government, or of management. Of course I agree. Government failed and management failed. But government and management don’t know how to build software. We do. We were hired because of that knowledge. And we are expected to use that knowledge to communicate to the managers and administrators who don’t have it.

The thing is, the Centers for Medicare and Medicaid Services (CMS) is both a government agency and the system integrator on the healthcare.gov project. While there’s plenty of evidence of really poor code across the various parts, the integration of those parts is where the project fell down. Had the various contractors hired numerous Bob Martin clones and produced the cleanest of clean code, the result would still have been the same.

Those with the technical knowledge and experience are, without a doubt, obligated to provide their best advice to the managers and administrators. When those managers and administrators ignore that advice, however, the fault lies with them, not with those whose advice was ignored.

The end of the post, however, is the worst:

So, if I were in government right now, I’d be thinking about laws to regulate the Software Industry. I’d be thinking about what languages and processes we should force them to use, what auditing should be done, what schooling is necessary, etc. etc. I’d be thinking about passing laws to get this unruly and chaotic industry under some kind of control.

If I were the President right now, I might even be thinking about creating a new Czar or Cabinet position: The Secretary of Software Quality. Someone who could regulate this misbehaving industry upon which so much of our future depends.

Considering that all indications are that the laws and regulations around government purchasing and contracting contributed to this mess, I’m not sure how additional regulation is supposed to fix it. Likewise, it’s a little boneheaded to suggest that those responsible for this debacle (by attempting to manage what they should have known they were unqualified to manage) should now regulate the entire software development industry. In fact, the very diversity of the industry should make it obvious that a one-size-fits-all mandate would make matters irretrievably worse.

Handing out aspirin to treat Ebola is just bad medicine.

Mixed Signals and Messed-Up Metrics

yes...no...um, maybe?

Dentistry has made me a liar.

One of my tasks each morning is to make sure my youngest son brushes his teeth. Someone, somewhere decided that two minutes of tooth brushing will ensure optimal oral hygiene, a target our dentist has duly passed along to our very bright, but very literal, six-year-old. Every morning, when he has thoroughly brushed and I give him the go-ahead to rinse, he asks “Dad, was that two minutes?”, to which I reply “yes, yes it was”, regardless of how long it took. I’m a horrible person, yes, but trying to explain the nuances to him at this age would be a pain on par with trimming my nails with a chainsaw – the lie works out better for all involved.

My daily moral dilemma has a very common source – metrics are frequently signals of a condition rather than the condition itself. When you cannot use a direct measure (e.g. number of widgets per hour), the usual move is to substitute a proxy that indicates the desired condition (at least that’s the plan). A simplistic choice of proxy, however, leads to metrics that fail to hold up. Two minutes spent brushing well should yield very good results; inadequate brushing, however, will always be inadequate, no matter how long you spend doing it. This is a prime example of what Seth Godin termed “…measuring what’s easy to measure as opposed to what’s important”.

Examples of these types of measures in software development are well-known:

  • Lines of Code: Rather than productivity, this just measures typing. Given two methods equal in every other way, would the longer one be better? (See the sketch after this list.)
  • Bug Counts: Not all bugs are created equal. One substantive bug can outweigh thousands of cosmetic ones.
  • Velocity/Turn Time: Features and (again) bugs are not created equal. Complexity, both business and technical, as well as clarity of the problem, tend to have more impact on time to complete than effort expended or size.
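
To make the lines-of-code point concrete, here’s a minimal sketch (the task and function names are invented for illustration): two equivalent Python functions, where the longer one scores several times “better” on a LOC metric while delivering exactly the same value.

```python
# Two functionally equivalent ways to sum the squares of the even
# numbers in a list. A lines-of-code metric scores the first version
# as roughly four times more "productive" than the second.

def sum_even_squares_verbose(numbers):
    total = 0
    for number in numbers:
        if number % 2 == 0:
            square = number * number
            total = total + square
    return total

def sum_even_squares(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

# Same inputs, same result - only the line count differs.
assert sum_even_squares_verbose([1, 2, 3, 4]) == sum_even_squares([1, 2, 3, 4]) == 20
```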

As John Sonmez noted in “We Can’t Measure Anything in Software Development”: “We can track the numbers, but we can’t draw any good conclusions from them.”

There are a number of reasons these measures are unreliable. First is the tenuous tie, noted above, between the measures and what they hope to represent. Second is the phenomenon known as Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure”. In essence, when people know that a certain number is wanted or expected, the system will be gamed to achieve that number. Most importantly, however, the desired result is value, not effort. In manufacturing, more widgets per hour means greater profits (assuming sufficient demand). In software development, excess production is more likely to yield excess risk.

None of this is to suggest that metrics, including those above, are useless. What is important, however, is not the number, but what the number signals (particularly when metrics are combined). Increasing lines of code over time coupled with increasing bug counts and/or decreasing velocity may signal increased complexity/technical debt in your codebase, allowing you to investigate before things reach critical mass. Capturing the numbers to use as an early warning mechanism will likely bear much more fruit than using them as a management tool, where they likely become just a lie we tell ourselves and others.
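
As a minimal sketch of that early-warning idea (the metric names, numbers, and trend test below are all invented for illustration), the key point is that no single number triggers the alarm – it’s the combination of trends that signals:

```python
# Hypothetical per-iteration snapshots of three metrics. None of them
# means much alone; together, a sustained trend can signal trouble.
snapshots = [
    {"loc": 50_000, "open_bugs": 40, "velocity": 30},
    {"loc": 56_000, "open_bugs": 55, "velocity": 26},
    {"loc": 63_000, "open_bugs": 75, "velocity": 21},
]

def rising(values):
    """True if every snapshot is higher than the one before it."""
    return all(b > a for a, b in zip(values, values[1:]))

def falling(values):
    """True if every snapshot is lower than the one before it."""
    return all(b < a for a, b in zip(values, values[1:]))

loc = [s["loc"] for s in snapshots]
bugs = [s["open_bugs"] for s in snapshots]
velocity = [s["velocity"] for s in snapshots]

# The signal is the combination of trends, not any single target number.
if rising(loc) and rising(bugs) and falling(velocity):
    print("Early warning: growing code, rising bugs, falling velocity - "
          "worth investigating for complexity/technical debt.")
```

In practice the trend test would be fuzzier than strict monotonicity, but the shape of the approach – watch combinations, don’t chase targets – is the same.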