Form Follows Function on SPaMCast 467

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 467, features Tom’s excellent essay on value (value is one of those simple-seeming but complex concepts). Jeremy Berriault’s QA corner covers testing in difficult circumstances. I bat cleanup with a Form Follows Function segment discussing my post “Management, Simple and Wrong – Semantics, Systems, and Self-Correction”.

In this installment, Tom and I talk about a post that came about not by design, but because I got into a conversation on Twitter that played out along the lines of the post I had actually intended to write (dealing with organizations as complex, multi-level systems). When life hands you your topic, it’s generally a good idea to chase it to its conclusion!

You can find all the SPaMCast episodes I’m in under the SPaMCast Appearances category on this blog. Enjoy!


Management, Simple and Wrong – Semantics, Systems, and Self-Correction

Villain Caricature

Simple responses to complex situations are both seductive and dangerous. The difficulty in juggling lots of variables tempts us to employ abstraction so as to avoid being overwhelmed. Abraham Maslow’s observation, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail”, applies. Some things (e.g. landmines) react badly to being treated as if they were nails. Having more tools in the box may help avoid problems.

This isn’t the post I had in mind to write next, but it’s one that came about by accident (via a multi-day, mass-participant Tweet-storm, with my participation beginning here). I had planned an Organizations as Systems post on multiple players in multiple contexts (with competing, and possibly conflicting, goals and motivations), but I stumbled into a conversation that should provide a nice preamble to that post, which will follow this one.

Before I dive in, two quick notes:

  • Rather than try to summarize the entire conversation, I’m going to lay out what I brought to and took from it. There are far too many tweets and, as of this writing, I can’t be sure the conversation has concluded.
  • My thanks to everyone involved, whether named or not. This kind of civil, if contentious, dialog is much appreciated. When ideas rub together, it can produce irritation, but sometimes they also get polished.

Management is one of those things that, like landmines, tends to react badly to the hammer of simplistic thought. We can see this in managers who apply (or misapply) theories of management, particularly ones like scientific management (AKA Taylorism), in contexts where they are extremely inappropriate and counterproductive. Whether there really exists a context where Taylorism is actually appropriate or productive is a question for another day. We also see the hammer in reactions to abuses that dismiss all value of management out of hand. While the reaction is understandable, that doesn’t make it credible. The vicious circle just becomes more vicious; heat is generated without corresponding light.

One thing that’s necessary to pin down is what we mean by the term “management”. Are we talking about a concept (“…the administration of an organization…”)? Are we talking about the job/profession? Perhaps the discipline (branch of knowledge) or academic discipline (field of study) is what we’re talking about. We could be talking about a theory of management, or we could be talking about management practices, either individually or grouped into systems of management. Knowing specifically what’s being referred to is critical for evaluating statements. A very valid criticism of a specific theory or system (e.g. Taylorism) will likely fall apart when applied to the concept as a whole, because the concept is far broader and contrary examples are easily found.

Another issue relates to intent. Few would dispute that poor management practices are universally detrimental. Extracting the maximum possible effort from your employees is unlikely to result in the generation of the most value in the context of knowledge work. These practices are the very antithesis of fitness for purpose in that they do not materially benefit the organization and they alienate employees (which is yet another hit to the organization where the product is knowledge work). And yet, there are still managers who manage in that very manner. Are they, each and every one, evil? A simplistic answer, hard against either end of the spectrum, is almost surely going to be wrong. That being said, in my experience the distribution is skewed heavily towards the “no” side (just as I’ve found people who only perform when driven to it to be a very small minority).

Why would someone who wants to do their job well and in an ethical manner resort to practices that are harmful to all parties? With sadism eliminated as a motivation (there just aren’t enough sadists in the population to account for all the positions to be filled), the far more plausible answer would be culture, tradition, and/or a lack of knowledge regarding alternatives. In short, when the outcome of a system doesn’t match the intent, there’s a bug in the system.

The disconnect between leadership and management is also a problem. Leadership, admittedly, is a concept distinct from management. It makes sense that not every leader needs to be a manager. The extent to which we as a society tolerate management absent leadership, however, is shocking and part of the problem. Consider a tweet from Esther Derby:

I would argue that steering and enabling can be considered leadership qualities as much as management activities. There’s a place for supervision and compliance, but knowing how to achieve results without forcing the issue is, in my opinion, an extremely useful skill. This is not manipulation; rather, it’s a matter of understanding goals and how to achieve them intelligently. It’s a matter of understanding how to resolve potential conflicts between the goals and motivations of an organization, groups, and individuals, and adapting the system so that the outcomes more closely track the intent. The alternative is allowing the system to degenerate into a web of perverse incentives that increase the gap between intent and outcome. This gap may benefit some individuals, but at a cost to other individuals and to the organization as a whole.

Medicine is a discipline that has been through a number of changes, large and small, because it found ways to adapt. While the concept of medicine (diagnosis, treatment, and prevention of disease and injury) has remained constant over time, its practices and theories have evolved greatly. The discipline itself has evolved so that it not only adapts to change, but adapts in as optimal a manner as possible. In short, it has developed a culture of learning.

Understanding organizations from a systems standpoint means recognizing the need for sensing the fit between the system and its contexts (learning) and then steering to correct any mismatches (management). Simplistic approaches to management (particularly relatively static ones that have little save tradition to recommend them) can only lead to a widening gap between the intended outcome and actual results. At some point, this gap becomes wide enough to swallow the organization.
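As a purely illustrative aside (a toy sketch of my own, not part of the original post), the sense-and-steer loop can be expressed as a trivial feedback controller. Every name and number below is invented; the point is only that a system which senses the gap and corrects for it converges on its intent, while a static one never closes the gap.

    def sense_fit(intent, outcome):
        """Learning: measure the mismatch between intended and actual results."""
        return intent - outcome

    def steer(outcome, gap, responsiveness):
        """Management: apply a correction proportional to the sensed gap."""
        return outcome + responsiveness * gap

    intent, outcome = 100.0, 60.0
    for cycle in range(5):
        gap = sense_fit(intent, outcome)
        outcome = steer(outcome, gap, responsiveness=0.5)
        print(f"cycle {cycle}: outcome={outcome:.1f}, remaining gap={intent - outcome:.1f}")

    # With responsiveness > 0 the gap shrinks every cycle; with responsiveness = 0
    # (a static, tradition-only approach), the gap never closes.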

[Villain Caricature by J.J. via Wikimedia Commons.]

First Do No Harm – the Practice of Software Development

Medieval Anatomy Illustration

Analogies are never perfect, but reading Erik Dietrich’s “Do Programmers Practice Computer Science?” brought one to mind. Software development has much in common with the practice of medicine. Software development, like medicine, involves the application of knowledge. Also like medicine, this application is made complex by considerations of context. Yet another commonality is that in both disciplines there are (or, at least, should be) limits regarding experimentation.

Erik’s post used the following comparison of developers to electricians:

Let’s consider three actors in the realm of physics, as a science.

  1. A physicist, who runs electricity through things to see if they explode.
  2. An electrical engineer, who takes the knowledge of what explodes from the physicist and designs circuitry for houses.
  3. An electrician, who builds houses using the circuits designed by the electrical engineer.

I list these out to illustrate that there are layers of abstraction on top of actual science. Is an electrician a scientist, and does the electrician use science? Well, no, not really. His work isn’t advancing the cause of physics, even if he is indirectly using its principles.

Let’s do a quick exercise that might be a bit sobering when we think of “computer science.” We’ll consider another three actors.

  1. Discrete mathematician, looking to win herself a Fields medal for a polynomial time factoring algorithm.
  2. R&D programmer, taking the best factoring algorithms and turning them into RSA libraries.
  3. Line of business programmer, securing company’s Sharepoint against script kiddies uploading porn.

Programming is knowledge work and non-repetitive, so the comparison is unfair in some ways. But, nevertheless, what we do is a lot more like what an electrician does than what a scientist does. We’re not getting paid to run experiments — we’re getting paid to build things.

There is definitely some validity in this. The three roles in each example have many similarities. His observation that development work is “non-repetitive”, however, is key. Electricians work in a more certain context than doctors, who may need to account for differences in body chemistry or metabolism. Likewise, developers may find that environmental factors (e.g. memory usage profile, network load, etc.) produce uncertainty in the course of their work. Whereas the plumbing and electrical systems in a house are mostly separate, biological systems and information systems tend to be more intertwined.

Another similarity between software development and the practice of medicine is the feedback loop. The physicist will never hear back from the electrician, but physicians doing research are not similarly removed from practitioners. Practice and theory in medicine have a chicken and egg relationship where neither is clearly dominant, but each influences the other. Likewise with software development. Ethics and practicality in both cases constrain pure research.

As Erik noted, developers are “…not getting paid to run experiments — we’re getting paid to build things”. That being said, the uncertainties mean that, like physicians, we can’t be positive about the exact outcome without trying a particular course of action (which isn’t really an experiment):

Like doctors, those involved in software development have an ethical obligation to let our “patients” know when we’re learning on the job and what the risks are (not to mention the obligation to try things that are in their best interests and not just something we want to test drive). In addition to considerations of professionalism, more open communication has its benefits. We can solve problems and advance the practice at the same time.

#NoEstimates – Questions, Answers, and Credibility

This is questioning?

A recent episode of Thomas Cagley’s SPaMCast featured Woody Zuill discussing #NoEstimates. During the episode, Woody talked about his doubts regarding the usefulness of estimating and the value of questioning processes rather than accepting them blindly.

I’m definitely a fan of pragmatism. Rather than relying on dogma, I believe we should be constantly evaluating our practices and improving those found wanting. That being said, to effectively evaluate our practices, we need to be able to frame the evaluation. It’s not enough to pronounce something is useful or not, we have to be able to say who is involved, what they are seeking, what the impact is on them, and to what extent the outcome matches up with what they are seeking. Additionally, being able to reason about why a particular practice tends to generate a particular outcome is critical to determining what corrective action will work. In short, if we don’t know what the destination looks like, we will have a hard time steering toward it.

In his blog post “Why do we need estimates?”, Woody (rightly, in my opinion) identifies estimates as a tool used for decision-making. He also lists a number of decisions that estimates can be used to make:

  • To decide if we can do a project affordably enough to make a profit.
  • To decide what work we think we can do during “a Sprint”. (A decision about amount)
  • To decide what work we should try to do during “a Sprint”. (A decision about priority or importance)
  • To decide which is more valuable to us: this story or that story.
  • To decide what project we should do: Project A or Project B.
  • To decide how many developers/people to hire and how fast to “ramp up”.
  • To decide how much money we’ll need to staff a team for a year.
  • To give a price, or an approximate cost so a customer can decide to hire us to do their project.
  • So we can determine the team’s velocity.
  • So marketing can do whatever it is they do that requires they know 6 months in advance exactly what will be in our product.
  • Someone told me to make an estimate, I don’t use them for anything.

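Taken on its own terms, the first decision in that list shows how an estimate feeds decision-making: it’s simple arithmetic on top of the estimate. Here’s a hypothetical sketch (mine, with every number invented):

    # Hypothetical illustration: the "can we do this profitably?" decision
    # is arithmetic on top of an estimate. All numbers are invented.
    estimated_effort_hours = 1200
    loaded_cost_per_hour = 95.0    # salary plus overhead
    offered_price = 150_000.0

    estimated_cost = estimated_effort_hours * loaded_cost_per_hour
    projected_margin = (offered_price - estimated_cost) / offered_price
    print(f"estimated cost: ${estimated_cost:,.0f}")    # $114,000
    print(f"projected margin: {projected_margin:.0%}")  # 24%

    # A decision rule (say, accept if the margin is at least 20%) is only as
    # good as the estimate feeding it -- which is the crux of the debate.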
What is missing, however, is an alternate way to make these decisions. It’s also missing from the follow-up post “My Customers Need Estimates, What Do I do?”. If the customer has a need, does it seem wise to ask them to abandon (not amend, but abandon) a technique without proposing something else? Even “let’s try x instead of y” is insufficient if we can’t logically explain why we expect “x” to work better than “y”. The issue is one of credibility, a matter of trust.

In her post “What Creates Trust in Your Organization?”, Johanna Rothman related her technique for creating trust:

Since then, I asked my managers, “When do you want to know my project is in trouble? As soon as I think I’m not going to meet my date; after I do some experiments; or the last possible moment?” I create trust when I ask that question because it shows I’m taking their concerns seriously.

After that project, here is what I did to create trust:

  1. Created a first draft estimate.
  2. Tracked my work so I could show visible progress and what didn’t work.
  3. Delivered often. That is why I like inch-pebbles. Yes, after that project, I often had one- or two-day deliverables.
  4. If I thought I wasn’t going to make it, use the questions above to decide when to say, “I’m in trouble.”
  5. Delivered a working product.

While I can’t say that Johanna’s technique is the optimal one for all situations, I can at least explain why I put some faith in it. In my experience, transparency, collaboration, and respect for my stakeholders’ needs tend to work well. Questions without answers? Not so much.

Cargo Cult Architecture

According to Mark Little, Red Hat VP of Engineering, the microservice backlash has arrived, coming from “people who were really pushing it at the beginning and who are now just starting to realise it’s not all sunshine and roses, or people who never felt the need for it at all”. The Twitterverse seems to agree:

This post, however, is less about microservices, and more about what their rise and fall (and, no doubt, recovery as we violently discover equilibrium) says about software development as a discipline.

As Sander Mak observed in his post “On monoliths, microservices and critical thinking” (h/t Paul Bakker):

What does it mean if public software engineering opinion flips 180 degrees in a matter of weeks? It’s too easy to chalk it all up to people needing authority figures. Yes, I know: not everybody was all over microservices. But you have to admit there’s something fundamentally unsound going on here.

This is hardly a new problem. The same Mark Little mentioned in the opening wrote an article for InfoQ almost three years ago titled “IT Values Technologies Over Thought” where he stated “If the people delivering the implementations that are supposed to be solutions to business problems aren’t looking beyond the hype and considering alternatives, especially when those alternatives may have been tried and tested for many years, then we are in for some very interesting times ahead”.

It’s a known problem. We even laugh at articles that trade on our tendency to jump from silver bullet to silver bullet (although I’m not sure if that laughter is based on sangfroid or fatalism).

It’s not even a problem that’s exclusively ours. An article in Forbes, “Why So Many Management Strategies Become Fads That Fade Away”, refers to it as “idea surfing”. When complexity, unrealistic expectations, cultural resistance, or poor fit leads to management souring on the current strategy du jour, there’s always a shinier object just down the road that promises to be the recipe for success.

According to “Rats Can Be Smarter Than People” in January’s Harvard Business Review, our predilection for easy answers is deeply rooted (emphasis added):

Our rule-based system was an evolutionary development: How do you tell if a berry is good for eating? You learn that this small red one is good, and then you save energy by bypassing the ones of a different shape or color. So our brains have been conditioned to look for rules. We’re taught them in school, at work, and by our parents, and we can make many good decisions by applying the ones we’ve learned. But in other situations there’s too much going on for simple rules to work, and that’s when information integration learning has to kick in. Think of a radiologist evaluating an X-ray. If you ask him what rules he uses to determine whether a spot is cancer, he’d probably have a hard time verbalizing them. He’s learned from labeled examples in medical school and his own experience, and then developed an instinct for identifying cancerous spots based on what he’s seen before. Another example that comes to mind is a manager interviewing a job candidate. There aren’t any hard-and-fast rules about who will be a good hire. You have to consider many factors and rely on your judgment or on a gut feeling based on your experience with people in the workplace. Unfortunately, there’s a great deal of evidence showing that humans have a harder time learning how to integrate information in this way, because they seek rules even when there are none.
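To make that contrast concrete, here’s a toy sketch of the hiring example from the quote. It’s my own illustration (not the HBR authors’), and the factors and weights are entirely invented: a hard-and-fast rule is easy to verbalize but brittle, while integrating weighted signals is closer to the “gut feeling” the quote describes.

    def rule_based_hire(candidate):
        # A single learned rule: easy to state, cheap to apply, brittle.
        return candidate["years_experience"] >= 5

    def integrated_hire(candidate, weights):
        # Integrate many weak signals at once; no single factor decides.
        score = sum(weights[k] * candidate[k] for k in weights)
        return score > 0.5

    candidate = {"years_experience": 3, "portfolio_quality": 0.9,
                 "communication": 0.8, "domain_knowledge": 0.7}
    weights = {"portfolio_quality": 0.4, "communication": 0.3,
               "domain_knowledge": 0.3}

    print(rule_based_hire(candidate))           # False: fails the hard rule
    print(integrated_hire(candidate, weights))  # True: strong combined signals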

In spite of how much it’s part of our nature, we have to overcome the desire for easy answers. No matter how many jumps we make, the magic recipe will not be found:

Ignore that last guy 😉

Software Development Practice and Theory – A Chicken and Egg Story

which came first?

The state of software development as a profession looks a lot like the practice of medicine in the 19th century. We have our great ones in the tradition of Pasteur, Nightingale, Jenner, Lister, and Semmelweis. In spite of our advances, there’s a sense that the state of practice is still largely ad hoc.

A LinkedIn discussion started by Peter Murchland, “Does theory lead practice or practice lead theory?”, illustrates the analogy (although the thrust of his question was directed towards enterprise architecture, the circumstances appear the same). To answer his question honestly, we have to say “both”. Established theory underlies many of our practices, and yet we often still find ourselves exploring uncharted territory at the edges. When the map runs out, we don’t have the option of stopping; we are required to press on, creating new theory and testing it at the same time.

Theory, where it exists, guides practice. Practice reinforces, refines, or refutes theory. When practice faces problems outside the domain of established theory, then the seeds of new theory are planted.

Like medicine, software development (and enterprise architecture) lacks the ability to conduct full-scale controlled experiments to falsify its theories. For reasons of both practicality and ethics, more indirect means are the best available recourse. Just as medicine is ultimately “proven” in practice, so too is software development. The fact that we are experimenting on our patients (customers) should give pause, particularly considering that the “first, do no harm” credo is not a fundamental part of our training.

Much has been written regarding whether software development is science or art, craft or engineering. In “Enterprise Architecture as Science”, Richard Veryard makes the claim that enterprise architecture is a discipline, though not a scientific one because “many knowledge-claims within the EA world look more like religion or mediaeval scholastic philosophy than empirically verifiable science”. This same description can be applied equally to the current state of software development practice. It should be noted that this same description could also have been applied to the practice of medicine well into the 19th century.

Software development is a relatively young discipline (as is enterprise architecture, for that matter) with an immature, incomplete body of theory behind it. Building, testing, and refining that body should be a priority; absent that testing and refining, it will have no claim to a scientific foundation. Until that foundation is much more solid, the emphasis must be on questioning rather than trusting. Nullius in verba (“on the word of no one”), the motto of the Royal Society, should be our motto as well.