Form Follows Function on SPaMCast 471

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

Last week’s episode, number 471, features Tom’s essay on the top 20 transformation killers. Jeremy Berriault’s QA corner is about involving testers in the requirements process. My Form Follows Function segment rounds out the podcast, covering my post “Systems Thinking Complicates Things”.

In this installment, Tom and I talk about how simplistic analysis is unlikely to fit a complex problem. We illustrate this by talking about the game of “rock, paper, scissors” (then graduate to “rock, paper, scissors, lizard, Spock”!). I don’t really think that’s what’s meant by game theory, but hey, it’s fun (and illustrative) nonetheless.

You can find all the SPaMCast episodes I’m in under the SPaMCast Appearances category on this blog. Enjoy!

Form Follows Function on SPaMCast 467

SPaMCAST logo

It’s time for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 467, features Tom’s excellent essay on value (value is one of those simple-seeming, but complex concepts). Jeremy Berriault’s QA corner covers testing in difficult circumstances. I bat cleanup with a Form Follows Function segment discussing my post “Management, Simple and Wrong – Semantics, Systems, and Self-Correction”.

In this installment, Tom and I talk about a post that came about, not intentionally, but because I got into a conversation on Twitter that played out along the lines of the post I really intended to write (dealing with organizations as complex, multi-level systems). When life hands you your topic, it’s generally a good idea to chase it to the conclusion!

You can find all the SPaMCast episodes I’m in under the SPaMCast Appearances category on this blog. Enjoy!

Form Follows Function on SPaMCast 463

SPaMCAST logo

I’m back for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 463, features Tom’s essay on big picture stories. This is followed by our Form Follows Function segment discussing my post “Management, Simple and Wrong – Semantics, Systems, and Self-Correction”. Jeremy Berriault’s QA corner finishes the cast with a segment on motivating testers.

In this installment, Tom and I talk about the way people approach complex (and often emotionally charged) subjects like management. Semantics, defining what we mean, is critical to keeping discussions on track. Likewise, we need to be able to differentiate between a concept, differing theories around that concept, and just plain poor practice. Simplistic mental models are likely to generate much more heat than light. It’s not often that something good starts off with “I got into a discussion on Twitter”, but this episode is the exception to the rule!

You can find all the SPaMCast episodes I’m in under the SPaMCast Appearances category on this blog. Enjoy!

Systems Thinking Complicates Things

4th UK Rock Paper Scissors Championships by James Bamber via Wikimedia


I’ve had the honor and pleasure of appearing as a regular on Tom Cagley’s SPaMCast podcast for almost three years now. Before I write one of my “Form Follows Function on SPaMCast x” posts, I always listen to the podcast to make sure the summary is right (the implication being that relying purely on my memory won’t be). I got a bonus while writing up last week’s appearance, because Tom asked an excellent question that deserved its own post: does thinking about a problem (legacy systems, in the case of last week’s discussion) holistically/systemically complicate things?

Abso-freakin’-lutely.

It is much easier to avoid all the twists, turns, and possibilities inherent in systems thinking. A simpler approach, picking one lever to pull or one button to push, makes it far easier to come up with a solution.

It just doesn’t work very well at coming up with solutions that actually work.

When there is a mismatch in complexity between the problem and solution architectures, that mismatch becomes an additional problem to deal with. This applies both when the solution is more complex than the problem space warrants and when it is simpler. Solutions that fail to account for the context they will encounter are vulnerable. This is the idea behind the quote attributed to Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

Human nature can push us to fix problems quickly, and quick generally equates to simple. It takes time to analyze the angles and consider the alternatives. How often have you seen people ask for “the best” way to do something, absent any context? How often have you seen people ask “why would someone ever do that?”

I’ll answer that by asking three questions:

  • since Rock beats Scissors, why would anyone ever choose Scissors?
  • since Paper beats Rock, why would anyone ever choose Rock?
  • since Scissors beats Paper, why would anyone ever choose Paper?

Reality isn’t binary. It’s not about what’s “best”, it’s about what’s fit for purpose in a given context, and there are lots and lots of contexts out there.
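To make that concrete, here’s a minimal sketch (in TypeScript, purely for illustration) of the game’s cyclic structure: every choice beats one alternative and loses to another, so there is no context-free “best” choice.

```typescript
// Illustrative only: each choice beats exactly one other and loses to one other,
// so no single choice dominates across every context (i.e., every opponent).
type Choice = "rock" | "paper" | "scissors";

const beats: Record<Choice, Choice> = {
  rock: "scissors",
  paper: "rock",
  scissors: "paper",
};

function winner(a: Choice, b: Choice): Choice | "tie" {
  if (a === b) return "tie";
  return beats[a] === b ? a : b;
}

const choices: Choice[] = ["rock", "paper", "scissors"];
for (const mine of choices) {
  const outcomes = choices.map((theirs) => winner(mine, theirs));
  console.log(mine, "vs each choice ->", outcomes.join(", "));
}
// Every choice wins once, loses once, and ties once; which one is "fit for
// purpose" depends entirely on context (what the other player throws).
```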

This isn’t to say that all quick, simple interventions are wrong. If you find yourself in a house fire, more action and less comprehensive deliberation may well be in order. The key is matching the cost (largely in terms of time) of defining the problem space with the cost (in terms of both effort and the risk the intervention adds to the problem) of crafting the solution.

Rock, Paper, Scissors, Lizard, Spock rules diagram

It’s almost guaranteed that the system contexts we deal with (both technical and social) will evolve toward more and more complexity. Surprises will emerge as a matter of course. We don’t need to create more of them by failing to take a holistic view when we have the time to do so.

Management, Simple and Wrong – Semantics, Systems, and Self-Correction

Villain Caricature

Simple responses to complex situations are both seductive and dangerous. The difficulty in juggling lots of variables tempts us to employ abstraction so as to avoid being overwhelmed. Abraham Maslow’s observation, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail”, applies. Some things (e.g. landmines) react badly to being treated as if they were nails. Having more tools in the box may help avoid problems.

This isn’t the post I had in mind to write next, but it’s one that came about by accident (via a multi-day, mass-participant Tweet-storm, with my participation beginning here). I had planned an Organizations as Systems post re: multiple players in multiple contexts (with competing, and possibly conflicting, goals and motivations), and I stumbled into a conversation that should provide a nice preamble to that post, which should follow this one.

Before I dive in, two quick notes:

  • Rather than try to summarize the entire conversation, I’m going to lay out what I brought to and took from it. There are far too many tweets and, as of this writing, I can’t be sure the conversation has concluded.
  • My thanks to everyone involved, whether named or not. This kind of civil, if contentious, dialog is much appreciated. When ideas rub together, there can be irritation, but sometimes they also get polished.

Management is one of those things that, like landmines, tends to react badly to the hammer of simplistic thought. We can see this in managers who apply (or misapply) theories of management, particularly ones like scientific management (AKA Taylorism), to contexts where they are extremely inappropriate and counterproductive. Whether there really exists a context where Taylorism is actually appropriate or productive is a question for another day. We also see the hammer in reactions to abuses that dismiss all value of management out of hand. While the reaction is understandable, that doesn’t make it credible. The vicious circle just becomes more vicious; heat is generated, but without corresponding light.

One thing that’s necessary to pin down is what we mean by the term “management”. Are we talking about a concept (“…the administration of an organization…”)? Are we talking about the job/profession? Perhaps the discipline (branch of knowledge) or academic discipline (field of study) is what we’re talking about. We could be talking about a theory of management, or we could be talking about management practices, either individually or grouped into systems of management. Knowing specifically what’s being referred to is critical for evaluating statements. A very valid criticism of a specific theory or system (e.g. Taylorism) will likely fall apart when applied to the concept as a whole, because the concept is far broader and contrary examples are easily found.

Another issue relates to intent. Few would dispute the universal detriment of poor management practices. Extracting the maximum possible effort from your employees is unlikely to result in the generation of the most value in the context of knowledge work. These practices are the very antithesis of fitness for purpose in that they do not materially benefit the organization and they alienate employees (which is yet another hit to the organization where the product is knowledge work). And yet, there are still managers who manage in that very manner. Are they, each and every one, evil? A simplistic answer, hard against either end of the spectrum, is almost surely going to be wrong. That being said, in my experience the distribution is skewed more toward the “no” side than not (just as I’ve found people who only perform when driven to it to be a very small minority).

Why would someone who wants to do their job well and in an ethical manner resort to practices that are harmful to all parties? With sadism eliminated as a motivation (there just aren’t enough sadists in the population to account for all the positions to be filled), the far more plausible answer is culture, tradition, and/or a lack of knowledge regarding alternatives. In short, when the outcome of a system doesn’t match the intent, there’s a bug in the system.

The disconnect between leadership and management is also a problem. Leadership, admittedly, is a concept distinct from management. It makes sense that not every leader needs to be a manager. The extent to which we as a society tolerate management absent leadership, however, is shocking and part of the problem. Consider a tweet from Esther Derby:

I would argue that steering and enabling can be considered leadership qualities as much as management activities. There’s a place for supervision and compliance; however, knowing how to achieve results without forcing the issue is, in my opinion, an extremely useful skill. This is not manipulation, but rather a matter of understanding goals and how to achieve them intelligently. It’s a matter of understanding how to resolve potential conflicts between the goals and motivations of an organization, groups, and individuals, and adapting the system so that the outcomes more closely track the intent. The alternative is allowing the system to degenerate into a web of perverse incentives that increase the gap between intent and outcome. This gap may benefit some individuals, but at a cost to other individuals and the organization as a whole.

Medicine is something that has been through a number of changes, large and small, and has handled them by finding ways to adapt. While the concept of medicine (diagnosis, treatment, and prevention of disease and injury) has remained constant over time, the practices and theories have evolved greatly. The discipline itself has evolved so that not only does it adapt to change, but it adapts in as optimal a manner as possible. In short, it has developed a culture of learning.

Understanding organizations from a systems standpoint means recognizing the need for sensing the fit between the system and its contexts (learning) and then steering to correct any mismatches (management). Simplistic approaches to management (particularly relatively static ones that have little save tradition to recommend them) can only lead to a widening gap between the intended outcome and actual results. At some point, this gap becomes wide enough to swallow the organization.

[Villain Caricature by J.J. via Wikimedia Commons.]

Regulating Software Development

'Belvidere Street construction, pouring concrete', Library of Virginia


Another weekend, another too-good-to-pass-up Twitter conversation during my “unplugged” time. This weekend, Grady Booch hooked me by retweeting Mike Potts’ tweet:

Mike’s tweet was a reply to Grady’s comment on the latest news out of Uber:

It’s an understandable question. It’s a reasonable question. It’s one that came up back during the healthcare.gov fiasco and it’s one raised by Volkswagen’s recent criminal misconduct.

However, when contemplating fixing a problem, we need to be extremely mindful of the potential for creating harm as a result of the “fix”. In particular, we should be wary of creating harm out of proportion to any good we do (i.e. we don’t want to kill roaches by burning down the house). I chose the image at the top to illustrate something key to this discussion – changing laws (the software of our meta-enterprise) is only slightly harder than moving a roadway once it’s been laid down.

Now for the caveats:

  • I do my utmost to avoid politics on this site – I really doubt you’re looking to me for guidance or even just my opinion. I’m not intending this post as a political statement. I’m not asserting that government is never the answer, merely that it’s a rather blunt instrument that we need to use with care.
  • I agree with Grady and Mike that those who took part in this are a disgrace. Moreover, I believe everyone involved, top to bottom, needs to be prosecuted and, if convicted, punished to the fullest extent of the law.
  • My tl;dr position is this: if we have regulation, it should be effective and without avoidable harmful side effects.

As I noted above, it’s human nature to respond to problems with some proposal to fix them. It also seems to be human nature to respond in a manner that doesn’t necessarily deal with an issue from a systemic perspective. We tend to allow ourselves to concentrate on the need to “do something” and ignore the hard work of making sure that what we do is effective (and doesn’t cause further problems). In other words, we put band-aids on bullet wounds.

In the cases of both VW and Uber, the conduct alleged is criminal. We could pass new laws making it a crime to commit a crime, but that seems to be an exercise in recursive futility. If the potential penalty in the first case was insufficient to induce compliance, should we really believe adding another layer will make it better?

An element present in both cases is that the illegal conduct involved creating software to help avoid detection of the fact that the company was breaking another law. Regulatory pressures, coupled with a corrupt culture, can create perverse incentives to cheat. This does not in any way excuse the conduct, particularly in the case of VW. It is, however, one of the systemic factors that should be taken into account.

In my experience, the most effective compliance program is one where compliance is the path of least resistance. Self-imposed compliance cannot fail to be more effective than compliance enforced externally. Corrupt agents will still violate the rules, but ideally you want to make it so that the lazy way out is the desired behavior.

Another aspect of regulation that comes up is something along the lines of professional standards similar to those of attorneys, accountants, and doctors. Increasing the level of professionalism is laudable, but would it be an effective response to the issue of criminal misconduct? Additionally, assuming it was legally enforced, what would the cost be? Everything from administration of the program to salary increases would introduce new costs and would likely affect the pace of innovation (due to the impact on both supply and demand). Again, without justifying the conduct, what was Uber’s motivation for developing code to defeat detection by regulators?

I can well imagine other potential issues with a regulatory regime that requires a license to code. Not only would commercial innovation suffer, but the effects on the Open Source community could be disastrous if the licensing regime were expensive.

Doing “something” is easy. Doing something effective is a bit harder. I’m all aboard for punishing the guilty (each and every one), but we should move carefully when considering actions that might be more difficult to undo.

I fought the law (of unintended consequences) and the law won

https://twitter.com/jetpack/status/714972957269839874

Sometimes, what seemed to be a really good idea just doesn’t turn out that way in the end.

In my opinion, a lack of a systems approach to problem solving makes that type of outcome much more likely. Simplistic responses to issues that fail to deal with problems holistically can backfire. Such ill-considered solutions not only fail to solve the original problem, but often set up perverse incentives that can lead to new problems.

An article on The Daily WTF last week, “Just the fax, Ma’am”, illustrates this perfectly. In the article, an inflexible and time-consuming database change process (layered on top of the standard change management process) leads to the “reuse” of an existing, but obsolete, field in the database. Using a field labeled “Fax” for an entirely different purpose is far from “best practice”, but following the rules would mean being seen as responsible for delaying a release. This is an example of a moral hazard, such as Tom Cagley discussed in his post “Some Moral Hazards In Software Development”. Where the cost of taking a risk is not borne by the party deciding whether to take it, potential for abuse abounds. The risk becomes particularly acute when the person taking shortcuts can claim a “moral” rationale for doing so (such as “getting it done” for the customer).
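Purely as an illustration (the names below are hypothetical, not taken from the Daily WTF article), the anti-pattern looks something like this:

```typescript
// Hypothetical sketch of the anti-pattern (names invented, not from the article):
// an obsolete column quietly repurposed to dodge a painful change process.
interface CustomerRecord {
  id: number;
  name: string;
  // The schema still says "fax", but the application now stores the customer's
  // preferred contact channel here ("email" | "sms" | "phone"). The real meaning
  // lives only in tribal knowledge, not in the schema.
  fax: string;
}

function preferredChannel(customer: CustomerRecord): string {
  return customer.fax; // "works", but every future reader inherits the confusion
}
```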

None of this is to suggest that change management isn’t a worthy goal. In fact, the worthier the goal, the greater the danger of creating an unintended consequence, because it’s so easy to conflate argument over means with disagreement regarding ends. If you’re not in favor of being strip-searched on arrival at and departure from work, that doesn’t mean you’re anti-security. Nonetheless, the danger of that accusation being made will likely resonate for many. When the worthiness of the goal forestalls, or even just hinders, examination of the effectiveness of the methods, that effectiveness is likely to suffer.

Over the course of 2016, I’ve published twenty-two posts, counting this one, in the category Organizations as Systems. The fact that social systems are less deterministic than software systems only reinforces the need for intentional design. When foreseeable abuses are not accounted for, their incidence becomes more likely. Whether the abuse results from personal pettiness, doctrinal disagreements, or even just clumsy design like the change management process described above is irrelevant. In all of those cases, the problem is the same: decreased respect for institutional norms. Studies have found that “…corruption corrupts”:

Gächter has long been interested in honesty and how it manifests around the world. In 2008, he showed that students from 16 cities, from Riyadh to Boston, varied in how likely they were to punish cheaters in their midst, and how likely those cheaters were to then retaliate against their castigators. Both qualities were related to the values of the respective cities. Gächter found that the students were more likely to tolerate free-loaders and retaliate against do-gooders if they came from places whose citizens took a more relaxed view on tax evasion or fare-dodging, or had less trust in their courts and police.

If opinions around corruption and rule of law can affect people’s reactions to dishonesty, Gächter reasoned that they surely affect how honest people are themselves. If celebrities cheat, politicians rig elections, and business leaders engage in nepotism, surely common citizens would feel more justified in cutting corners themselves.

Taking a relaxed attitude toward the design of a social system can result in its constituents taking a relaxed attitude toward those aspects of the system that are inconvenient to them.

Form Follows Function on SPaMCast 411

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 411, features Tom’s essay on Servant Leadership (which I highly recommend), John Quigley on managing requirements as a part of product management, a Form Follows Function installment based on my post “Organizations as Systems – ‘Uneasy Lies the Head that Wears the Crown'”, and Kim Pries on software craftsmanship.

Tom and I discuss the danger of trying to use simplistic explanations for the interactions that make up complex human systems. No one has the power to force things in a particular direction; rather, the direction comes about as a result of the actions and interactions of everyone involved. It might be comforting to believe that there’s one single lever for change, but it’s wrong.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

NPM, Tay, and the Need for Design

Take a couple of seconds and watch the clip in the tweet below:

https://twitter.com/jetpack/status/713320642616156161

While it would be incredibly difficult to predict that exact outcome, it is also incredibly easy to foresee that it’s a possibility. As the saying goes, “forewarned is forearmed”.

Being forewarned and forearmed is an important part of what an architect does. An architect is supposed to focus on the architecturally significant aspects of a system. I like to use Ruth Malan’s definition of architectural significance due to its flexibility:

Decisions (both those that were made and those that were left unmade) that end up taking systems offline and causing very public embarrassment are, in my opinion, architecturally significant.

Last week, two very public, very foreseeable failures took place: first was the chaos caused by a developer removing his modules from NPM, which was followed by Microsoft having to pull the plug on its Tay chatbot when it was “trained” to spew offensive comments in less than 24 hours. In my opinion, these both represented design failures resulting from a lack of due consideration of the context in which these systems would operate.

After all, can anyone really claim that no one would expect that people on the internet would try to “corrupt” a chatbot? According to Azeem Azhar as quoted in Business Insider, not really:

“Of course, Twitter users were going to tinker with Tay and push it to extremes. That’s what users do — any product manager knows that.

“This is an extension of the Boaty McBoatface saga, and runs all the way back to the Hank the Angry Drunken Dwarf write in during Time magazine’s Internet vote for Most Beautiful Person. There is nearly a two-decade history of these sort of things being pushed to the limit.”

The current claim, as reported in CIO.com, is that Tay was developed with filtering built-in, but there was a “critical oversight” for a specific kind of attack. According to the article, it’s believed that the attack vector involved asking Tay to “repeat after me”.
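As a purely hypothetical sketch (not Microsoft’s actual implementation), the kind of gap being described looks like a bot that filters what it generates but echoes “repeat after me” input verbatim, leaving its most obvious abuse case unguarded:

```typescript
// Hypothetical sketch of the reported oversight -- not Microsoft's actual code.
const blockedTerms = ["placeholder-slur-1", "placeholder-slur-2"]; // stand-in list

function isAcceptable(text: string): boolean {
  return !blockedTerms.some((term) => text.toLowerCase().includes(term));
}

declare function generateReply(message: string): string; // stand-in for the model

function reply(message: string): string {
  const lower = message.toLowerCase();

  // The echo path: raw user input is repeated verbatim, skipping the output
  // filter entirely -- the abuse case the design reportedly didn't account for.
  if (lower.startsWith("repeat after me ")) {
    return message.slice("repeat after me ".length);
  }

  // The normal path: generated output is checked before it goes out.
  const generated = generateReply(message);
  return isAcceptable(generated) ? generated : "Let's talk about something else.";
}
```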

Or, as Matt Ballantine put it:

https://twitter.com/jetpack/status/713012721218883585

Likewise, who could imagine issues with a centralized repository of cascading dependencies? Failing to consider what would happen if someone suddenly pulled one of the bottom blocks out led to a huge inconvenience to anyone depending on that module or any downstream module. There’s plenty of blame to go around: the developer who took his toys and went home, those responsible for NPM’s design, and those who depended on it without understanding its weaknesses.

“The Iron Law of Tools” is “that which does for you will also do to you”. Understanding the trade-offs allows you to plan for risk mitigation in advance. Ignoring them merely ensures that they will have to be dealt with in crisis mode. This is something I covered in a previous post, “Dependency Management is Risk Management”.
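As a rough illustration of why the “bottom block” matters (a toy TypeScript sketch, not NPM’s actual tooling or data), counting which packages transitively reach a given module shows how one tiny dependency can become a single point of failure for everything above it:

```typescript
// Illustrative only: a toy dependency graph, not real NPM data or tooling.
const dependsOn: Record<string, string[]> = {
  "my-app": ["web-framework", "test-runner"],
  "web-framework": ["template-engine", "string-utils"],
  "test-runner": ["string-utils"],
  "template-engine": ["string-utils"],
  "string-utils": [], // the small "bottom block" everything else rests on
};

// Does `pkg` depend on `target`, directly or transitively?
function reaches(pkg: string, target: string, seen = new Set<string>()): boolean {
  if (seen.has(pkg)) return false;
  seen.add(pkg);
  return (dependsOn[pkg] ?? []).some(
    (dep) => dep === target || reaches(dep, target, seen)
  );
}

const target = "string-utils";
const affected = Object.keys(dependsOn).filter((pkg) => reaches(pkg, target));
console.log(`Pulling ${target} breaks: ${affected.join(", ")}`);
// Pulling string-utils breaks: my-app, web-framework, test-runner, template-engine
```

Seeing the graph this way is what points toward the usual mitigations (pinning versions, vendoring, or mirroring the registry), rather than discovering the weakness in crisis mode.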

Effective design involves not only the internals of a system, but its externals as well. The conditions under which the system will be used (its context) are highly significant. That means considering not only the system’s use cases, but also its abuse cases. A post written almost a year ago by Brandon Harris, “Designing for Evil”, conveys this well:

When all is said and done, when you’ve set your ideas to paper, you have to sit down and ask yourself a very specific question:

How could this feature be exploited to harm someone?

Now, replace the word “could” with the word “will.”

How will this feature be exploited to harm someone?

You have to ask that question. You have to be unflinching about the answers, too.

Because if you don’t, someone else will.


When I began working on this post, the portion above was what I had in mind to say. In essence, I planned a longer-form version of what I’d tweeted about the Tay fiasco:

However, before I had finished writing the post, Greger Wikstrand posted “The fail fast fallacy”. Greger and I have been carrying on a conversation about innovation over the last few months. While I had initially intended to approach this as a general issue of architectural practice rather than innovation, the points he makes are just too apropos to leave out.

In the post, Greger points out that the focus seems to have shifted from learning to failure. Learning from experience can be the best way to test an idea. However, it’s not the only way:

Evolution and nature has shown us that there are two, equally valid, approaches to winning the gene game. The first approach is to get as much offspring as possible and “hope” many of them survive (r-selection). The second approach is to have few offspring but raise them and nurture them carefully (K-selection). Biologists tell us that the first strategy works best in a harsh, unpredictable environment where the effort of creating offspring is low. The second strategy works better in an environment where there is less change and offspring are more expensive to produce. Some of the factors that favour r-selection seems to be large uncompeted resources. K-selection is more favourable in resource scarce, low predator areas.

The phrase “…where the effort of creating offspring is low” is critical here. The higher the “cost” of the experiment, the more risk is involved in failure. This makes it advisable to tilt the playing field by supporting and nurturing the “offspring”.

In response to Greger’s post, Casimir Artmann posted two excellent articles that further elaborated on this. In “Fail Fast During Adventures”, he noted that “There is a fine line between fail fast and Darwin Awards in IRL.” His point: preparation beforehand, and a willingness to abort an experiment before failure becomes the equivalent of a fatality, can be effective learning strategies. Lessons that you don’t live to apply aren’t worth much.

Casimir followed with “Fail is not an Option”, in which he stated:

I want the project to succeed, but I plan for things going wrong so that the consequences wouldn’t be too huge. Some risks are manageable, such as walking alone, but not alone and off-trail. That’s too risky. If you do outdoor adventures, you are probably more prepared and skilled than an ordinary project member, and that’s a huge benefit.

I guess the best advice, when doing completely new things with IT, is to start really small so that the majority of your business is not impacted if there is a failure. When something goes wrong, be sure that you can go back to a safe place. The point of no return is like being on a sailing boat in the middle of the Atlantic where you can’t go back.

That’s excellent advice. “Fail Fast” has the advantage of being able to fit on a bumper sticker, but the longer, more nuanced version is more likely to serve you well.

Form Follows Function on SPaMCast 373

SPaMCAST logo

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 373, features Tom’s essay on #NotImplementedNoValue and a Form Follows Function installment on simplistic mental models.

Tom and I discuss my post “All models may be wrong, but it’s not a contest to see how wrong you can be”, talking about cognitive biases and how overly simplistic mental models fail us.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!