Why does software development have to be so hard?

Untangling this could be tricky

A series of 8 tweets by Dan Creswell paints a familiar, if depressing, picture of the state of software development:

(1) Developers growing up with modern machinery have no sense of constrained resource.

(2) Thus these developers have not developed the mental tools for coping with problems that require a level of computational efficiency.

(3) In fact they have no sensitivity to the need for efficiency in various situations. E.g. network services, mobile, variable rates of change.

(4) Which in turn means they are prone to delivering systems inadequate for those situations.

(5) In a world that is increasingly networked & demanding of efficiency at scale, we would expect to see substantial polarisation.

(6) The small number of successful products and services built by a few and many poor attempts by the masses.

(7) Expect commodity dev teams to repeatedly fail to meet these challenges and many wasted dollars.

(8) Expect smart startups to limit themselves to hiring a few good techies that will out-deliver the big orgs and define the future.

The Fallacies of Distributed Computing are more than twenty years old, but Arnon Rotem-Gal-Oz’s observations (five years after he first made them) still apply:

With almost 15 years since the fallacies were drafted and more than 40 years since we started building distributed systems – the characteristics and underlying problems of distributed systems remain pretty much the same. What is more alarming is that architects, designers and developers are still tempted to wave some of these problems off thinking technology solves everything.

Why?

Is it really this hard to get it right?

More importantly, how do we change this?

In order to determine a solution, we first have to understand the nature of the problem. Dan’s tweets point to the machines developers are used to, although in fairness, those of us who lived through the bad old days of personal computing can attest that developers were getting it wrong back then. In “Most software developers are not architects”, Simon Brown points out that too many teams are ignorant of or downright hostile to the need for architectural design. Uncle Bob Martin in “Where is the Foreman?”, suggests the lack of a gatekeeper to enforce standards and quality is why “our floors squeak”. Are we over-emphasizing education and underestimating training? Has the increasing complexity and the amount of abstraction used to manage it left us with too generalized a knowledge base relative to our needs?

Like any wicked problem, I suspect that the answer to “why?” lies not in any one aspect but in the combination. Likewise, no one aspect is likely, in my opinion, to hold the answer in any given case, much less all cases.

People can be spoiled by the latest and greatest equipment, as well as by the optimal performance that comes from working and testing on the local network. However, reproducing real-world conditions is a bit more complicated than giving someone an older machine. You can simulate load and traffic on your site, but understanding and accounting for competing traffic on the local network and the internet is a bit more difficult. We cannot say “application x will handle y number of users”, only that it will handle that number of users under the exact same conditions and environment as we have simulated – a subtle, but critical, difference.
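The gap between simulated and real conditions can be sketched with a toy calculation (the function name, user counts, and latency figures below are illustrative assumptions, not a real benchmark): the same user load produces wildly different capacity estimates depending on the latency profile you happen to simulate.

```python
import random

def simulated_throughput(users, base_latency_ms, jitter_ms, seed=42):
    """Rough requests/sec estimate for a pool of users, given a simulated
    per-request latency of base + random jitter. Purely illustrative."""
    rng = random.Random(seed)
    latencies = [base_latency_ms + rng.uniform(0, jitter_ms) for _ in range(users)]
    avg_latency_s = sum(latencies) / len(latencies) / 1000.0
    # Assume each user issues one request per latency window.
    return users / avg_latency_s

# Identical user count, different (hypothetical) network conditions:
lan = simulated_throughput(1000, base_latency_ms=1, jitter_ms=2)    # LAN-like
wan = simulated_throughput(1000, base_latency_ms=50, jitter_ms=100) # WAN-like
print(f"LAN-like: {lan:.0f} req/s, WAN-like: {wan:.0f} req/s")
```

The point is not the numbers themselves but that any “application x handles y users” claim silently embeds the simulated environment.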

Obviously, I’m partial to Simon Brown’s viewpoint. The idea of a coherent, performant design just “emerging” from doing the simplest thing that could possibly work is ludicrous. The analogy would be walking into an auto parts store, buying components individually, and expecting them to “just work” – you have to have some sort of idea of the end product in mind. On the other hand, attempting to specify too much up front is as bad as too little – the knowledge needed is not there and even if it were, a single designer doesn’t scale when dealing with any system that has a team of any real size.

Uncle Bob’s idea of a “foreman” could work under some circumstances. Like Big Design Up Front, however, it doesn’t scale. Collaboration is as important to the team leader as it is to the architect. The consequences of an all-knowing, all-powerful personality can be just as dire in this role as for an architect.

In “Hordes of Novices”, Bob Martin observed “When it’s possible to get a degree in computer science without writing any code, the quality of the graduates is questionable at best”. The problem here is that universities are geared to educate, not train. Just because training is more useful to an employer (at least in the short term), does not make education unimportant. Training deals with this tool at this time while how to determine which tool is right for a given situation is more in the province of education. It’s the difference between how to do versus how to figure out how to do. Both are necessary.

As I’ve already noted, it’s a thorny issue. Rather than offering an answer, I’d rather offer the opportunity for others to add to the conversation in the comments below. How did we get here and how do we go forward?


  1. #1 by Jan van Oort on February 22, 2014 - 1:27 pm

    Is it possible that we got here through the ubiquity of professional managers à la Pointy-Haired Boss? [I never saw a truly *bad* manager who had “come up through the ranks”.]

    Is it possible to go forward from here by favouring a certain degree of autodidactic learning in our teams? [The best developers I know are the ones with a substantial amount of autodidactic learning on their CV.]

    I have been developing and designing software for 20 years now, and this post is so to-the-point that it makes me think deeply. The above two questions are just other questions in answer to the OP’s last question.


    • #2 by Gene Hughson on February 22, 2014 - 4:15 pm

      Thanks, Jan. Poor management could certainly be a contributing factor. Great point re: self-directed learning – I absolutely agree that those who are life-long learners (for the love of learning) are, in my experience, very valuable people to have on your team.


  2. #3 by calfred (@calfred) on February 22, 2014 - 4:17 pm

    The Book of Revelation had the Four Horsemen of the Apocalypse.

    As computer systems become larger and more complex, maybe we have the Four Horsemen of Computer Science:

    o Change/Volatility
    o Uncertainty
    o Variability
    o Effective Communication

    As you’ve written, Gene, “context is king” and the first three bullets above create 3 of the 4 walls of context (perception of variation being #4).

    But the killer from within, the one that erodes everything from the bottom-up is communication. Here, I mean the ability to transfer understanding effectively from one mind to another.

    The concerns mentioned in your post about scarce computing resources are, at their root, a communication problem. There is no shortage of people who are well aware of CPU, thread, memory, display, disk or network bandwidth constraints.

    Why are they not able to transfer this knowledge?
    o Are they unaware?
    o Do they not try?
    o Do they not communicate clearly?
    o Are the others not paying attention?
    o Do they not think it’s important?
    o Maybe they just don’t understand?
    o Or maybe they understand, but they are not motivated to act?

    Our architects and foremen cannot save us if they can’t solve this problem, both in and out.

    Imagine if our land lines, cellular towers and internet links communicated like this.

    As computer systems become larger and more complex, I think effectiveness is rooted in communication.


    • #4 by Gene Hughson on February 22, 2014 - 4:23 pm

      Excellent comment! “Effectiveness is rooted in communication” holds true across a variety of contexts, both human and machine.


    • #5 by Felipe Bormann on March 4, 2014 - 12:48 pm

      Great comment. As I’m starting out in computer science, I know how to program but I’m not “educated” yet. It’s still an issue that novice programmers don’t want to design their products, which end up as software full of wasted structures.


  3. #6 by Craig on March 10, 2014 - 6:42 pm

    One of the biggest things I see that contributes to making software development difficult is that there are too many ways to do the same thing. When learning new technologies one cannot just focus on how to accomplish a task. One has to focus on at least several ways to accomplish the same task, because you never know how someone else is going to approach the same problem. You may have to work on their code or answer their interview questions. With the constant stream of new changes to technologies, the big challenge is to keep up instead of spending time to dig deeper into what we already know and achieve a level of mastery. Change is inevitable, but we can at least stop providing so many options that end up accomplishing the same thing.


    • #7 by Gene Hughson on March 11, 2014 - 10:39 am

      Craig, I have mixed feelings about this. Obviously variety poses the challenges you mentioned. By the same token, that same variety should allow for tailoring the solution to the problem. Both depth and breadth have their advantages.

      Deliberately choosing a method of solving a problem and promoting consistency (where appropriate) should yield better results than having redundant and inconsistent solutions.


  1. Software Development, Coding, Forests and Trees | Form Follows Function
