Regulating Software Development

'Belvidere Street construction, pouring concrete', Library of Virginia

 

Another weekend, another too-good-to-pass-up Twitter conversation during my “unplugged” time. This weekend, Grady Booch hooked me by retweeting Mike Potts’ tweet:

Mike’s tweet was a reply to Grady’s comment on the latest news out of Uber:

It’s an understandable question. It’s a reasonable question. It’s one that came up back during the healthcare.gov fiasco and it’s one raised by Volkswagen’s recent criminal misconduct.

However, when contemplating fixing a problem, we need to be extremely mindful of the potential for creating harm as a result of the “fix”. In particular, we should be wary of creating harm out of proportion to any good we do (i.e. we don’t want to kill roaches by burning down the house). I chose the image at the top to illustrate something key to this discussion – changing laws (the software of our meta-enterprise) is only slightly harder than moving a roadway once laid down.

Now for the caveats:

  • I do my utmost to avoid politics on this site – I really doubt you’re looking to me for guidance or even just my opinion. I’m not intending this post as a political statement. I’m not asserting that government is never the answer, merely that it’s a rather blunt instrument that we need to use with care.
  • I agree with Grady and Mike that those who took part in this are a disgrace. Moreover, I believe everyone involved, top to bottom, needs to be prosecuted and, if convicted, punished to the fullest extent of the law.
  • My tl;dr position is this: if we have regulation, it should be effective and without avoidable harmful side effects.

As I noted above, it’s human nature to respond to problems with some proposal to fix them. It also seems to be human nature to respond in a manner that doesn’t necessarily deal with an issue from a systemic perspective. We tend to allow ourselves to concentrate on the need to “do something” and ignore the hard work of making sure what we do is effective (and doesn’t cause further problems). In other words, we put band-aids on bullet wounds.

In the cases of both VW and Uber, the conduct alleged is criminal. We could pass new laws making it a crime to commit a crime, but that seems to be an exercise in recursive futility. If the potential penalty in the first case was insufficient to induce compliance, should we really believe adding another layer will make it better?

An element that’s present in both cases is that the illegal conduct involves creating software to help avoid detection of the fact that the company was breaking another law. Regulatory pressures coupled with a corrupt culture can create perverse incentives to cheat. This does not in any way excuse the conduct, particularly in the case of VW. It is, however, one of the systemic factors that should be taken into account.

In my experience, the most effective compliance program is one where compliance is the path of least resistance. Self-imposed compliance cannot fail to be more effective than compliance enforced externally. Corrupt agents will still violate the rules, but ideally you want to make it so that the lazy way out is the desired behavior.

Another aspect of regulation that comes up is something along the lines of professional standards similar to those of attorneys, accountants, and doctors. Increasing the level of professionalism is laudable, but would it be an effective response to the issue of criminal misconduct? Additionally, assuming it was legally enforced, what would the cost be? Everything from administration of the program to salary increases would introduce new costs and would likely affect the pace of innovation (due to the impact on both supply and demand). Again, without justifying the conduct, what was Uber’s motivation to develop its code to defeat detection by regulators?

I can well imagine other potential issues with a regulatory regime that requires a license to code. Not only would commercial innovation suffer, but the effects on the Open Source community could be disastrous if the licensing regime were expensive.

Doing “something” is easy. Doing something effective is a bit harder. I’m all aboard for punishing the guilty (each and every one), but we should move carefully when considering actions that might be more difficult to undo.

Babies, Bathwater, and Software Architects

'Fixing Problems' - XKCD 1739

I try to be disciplined about my writing (picking themes, creating a backlog, collecting notes and links on those topics, etc.), but it seems like serendipity won’t be denied, no matter what I do. On the same day that XKCD published this cartoon, Erik Dietrich published “Software Architect as a Developer Pension Plan”. While I agree with a lot of what Erik says in the post, I think he also gets into a “throwing out the baby with the bathwater” situation by dismissing the need for the software architect role, part of which the cartoon illustrates.

Before I go any further, I do need to point out that I’ve been following and interacting with Erik for about four years now. It’s a friendly disagreement, meant to be more debate than argument.

In the post, Erik makes the point (which I largely agree with) that companies see the role of software architect through a Taylorist lens. They see the software architect as a “thinker”, meant to drive a herd of “doers”:

The modern corporation, like Taylor before it, loosely divides into three categories of humans: owners/executives, managers, and grunts. Owners own and charge executives with executing their will. Executives delegate to managers. Managers think and assemble the workers into org charts designed to optimize efficiency. Workers keep their simple heads down and do as they’re told.

Theoretically, proponents would say, this applies to any domain and type of work. Corporations form themselves into these pyramids without considering any other options, whether they’re private military outfits, gigantic manufacturing facilities, or companies designing and selling software. Of course, the reality turns out to be that not all operations do well splitting thinking and labor. Let’s pick one at random. Oh, I dunno, say, writing software.

We try it. We hire junior developers to be directed by senior ones. And all of them fall in line under the watchful guise of a “tech lead,” who defers to a council of architects. And somewhere at the center of the organizational maelstrom stands The Lead Architect, directing elaborate bucket marches, water flows, and all sorts of magic, like Mickey Mouse in Fantasia. It’s a beautiful, cascading symphony of work flowing from the cleverest down to the simplest. Or… at least, that’s how it goes on paper.

Erik rightly argues that this model is fatally flawed. This is doubly so in the case of the example he gives, an individual offered a position as a software architect doing the thinking for a set of offshore “commodity” developers. Offshore development can be done well (that’s a post I’ll save for another day), but not using that model. Where Erik goes wrong, in my opinion, is in adopting this flawed definition of the software architect role.

Developers should be perfectly competent to do their own thinking. If they’re not, no software architect will be able to help. Even if that person were capable of handling the load of providing detailed directions for several developers, doing so would prevent them from attending to their own areas of concern. It would require being able to anticipate and make all the functional design decisions up front, as well as communicating them to multiple “typists” quicker than they could mindlessly hammer out the code, while still having time to consider all the requisite cross-cutting, quality of service considerations for an application. Do we really need to bother with experiments to determine that this is beyond improbable?

There is a separation between the developer role and software architect role, but it needs to be clearly understood as a difference in focus. A developer’s focus is more vertical, concentrating on implementing functionality while maintaining/improving/not harming the quality of service (aka the horrendously misnamed “non-functional”) aspects of the system. A software architect’s focus should be more horizontal, concentrating on the architecturally significant quality of service aspects that will enable the system to meet the needs of its stakeholders (or, at least, not unreasonably prevent the system from doing so).

The difference between the roles is a matter of thinking at different levels of abstraction, not of one being superior to the other. The two mindsets are both necessary and complementary. A high-quality architectural design without quality implementation cannot produce a high-quality system. Likewise, where there is implementation without architectural coordination, the quality of the resulting system is a matter of chance. For very small systems maintained by small, permanent teams, it may not be necessary for these roles to be distinct. In my experience, however, dealing with both the broadly general and the very specific at the same time scales poorly. Non-trivial systems quickly come to require architectural leadership; otherwise, people are left fixing problems caused by problems fixed previously.

Leadership has been the main theme of almost all of my posts this month and has figured prominently in much of what I’ve written this year. Many of these posts are assigned to the “Architectural Practice” category. This is because I absolutely believe that the software architect role is a leadership role. That being said, I don’t consider the Taylorist model to be effective leadership practice. In fact, I consider it an anti-pattern.

I’ve long been an advocate of a more collaborative model of practice. Rather than dictating, the software architect’s role should be one of consensus building and communication. Significant architectural disagreement indicates an issue with one or more members of the team. Significant uncertainty about the architecture indicates an issue with the software architect role.

Effective systems require both breadth and depth of effectiveness. This holds true for software systems as much as social systems. For the social systems producing software systems for use by other social systems (i.e. software development teams), it is imperative.

Don’t Teach Coding

Before the pitchforks and torches come out, allow me to clarify – don’t just teach coding.

Jeff Sussna and Mark M captured the essence of what I’m talking about:

https://twitter.com/jetpack/status/638453985880338432

The article Mark M referenced, “Is coding the new literacy?”, had this to say:

So you might be forgiven for thinking that learning code is a short, breezy ride to a lush startup job with a foosball table and free kombucha, especially given all the hype about billion-dollar companies launched by self-taught wunderkinds (with nary a mention of the private tutors and coding camps that helped some of them get there). The truth is, code—if what we’re talking about is the chops you’d need to qualify for a programmer job—is hard, and lots of people would find those jobs tedious and boring.

But let’s back up a step: What if learning to code weren’t actually the most important thing? It turns out that rather than increasing the number of kids who can crank out thousands of lines of JavaScript, we first need to boost the number who understand what code can do. As the cities that have hosted Code for America teams will tell you, the greatest contribution the young programmers bring isn’t the software they write. It’s the way they think. It’s a principle called “computational thinking,” and knowing all of the Java syntax in the world won’t help if you can’t think of good ways to apply it.

Whether you call it computational thinking or computer literacy, understanding the high-level basics of the technology is what is useful. Conflating the ability to create Hello World in JavaScript with that understanding is both simplistic and counter-productive. Chris Granger, in “Coding is not the new literacy”, observed:

Being literate isn’t simply a matter of being able to put words on the page, it’s solidifying our thoughts such that they can be written. Interpreting and applying someone else’s thoughts is the equivalent for reading. We call these composition and comprehension. And they are what literacy really is.

Languages come and go. Technologies evolve. Concentrating exclusively on specific languages and technologies risks creating future obsolescence. The underlying concepts that must be understood to effectively use them, however, have longer lives. It’s the 21st century equivalent of the difference between giving someone a fish and teaching them to fish. Erik Dietrich in “Don’t Learn to Code — Learn to Automate” points out the broader utility of computer literacy:

It’s obtuse to suppose that a prerequisite for every job in the future will be the ability to implement sophisticated, specialized computer applications. But it’s not at all obtuse to suppose that, given the ubiquity of computing, a prerequisite for every job in the future will be the ability to recognize which tasks are better suited for humans and which for computers. Learn at least to recognize which parts of your job are a poor use of your time. After that, perhaps learn to use your ingenuity and creativity to automate using the tools that you know (such as googling for solutions, leveraging apps, etc). And, if you’ve come that far, maybe it’s time to roll up your sleeves and take the plunge into learning to code a little bit to help you along.

The need for greater numbers of people with computer literacy is real, as is the need for greater diversity within the ranks of those who work with technology. It serves no one to misrepresent the skills needed or the nature of those skills. People who know how to google for code but lack the ability to understand and evaluate what they get back are no better off than if they had never seen a computer. We owe them, and ourselves, better.

Fixing IT – Remembering What They’re Paying For

Dan Creswell‘s tweet said it well:

Max Roser‘s tweet, however, illustrated it in unmistakable fashion:

https://twitter.com/jetpack/status/596604003527589890

Success looks like that.

Not everyone gets to create the kind of magic that lets a blind woman “see” her unborn child. Nearly everyone in IT, however, has the ability to influence customer satisfaction. Development, infrastructure, and support all play a part in making someone’s life better or worse. It’s not just what we produce, but also the process, management, and governance that determines how we produce. The product is irrelevant; it’s the service that counts. The woman in the picture above isn’t happy about 3D printing, she’s overjoyed at the experience it enabled.

Customer service isn’t just a concern for software vendors. It’s remarkably easy to destroy a relationship and remarkably hard to repair one. The most important alignment is the alignment of concerns between those providing the service and those consuming it. Mistrust between the two is a common source of “Shadow IT” problems, and far from helping, cracking down may well make things worse.

Building trust by meeting needs creates a virtuous circle. Satisfaction breeds appreciation. Appreciation breeds motivation. Motivation, in turn, yields more satisfaction.

As the saying goes, you catch more flies with honey.

Institutional Amnesia, Cargo Cults and Software Development

When George Santayana stated, “Those who cannot remember the past are condemned to repeat it,” he wasn’t talking about technology. When Brenda Michelson and Ed Featherston said much the same thing recently, they were:

https://twitter.com/jetpack/status/573850405026861056
https://twitter.com/jetpack/status/573851102808010752

It’s a sad fact of life that today’s silver bullet is likely to be yesterday’s junk which was probably the day before yesterday’s silver bullet.

Poor design choices are made for a variety of reasons. Sometimes it’s a matter of ego. Sometimes inadequate analysis is the culprit. Focusing on technology rather than problem-solving can be another pitfall. Even attempts at post-hoc justification of a prior bad decision can drive new mistakes.

An uncritical acceptance of tradition is a significant source of problematic designs. Eberhard Wolff recently took a swipe at one old standard:

The stock reason for a tiered/distributed design is scalability. However, it’s not a given that distributing X horizontal layers across Y machines (yielding Y/X instances of each layer) will yield better results than Y machines, each with all X layers deployed on the same machine. The context in which this sort of distribution makes sense is far from universal. Even when the costs of distribution are outweighed by the benefits, traditional monolithic horizontal layers will likely be less efficient than vertical slices. One of the purported benefits of microservices is the ability to independently scale according to business concerns (vertical slices organized around bounded contexts) rather than technology concerns (horizontal layers).
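To make the trade-off concrete, here’s a minimal back-of-the-envelope sketch in Python. The slice names and peak-demand numbers are entirely invented for illustration; the only point is the arithmetic of scaling everything together versus scaling each slice to its own demand.

```python
# Back-of-the-envelope comparison: scaling a layered monolith as a unit
# versus scaling vertical slices (bounded contexts) independently.
# The demand figures below are invented for illustration only.

# Peak instances each business capability (vertical slice) actually needs.
peak_demand = {
    "orders": 6,      # busiest slice at peak
    "catalog": 2,
    "reporting": 1,
}

# Monolith: every deployed copy carries every slice, so the whole thing
# must be scaled to the level of its busiest capability.
monolith_instances = max(peak_demand.values())
monolith_capacity_units = monolith_instances * len(peak_demand)

# Vertical slices: each slice is scaled only to its own demand.
slice_capacity_units = sum(peak_demand.values())

print(f"Monolith copies needed:    {monolith_instances}")
print(f"Capacity units (monolith): {monolith_capacity_units}")
print(f"Capacity units (slices):   {slice_capacity_units}")
# With these made-up numbers: 6 full copies (18 units of deployed capacity)
# versus 9 slice instances; the difference is idle capacity, which is the
# "scale by business concern" argument in a nutshell.
```

Real numbers depend entirely on the workload, of course; the sketch only shows why scaling by business concern can avoid paying for capacity that sits idle.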

The mention of microservices brings to mind the problem of jumping on bandwagons. How many applications currently under development are being designed using this architectural style because it’s the “next big thing” rather than because the style fits the problem? Sam Newman, author of O’Reilly’s Building Microservices, in “Microservices for Greenfield?”, even states that he considers the style more suitable for evolving an existing system than for building from scratch:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

This same over-eagerness is present in front-end development as much as back-end development. Stefan Tilkov recently tweeted regarding the trend of jumping straight into complex JavaScript framework applications rather than evolving into them based on need:

In my opinion, the key to effective design is being able to give a good answer when asked “why”. Being able to articulate the reasons behind the choices made is critical to justifying them. By reasons, I mean logical explanations of how the techniques chosen contribute to the desired ends. Neither “X recommends this” nor “This is what everybody’s doing” counts. Designing, developing, and evolving software systems is not a game of following a recipe. In the words of Grady Booch:

Product Owner or Live Customer?

Wooden Doll

Who calls the shots for a given application? Who gets to decide which features go on the roadmap (and when) and which are left by the curb? Is there one universal answer?

Many of my recent posts have dealt in some way with the correlation between organizational structure and application/solution architecture. While this is largely to be expected due to Conway’s Law, as I noted in “Making and Taming Monoliths”, many business processes depend on other business processes. While a given system may track a particular related set of business capabilities, concepts belonging to other capabilities, along with features related to maintaining them, tend to creep in. This tends to multiply the number of business stakeholders for any given system, and that’s before taking into account the various IT groups that will also have an interest in the system.

Scrum has the role of Product Owner, whose job it is to represent the customer. Allen Holub, in a blog post for Dr. Dobb’s, terms this “flawed”, preferring Extreme Programming’s (XP) requirement for an on-site customer. For Holub, introducing an intermediary introduces risk because the product owner is not a true customer:

Agile processes use incremental design, done bit by bit as the product evolves. There is no up-front requirements gathering, but you still need the answers you would have gleaned in that up-front process. The question comes up as you’re working. The customer sitting next to you gives you the answers. Same questions, same answers, different time.

Put simply, agile processes replace up-front interviews with ongoing interviews. Without an on-site customer, there are no interviews, and you end up with no design.

For Holub, the product owner introduces either guesses or delay. Worst of all in his opinion is when the product owner is “…just the mouthpiece of a separate product-development group (or a CEO) who thinks that they know more about what the customer wants than the actual customer…”. For Holub, the cost of hiring an on-site customer is negligible when you factor in the risks and costs of wrong product decisions.

Hiring a customer, however, pretty much presumes an ISV environment. For in-house corporate work, the customer is already on board. However, in both cases, two issues exist: one is that a customer who is the decision-maker risks giving the product a very narrow appeal; the second is that the longer the “customer” is away from being a customer, the more the currency of their knowledge of the domain diminishes. Put together, you wind up with a product that reflects one person’s outdated opinion of how to solve an issue.

Narrowness of opinion should be sufficient to give pause. As Jim Bird observed in “How Product Ownership works in the Real World”:

There are too many functional and operational and technical details for one person to understand and manage, too many important decisions for one person to make, too many problems to solve, too many questions and loose ends to follow-up on, and not enough time. It requires too many different perspectives: business needs and business risks, technical dependencies and technical risks, user experience, supportability, compliance and security constraints, cost factors. And there are too many stakeholders who need to be consulted and managed, especially in a big, live business-critical system.

As is the case with everything related to software development, there is a balance to be met. Coherence of vision enhances design to a point, but then can lead to a too-insular focus. Mega-monoliths that attempt to deal comprehensively with every aspect of an enterprise, if they ever launch, are unwieldy to maintain. By the same token, business dependencies become system dependencies and cannot be eliminated, only re-arranged (microservices being a classic example – the dependencies are not removed, only their location has changed). The idea that we can have only one voice directing development is, in my opinion, just too simplistic.

Professional Software Development – Can We Mandate What We Can’t Define?

The law is a what?!?

The only true wisdom is in knowing you know nothing.
Socrates

What types of software products have you worked on: desktop applications, traditional web, single-page applications, embedded, mobile, mainframe?

How about organizations: private for-profit, government, non-profit?

How about domains: finance, retail, defense, health care, entertainment, banking, law enforcement, intelligence, real estate, etc. etc. etc.?

Given that the realm of “software development” is currently huge (and probably expanding as you read this), how logical is it that someone (or even a group) could regulate what is acceptable process and practice? I won’t say that it would be impossible to come up with one unified set of regulations that would fit all circumstances, but I’m very comfortable estimating the likelihood as a minute fraction of a percent. If the entire realm were broken down into smaller groupings, the chance might increase, but the resulting glut of regulations would become an administrative nightmare and still wouldn’t address those circumstances that aren’t in the list above but are on the horizon.

Nonetheless, people continue to float the idea of regulation.

Last fall, Bob Martin floated the idea of government regulation as a reaction to the healthcare.gov fiasco. That would be the same government whose contracting regulations contributed to the fiasco in the first place, correct? That would be the same government that has legally mandated Agile for Department of Defense contracts? Legally mandated agility just sounds a bit suspicious. As Jeff Sutherland noted, “Many in Washington are still trying to figure out what exactly that means but it is a start”. A start, for sure, but the start of what?

Ken Schwaber’s blog post “Can Software Developers Meet the Need?” takes a different approach. Schwaber proposes that:

A software profession governing body is needed. We need to formalize and regulate the skills, techniques, and practices needed to build different types of software capabilities. On one side, there is the danger of squeezing the creativity out of software development by unknowledgeable bureaucrats. On the other side is the danger of the increasingly vital software our society relies on failing critically.

We can either create such a governance capability, or the governments will legislate it after a particularly disastrous failure.

Call me a cynic, but I’m betting that the amount of bureaucratic squeezing that would result from this would far outweigh any gain in quality.

Most of the organization types listed above are already on the hook for harm caused by their IT operations; just ask Target and Knight Capital (don’t ask the Centers for Medicare & Medicaid Services). Is a committee, whether private or public, likely to manage the quality of software across all the various categories listed above any better than they do? Is it more likely to keep up with change in the industry? Color me doubtful.

Software Development, Coding, Forests and Trees

They say there's a forest in here somewhere

Some responses to my post “Why does software development have to be so hard?” illustrated one major (in my opinion) aspect of the problem – for many people, software development is synonymous with coding. It’s certainly understandable that someone might jump to that conclusion. After all, no matter how many slides, documents, diagrams, etc. someone produces, it is code that makes those ideas real.

Code, however, is not enough.

Over the last seventeen-plus years that I’ve been involved in software development, great strides have been made in languages and platforms. Merely look at the plumbing code needed to write a Hello World for Windows in C, should you need convincing. Frameworks for application infrastructure, unit testing, and acceptance testing are plentiful. Coding, and coding cleanly, is far, far easier, and yet people still complain about software.
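As a point of contrast, a windowed “Hello World” today takes only a handful of lines. This minimal sketch uses Python’s standard tkinter module, standing in for the page of WinMain/RegisterClass/message-loop/WndProc boilerplate the classic Win32 C version required.

```python
# A windowed "Hello World" with Python's standard-library tkinter.
# Compare with classic Win32 C, where the same result required WinMain,
# a registered window class, a message loop, and a window procedure.
import tkinter as tk

root = tk.Tk()
root.title("Hello")
tk.Label(root, text="Hello, World!").pack(padx=40, pady=20)
root.mainloop()
```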

While poor quality code can sink a product, excellent quality code cannot make a product. No matter how right you build a thing, the customer won’t be happy if it’s the wrong thing. The Hagia Sophia, Taj Mahal, Empire State Building, and many others are all breathtakingly magnificent structures that would utterly fail a customer who wanted (not to mention, budgeted for) a garage. We still fail to adequately understand the needs of our customers and the environments they work within. This is an area that desperately needs improvement. This is not a technical issue, but one of communication, collaboration, and organization. Neither customer nor provider can impose this improvement unilaterally.

Understanding the architecture of the problem is critical to designing and evolving the architecture of the solution, which is yet another area of need. Big Design Up Front (BDUF) assumes too much certainty and never (at least in my experience) survives contact with reality. No Design Up Front (NDUF), however, swings too far in the opposite direction and is unlikely to yield a cohesive design without far too much re-work. Striking a balance between the two is, in my opinion, key to producing an architecture that satisfies the functional and quality of service requirements of today while retaining sufficient flexibility for tomorrow.

Quality code implementing an architectural design grounded in a solid understanding of the customer’s problem space is, in my opinion, the essence of software development. Anything less than those three elements misses the mark.

Why does software development have to be so hard?

Untangling this could be tricky

A series of 8 tweets by Dan Creswell paints a familiar, if depressing, picture of the state of software development:

(1) Developers growing up with modern machinery have no sense of constrained resource.

(2) Thus these developers have not developed the mental tools for coping with problems that require a level of computational efficiency.

(3) In fact they have no sensitivity to the need for efficiency in various situations. E.g. network services, mobile, variable rates of change.

(4) Which in turn means they are prone to delivering systems inadequate for those situations.

(5) In a world that is increasingly networked & demanding of efficiency at scale, we would expect to see substantial polarisation.

(6) The small number of successful products and services built by a few and many poor attempts by the masses.

(7) Expect commodity dev teams to repeatedly fail to meet these challenges and many wasted dollars.

(8) Expect smart startups to limit themselves to hiring a few good techies that will out-deliver the big orgs and define the future.

The Fallacies of Distributed Computing are more than twenty years old, but Arnon Rotem-Gal-Oz’s observations (five years after he first made them) still apply:

With almost 15 years since the fallacies were drafted and more than 40 years since we started building distributed systems – the characteristics and underlying problems of distributed systems remain pretty much the same. What is more alarming is that architects, designers and developers are still tempted to wave some of these problems off thinking technology solves everything.

Why?

Is it really this hard to get it right?

More importantly, how do we change this?

In order to determine a solution, we first have to understand the nature of the problem. Dan’s tweets point to the machines developers are used to, although in fairness, those of us who lived through the bad old days of personal computing can attest that developers were getting it wrong back then, too. In “Most software developers are not architects”, Simon Brown points out that too many teams are ignorant of or downright hostile to the need for architectural design. Uncle Bob Martin, in “Where is the Foreman?”, suggests the lack of a gatekeeper to enforce standards and quality is why “our floors squeak”. Are we over-emphasizing education and underestimating training? Has the increasing complexity and the amount of abstraction used to manage it left us with too generalized a knowledge base relative to our needs?

Like any wicked problem, I suspect that the answer to “why?” lies not in any one aspect but in the combination. Likewise, no one aspect is likely, in my opinion, to hold the answer in any given case, much less all cases.

People can be spoiled by the latest and greatest equipment as well as the optimal performance that comes from working and testing on the local network. However, reproducing real-world conditions is a bit more complicated than giving someone an older machine. You can simulate load and traffic on your site, but understanding and accounting for competing traffic on the local network and the internet is a bit more difficult. We cannot say “application x will handle y number of users”, only that it will handle that number of users under the exact same conditions and environment as we have simulated – a subtle, but critical difference.
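To illustrate why such claims are bound to their environment, here’s a minimal load-simulation sketch using only the Python standard library. The URL, client count, and request count are placeholder assumptions; whatever throughput it reports describes this client machine, this network path, and this moment, not “the application”.

```python
# Minimal load simulation: N concurrent clients fetching one URL.
# Any throughput number this prints is only valid for the exact client
# machine, network path, and server state in effect while it runs.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder endpoint
CLIENTS = 20                     # simulated concurrent users
REQUESTS_PER_CLIENT = 50

def client(_):
    latencies = []
    for _ in range(REQUESTS_PER_CLIENT):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    all_latencies = [l for result in pool.map(client, range(CLIENTS)) for l in result]
elapsed = time.perf_counter() - start

print(f"{len(all_latencies)} requests in {elapsed:.1f}s "
      f"({len(all_latencies) / elapsed:.1f} req/s), "
      f"mean latency {sum(all_latencies) / len(all_latencies) * 1000:.0f} ms")
# "Handles 20 users" here means: handled 20 simulated users, from this
# machine, over this network, against this build. Nothing more.
```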

Obviously, I’m partial to Simon Brown’s viewpoint. The idea of a coherent, performant design just “emerging” from doing the simplest thing that could possibly work is ludicrous. The analogy would be walking into an auto parts store, buying components individually, and expecting them to “just work” – you have to have some sort of idea of the end product in mind. On the other hand, attempting to specify too much up front is as bad as too little – the knowledge needed is not there and even if it were, a single designer doesn’t scale when dealing with any system that has a team of any real size.

Uncle Bob’s idea of a “foreman” could work under some circumstances. Like Big Design Up Front, however, it doesn’t scale. Collaboration is as important to the team leader as it is to the architect. The consequences of an all-knowing, all-powerful personality can be just as dire in this role as for an architect.

In “Hordes of Novices”, Bob Martin observed “When it’s possible to get a degree in computer science without writing any code, the quality of the graduates is questionable at best”. The problem here is that universities are geared to educate, not train. Just because training is more useful to an employer (at least in the short term) does not make education unimportant. Training deals with this tool at this time, while how to determine which tool is right for a given situation is more in the province of education. It’s the difference between how to do versus how to figure out how to do. Both are necessary.

As I’ve already noted, it’s a thorny issue. Rather than offering an answer, I’d rather offer the opportunity for others to add to the conversation in the comments below. How did we get here and how do we go forward?