Don’t Teach Coding

Before the pitchforks and torches come out, allow me to clarify – don’t just teach coding.

Jeff Sussna and Mark M captured the essence of what I’m talking about.

The article Mark M referenced, “Is coding the new literacy?”, had this to say:

So you might be forgiven for thinking that learning code is a short, breezy ride to a lush startup job with a foosball table and free kombucha, especially given all the hype about billion-dollar companies launched by self-taught wunderkinds (with nary a mention of the private tutors and coding camps that helped some of them get there). The truth is, code—if what we’re talking about is the chops you’d need to qualify for a programmer job—is hard, and lots of people would find those jobs tedious and boring.

But let’s back up a step: What if learning to code weren’t actually the most important thing? It turns out that rather than increasing the number of kids who can crank out thousands of lines of JavaScript, we first need to boost the number who understand what code can do. As the cities that have hosted Code for America teams will tell you, the greatest contribution the young programmers bring isn’t the software they write. It’s the way they think. It’s a principle called “computational thinking,” and knowing all of the Java syntax in the world won’t help if you can’t think of good ways to apply it.

Whether you call it computational thinking or computer literacy, understanding the high-level basics of the technology is what is useful. Conflating the ability to create Hello World in JavaScript with that understanding is both simplistic and counter-productive. Chris Granger, in “Coding is not the new literacy”, observed:

Being literate isn’t simply a matter of being able to put words on the page, it’s solidifying our thoughts such that they can be written. Interpreting and applying someone else’s thoughts is the equivalent for reading. We call these composition and comprehension. And they are what literacy really is.

Languages come and go. Technologies evolve. Concentrating exclusively on specific languages and technologies risks creating future obsolescence. The underlying concepts that must be understood to effectively use them, however, have longer lives. It’s the 21st century equivalent of the difference between giving someone a fish and teaching them to fish. Erik Dietrich in “Don’t Learn to Code — Learn to Automate” points out the broader utility of computer literacy:

It’s obtuse to suppose that a prerequisite for every job in the future will be the ability to implement sophisticated, specialized computer applications. But it’s not at all obtuse to suppose that, given the ubiquity of computing, a prerequisite for every job in the future will be the ability to recognize which tasks are better suited for humans and which for computers. Learn at least to recognize which parts of your job are a poor use of your time. After that, perhaps learn to use your ingenuity and creativity to automate using the tools that you know (such as googling for solutions, leveraging apps, etc). And, if you’ve come that far, maybe it’s time to roll up your sleeves and take the plunge into learning to code a little bit to help you along.
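
To make Dietrich’s progression concrete, here’s the kind of small automation he’s describing. This is a minimal sketch; the folder of timesheet CSVs and the column name are my invented example, not anything from his article:

```python
# Hypothetical example: totaling an "hours" column across every CSV in a
# folder -- tedious to do by hand, trivial to automate.
import csv
from pathlib import Path

def total_hours(folder: str, column: str = "hours") -> float:
    total = 0.0
    for csv_file in Path(folder).glob("*.csv"):
        with csv_file.open(newline="") as f:
            for row in csv.DictReader(f):
                value = row.get(column, "").strip()
                if value:  # skip rows where the column is missing or blank
                    total += float(value)
    return total

if __name__ == "__main__":
    print(total_hours("timesheets"))
```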

The need for greater numbers of people with computer literacy is real, as is the need for greater diversity within the ranks of those who work with technology. It serves no one to misrepresent the skills needed or the nature of those skills. People who know how to google for code but lack the ability to understand and evaluate what they get back are no better off than if they had never seen a computer. We owe them, and ourselves, better.

Who Needs Architects? Well, Nobody Needs this Kind

The question above came up while recording SPaMCast 357 with Tom Cagley, and it’s an extremely important one. The post we were discussing, “Who Needs Architects? Because YAGNI Doesn’t Scale”, is one of many on the need for architectural design in software development. While I’m firmly convinced that the need is real, there is also a genuine danger in unilaterally imposing a design on a team.

Tom’s question about an “aristocracy of architects” was taken from his post “Re-Read Saturday: The Mythical Man-Month, Part 4 Aristocracy, Democracy and System Design”, part of a series in which he is reviewing Frederick Brooks’ The Mythical Man-Month. In the essay reviewed in this post, “Aristocracy, Democracy and System Design”, Brooks discussed the importance and value of conceptual integrity (i.e. a cohesive, unified design) to software systems. While I agree wholeheartedly that both architectural design and someone (or more than one someone) responsible for that design are necessary, I disagree that establishing an aristocracy is beneficial or even necessary. In fact, the portion labeled “Reality” on the graphic in Kelly Abuelsaad’s tweet, although talking about imposter syndrome, also illustrates why dictating design can be a bad idea.

One can certainly influence, even control, the architecture of a system via a mandate. The problem with being given control is that no one can give effectiveness to go with it. As such, control is brittle, subject to the limitations of the person given the authority and the compliance of those implementing the system. This brittleness exists even when the architect stays within their level of detail. Combining a dictatorial style with Big Design Up Front all but ensures failure.

In my experience, a participative, collaborative style of design yields better designs. In addition to benefiting from a variety of skills and experience, it also engenders greater understanding and ownership across the team. Arrogance, on the other hand, can be costly.

I firmly believe that a product will benefit from having someone whose focus is the cross-cutting, architecturally significant concerns. I also believe that part of that job is teaching and mentoring as well as listening to the rest of the team so that architectural awareness permeates the entire team. There are many aspects to being an architect, but being a dictator should not be one of them.

Updated 4/8/2016 to fix a broken link.

Laziness as a Virtue in Software Architecture

Laziness may be one of the Seven Deadly Sins, but, as Matt Osbun observed, it can be a virtue in software development.

Robert Heinlein noted the benefits of laziness in Time Enough for Love:

Progress doesn’t come from early risers – progress is made by lazy men trying to find easier ways to do something.

Even in the military, laziness carries potential greatness:

I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent — their place is the General Staff. The next lot are stupid and lazy — they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.

Generaloberst Kurt von Hammerstein-Equord

The lazy architect will ignore slogans like YAGNI and the Rule of Three when experience and/or information tells them that it’s far more likely than not that the need will arise. As Matt stated in “Foreseeable and Imaginary Design”, architects must ask “What changes are possible and which of those changes are foreseeable”. The slogans point out that up-front engineering costs, but so does the re-work that results from deferred decisions. Avoiding that extra work (laziness) avoids the cost associated with it (frugality).
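
A minimal sketch of what that looks like in practice (the notification domain is hypothetical, not from Matt’s post): experience says a second delivery channel is foreseeable, so the lazy architect pays for one small seam now rather than re-working every caller later.

```python
from typing import Protocol

class Notifier(Protocol):
    """The seam: callers depend on this, not on any one channel."""
    def send(self, recipient: str, message: str) -> None: ...

class EmailNotifier:
    def send(self, recipient: str, message: str) -> None:
        print(f"emailing {recipient}: {message}")  # stand-in for real delivery

def notify_on_completion(notifier: Notifier, recipient: str) -> None:
    # Adding an SmsNotifier tomorrow means one new class,
    # not a change to every caller.
    notifier.send(recipient, "Your job has finished.")

notify_on_completion(EmailNotifier(), "ops@example.com")
```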

Likewise, lazy architects are less likely to cave when confronted with the sayings of some notable person. Rather than resign themselves to the extra work, they’re more likely to examine the statement critically, as Kevlin Henney did.

It’s far cheaper to design and build a system according to its context than to re-build it to fix issues that were foreseeable. The re-work born of being too focused on the tactical to the detriment of the strategic is as much a form of technical debt as cutting corners. The lazy architect knows that time spent identifying and reconciling contexts can allow them to avoid the extra work caused by blind incremental design.

Shut up, salute & soldier on?

Leadership and management are currently hot topics, with the #NoManager movement among the hottest of the hot. My detailed opinion on flat organizations/holacracy is a post for another day, but one aspect that I fully agree with is the differentiation between leadership and management. They can and should coincide, but they don’t always. Most importantly, the number of leaders should exceed the number of managers. To re-state a point I made in “Lord of the Repository”, the best managers develop their team members so that the team is never without leadership, even when the manager is away.

Tony DaSilva’s recent post on the subject, “In Defense of Hierarchy”, spoke to some of the benefits that derive from hierarchies. One of the benefits identified was “orderly execution of operations”, supported by the following quote:

Imagine if students argued with their teachers, workers challenged their bosses, and drivers ignored traffic cops anytime they asked them to do something they didn’t like. The world would descend into chaos in about five minutes. – Duncan J. Watts

For each of Watts’ examples, his point is, in order, wrong, possibly wrong, and correct. I have experience that speaks to all three.

My career in software development is my second career; my first was in law enforcement, serving as a Deputy in a county Sheriff’s Office. One of the positions I held during my tenure there was Assistant Director of the Training Academy. My role was split between administration, instruction, and supervision of students (limited strictly to their time in training; I didn’t hold a supervisory rank that would apply beyond that). My position, then as now, is that any trainer or teacher who cannot tolerate respectful, appropriate challenge is unworthy of their position. A student probing and testing the information being presented was, in my opinion, something to be celebrated, not discouraged.

An alert went out on the radio one afternoon that there was a fire in one of the housing units of the jail. After a quick run to the location, I found that the supervisors had succeeded in knocking down the fire, removing the person who had started it, and detailing staff to evacuate the other inmates to a smoke-free secure area. However, the remaining staff were milling about without protective gear or spare fire extinguishers should the embers flare back up. While waiting for someone with authority to arrive, I took it upon myself to direct individuals to get the equipment that was needed. Once someone arrived and assumed control, I headed back to normal duties.

While talking about the incident later with a co-worker, I learned that they were really shocked when I ordered the Major to go retrieve an air pack and he did so (n.b. the Major in question was the third-ranking person in the department and five levels above me in the hierarchy). Needless to say, I was just as shocked. I hadn’t been paying attention to much beyond getting the situation safely under control, and the Major hadn’t objected, so I didn’t notice the real-life inversion of control, though my colleague certainly did.

That incident illustrates several things about leadership. First is the point I mentioned above, that leadership and management/authority are separate things. I had no official authority, but exercised leadership until someone with authority was in a position to take over. Second is that my unofficial authority rested on the trust and acquiescence of those executing my orders. I would argue that, far from undermining their trust, my openness to challenge in non-emergency situations made them more likely to follow me in the emergency.

So, to return to Watts’ examples – teachers should be challenged (appropriately), cops should be obeyed (until the emergency is in hand), and both you and the boss should be able to flex based on whether the current situation requires a teacher or a cop.

Participative leadership is more likely to engender trust and buy-in. Smart leaders (be they managers, architects, team leads, etc.) aren’t looking for passive followers; they know it could cost them. As Tom Cagley observed in his post “It Takes A Team”:

While a product owner prioritizes and a scrum master facilitates, it takes a whole team to deliver. The whole team is responsible for getting the job done, which means that at different times, in different situations, different members will need to provide leadership. Every team member brings their senses to the project-party, which makes all of them responsible for looking for trouble and then helping to resolve it, even if there isn’t a scrum master around.

Lord of the Repository

In “Where is the Foreman”, Robert “Uncle Bob” Martin advocated for a “foreman” with exclusive commit rights who would review each and every potential commit before it made its way into the repository, in the interest of ensuring quality. While I sympathize with some of his points, the idea ultimately breaks down for a number of reasons, most particularly because it introduces a bottleneck. A single person can only keep up with so many team members, and if a sudden bout of the flu can bring your operation to a standstill, there’s a huge problem.
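
Some rough arithmetic illustrates the bottleneck. The numbers below are my own illustrative assumptions, not figures from Uncle Bob’s post, but the shape of the problem holds for any single gatekeeper:

```python
# Back-of-envelope model of a single-reviewer bottleneck.
developers = 8
commits_per_dev_per_day = 4
review_minutes_per_commit = 15
review_hours_per_day = 6  # the foreman's other duties eat the rest

arriving = developers * commits_per_dev_per_day                       # 32/day
capacity = (review_hours_per_day * 60) // review_minutes_per_commit   # 24/day

print(f"{arriving} commits arrive, {capacity} get reviewed; "
      f"the queue grows by {arriving - capacity} per day")
```

Under these assumptions the backlog grows without bound, and that’s before the flu strikes.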

Unlike Jason Gorman, I believe that egalitarian development teams are not the answer. When everyone is responsible for something, it is a cliché that nobody takes responsibility for it (the phenomenon even has its own name). However, being responsible for something does not mean dictating. Dictators eventually tend to fall prey to tunnel vision.

Jason Gorman pointed out in a follow-up post, “Why Code Inspections Need To Be Egalitarian”, “You can’t force people, con people, bribe people or blackmail them into caring.” You can, however, help people to understand the reasons behind decisions and participate in the making of those decisions. Understanding and participation are more conducive to ownership and adoption than coercion. Promoting ownership and adoption of values vital to the mission is the essence of leadership.

A recent tweet from Thomas Cagley illustrates the need for reflective, purposeful leadership.

In my experience, the best leaders exercise their power lightly. It’s less a question of what they can decide and more a question of whether they should decide out of hand. When your philosophy is “I make the decisions”, you make yourself a hostage to presence. Anywhere you’re not, no decision will be made, regardless of how disastrous that lack of action may be. I learned from an old mentor that the mark of a true leader is that they can sleep when they go on vacation. They’re still responsible for what happens, but they’ve equipped their team to respond reasonably to issues rather than to mill about helplessly.

In his follow-up post, “Oh Foreman, Where art Thou?”, Uncle Bob moderated his position a bit, introducing the idea of assistants to help with the reviews and extending commit rights to those team members who had proved trustworthy. It’s a better position than the first post, but still a bit too controlling and self-certain. The goal should not be to grow a pack of followers who mimic the alpha wolf, but to grow predators who snap at your heels. This keeps them, and just as importantly, you, on the path of learning and growth.

Commitment

Napoleon nearly avoided his Waterloo.

Two days prior to the battle, Napoleon managed to divide the Prussian army from its Anglo-Dutch allies under Wellington. With the bulk of his forces, he attacked the Prussians while his subordinate, Marshal Michel Ney, screened Wellington’s forces dribbling in from Brussels. Napoleon’s main body had beaten the more numerous Prussians, and he called on Ney’s I Corps, nearly twenty thousand strong, to fall on the Prussian right and complete the victory. Ney, meanwhile, called for I Corps to continue on to him and sweep away Wellington’s forces.

The French I Corps marched back and forth between the two battles, taking part in neither. Had either allied army been decisively defeated, the outcome could have been much different. As it turned out, Wellington’s army was reinforced in the nick of time by the Prussians, and together they made Waterloo synonymous with downfall.

Uncertainty is an ever-present factor in the design and development of software. It is extremely important to recognize and acknowledge that uncertainty in order to mitigate it. Given that choices carry the risk of locking you into a particular approach, increasing flexibility by deferring commitment is a logical technique. One of the principles of Real Options is “Never commit early unless you know why”.

However, another principle of Real Options is “Options expire”. Decisions cannot and should not be deferred indefinitely. Failure to choose is a choice in itself and all the more dangerous if its nature isn’t recognized.

Architectural choices are architectural constraints. However, it should be recognized that constraints provide structure. Just as the walls that bound a building define it, so too design choices define systems. The key is that the definition of the system aligns with its goals and retains as much flexibility as possible.
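
A minimal sketch (the order-storage domain is hypothetical) of how this plays out in code: the boundary is the constraint the team commits to, while the technology behind it remains an option to be exercised once more is known.

```python
from typing import Optional, Protocol

class OrderStore(Protocol):
    """The committed constraint: how the system talks about persistence."""
    def save(self, order_id: str, payload: dict) -> None: ...
    def load(self, order_id: str) -> Optional[dict]: ...

class InMemoryOrderStore:
    """Stand-in implementation; a database-backed store can replace it
    later without touching the callers."""
    def __init__(self) -> None:
        self._orders: dict = {}
    def save(self, order_id: str, payload: dict) -> None:
        self._orders[order_id] = payload
    def load(self, order_id: str) -> Optional[dict]:
        return self._orders.get(order_id)

store: OrderStore = InMemoryOrderStore()
store.save("42", {"status": "new"})
print(store.load("42"))
```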

Commitment can bring with it its own set of anxieties. Second-guessing is human nature and circumstances can be remarkably fluid. Once committed, however, radical change should be approached more carefully. Ignoring the cost of changing directions can be as much a source of analysis paralysis as failing to decide in the first place. It’s better to get into the battle with your best effort than to march around in circles hoping for something a little better.

Who’s Your Predator?

Have you ever worked with someone with a talent for asking inconvenient questions? They’re the kind of person who asks “Why?” when you really don’t have a good reason or “How?” when you’re really not sure. Even worse, they’re always capable of finding those scenarios where your otherwise foolproof plan unravels.

Do you have someone like that?

If so, treasure them!

In a Twitter conversation with Charlie Alfred, he pointed out that you need a predator, “…an entity that seeks the weakest of the designs that evolve from a base to ensure survival of fittest”. You need this predator, because “…without a predator or three, there’s no limit to the number of unfit designs that can evolve”.

Preach it, brother. Sometimes the best friend you can have is the person who tells you what you don’t want to hear. It’s easier (and far cheaper) to deal with problems early rather than late.

I’ve posted previously regarding the benefits of designing collaboratively and the pitfalls of too much self-reliance, but it’s one of those topics that merits an occasional reminder. As Richard Feynman noted, “The first principle is that you must not fool yourself – and you are the easiest person to fool.”

Self-confidence is a natural and desirable trait. Obviously, if you’re not confident in your decision, you should probably defer committing to it. That confidence, however, can blind us to potential flaws. Just as innovators overestimate consumer interest in their product, we can place too much faith in our own decisions and beliefs. Our lack of emotional distance can make it easy to fool ourselves. In that case, having someone to challenge our assumptions can be invaluable.

Working collaboratively increases the odds that flaws will be found, but does not guarantee it. Encouraging questions, even challenges, is a good start – you don’t want to cause a failure because people were afraid to question the design. However, groups can be as subject to cognitive biases as individuals (for a great overview, see Thomas Cagley Jr.’s excellent series on the subject: July 8th, July 9th, July 10th, July 11th, and July 12th). No bad news is not necessarily good news.

Sometimes you have to be your own predator.

Getting ideas out of your head helps create the distance you need to evaluate them more objectively. Likewise, writing and/or diagramming forces a bit of rigor and organization, making it easier to spot gaps in the design. The more scrutiny a design can withstand, the more likely it is to survive in the wild.

When the wolf’s at the door, you’ll want to rely on something that’s been proven, not pampered.

“It Depends” and “I Don’t Know”

When Croesus of Lydia considered going to war against the Persian Empire, he sought advice from the best available source of his day – the Oracle at Delphi. He was told that attacking the Persians would mean the end of a mighty empire. Armed with this “knowledge”, he attacked and a mighty empire, his own, was destroyed.

While the Oracle’s advice was arguably accurate, it definitely wasn’t helpful. The ambiguous answer conveyed more certainty than was warranted. While Croesus was complicit in his downfall (what’s that saying about “assumptions”?), the Oracle must also accept some blame. Failing to convey the uncertainty was a betrayal.

Just like Croesus, contemporary decision makers crave certainty. Executives are frequently called upon to synthesize multiple viewpoints, many of which may be outside their area of expertise, into a coherent decision. An expert’s opinion of what’s “right” can be a seductive thing. Likewise, technologists are often uncomfortable with ambiguity, and rightly so. Implementing contradictory requirements is difficult, to say the least.

Uncertainty, however, is a fact of life. Pretending that it does not exist is neither honest nor effective. Picking a number without any basis in reality does not serve to eliminate it. In fact, the elimination of uncertainty is a fool’s errand. As Tom Graves stated in “Who will lead us out of our uncertainty”:

But that tag-line is kinda interesting – because the only valid answer is ‘No-one’.

Oh, no doubt there’d be plenty of people who would offer to lead us out of uncertainty. Yet the reality is that in every case they’ll either be a fool, a fraud, or both. The blunt fact is that uncertainty is a fact of life: there is no way to ‘lead us out of uncertainty’ – because ‘certainty’ is itself a delusion about a world that does not and cannot ever actually exist.

A tweet from Charlie Alfred provides the alternative.

Just as weather is composed of many simple physical processes whose interplay results in a chaotic whole, so too are systems (both human and machine). While we may have a firm understanding of the behavior of the various components, as our scope widens, our certainty must decrease. Tweaks to those low-level components must be tempered by the knowledge that the consequences may go beyond our intentions.

Under these circumstances, “It depends” or “I don’t know” becomes the honest answer. It is important to remember, however, that Ruth Malan’s definition of a good architect, one who can tell you what it depends on, applies.

Given the uncontrolled variables of network speed, client machine capabilities, site traffic, and network traffic (just to name a few), anyone who guarantees a page load time for an internet application would be, in Tom’s words, “a fool, a fraud, or both”. The genuine article would explain why a definite answer was not possible, what actions could be taken to test the capability in question, and what could be done to improve the chances of meeting the requirement when faced with various challenges.
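
One of those actions is measurement. A minimal sketch, with fabricated sample timings, of reporting load time as a distribution with its conditions attached, rather than as a single promised number:

```python
import statistics

# Fabricated page-load samples in milliseconds; note the two outliers.
samples_ms = [410, 380, 450, 2100, 420, 390, 460, 405, 395, 3200,
              415, 400, 440, 385, 425, 430, 390, 410, 470, 455]

median = statistics.median(samples_ms)
p95 = statistics.quantiles(samples_ms, n=20)[-1]  # ~95th percentile

print(f"median load: {median:.0f} ms, 95th percentile: {p95:.0f} ms")
# "Half of observed loads finished around 410 ms; 1 in 20 took far longer
# under these test conditions" survives contact with reality.
# "Always under 500 ms" does not.
```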

Human systems are just as chaotic and uncertain, subject to circumstances beyond an individual’s control. Akio Toyoda, President of Toyota, was recently quoted in The New York Times:

“Have we really turned into a company that will be profitable and continue to grow no matter what happens to its business environment?” he asked.

“I am not sure yet, is my honest answer. An unprecedented crisis even beyond the scale of the Lehman shock may happen again,” Mr. Toyoda added. “We’ll only know the answer when such events actually happen.”

It takes a certain amount of courage to say “I don’t know”. “It depends” is not always the answer that people want to hear. However, in the face of uncertainty, they are the right answers. Awareness of uncertainty arms you to deal with events as they arise. A false sense of certainty is comforting, right up until it’s shattered.

Fightin’ Words

Architect: Would it make sense to only host the UI on the SharePoint front end servers and put the other layers behind a service on another box?

Consultant: Naw, that would be stupid.

If you’ve read more than a couple of my posts, you may have noticed that I’m something of a fanatic about pragmatism. The blog was only a little more than two weeks old when “There is no right way (though there are plenty of wrong ones)” was published, and it’s been a recurring theme since. Whether designing a system or designing the process under which systems are developed, context is king and very little is truly absolute.

In the exchange above, the consultant in question didn’t need to ask about load on the front-end web servers, dependencies, etc. before deciding that the idea was “stupid”. Any contextual information that might require deviating from the script appeared to be unwelcome. As it turned out, that wasn’t the case. The consultant was actually a very talented and flexible individual who had a momentary lapse in people skills. That momentary lapse, however, could just as easily have poisoned the well and ended the collaboration before it started.

Most would understand that “stupid” is one of those words best avoided in all cases. However, words like “right”, “proper”, “good”, and “best” are frequently used as are “easy”, “quick”, and “cheap”. All of these are meaningless without context (and extremely subjective, even with it). Some can be positive or negative depending on the circumstances – is it shoddy “cheap” or inexpensive “cheap”? Ostensibly positive terms can seem like a weapon when you’re on the receiving end – if my way is “proper”, that would imply that your way (assuming it differs) is “improper”.

Just as lock-in creates a danger when designing, mental lock-in can lead to communication failures. Whether the conversation is actually shut down or merely perceived to be, the result will be the same. In my opinion, inadvertent offense is actually worse, in that you’ve offended the other party and aren’t even aware of it, which can lead to unexpected issues in the future.

Another danger inherent in these words is the mindset they can foster, both in ourselves and in others. We’re ill served when no one will challenge us, and worse off when we’re too certain of our own skill. A few weeks back, the saying “strong opinions, weakly held” made the rounds on Twitter. It defines wisdom as “the courage to act on your knowledge AND the humility to doubt what you know.” I’ll agree with Liz Keogh that it’s probably too much to ask of a human, but it’s a worthy goal nonetheless.

So, does this mean that I’m advocating verbal pacifism? Hardly; I enjoy a good mental wrestling match too much for that. I do, however, recommend knowing when to be diplomatic (with customers) and when to be nurturing (when mentoring). As in everything, striking a balance and molding to the situation is key.

To code or not to code, is really not the question

Whether or not application and solution architects should code is an old controversy that never seems to go away. Currently it’s a discussion topic on the “97 Things Every Software Architect Should Know” group on LinkedIn. Unfortunately, the question misses the point. Whether an architect codes or not on a given project is irrelevant. What is relevant is that the architect’s technical capability (knowledge and experience) should match the role(s) he or she is responsible for. In other words, not “does the architect code”, but “can the architect code”. Additionally, it is vitally important that architects not be over-committed, leaving them with a responsibility they lack the time to fulfill.

Simon Brown, on the blog “Coding the Architecture”, has observed that “It’s a role, not a rank”. This role-based view is key. Simply put, the architect role architects, the designer role designs, and the coder role codes. It is the role(s) that are significant, not the title. It is fairly common to find individuals filling multiple roles. When the demands of those roles exceed either the capability or the bandwidth of an individual, trouble ensues.

In terms of technical knowledge, there is both breadth and depth. The requirements for those dimensions vary based on the role. Different individuals have different capacities for attaining both breadth and depth of technical knowledge. However, the limiting factor will be capacity for work. Even if someone has the ability to attain the greatest possible depth of knowledge across the broadest selection of domains, there are only so many hours in the day.

If we factor in tasks per role, the intensity of those tasks, the number of roles, and the number of applications, the workload on a given individual becomes clear. If that workload exceeds capacity, then some tailoring must take place to bring it back in line. Task intensity is generally a poor candidate for this type of tailoring, as reducing focus will generally reduce quality. Likewise, paring tasks from a given role will likely result in diminished performance.

If an architect is charged with both architectural and detailed design across multiple applications simultaneously, chances are that architect will be over-committed. If this is resolved by rushing work and/or eliminating necessary tasks (communicating and collaborating with the development team; adapting to changed requirements, failed assumptions, and better information; etc.), then the result is the “pigeon architect” who swoops in, deposits a mess, and flies away, leaving the development team to clean up. A far better option is to reduce the number of roles and/or the number of applications that the architect is responsible for. Absent that, extending project timelines to reflect the architect’s capacity will also work.
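
The arithmetic is simple enough to sketch. The hours below are invented for illustration (the role and application counts happen to mirror the ones I describe next), but the multiplication is the point:

```python
# roles: (applications covered, assumed hours per application per week)
roles = {
    "solution architecture": (4, 6),
    "application architecture": (3, 8),
    "detailed design/development": (1, 20),
}

workload = sum(apps * hours for apps, hours in roles.values())  # 68 hours
capacity = 40  # hours per week

print(f"committed: {workload}h/week against capacity: {capacity}h/week")
if workload > capacity:
    print("over-committed: shed roles or applications, or extend timelines")
```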

Over the last sixteen years I have held roles ranging from developer for a single application, to designer/developer for a single application, to application architect/designer/developer for a family of two applications, to solution architect for over a dozen applications. My current roles are solution architect for four applications, application architect for three, and designer/developer for one. Stepping away from the coding role for a number of years posed no problems; I was able to resume it easily, since I had stayed up to date by coding proofs of concept. I quickly learned, however, that unless I had the bandwidth to fulfill a particular role (such as detailed design) completely, it was better to delegate that role to others than to try to get by performing it in part. I have no doubt that there are plenty of ivory tower architects, but I suspect that there are even more over-committed ones.