In Robert “Uncle Bob” Martin’s “Where is the Foreman?”, he advocated for a “foreman” with exclusive commit rights who would review every potential commit before it made its way into the repository, in the interest of ensuring quality. While I sympathize with some of his points, the idea ultimately breaks down for a number of reasons, most particularly by introducing a bottleneck. A single person can only keep up with so many team members, and if a sudden bout of the flu can bring your operation to a standstill, there’s a huge problem.
Unlike Jason Gorman, I believe that egalitarian development teams are not the answer. When everyone is responsible for something, it’s a cliché that nobody takes responsibility for it (the phenomenon has even been given its own name). However, being responsible for something does not mean dictating. Dictators eventually tend to fall prey to tunnel vision.
Jason Gorman pointed out in a follow-up post, “Why Code Inspections Need To Be Egalitarian”, “You can’t force people, con people, bribe people or blackmail them into caring.” You can, however, help people to understand the reasons behind decisions and participate in the making of those decisions. Understanding and participation are more conducive to ownership and adoption than coercion. Promoting ownership and adoption of values vital to the mission is the essence of leadership.
A recent Tweet from Thomas Cagley illustrates the need for reflective, purposeful leadership:
Is the leadership style you employ a conscious choice? It should be.
— Thomas Cagley (@TCagley) March 5, 2014
In my experience, the best leaders exercise their power lightly. It’s less a question of what they can decide and more a question of whether they should decide out of hand. When your philosophy is “I make the decisions”, you make yourself a hostage to presence. Anywhere you’re not, no decision will be made, regardless of how disastrous that lack of action may be. I learned from an old mentor that the mark of a true leader is that they can sleep when they go on vacation. They’re still responsible for what happens, but they’ve equipped their team to respond reasonably to issues rather than to mill about helplessly.
In his follow-up post, “Oh Foreman, Where art Thou?”, Uncle Bob moderated his position a bit, introducing the idea of assistants to help in the reviews and extending commit rights to those team members who had proved trustworthy. It’s a better position than the first post, but still a bit too controlling and self-certain. The goal should not be to grow a pack of followers who mimic the alpha wolf, but to grow the predators who snap at your heels. This keeps them, and just as importantly, you, on the path of learning and growth.
A series of 8 tweets by Dan Creswell paints a familiar, if depressing, picture of the state of software development:
(1) Developers growing up with modern machinery have no sense of constrained resource.
(2) Thus these developers have not developed the mental tools for coping with problems that require a level of computational efficiency.
(3) In fact they have no sensitivity to the need for efficiency in various situations. E.g. network services, mobile, variable rates of change.
(4) Which in turn means they are prone to delivering systems inadequate for those situations.
(5) In a world that is increasingly networked & demanding of efficiency at scale, we would expect to see substantial polarisation.
(6) The small number of successful products and services built by a few and many poor attempts by the masses.
(7) Expect commodity dev teams to repeatedly fail to meet these challenges and many wasted dollars.
(8) Expect smart startups to limit themselves to hiring a few good techies that will out-deliver the big orgs and define the future.
With almost 15 years since the fallacies were drafted and more than 40 years since we started building distributed systems, the characteristics and underlying problems of distributed systems remain pretty much the same. What is more alarming is that architects, designers, and developers are still tempted to wave some of these problems off, thinking technology solves everything.
Is it really this hard to get it right?
More importantly, how do we change this?
In order to determine a solution, we first have to understand the nature of the problem. Dan’s tweets point to the machines developers are used to, although in fairness, those of us who lived through the bad old days of personal computing can attest that developers were getting it wrong back then too. In “Most software developers are not architects”, Simon Brown points out that too many teams are ignorant of, or downright hostile to, the need for architectural design. Uncle Bob Martin, in “Where is the Foreman?”, suggests that the lack of a gatekeeper to enforce standards and quality is why “our floors squeak”. Are we over-emphasizing education and underestimating training? Have increasing complexity, and the layers of abstraction used to manage it, left us with too generalized a knowledge base relative to our needs?
Like any wicked problem, I suspect that the answer to “why?” lies not in any one aspect but in the combination. Likewise, no one aspect is likely, in my opinion, to hold the answer in any given case, much less all cases.
People can be spoiled by the latest and greatest equipment, as well as by the optimal performance that comes from working and testing on the local network. However, reproducing real-world conditions is more complicated than giving someone an older machine. You can simulate load and traffic on your site, but understanding and accounting for competing traffic on the local network and the internet is more difficult. We cannot say “application x will handle y number of users”, only that it will handle that number of users under the exact conditions and environment we have simulated – a subtle, but critical, difference.
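To make the point concrete, here is a minimal sketch in Python. The traffic model is entirely assumed (a fixed service time plus an exponentially distributed delay standing in for network contention); the point is that the load test’s answer is only as good as that model:

```python
import random
import statistics

def simulated_request(base_ms=50):
    """One request: a fixed service time plus a random delay that stands in
    for competing traffic on the local network and the internet."""
    congestion_ms = random.expovariate(1 / 30)  # assumed: mean 30 ms of contention
    return base_ms + congestion_ms

def run_load_test(n_requests=10_000):
    timings = [simulated_request() for _ in range(n_requests)]
    return statistics.mean(timings), max(timings)

mean_ms, worst_ms = run_load_test()
print(f"mean {mean_ms:.1f} ms, worst {worst_ms:.1f} ms")
# These numbers hold only for this exact traffic model; change the
# congestion distribution and the apparent "capacity" changes with it.
```

Swapping in a different congestion distribution (or adding bursts) can move the tail latency dramatically, which is the whole difficulty of claiming “application x will handle y users”.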
Obviously, I’m partial to Simon Brown’s viewpoint. The idea of a coherent, performant design just “emerging” from doing the simplest thing that could possibly work is ludicrous. The analogy would be walking into an auto parts store, buying components individually, and expecting them to “just work” – you have to have some sort of idea of the end product in mind. On the other hand, attempting to specify too much up front is as bad as too little – the knowledge needed is not there and even if it were, a single designer doesn’t scale when dealing with any system that has a team of any real size.
Uncle Bob’s idea of a “foreman” could work under some circumstances. Like Big Design Up Front, however, it doesn’t scale. Collaboration is as important to the team leader as it is to the architect. The consequences of an all-knowing, all-powerful personality can be just as dire in this role as for an architect.
In “Hordes of Novices”, Bob Martin observed “When it’s possible to get a degree in computer science without writing any code, the quality of the graduates is questionable at best”. The problem here is that universities are geared to educate, not train. Just because training is more useful to an employer (at least in the short term), does not make education unimportant. Training deals with this tool at this time while how to determine which tool is right for a given situation is more in the province of education. It’s the difference between how to do versus how to figure out how to do. Both are necessary.
As I’ve already noted, it’s a thorny issue. Rather than offering an answer, I’d rather offer the opportunity for others to add to the conversation in the comments below. How did we get here and how do we go forward?
What is the best architectural style/process/language/platform/framework/etc.?
A question posed that way can ignite a war as easily as Helen of Troy. The problem is, however, that it’s impossible to answer in that form. It’s a bit like asking which fastener (nail, screw, bolt, glue) is best. Without knowing the context to which it will be applied, we cannot possibly form a rational answer. “Best” without context is nonsense; like “right”, it’s a word that triggers much heat, but very little light.
People tend to like rules as there is a level of comfort in the certainty associated with them. The problem is that this certainty can be both deceptive and dangerous. Rules, patterns, and practices have underlying principles and contexts which give them value (or not). Understanding these is key to effective application. Without this understanding, usage becomes an act of faith rather than a rational choice.
Best practices and design patterns are two examples of useful techniques that have come to be regarded as silver bullets by some. Design patterns are useful for categorizing and communicating elements of design. Employing design patterns, however, is no guarantee of effective design. Likewise, understanding the principles that lie beneath a given practice is key to successfully applying that practice in another situation. Context is king.
Prior to applying a technique, it’s useful to ask why? Why this technique? Why do we think it will be effective? Rather than suggest a hard and fast number (say…5 maybe?), I’d recommend asking until you’re comfortable that the decision is based on reason rather than hope or tradition. Designing the architecture of systems requires evaluation and deliberation. Leave the following of recipes to the cooks.
(originally posted on CitizenTekk)
According to Dwight D. Eisenhower, “…plans are useless but planning is indispensable”. How can the production of something “useless” be “indispensable”?
The answer can be found on a banner recently immortalized on Bulldozer00’s blog: “React Less…PLAN MORE!”. Unpacking this is simple – the essence of planning is to decide on responses to events that have yet to occur, without the stress of a time crunch. Gaining time to analyze a response and reducing the emotional aspects should lead to better decisions than ones made on the fly and under pressure. The problem we run into, however, is that reality fails to coincide with our plans for very long.
As Colin Powell observed, “No battle plan survives contact with the enemy”. Detailed, long-term plans can quickly become swamped by complexity as the tree of options branches out. Making assumptions about expected outcomes can prune the number of branches, but each assumption becomes a risk that an unexpected event invalidates the plan. The key is to find a middle ground between operating completely ad hoc on the one hand and having to be Nostradamus on the other.
Planning at the proper scope is one tool to help avoid problems. As noted above, plans with deep detail and long durations are brittle due to complexity and/or the difficulty in making predictions. Like any other view, plans should be more detailed in the foreground and fuzzier in the distance. Much more than a general path to your desired destination will likely turn out to be wasted effort. Only that planning that promotes success is needed. There’s no magic inherent in planning that justifies a belief in “more equals better”. Fitness for purpose should be the metric rather than pure quantity of detail.
Another benefit to avoiding useless detail is that it makes it easier to abandon a plan when it no longer makes sense. Humans tend to value that which they’ve invested time in. In execution, commitment is a virtue right up until the point it ceases to be. Hanging on to a plan past that point can be expensive. Having the flexibility to pivot to a new plan can make the difference between success and failure.
If your job is developing custom software, whether as part of an in-house IT group or as a contractor, what are you selling?
According to Rob Vens’ post, “Software is not a product”:
The product of ICT is the process. Software systems are by-products. By focussing on improving the quality of processes, the quality of the by-products will improve more effectively.
While I agree that improving the process will contribute to customer satisfaction, I have to agree more with what Oliver Baier observed in “It’s the process, not the product”:
In my experience, clients still want to buy the results of the software process (e.g. an evolving web shop) rather than the collaborative design process yielding this result. This is despite the fact that software development processes and methods can be the subject of great debate at all phases of the sales and delivery process.
I would, in fact, go further: your customers don’t want software, they want a need fulfilled. Software is merely a means to that end. They don’t want a web site (though that may be what they ask for), they want sales, exposure, etc.
We shouldn’t get hung up on the issue of product versus service. We should realize that, ultimately, a product is a service. As Tom Graves noted in “Product and service”:
In essence, ‘product’ and ‘service’ are different views into the same entity: the creation and delivery (potential and/or actual) of value, usually associated with some form of asset – in turn typically as associated with some notion of ‘value-proposition‘.
Whether I’m selling new shoes (product) or repairing your old ones (service), the desired end result is serviceable footwear. Circumstances and desires may make the customer prefer one path over another, but the destination is functionally identical.
This is not to say that the manner in which the product/service is provided is unimportant. Quite the contrary, the better the provider is at working with the customers, the more likely the product will satisfy their needs. What it does say is that, first and foremost, the need must be satisfied. When the customer doesn’t get their expected value, then the process by which the failure is provided is irrelevant.
I admit it, I’m a pragmatist.
Less than two weeks after starting this blog, I posted “There is no right way (though there are plenty of wrong ones)”, proclaiming my adherence to the belief that absolutes rarely stand the test of time. In design as well as development process, context is king.
Some, however, tend to take a more black and white approach to things. I recently saw an assertion that “Quality is not negotiable” and that “Only Technical Debt enthusiasts believe that”. By that logic, all but a tiny portion of working software professionals must be “Technical Debt enthusiasts”, because if you’re not the one paying for the work, then the decision about what’s negotiable is out of your hands. Likewise, there’s a difference between being an “enthusiast” and recognizing that trade-offs are sometimes required.
Seventeen years ago, Fast Company published “They Write the Right Stuff”, showcasing the quality efforts of the team working on the code that controlled the space shuttle. Their results were impressive:
Consider these stats: the last three versions of the program — each 420,000 lines long — had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
Impressive results are certainly in order given the criticality of the software:
The group writes software this good because that’s how good it has to be. Every time it fires up the shuttle, their software is controlling a $4 billion piece of equipment, the lives of a half-dozen astronauts, and the dreams of the nation. Even the smallest error in space can have enormous consequences: the orbiting space shuttle travels at 17,500 miles per hour; a bug that causes a timing problem of just two-thirds of a second puts the space shuttle three miles off course.
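The arithmetic behind that last claim is easy to verify with a back-of-the-envelope calculation:

```python
# Sanity check of the quoted figure: at orbital speed, how far off course
# does a two-thirds-of-a-second timing error put the shuttle?
speed_mph = 17_500
error_seconds = 2 / 3

miles_per_second = speed_mph / 3600            # ~4.86 miles every second
drift_miles = miles_per_second * error_seconds

print(f"{drift_miles:.2f} miles off course")   # ~3.24 miles
```

At nearly five miles per second, even sub-second timing errors translate into miles of drift, which is exactly why this software had to be that good.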
It should be noted, however, that while the bug rate is infinitesimally small, it’s still greater than zero. With a defined hardware environment, highly trained users, and a process that consumed a budget of $35 million annually, perfection was still out of reach. Reality often tramples over ideals, particularly considering that technical debt can arise from changing business needs and changing technical environments as much as from sloppy practice. Recognizing that circumstances may make incurring it the better choice, and managing it accordingly, is more realistic than taking a dogmatic approach.
For most products, it’s common to find multiple varieties with different features and different levels of quality, with the choice left to the consumer as to which best suits his/her needs. It’s rare, and rightly so, for that value judgment to be taken out of the consumer’s hands. Taking the position that “quality is not negotiable” (with the implicit assertion that you are the authority on what constitutes quality) places you in exactly that position: dictating to your customer what is in their best interests. If you were the customer in that circumstance, what would your reaction be?
Long ago, in a land far away, a newly minted developer received a lesson in the danger of untested assumptions. There was an export job that extracted and transmitted to the state data about county jail inmates each month for reimbursement. Having developed this job, our hero was called upon to diagnose and correct an error that the state’s IT staff reported: the county had claimed reimbursement for an inmate held in a nearby city. This was proven by the fact that the Social Security Number submitted for the county’s inmate was identical to that submitted for the city’s inmate.
After much investigation, the plucky newbie determined that the identity of the county inmate was correct (fingerprints, and all that). The person submitted by the city was actually a former roommate of the person in question, who had been admitted to the city jail under the borrowed identity of his old pal. It was truly shocking to realize that someone with a lengthy criminal record would stoop to using another’s identity (although he did deserve kudos for being a pioneer – this was the mid-’90s, when identity theft wasn’t yet in vogue).
What was even more shocking was the fix to be used: the county should re-submit the record with a “999-99-9999” value and the state would generate a fake SSN to be used for the remainder of the inmate’s incarceration. Since the city’s submission was first in, it would have to be considered “correct”.
The truly wonderful thing about that story is that it illustrates so many potential issues that can result from an assumption being allowed to roam unchallenged. Needless error conditions were created that delayed the business process of getting reimbursed. The validity of data in the system was compromised: across multiple incarcerations the same inmate could have multiple identities and multiple inmates could share the same identity over the life of the system.
Just as you can have technical debt, you can have cognitive debt from failing to adequately think through your design. One bad assumption can snowball into others (e.g., the “first in equals correct” rule could only be a poor reaction to the belated realization that the identity logic was unable to guarantee uniqueness). Just as collaboration can help avoid design issues, so too can adequate analysis of the problem space. Making a quick and dirty assumption and running with it leaves you at risk of wasting a lot of time and money.