Software development in general and IT in particular seem to have a love-hate relationship with our customers – as in, we really love to hate on our customers. We have Stupid User Tricks, ID10T issues, PEBCAK, and of course, Clients From Hell. Every once in a while, even Dilbert takes a break from bashing managers to take a swing or two at customers.
There’s even some evidence that the feelings are mutual.
How is it that we’ve managed to come to the point where distrust, even open hostility, is the norm? How is it that this situation is allowed to continue? What if we don’t change the dynamic?
To get an idea of how we got here, imagine the following scenario:
- There’s a restaurant where you’re required to eat.
- You don’t get to decide when you can eat; you have to ask (and ask) and eventually you’re allowed to sit at the table without any idea of when you’ll get another chance.
- You don’t pay for what you eat, but you will have to justify each menu item you order.
- The kitchen staff will be required to say exactly how long it will take to prepare the order, even if the item is not on the menu and no one has ever made it before.
- The waiter, the chef, and the maitre d’ may not understand or be able to prepare your order, so they reserve the right to alter it without any notice – you’ll find out when it arrives.
- Waiters interacting with diners after the initial order is considered poor practice; kitchen staff doing so is completely out of the question.
- If the order doesn’t meet your approval, you can send it back to be fixed as many times as you like.
Under those circumstances, one might expect the restaurant patrons to be a tad distrustful of the staff, who will probably respond in kind. The experienced patrons will have learned to order as much as possible, regardless of need, subject only to the constraint of getting approval. They will have learned to be vague enough to allow them to keep sending dishes back for fixes that are enhancements in disguise. The bolder patrons will either learn to cook for themselves or find another restaurant, perhaps both.
Is this starting to sound familiar?
The second question, how is it that this has been allowed to continue, is something of a mystery. While there has been a significant and growing incidence of shadow IT, things still haven’t broken out into open rebellion. How much of this is inertia and how much of this is the current economy holding back expenditures? More ominously, how close to the edge are we?
This brings us to the third question, the answer to which should be obvious. Trying to maintain the status quo will not work. In fact, doing so will be more likely to hasten the demise of IT as it becomes more of a commodity. Without major changes, IT risks becoming irrelevant and marginalized. Rather than worrying about blame (it should be obvious that this is a systemic problem rather than one or two bad actors), both business and IT need to find a way forward that maximizes value and minimizes friction. The risk to those organizations that cannot make this transition increases with each passing day.
The only true wisdom is in knowing you know nothing.
What types of software products have you worked on: desktop applications, traditional web, single-page applications, embedded, mobile, mainframe?
How about organizations: private for-profit, government, non-profit?
How about domains: finance, retail, defense, health care, entertainment, banking, law enforcement, intelligence, real estate, etc. etc. etc.?
Given that the realm of “software development” is currently huge (and probably expanding as you read this), how logical is it that someone (or even a group) could regulate what is acceptable process and practice? I won’t say that it would be impossible to come up with one unified set of regulations that would fit all circumstances, but I’m very comfortable estimating the likelihood as a minute fraction of a percent. If the entire realm were broken down into smaller groupings, the chance might increase, but the resulting glut of regulations would become an administrative nightmare and still wouldn’t address those circumstances that aren’t in the list above but are on the horizon.
Nonetheless, people continue to float the idea of regulation.
Last fall, Bob Martin floated the idea of government regulation as a reaction to the healthcare.gov fiasco. That would be the same government whose contracting regulations contributed to the fiasco in the first place, correct? The same government that has legally mandated Agile for Department of Defense contracts? Legally mandated agility just sounds a bit suspicious. As Jeff Sutherland noted, “Many in Washington are still trying to figure out what exactly that means but it is a start”. A start, for sure, but the start of what?
Ken Schwaber’s blog post “Can Software Developers Meet the Need?” takes a different approach. Schwaber proposes that:
A software profession governing body is needed. We need to formalize and regulate the skills, techniques, and practices needed to build different types of software capabilities. On one side, there is the danger of squeezing the creativity out of software development by unknowledgeable bureaucrats. On the other side is the danger of the increasingly vital software our society relies on failing critically.
We can either create such a governance capability, or the governments will legislate it after a particularly disastrous failure.
Call me a cynic, but I’m betting that the amount of bureaucratic squeezing that would result from this would far outweigh any gain in quality.
Most of the organization types listed above are already on the hook for harm caused by their IT operations; just ask Target and Knight Capital (don’t ask the Centers for Medicare & Medicaid Services). Is it more likely that a committee, whether private or public, can better manage the quality of software across all the various categories listed above? Could they be more likely to keep up with change in the industry? Color me doubtful.
In a previous post, I used the Eisenhower quote “…plans are useless but planning is indispensable”. The Agile Manifesto expresses a preference for “Responding to change over following a plan”. A tweet I saw recently illustrates both of those points and touches on why so many seem to have problems with estimates:
“ETA for an apple pie?”
“Where is it?”
“You didn’t tell me the dishes were dirty and you lacked an oven.”
At first glance, it’s the age-old story of being given inadequate requirements and then being held to an estimate long after it’s proven unreasonable. However, it should also be clear that the estimate was given without adequate initial planning and without a “plan B”, and that when the issues were discovered, there was no communication of the need to revise the estimate by an additional 300%.
Before the torches and pitchforks come out, I’m not assigning blame. There are no villains in the scenario, just two victims. While I’ve seen my share of dysfunctional situations where the mutual distrust between IT and the business was the result of bad actors, I’ve also seen plenty that were the result of good people trapped inside bad processes. If the situation can be salvaged, communication and collaboration are going to be critical to doing so.
People deal with uncertainty every day. Construction projects face delays due to weather. Watch any home improvement show and chances are you’ll see a renovation project that has to change scope or cost due to an unforeseen situation. Even surgeons find themselves changing course due to circumstances they weren’t aware of until the patient was on the table. What the parties need to be aware of is that the critical matter is not whether or not an issue appears, but how it’s handled.
The first aspect of handling issues is not to stick to a plan that is past its “sell by” date. A plan is only valid within its context and when the context changes, sticking to the plan is delusional. If your GPS tells you to go straight and your eyes tell you the bridge is out, which should you believe?
Sometimes the expiration of a plan is strategic; the goal is not feasible and continuing will only waste time, money, and effort. Other times, the goal remains, but the original tactical approach is no longer valid. There are multiple methods appropriate to tactical decision-making. Two prominent ones are Deming’s Plan-Do-Check-Act and Boyd’s Observe-Orient-Decide-Act. Each has its place, but they share a looping nature. Static plans work for neither business leaders nor fighter pilots.
The second aspect of handling issues is communication. It can be easy for IT to lose sight of the fact that the plan they’re executing is a facet of the overarching plan that their customer is executing. Whether in-house IT or contractor, the relationship with the business is a symbiotic one. In my experience, success follows those who recognize that and breakdowns occur when it is ignored. Constant communication and involvement with that customer avoids the trust-killing green-green-green-RED!!! project management theater.
In his post “Setting Expectations”, George Dinwiddie nailed the whole issue with plans and estimates:
What if we were able to set expectations beyond a simple number? What if we could say what we know and what we don’t know? What if we could give our best estimate now, and give a better one next week when we know more? Would that help?
The thing is, these questions are not about the estimates. These questions are about the relationship between the person estimating and the person using the estimate. How can we improve that relationship?
What does slow code look like?
“Jeez, this code sure looks slow” is hardly helpful and just not quantitative enough for effective decision-making.
Tony’s tweet reminded me of a time when I had to explain to a coder why the data access classes of a particular performance-sensitive application used a DataReader to fill POCO data transfer objects (DTOs). After all, we could have just used one line of code to fill a DataSet; that would be much faster. Patient soul that I am (or pedantic, depending on who you ask), I took the time to demonstrate how one line of code that we write may involve many lines of code within the library we’re calling. In fact, filling a DataSet involves using a DataReader, thus filling DTOs from a DataSet involves iterating the results of a query twice. The size difference between the DTOs and the DataSet when serialized was a bonus lesson.
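The double-iteration lesson is easy to demonstrate in any language. Below is a minimal Python sketch – the `stream` generator and the row data are invented for illustration, not ADO.NET internals – that counts how many times rows are materialized in each approach:

```python
def make_counter():
    """Returns a row-counting iterator factory plus its counter."""
    count = {"visits": 0}

    def stream(rows):
        # Simulates iterating a result set; counts every row visited.
        for row in rows:
            count["visits"] += 1
            yield row

    return stream, count

rows = [(1, "a"), (2, "b"), (3, "c")]

# One pass: fill the DTOs directly while streaming from the "reader".
stream, count = make_counter()
dtos = [{"id": i, "name": n} for i, n in stream(rows)]
print(count["visits"])  # 3 -- each row touched once

# Two passes: fill an intermediate "DataSet" first (which itself
# iterates the reader), then iterate it again to build the DTOs.
stream, count = make_counter()
dataset = [row for row in stream(rows)]                     # pass 1
dtos = [{"id": i, "name": n} for i, n in stream(dataset)]   # pass 2
print(count["visits"])  # 6 -- every row touched twice
```

The convenient one-liner hides the second pass inside the library; the counter just makes the hidden work visible.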
Some performance issues, notably those involving redundant work, might be detected by inspection, assuming that the redundant work is visible. In the example above, it wasn’t. Many performance issues will only become visible via profiling. More importantly, without profiling data, the relative significance of the issue can’t be determined. Saving a few microseconds in a particular section of code isn’t going to be much help if several seconds are being lost to network or database issues. This type of ad hoc response is symptomatic of more than one performance analysis anti-pattern. Performance profiling and tuning requires a holistic approach to be effective.
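To make that concrete, here is a small Python sketch using the standard library profiler (the functions `looks_slow` and `actually_slow` are invented for the example). On inspection, the busy loop looks like the culprit, but the profile shows the wall-clock time going to the quiet wait:

```python
import cProfile
import pstats
import time

def looks_slow():
    # A busy loop that "looks" expensive on inspection.
    return sum(i * i for i in range(10_000))

def actually_slow():
    # A quiet call that waits on (simulated) network or database I/O.
    time.sleep(0.2)
    return 42

def handle_request():
    for _ in range(10):
        looks_slow()
    actually_slow()

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# pstats keys are (filename, lineno, function name); index 3 of each
# value tuple is cumulative time. Find the costliest "slow" function.
stats = pstats.Stats(profiler)
slowest = max(
    (key for key in stats.stats if "slow" in key[2]),
    key=lambda key: stats.stats[key][3],
)
print(slowest[2])  # actually_slow
```

The microseconds saved by hand-tuning `looks_slow` would be noise next to the wait inside `actually_slow` – which is exactly the kind of relative significance only profiling data can establish.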
It’s not just about better performance, it’s about better performance in the areas that make the most difference.
Leadership and management are currently hot topics, with the #NoManager movement among the hottest of the hot. My detailed opinion on flat organizations/holacracy is a post for another day, but one aspect that I fully agree with is the differentiation between leadership and management. They can and should coincide, but they don’t always. Most importantly, the number of leaders should exceed the number of managers. To re-state a point I made in “Lord of the Repository”, the best managers develop their team members so that the team is never without leadership, even when the manager is away.
Tony DaSilva’s recent post on the subject, “In Defense of Hierarchy”, spoke to some benefits that derive from hierarchies. One of the benefits identified was “orderly execution of operations”, supported by the following quote:
Imagine if students argued with their teachers, workers challenged their bosses, and drivers ignored traffic cops anytime they asked them to do something they didn’t like. The world would descend into chaos in about five minutes. – Duncan J. Watts
For each of Watts’ examples, his point is, in order, wrong, possibly wrong, and correct. I have experience that speaks to all three.
My career in software development is my second career; my first was in law enforcement, serving as a Deputy in a county Sheriff’s Office. One of the positions I held during my tenure there was Assistant Director of the Training Academy. My role was split between administration, instruction, and supervision of students (limited strictly to their time in training; I didn’t hold a supervisory rank that would apply beyond that). My position was then, and has always been, that any trainer or teacher who cannot tolerate respectful, appropriate challenge is unworthy of their position. A student probing and testing the information being presented was, in my opinion, something to be celebrated, not discouraged.
An alert went out on the radio one afternoon that there was a fire in one of the housing units of the jail. After a quick run to the location, I found that the supervisors had succeeded in getting the fire knocked down, removing the person who had started it and detailed staff to evacuate the other inmates to a smoke-free secure area. However, the remaining staff were milling about without protective gear or spare fire extinguishers should the embers flare back up. While waiting for someone with authority, I took it upon myself to direct individuals to get the equipment that was needed. Once someone arrived and assumed control, I then headed back to normal duties.
While talking about the incident later on with a co-worker, they happened to mention that they were really shocked when I ordered the Major to go retrieve an air pack and he did so (n.b. the Major in question was the third ranking person in the department and five levels higher than me in the hierarchy). Needless to say, I was just as shocked. I hadn’t been paying attention to much beyond getting the situation safely under control and the Major hadn’t objected, so I didn’t notice the real-life inversion of control, though my colleague certainly did.
That incident illustrates several things about leadership. First is the point I mentioned above, that leadership and management/authority are separate things. I had no official authority, but exercised leadership until someone with authority was in a position to take over. Second is that my unofficial authority rested on the trust and acquiescence of those executing my orders. I would argue that, far from undermining their trust, my openness to challenge in non-emergency situations made them more likely to follow me in the emergency.
So, to return to Watts’ examples – teachers should be challenged (appropriately), cops should be obeyed (until the emergency is in hand), and both you and the boss should be able to flex based on whether the current situation requires a teacher or a cop.
Participative leadership is more likely to engender trust and buy-in. Smart leaders (be they managers, architects, team leads, etc.) aren’t looking for passive followers; they know it could cost them. As Tom Cagley observed in his post “It Takes A Team”:
While a product owner prioritizes and a scrum master facilitates, it takes a whole team to deliver. The whole team is responsible for getting the job done which means that at different times in different situations different members will need to provide leadership. Every team member brings their senses to the project-party, which makes all of them responsible looking for trouble and then helping to resolve it even if there isn’t a scrum master around.
Some responses to my post “Why does software development have to be so hard?” illustrated one major (in my opinion) aspect of the problem – for many people, software development is synonymous with coding. It’s certainly understandable that someone might jump to that conclusion. After all, no matter how many slides, documents, diagrams, etc. someone produces, it is code that makes those ideas real.
Code, however, is not enough.
Over the last seventeen-plus years that I’ve been involved in software development, great strides have been made in languages and platforms. Merely look at the plumbing code needed to write a Hello World for Windows in C should you need convincing. Frameworks for application infrastructure, unit testing and acceptance testing are plentiful. Coding and coding cleanly is far, far easier and yet, people still complain about software.
While poor quality code can sink a product, excellent quality code cannot make a product. No matter how right you build a thing, the customer won’t be happy if it’s the wrong thing. The Hagia Sophia, Taj Mahal, Empire State Building, and many others are all breathtakingly magnificent structures that would utterly fail a customer who wanted (not to mention, budgeted for) a garage. We still fail to adequately understand the needs of our customers and the environments they work within. This is an area that desperately needs improvement. This is not a technical issue, but one of communication, collaboration, and organization. Neither customer nor provider can impose this improvement unilaterally.
Understanding the architecture of the problem is critical to designing and evolving the architecture of the solution, which is yet another area of need. Big Design Up Front (BDUF) assumes too much certainty and never (at least in my experience) survives contact with reality. No Design Up Front (NDUF), however, swings too far in the opposite direction and is unlikely to yield a cohesive design without far too much re-work. Striking a balance between the two is, in my opinion, key to producing an architecture that satisfies the functional and quality of service requirements of today while retaining sufficient flexibility for tomorrow.
Quality code implementing an architectural design grounded in a solid understanding of the customer’s problem space is, in my opinion, the essence of software development. Anything less than those three elements misses the mark.
In Robert “Uncle Bob” Martin’s “Where is the Foreman”, he advocated for a “foreman” with exclusive commit rights who would review each and every potential commit before it made its way into the repository in the interest of ensuring quality. While I am in sympathy with some of his points, ultimately the idea breaks down for a number of reasons, most particularly in terms of introducing a bottleneck. A single person will only be able to keep up with so many team members and if a sudden bout of the flu can bring your operation to a standstill, there’s a huge problem.
Unlike Jason Gorman, I believe that egalitarian development teams are not the answer. When everyone is responsible for something, it is a cliché that nobody takes responsibility for it (they’ve even given the phenomenon its own name). However, being responsible for something does not mean dictating. Dictators eventually tend to fall prey to tunnel vision.
Jason Gorman pointed out in a follow-up post, “Why Code Inspections Need To Be Egalitarian”, “You can’t force people, con people, bribe people or blackmail them into caring.” You can, however, help people to understand the reasons behind decisions and participate in the making of those decisions. Understanding and participation are more conducive to ownership and adoption than coercion. Promoting ownership and adoption of values vital to the mission is the essence of leadership.
A recent Tweet from Thomas Cagley illustrates the need for reflective, purposeful leadership:
Is the leadership style you employ a conscious choice? It should be.
— Thomas Cagley (@TCagley) March 5, 2014
In my experience, the best leaders exercise their power lightly. It’s less a question of what they can decide and more a question of should they decide out of hand. When your philosophy is “I make the decisions”, you make yourself a hostage to presence. Anywhere you’re not, no decision will be made, regardless of how disastrous that lack of action may be. I learned from an old mentor that the mark of a true leader is that they can sleep when they go on vacation. They’re still responsible for what happens, but they’ve equipped their team to respond reasonably to issues rather than to mill about helplessly.
In his follow-up post, “Oh Foreman, Where art Thou?”, Uncle Bob moderated his position a bit, introducing the idea of assistants to help in the reviews and extension of commit rights to those team members who had proved trustworthy. It’s a better position than the first post, but still a bit too controlling and self-certain. The goal should not be to grow a pack of followers who mimic the alpha wolf, but to grow the predators who snap at your heels. This keeps them, and just as important, you, on the path of learning and growth.