“Architecture vs Design” on Iasa Global Blog

I’m back to contributing to the Iasa Global blog – my debut is a re-post of “Architecture vs Design”.


Fixing IT – Taming the (Customer) Beast

[Image: Detente]

In my previous post, I outlined the source of the dysfunctional nature of the relationship many IT departments have with their customers. This post will be about fixing that relationship. Having taken part in just such a venture, I can state that it is possible.

When I first became involved with corporate IT, I worked for a group that was developing a core line-of-business application for the company. Our process worked (well, “worked”) like this:

  1. The users would produce voluminous “specs” minutely detailing the new features they wanted in terms of behavior and presentation. These were delivered to product managers.
  2. The product managers would take the users’ “specs” and file them away for reference in their trash cans. The product managers would then generate their own “specs”, minutely detailing the behavior and presentation of the features in accordance with the way they knew it really should be.
  3. The new “specs” were presented to the remainder of the team for estimation, and the final project plan was created.
  4. Product managers, developers, testers, technical writers, trainers, etc. would all gather with their management and everyone would sign a “contract” to deliver what was specified by the due date.
  5. Developers would then proceed to implement the features as best they could, substituting their own ideas for those parts that were ambiguous or unrealistic. When everything was completed (or when the “code freeze” date was reached, whichever came first) the features were delivered to the testers.
  6. Testers would then generate bug tickets for anything that did not work the way they thought it should (regardless of whether it matched the documentation or not). Bug tickets were sent to the developers.
  7. Developers would remedy the defects according to the instructions in the ticket and batches of bug fixes would be released to the testers.
  8. Testers would pass the tickets and forward them to the product managers.
  9. The product managers would re-open the tickets and direct that the features be coded according to the original specification (or not, if they had had a new idea in the interim). These tickets would then be returned to the developers.
  10. Developers would remedy the defects according to the new instructions in the ticket and batches of bug fixes would be released to the testers.
  11. Steps 6 through 10 would repeat until either the tester or product manager relented, typically due to exhaustion. When this point was reached for all features, user acceptance testing was begun by the user representatives who had composed the original “specs”. The original go-live date had, of course, long since passed.
  12. User acceptance testing consisted of repeated rounds of find-fix-push. The goal of the user representatives was to bend the meaning of “bug” such that something resembling the original requests would emerge. Really good ones could manage to get entirely new enhancements implemented as “fixes”. At the end of this process, a release to production was scheduled.
  13. After the deployment (and all the downtime due to remediation of the release management issues), the end users could then contemplate how they were going to work around the software to earn their paycheck.

Needless to say, this process led to our having a warm relationship with our customers. They frequently assured us that should we burst into flames, they wouldn’t hesitate to stomp the fire out. Several even routinely produced books of matches and offered to demonstrate.

In all seriousness, the process we worked under at that time severely impaired our relationship with our customers, both on an organizational and an individual basis. It destroyed our customers’ trust in us, as it all but guaranteed that we would be lying to them about what they would be getting and when they would get it. The fact that they would not find this out until the very last second only added insult to injury. When the product was declared dead and the majority of the group laid off, it came as no surprise to anyone.

What did come as a surprise was that a small cross-functional group from the team was retained and tasked with teaching a new process to the remainder of the IT group. The nine months prior to the demise of the product had been spent re-engineering the process by which it was developed. While the effort wasn’t sufficient to save the product, it did get the attention of those looking to transform the IT function.

Reduced to its essence, that shift in process was nothing more than a recognition that our job was to make it easier for others to do their job. It wasn’t about technology or software development, it was about providing value to our customers and meeting their expectations. Value was understood to mean more than just taking orders; it meant actually working with customers to define their needs and jointly determine a solution that met those needs. Meeting expectations meant involving customers in the planning and keeping them in the loop as the plan evolved, not committing to a date with insufficient information and then hiding the changing circumstances until they could be denied no longer.

[Image: Service Cycle]

The result of this communication and collaboration was, slowly, step by step, the building of trust. As Tom Graves noted in “Product, service and trust”, “Trust is at the centre of everything in any enterprise – every product, every service”. In that same post, Tom used the Service Cycle image above to illustrate the cyclical nature of trust built via mutual respect, attention to needs, and delivery of results. Each success enhances the trust of the customer and the reputation of the provider. These successes can also form a virtuous circle where customer satisfaction increases the motivation of the team serving them.

It is important to understand that this is a process, rather than an event. As Seth Godin stated in “Gradually and then suddenly”:

It didn’t happen suddenly, you just noticed it suddenly.

The flipside works the same way. Trust is earned, value is delivered, concepts are learned. Day by day we improve and build an asset, but none of it seems to be paying off. Until one day, quite suddenly, we become the ten-year overnight success.

The reason this is so important should be clear. Business is dependent on technology more and more with each passing day. However, being dependent on technology and being dependent on a particular provider are two different things. When it is responsive and trustworthy, in-house IT can provide tremendous value to an enterprise. When it is not, the value proposition for outside providers becomes clearer.


Fixing IT – How to Make a Monster (Customer)

[Image: he may be ugly, but he is mine]

Software development in general, and IT in particular, seem to have a love-hate relationship with our customers – as in, we really love to hate on our customers. We have Stupid User Tricks, ID10T issues, PEBCAK, and of course, Clients From Hell. Every once in a while, even Dilbert takes a break from bashing managers to take a swing or two at customers.

There’s even some evidence that the feelings are mutual.

Why?

How is it that we’ve managed to come to the point where distrust, even open hostility, is the norm? How is it that this situation is allowed to continue? What if we don’t change the dynamic?

To get an idea of how we got here, imagine the following scenario:

  • There’s a restaurant where you’re required to eat.
  • You don’t get to decide when you can eat; you have to ask (and ask) and eventually you’re allowed to sit at the table without any idea of when you’ll get another chance.
  • You don’t pay for what you eat, but you will have to justify each menu item you order.
  • The kitchen staff will be required to say exactly how long it will take to prepare the order, even if the item is not on the menu and no one has ever made it before.
  • The waiter, the chef, and the maitre d’ may not understand or be able to prepare your order, so they reserve the right to alter it without any notice – you’ll find out when it arrives.
  • Waiters interacting with diners after the initial order is considered poor practice; kitchen staff doing so is completely out of the question.
  • If the order doesn’t meet your approval, you can send it back to be fixed as much as you like.
Under those circumstances, one might expect the restaurant patrons to be a tad distrustful of the staff, who will probably respond in kind. The experienced patrons will have learned to order as much as possible, regardless of need, subject only to the constraint of getting approval. They will have learned to be vague enough to allow them to keep sending dishes back for fixes that are enhancements in disguise. The bolder patrons will either learn to cook for themselves or find another restaurant, perhaps both.

Is this starting to sound familiar?

The second question, how it is that this has been allowed to continue, is something of a mystery. While there has been a significant and growing incidence of shadow IT, things still haven’t broken out into open rebellion. How much of this is inertia and how much is the current economy holding back expenditures? More ominously, how close to the edge are we?

This brings us to the third question, the answer to which should be obvious. Trying to maintain the status quo will not work. In fact, doing so will be more likely to hasten the demise of IT as it becomes more of a commodity. Without major changes, IT risks becoming irrelevant and marginalized. Rather than worrying about blame (it should be obvious that this is a systemic problem rather than the fault of one or two bad actors), both business and IT need to find a way forward that maximizes value and minimizes friction. The risk to those organizations that cannot make this transition increases with each passing day.


Professional Software Development – Can We Mandate What We Can’t Define?

[Image: The law is a what?!?]

The only true wisdom is in knowing you know nothing.
Socrates

What types of software products have you worked on: desktop applications, traditional web, single-page applications, embedded, mobile, mainframe?

How about organizations: private for-profit, government, non-profit?

How about domains: finance, retail, defense, health care, entertainment, banking, law enforcement, intelligence, real estate, etc. etc. etc.?

Given that the realm of “software development” is currently huge (and probably expanding as you read this), how logical is it that someone (or even a group) could regulate what is acceptable process and practice? I won’t say that it would be impossible to come up with one unified set of regulations that would fit all circumstances, but I’m very comfortable estimating the likelihood as a minute fraction of a percent. If the entire realm were broken down into smaller groupings, the chance might increase, but the resulting glut of regulations would become an administrative nightmare and still wouldn’t address those circumstances that aren’t in the list above but are on the horizon.

Nonetheless, people continue to float the idea of regulation.

Last fall, Bob Martin floated the idea of government regulation as a reaction to the healthcare.gov fiasco. That would be the same government whose contracting regulations contributed to the fiasco in the first place, correct? The same government that has legally mandated Agile for Department of Defense contracts? Legally mandated agility just sounds a bit suspicious. As Jeff Sutherland noted, “Many in Washington are still trying to figure out what exactly that means but it is a start”. A start, for sure, but the start of what?

Ken Schwaber’s blog post “Can Software Developers Meet the Need?” takes a different approach. Schwaber proposes that:

A software profession governing body is needed. We need to formalize and regulate the skills, techniques, and practices needed to build different types of software capabilities. On one side, there is the danger of squeezing the creativity out of software development by unknowledgeable bureaucrats. On the other side is the danger of the increasingly vital software our society relies on failing critically.

We can either create such a governance capability, or the governments will legislate it after a particularly disastrous failure.

Call me a cynic, but I’m betting that the amount of bureaucratic squeezing that would result from this would far outweigh any gain in quality.

Most of the organization types listed above are already on the hook for harm caused by their IT operations; just ask Target and Knight Capital (don’t ask the Centers for Medicare & Medicaid Services). Is it likely that a committee, whether private or public, could manage the quality of software across all the various categories listed above better than those organizations do themselves? Would it be any more likely to keep up with change in the industry? Color me doubtful.


Surfing the Plan

[Image: Hang loose]

In a previous post, I used the Eisenhower quote “…plans are useless but planning is indispensable”. The Agile Manifesto expresses a preference for “Responding to change over following a plan”. A tweet I saw recently illustrates both of those points and touches on why so many seem to have problems with estimates:

Programming IRL:
“ETA for an apple pie?”
“2h”
8h later:
“Where is it?”
“You didn’t tell me the dishes were dirty and you lacked an oven.”

At first glance, it’s the age-old story of being given inadequate requirements and then being held to an estimate long after it’s proven unreasonable. However, it should also be clear that the estimate was given without adequate initial planning or a “plan B”, and that when the issues were discovered, there was no communication of the need to revise the estimate by an additional 300%.

Before the torches and pitchforks come out, I’m not assigning blame. There are no villains in the scenario, just two victims. While I’ve seen my share of dysfunctional situations where the mutual distrust between IT and the business was the result of bad actors, I’ve also seen plenty that were the result of good people trapped inside bad processes. If the situation can be salvaged, communication and collaboration are going to be critical to doing so.

People deal with uncertainty every day. Construction projects face delays due to weather. Watch any home improvement show and chances are you’ll see a renovation project that has to change scope or cost due to an unforeseen situation. Even surgeons find themselves changing course due to circumstances they weren’t aware of until the patient was on the table. What the parties need to be aware of is that the critical matter is not whether or not an issue appears, but how it’s handled.

The first aspect of handling issues is not to stick to a plan that is past its “sell by” date. A plan is only valid within its context and when the context changes, sticking to the plan is delusional. If your GPS tells you to go straight and your eyes tell you the bridge is out, which should you believe?

Sometimes the expiration of a plan is strategic; the goal is not feasible and continuing will only waste time, money, and effort. Other times, the goal remains, but the original tactical approach is no longer valid. There are multiple methods appropriate to tactical decision-making. Two prominent ones are Deming’s Plan-Do-Check-Act and Boyd’s Observe-Orient-Decide-Act. Each has its place, but both have a looping nature in common. Static plans work for neither business leaders nor fighter pilots.

The second aspect of handling issues is communication. It can be easy for IT to lose sight of the fact that the plan they’re executing is a facet of the overarching plan that their customer is executing. Whether in-house IT or contractor, the relationship with the business is a symbiotic one. In my experience, success follows those who recognize that and breakdowns occur when it is ignored. Constant communication and involvement with that customer avoids the trust-killing green-green-green-RED!!! project management theater.

In his post “Setting Expectations”, George Dinwiddie nailed the whole issue with plans and estimates:

What if we were able to set expectations beyond a simple number? What if we could say what we know and what we don’t know? What if we could give our best estimate now, and give a better one next week when we know more? Would that help?

The thing is, these questions are not about the estimates. These questions are about the relationship between the person estimating and the person using the estimate. How can we improve that relationship?


Eyeballing Performance

[Image: I think I see the performance]

What does slow code look like?

Tony DaSilva recently tweeted:

“Jeez, this code sure looks slow” is hardly helpful and just not quantitative enough for effective decision-making.

Tony’s tweet reminded me of a time when I had to explain to a coder why the data access classes of a particular performance-sensitive application used a DataReader to fill POCO data transfer objects (DTOs). After all, we could have just used one line of code to fill a DataSet; that would be much faster. Patient soul that I am (or pedantic, depending on who you ask), I took the time to demonstrate how one line of code that we write may involve a great many lines of code within the library we’re calling. In fact, filling a DataSet involves using a DataReader, thus filling DTOs from a DataSet involves iterating the results of a query twice. The size difference between the DTOs and the DataSet when serialized was a bonus lesson.
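To make the contrast concrete, here is a minimal sketch of the two approaches. The Customer DTO, the CustomerQueries class, and the Customers query are hypothetical names for illustration, not the actual application code:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Hypothetical DTO, for illustration only.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerQueries
{
    // The "one line of code" approach: SqlDataAdapter.Fill uses a DataReader
    // internally, so building DTOs from the DataSet afterward means walking
    // the same rows a second time.
    public static DataSet LoadAsDataSet(SqlConnection conn)
    {
        var ds = new DataSet();
        using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", conn))
        {
            adapter.Fill(ds);
        }
        return ds;
    }

    // Filling DTOs straight from the DataReader: a single pass over the
    // results and a much smaller object graph to serialize.
    public static List<Customer> LoadAsDtos(SqlConnection conn)
    {
        var result = new List<Customer>();
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                result.Add(new Customer
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                });
            }
        }
        return result;
    }
}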

Some performance issues, notably those involving redundant work, might be detected by inspection, assuming that the redundant work is visible. In the example above, it wasn’t. Many performance issues will only become visible via profiling. More importantly, without profiling data, the relative significance of the issue can’t be determined. Saving a few microseconds in a particular section of code isn’t going to be much help if several seconds are being lost to network or database issues. This type of ad hoc response is symptomatic of more than one performance analysis anti-pattern. Performance profiling and tuning requires a holistic approach to be effective.
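If eyeballing isn’t enough, the cheapest next step is to measure. The rough sketch below times the two hypothetical methods from the earlier sketch with a Stopwatch; a real profiler would give far richer data, and the connection string is a placeholder:

using System;
using System.Diagnostics;
using System.Data.SqlClient;

public static class TimingDemo
{
    public static void Main()
    {
        // Placeholder connection string; the point is getting a number
        // instead of a hunch, not the specific data source.
        using (var conn = new SqlConnection("<connection string>"))
        {
            conn.Open();

            var sw = Stopwatch.StartNew();
            var ds = CustomerQueries.LoadAsDataSet(conn);
            sw.Stop();
            Console.WriteLine("DataSet fill: " + sw.ElapsedMilliseconds + " ms");

            sw.Restart();
            var dtos = CustomerQueries.LoadAsDtos(conn);
            sw.Stop();
            Console.WriteLine("DTO fill:     " + sw.ElapsedMilliseconds + " ms");
        }
    }
}

Even crude timings like these show whether the code path in question matters at all relative to network and database time, which is exactly the quantitative context a “this sure looks slow” judgment lacks.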

It’s not just about better performance, it’s about better performance in the areas that make the most difference.


Shut up, salute & soldier on?

[Image: Yes boss]

Leadership and management are currently hot topics, with the #NoManager movement among the hottest of the hot. My detailed opinion on flat organizations/holacracy is a post for another day, but one aspect that I fully agree with is the differentiation between leadership and management. They can and should coincide, but they don’t always. Most importantly, the number of leaders should exceed the number of managers. To re-state a point I made in “Lord of the Repository”, the best managers develop their team members so that the team is never without leadership, even when the manager is away.

Tony DaSilva’s recent post on the subject, “In Defense of Hierarchy”, spoke to some of the benefits that derive from hierarchies. One of the benefits identified was “orderly execution of operations”, supported by the following quote:

Imagine if students argued with their teachers, workers challenged their bosses, and drivers ignored traffic cops anytime they asked them to do something they didn’t like. The world would descend into chaos in about five minutes. – Duncan J. Watts

For each of Watts’ examples, his point is, in order, wrong, possibly wrong, and correct. I have experience that speaks to all three.

My career in software development is my second career; my first was in law enforcement, serving as a Deputy in a county Sheriff’s Office. One of the positions I held during my tenure there was Assistant Director of the Training Academy. My role was split between administration, instruction, and supervision of students (limited strictly to their time in training; I didn’t hold a supervisory rank that would apply beyond that). My position was then, and has always been, that any trainer or teacher who cannot tolerate respectful, appropriate challenge is unworthy of their position. A student probing and testing the information being presented was, in my opinion, something to be celebrated, not discouraged.

An alert went out on the radio one afternoon that there was a fire in one of the housing units of the jail. After a quick run to the location, I found that the supervisors had succeeded in knocking down the fire, removing the person who had started it, and detailing staff to evacuate the other inmates to a smoke-free secure area. However, the remaining staff were milling about without protective gear or spare fire extinguishers should the embers flare back up. While waiting for someone with authority, I took it upon myself to direct individuals to get the equipment that was needed. Once someone arrived and assumed control, I headed back to normal duties.

While I was talking about the incident later with a co-worker, they happened to mention that they were really shocked when I ordered the Major to go retrieve an air pack and he did so (n.b. the Major in question was the third-ranking person in the department and five levels higher than me in the hierarchy). Needless to say, I was just as shocked. I hadn’t been paying attention to much beyond getting the situation safely under control and the Major hadn’t objected, so I didn’t notice the real-life inversion of control, though my colleague certainly did.

That incident illustrates several things about leadership. First is the point I mentioned above: leadership and management/authority are separate things. I had no official authority, but exercised leadership until someone with authority was in a position to take over. Second is that my unofficial authority rested on the trust and acquiescence of those executing my orders. I would argue that, far from undermining their trust, my openness to challenge in non-emergency situations made them more likely to follow me in the emergency.

So, to return to Watts’ examples – teachers should be challenged (appropriately), cops should be obeyed (until the emergency is in hand), and both you and the boss should be able to flex based on whether the current situation requires a teacher or a cop.

Participative leadership is more likely to engender trust and buy-in. Smart leaders (be they managers, architects, team leads, etc.) aren’t looking for passive followers; they know it could cost them. As Tom Cagley observed in his post “It Takes A Team”:

While a product owner prioritizes and a scrum master facilitates, it takes a whole team to deliver. The whole team is responsible for getting the job done which means that at different times in different situations different members will need to provide leadership. Every team member brings their senses to the project-party, which makes all of them responsible looking for trouble and then helping to resolve it even if there isn’t a scrum master around.

