Twitter, Timelines, and the Open/Closed Principle

Consider this Tweet for a moment. I’ll be coming back to it at the end.

In my last post, I brought up Twitter’s rumored changes to the timeline feature as an example of poor customer awareness in connection with an attempt to innovate. The initial rumor set off a storm of protest that brought out CEO Jack Dorsey to deny (sort of) that the timeline would change. Today, the other shoe dropped: the timeline will change (sort of).

Essentially, it will be a re-implementation of the “While You Were Away” feature with an opt-out:

In the “coming weeks,” Twitter will turn on the feature for users by default, and put a notification in the timeline when it does, Haq says. But even then, you’ll be able to turn it off again.

Of course, Twitter’s expectation is that most people will like the timeline tweak—or at least not hate it—once they’re exposed to it. “We have the opt-out because we also prioritize user control,” Haq says. “But we do encourage people to give it a chance.”

So, what does this have to do with the Open/Closed Principle? The Wikipedia article for it contains a quote from Bertrand Meyer’s Object-Oriented Software Construction (emphasis is mine):

A class is closed, since it may be compiled, stored in a library, baselined, and used by client classes. But it is also open, since any new class may use it as parent, adding new features. When a descendant class is defined, there is no need to change the original or to disturb its clients.

Just as a change to the code of a class may disturb its clients, a change to the user experience of a product may disturb the clientele. Sometimes extension won’t work and change must take place. As it turns out, the timeline has been extended with optional behavior rather than changed unconditionally, as was rumored.
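To make the parallel concrete, here is a minimal sketch of Meyer’s idea in Java, applied to the timeline scenario. Everything in it (the Timeline and ReorderedTimeline classes, the opt-out flag, the relevance score) is hypothetical, invented for illustration; it is not Twitter’s actual design. The existing ordering stays closed to modification, while the algorithmic ordering arrives as an extension the user can decline:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class Tweet {
    final long timestamp;
    final double relevanceScore;

    Tweet(long timestamp, double relevanceScore) {
        this.timestamp = timestamp;
        this.relevanceScore = relevanceScore;
    }
}

// Closed for modification: existing clients continue to get
// the reverse-chronological order they depend on.
class Timeline {
    public List<Tweet> render(List<Tweet> tweets) {
        List<Tweet> sorted = new ArrayList<>(tweets);
        sorted.sort(Comparator.comparingLong((Tweet t) -> t.timestamp).reversed());
        return sorted;
    }
}

// Open for extension: algorithmic ordering is added in a descendant,
// and the original behavior remains available via the opt-out.
class ReorderedTimeline extends Timeline {
    private final boolean optedOut;

    ReorderedTimeline(boolean optedOut) {
        this.optedOut = optedOut;
    }

    @Override
    public List<Tweet> render(List<Tweet> tweets) {
        if (optedOut) {
            return super.render(tweets); // original behavior, undisturbed
        }
        List<Tweet> sorted = new ArrayList<>(tweets);
        sorted.sort(Comparator.comparingDouble((Tweet t) -> t.relevanceScore).reversed());
        return sorted;
    }
}
```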

Some thoughts:

  • Twitter isn’t the only social media application out there with a timeline for updates. Perhaps that chronological timeline (among other features) provides some value to the user base?
  • Assuming that value, and the risk of upsetting the user base if that value were taken away, wouldn’t it have been wise to communicate in advance? Wouldn’t it have been even wiser to communicate when the rumor hit?

Innovation will involve change, but not all change is necessarily innovative. Understanding customer wants and needs is a good first step to identifying risky changes to user experience (whether real or just perceived). I’d argue this is even more pronounced when you consider that Twitter’s user base is really its product. Twitter’s customers are advertisers and data consumers who want and need an engaged, growing user base to view promoted Tweets and generate usage data.

Returning to the Tweet at the beginning of this post: considering the accuracy of that recommendation, would it be reasonable to think that turning over your timeline to their algorithms might degrade your user experience?


Innovation on Tap

Beer Tap

Two articles from the same site (CIO.com), both dealing with planned innovations, but with dramatically different results:

While the article about the Amazon leak doesn’t report on customer reactions, the response is unlikely to be negative, for a variety of reasons that all involve benefit to the customer. The most important one, the big innovation (in my opinion), is that opening physical stores allows Amazon to take advantage of a phenomenon that other retailers find problematic:

Another way Amazon eviscerates traditional retailers is via the practice of showrooming. That’s where you go to a brick-and-mortar store and find the product you want but then buy it on Amazon — sometimes standing right there in the store, using Amazon’s mobile app.

Physical Amazon stores can encourage and facilitate showrooming, because they will have friendly salespeople to help you find, install and use the app. Once they teach you how to showroom in an Amazon retail store, you can continue showrooming in other stores.

While large retailers have been able to “combat” showrooming by embracing it, selling items at online prices in physical stores digs deeply into profit margins. When your business model is predominantly brick and mortar, the hit is greater than when your model is predominantly online. In short, if they play their cards right, Amazon will be able to better serve their customers without hurting their own bottom line.

Awareness of your customers’ wants and needs is key to making decisions that are more like Amazon’s and less like Twitter’s. An intentional, collaborative approach, such as that described by Greger Wikstrand in his post “Developing the ‘innovation habit’”, is one way to promote that awareness:

When I worked at Ericsson Research, I instigated a weekly one-hour innovation meeting for my team. ‘Can you really be innovative every Thursday at 9am?’ you may cynically ask. Well, actually, yes, you can.

What we did in that hour was commit to innovation, dedicating a place and a time to our individual and collective innovative mindsets. Sometimes this resulted in little ideas that helped us do things incrementally better. And sometimes—just sometimes—those innovation hours were the birthplace of big ideas.

Collaboration is important because “Architect knows best” is as much a design folly at the enterprise level as it is at the application level. A better model is that described by Tom Graves in “Auftragstaktik and fingerspitzengefühl”, where information and guidance flow both up and down the hierarchy to inform decisions both strategic and tactical. These flows provide the context so necessary to making effective decisions.

You can’t pull a tap and draw a glass of innovation, but you can affect whether your system makes innovative ideas more or less likely to be recognized and acted on.

This is part eleven of a conversation Greger Wikstrand and I have been having about architecture, innovation, and organizations as systems. Previous posts in the series:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation: organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time where innovation yielded uneven results because effective innovation requires technology, understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued with the same theme, the social and technological aspects of innovation.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing in the same vein, Greger pointed out that resistance to change is futile (though probably inevitable). He quoted a professor of his who asserted that you can’t change people or groups, thus you have to change the organization.
  9. “Changing Organizations Without Changing People” – I followed up on Greger’s post, agreeing that enterprise architectures must work “with the grain” of human nature and that culture is “walking the walk”, not just “talking the talk”.
  10. “Developing the ‘innovation habit’” – Greger talks about creating an intentional, collaborative innovation program.

Ignorance Isn’t Bliss, Just Good Tactics

Donkey

There’s an old saying about what happens when you assume.

The fast lane to asininity seems to run through the land of hubris. Anshu Sharma’s TechCrunch article, “Why Big Companies Keep Failing: The Stack Fallacy”, illustrates this:

Stack fallacy has caused many companies to attempt to capture new markets and fail spectacularly. When you see a database company thinking apps are easy, or a VM company thinking big data is easy — they are suffering from stack fallacy.

Stack fallacy is the mistaken belief that it is trivial to build the layer above yours.

Why do people fall prey to this fallacy?

The stack fallacy is a result of human nature — we (over)value what we know. In real terms, imagine you work for a large database company and the CEO asks, “Can we compete with Intel or SAP?” Very few people will imagine they can build a computer chip just because they can build relational database software, but because of our familiarity with building blocks of the layer up, it is easy to believe you can build the ERP app. After all, we know tables and workflows.

The bottleneck for success often is not knowledge of the tools, but lack of understanding of the customer needs. Database engineers know almost nothing about what supply chain software customers want or need.

This kind of assumption can cost an ISV a significant amount of money and a lot of good will on the part of the customer(s) they attempt to disrupt. Assumptions about the needs of the customer (rather than the customer’s customer) can be even more expensive. The smaller your pool of customers, the more damage that’s likely to result. Absent a captive customer circumstance, incorrect assumptions in the world of bespoke software can be particularly costly (even if only in terms of good will). Even comprehensive requirements are of little benefit without the knowledge necessary to interpret them:

But, that being said:

This would seem to pose a paradox: domain knowledge as both something vital and an impediment. In reality, there’s no contradiction. As the old saying goes, “a little knowledge is a dangerous thing”. When we couple that with another cliche, “familiarity breeds contempt”, we wind up with Sharma’s stack fallacy, or as xkcd expressed it:

'Purity' on xkcd.com

In order to create and evolve effective systems, we obviously have a need for domain knowledge. We also have a need to understand that what we possess is not domain knowledge per se, but domain knowledge filtered through (and likely adulterated by) our own experiences and biases. Without that understanding, we risk what Richard Martin described in “The myopia of expertise”:

In the world of hyperspecialism, there is always a danger that we get stuck in the furrows we have ploughed. Digging ever deeper, we fail to pause to scan the skies or peer over the ridge of the trench. We lose context, forgetting the overall geography of the field in which we stand. Our connection to the surrounding region therefore breaks down. We construct our own localised, closed system. Until entropy inevitably has its way. Our system then fails, our specialism suddenly rendered redundant. The expertise we valued so highly has served to narrow and shorten our vision. It has blinded us to potential and opportunity.

The Clean Room pattern on CivicPatterns.org puts it this way:

Most people hate dealing with bureaucracies. You have to jump through lots of seemingly pointless hoops, just for the sake of the system. But the more you’re exposed to it, the more sense it starts to make, and the harder it is to see things through a beginner’s eyes.

So, how do we get those beginner’s eyes? Or, at least, how do we get closer to having a beginner’s eyes?

The first step is to reject the notion that we already understand the problem space. Lacking innate understanding, we must then do the hard work of determining the architecture of the problem, our context. As Paul Preiss noted, this doesn’t happen at a desk:

Architecture happens in the field, the operating room, the sales floor. Architecture is business technology innovation turned to strategy and then executed in reality. Architecture is reducing the time it takes to produce a barrel of oil, decreasing mortality rates in the hospital, increasing product margin.

Being willing to ask “dumb” questions is just as important. Perception without validation may be just an assumption. Seeing isn’t believing. Seeing and validating what you’ve seen is believing.

It’s equally important to understand that validating our assumptions goes beyond just asking for requirements. Stakeholders can be subject to biases and myopic viewpoints as well. While it’s true that Henry Ford’s customers would probably have asked for faster horses, it’s also true that, in a way, that’s exactly what he delivered.

We earn our money best when we learn what’s needed and synthesize those needs into an effective solution. That learning is dependent on communication unimpeded by our pride or prejudice:

#ShadowSocialMedia or Why Won’t People Use the Product the Way They’re Supposed to

Scott Berkun dislikes the way people are using images to bypass Twitter’s 140-character limit:

His point is very valid, but:

Which is the issue. Sometimes there’s a need to go beyond that limit. Sure, you can chunk your thoughts up across multiple tweets, but users find it burdensome to respect Twitter’s constraint on the amount of text per tweet. Constrained customers, assuming they stick with a product, tend to come up with “creative” solutions to that product’s shortcomings, solutions that reflect what they value. The customers’ values may well conflict with the developers’. When “conflict” and “customer” are in the same sentence, there’s generally a problem.
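For illustration only, here is what that chunking workaround might look like as a sketch in Java. The 140-character limit was Twitter’s at the time; the word-wrap strategy and the “(i/n)” counter suffix are my own assumptions, not the behavior of any particular client:

```java
import java.util.ArrayList;
import java.util.List;

public class TweetChunker {
    private static final int LIMIT = 140;  // Twitter's limit at the time
    private static final int RESERVE = 8;  // rough allowance for the " (i/n)" suffix; an assumption

    // Splits text on word boundaries into chunks that fit the limit,
    // then appends an "(i/n)" counter to each chunk.
    public static List<String> chunk(String text) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String word : text.split("\\s+")) {
            if (current.length() + word.length() + 1 > LIMIT - RESERVE) {
                parts.add(current.toString().trim());
                current.setLength(0);
            }
            current.append(word).append(' ');
        }
        if (current.length() > 0) {
            parts.add(current.toString().trim());
        }
        List<String> numbered = new ArrayList<>();
        for (int i = 0; i < parts.size(); i++) {
            numbered.add(parts.get(i) + " (" + (i + 1) + "/" + parts.size() + ")");
        }
        return numbered;
    }
}
```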

Berkun’s response to @honatwork‘s rebuttal nearly captures the issue:

I say “nearly”, because Twitter was built long before 2015. The problem is that it’s 2015 and Twitter has not evolved to meet a need that clearly exists.

In the IT world, it’s common to hear terms like “Shadow IT” or “Rogue IT”. Both refer to users (i.e. customers) going beyond the pale of approved tools and techniques to meet a need. This poses a problem for IT in that the customer’s solution may not incorporate things that IT values, and retrofitting those concerns later is far more difficult. Taking a “products, not projects” approach can minimize the need for customer “creativity”, both for in-house IT and for external providers.

Trying to hold back the tide just won’t work, because the purpose of the system is to meet the customers’ needs, not respect the designers’ intent.

Product Owner or Live Customer?

Wooden Doll

Who calls the shots for a given application? Who gets to decide which features go on the roadmap (and when) and which are left by the curb? Is there one universal answer?

Many of my recent posts have dealt in some way with the correlation between organizational structure and application/solution architecture. While this is largely to be expected due to Conway’s Law, as I noted in “Making and Taming Monoliths”, many business processes depend on other business processes. While a given system may track a particular related set of business capabilities, concepts belonging to other capabilities, and features related to maintaining them, tend to creep in. This multiplies the number of business stakeholders for any given system, and that’s before taking into account the various IT groups that will also have an interest in the system.

Scrum has the role of Product Owner, whose job it is to represent the customer. Allen Holub, in a blog post for Dr. Dobb’s, terms this “flawed”, preferring Extreme Programming’s (XP) requirement for an on-site customer. For Holub, introducing an intermediary introduces risk, because the product owner is not a true customer:

Agile processes use incremental design, done bit by bit as the product evolves. There is no up-front requirements gathering, but you still need the answers you would have gleaned in that up-front process. The question comes up as you’re working. The customer sitting next to you gives you the answers. Same questions, same answers, different time.

Put simply, agile processes replace up-front interviews with ongoing interviews. Without an on-site customer, there are no interviews, and you end up with no design.

For Holub, the product owner introduces either guesses or delay. Worst of all in his opinion is when the product owner is “…just the mouthpiece of a separate product-development group (or a CEO) who thinks that they know more about what the customer wants than the actual customer…”. For Holub, the cost of hiring an on-site customer is negligible when you factor in the risks and costs of wrong product decisions.

Hiring a customer, however, pretty much presumes an ISV environment. For in-house corporate work, the customer is already on board. In both cases, though, two issues exist: first, a customer who is the sole decision-maker risks giving the product a very narrow appeal; second, the longer the “customer” is away from being a customer, the less current their knowledge of the domain becomes. Put together, you wind up with a product that reflects one person’s outdated opinion of how to solve an issue.

Narrowness of opinion should be sufficient to give pause. As Jim Bird observed in “How Product Ownership works in the Real World”:

There are too many functional and operational and technical details for one person to understand and manage, too many important decisions for one person to make, too many problems to solve, too many questions and loose ends to follow-up on, and not enough time. It requires too many different perspectives: business needs and business risks, technical dependencies and technical risks, user experience, supportability, compliance and security constraints, cost factors. And there are too many stakeholders who need to be consulted and managed, especially in a big, live business-critical system.

As is the case with everything related to software development, there is a balance to be struck. Coherence of vision enhances design up to a point, but beyond it can lead to a too-insular focus. Mega-monoliths that attempt to deal comprehensively with every aspect of an enterprise, if they ever launch, are unwieldy to maintain. By the same token, business dependencies become system dependencies and cannot be eliminated, only re-arranged (microservices being a classic example – the dependencies are not removed, only their location has changed). The idea that we can have only one voice directing development is, in my opinion, just too simplistic.
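As a sketch of that parenthetical point about microservices: in the Java below, the interface, the pricing.internal URL, and the numbers are all invented for illustration. The business dependency on pricing exists in both versions; only its location (and its failure modes) change:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A hypothetical business dependency: orders need prices.
interface PricingService {
    double priceFor(String sku);
}

// Monolith: the dependency is satisfied by a direct, in-process call.
class InProcessPricing implements PricingService {
    public double priceFor(String sku) {
        return 9.99; // stand-in for real pricing logic
    }
}

// Microservice: the same dependency, relocated behind a network hop,
// bringing latency, partial failure, and versioning along with it.
class RemotePricing implements PricingService {
    private final HttpClient client = HttpClient.newHttpClient();

    public double priceFor(String sku) {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://pricing.internal/price/" + sku)) // hypothetical endpoint
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return Double.parseDouble(response.body());
        } catch (Exception e) {
            // The dependency didn't go away; it just learned to fail in new ways.
            throw new IllegalStateException("pricing service unavailable", e);
        }
    }
}
```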

“Legacy Systems – Constraints, Conflicts, and Customers” on Iasa Global Blog

Cradle to the grave (in 6.2 seconds)

As I was reading Roger Sessions’ latest white paper, “The Thirteen Laws of Highly Complex IT Systems”, Laws 1 and 2 immediately caught my eye:

Law 1. There are three categories of complexity: business, architectural and implementation.

Law 2. The three categories of complexity are largely independent of each other.

That complexity in these categories can vary independently (e.g. complex business processes can be designed and implemented simply, just as simple processes can be designed and implemented in an extremely complex manner) is important to the understanding of complexity in IT.

See the full post on the Iasa Global Blog (a re-post, originally published here).

Design Follies – ‘Why can’t I do that?’

Man in handcuffs

It’s ironic that the traits we think of as making a good developer are also ones that can get in the way of design and testing, but that’s the case. Think of how many times you’ve heard (or perhaps said) “no one would ever do that”. Yet, given the event-driven, non-linear nature of modern systems, if a given execution path can occur, it will occur. Our cognitive biases can blind us to potential issues that arise when our product is used in ways we did not intend. As Thomas Wendt observed in “The Broken Worldview of Experience Design”:

To a certain extent, the designer’s intent is irrelevant once the product launches. That is, intent can drive the design process, but that’s not the interesting part; the ways in which users adapt the product to their own needs is where the most insight comes from. Designer intent is a theoretical, speculative formulation even when based on the most rigorous research methods and valid interpretations. That is not to say intention and strategic positioning is not important, but simply that we need to consider more than idealized outcomes.

Abhi Rele, in “APIs and Data: Journey to the Center of the Customer Experience”, put it in more concrete terms:

If you think you’re in full control of your customers’ experience, you’re wrong.

Customers increasingly have taken charge—they know what they want, when they want it, and how they want it. They are using their mobile phones more often for an ever-growing list of tasks—be it searching for information, looking up directions, or buying products. According to Google, 34% of consumers turn to the device that’s closest to them. More often than not, they’re switching from one channel or device mid-transaction; Google found that 67% of consumers do just that. They might start their product research on the web, but complete the purchase on a smartphone.

Switch device in mid-transaction? No one would ever do that! Oops.

We could, of course, decide to block those paths that we don’t consider “reasonable” (as opposed to stopping actual error conditions). The problem with that approach is that our definition of “reasonable” may conflict with the customer’s definition. When “conflict” and “customer” are in the same sentence, there’s generally a problem.
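Here is a minimal sketch of that distinction, using a hypothetical checkout flow (the Cart and Checkout types and their rules are invented for illustration). Stopping an actual error condition protects the customer; blocking a merely “unexpected” path encodes the designer’s definition of “reasonable”:

```java
enum DeviceType { DESKTOP, MOBILE }

class Cart {
    private final int itemCount;
    private final DeviceType startedOn;

    Cart(int itemCount, DeviceType startedOn) {
        this.itemCount = itemCount;
        this.startedOn = startedOn;
    }

    boolean isEmpty() { return itemCount == 0; }
    DeviceType startedOn() { return startedOn; }
}

class Checkout {
    void submitOrder(Cart cart, DeviceType device) {
        // An actual error condition: stopping this protects the customer.
        if (cart.isEmpty()) {
            throw new IllegalArgumentException("cannot submit an empty cart");
        }
        // A merely "unreasonable" path: research on the desktop, purchase on
        // the phone. Blocking it would encode the designer's definition of
        // reasonable, not the customer's, so it stays commented out.
        // if (device == DeviceType.MOBILE && cart.startedOn() == DeviceType.DESKTOP) {
        //     throw new UnsupportedOperationException("please finish on your computer");
        // }
        // ... proceed with the order ...
    }
}
```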

These conflicts, in the right domain, can even have deadly results. In investigating the Asiana Airlines crash from July of 2013, the National Transportation Safety Board (NTSB) found that the crew’s belief about what the autopilot system would do did not coincide with what it actually did (my emphasis):

The NTSB found that the pilots had “misconceptions” about the plane’s autopilot systems, specifically what the autothrottle would do in the event that the plane’s airspeed got too low.

In the setting that the autopilot was in at the time of the accident, the autothrottles that are used to maintain specific airspeeds, much like cruise control in a car, were not programmed to wake up and intervene by adding power if the plane got too slow. The pilots believed otherwise, in part because in other autopilot modes on the Boeing 777, the autothrottles would in fact do this.

“NTSB Blames Pilots in July 2013 Asiana Airlines Crash” on Mashable.com
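To see why that inconsistency is so treacherous, consider a drastically simplified sketch. The mode names, threshold, and thrust values below are invented for illustration and bear no resemblance to the 777’s actual systems; the point is only that a protection present in one mode and silently absent in another invites a false mental model:

```java
enum AutopilotMode { SPEED_HOLD, DESCENT }

class Autothrottle {
    static final double MIN_SAFE_AIRSPEED_KNOTS = 137.0; // illustrative value only

    // Returns a thrust setting in [0, 1].
    double commandThrust(AutopilotMode mode, double airspeedKnots, double currentThrust) {
        // In SPEED_HOLD, the low-speed protection "wakes up" and adds power.
        if (mode == AutopilotMode.SPEED_HOLD && airspeedKnots < MIN_SAFE_AIRSPEED_KNOTS) {
            return Math.max(currentThrust, 0.8);
        }
        // In DESCENT, the same protection silently does nothing: exactly the
        // mode-to-mode inconsistency that lets users build a false mental model.
        return currentThrust;
    }
}
```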

Even if it doesn’t contribute to a tragedy, a poor user experience (inconsistent, unstable, or overly restrictive) can lead to unintended consequences, customer dissatisfaction, or both. Basing that user experience on assumptions instead of research and/or testing increases the risk. As I’ve stated previously, risky assumptions are an assumption of risk.

Faster Horses – Henry Ford and Customer Development on Iasa Global Blog

A faster horse

Henry (“Any customer can have a car painted any color that he wants so long as it is black”) Ford was not an advocate of customer development. Although there’s no evidence that he actually said “If I had asked people what they wanted, they would have said faster horses”, it’s definitely congruent with other statements he did make.

See the full post on the Iasa Global Blog (a re-post, originally published on CitizenTekk and mirrored here).

Fixing IT – The Importance of Ownership

My home is your castle?

Organizations with centralized budgeting and provision of IT services risk creating a disconnect between those that need technology services and those that provide them. The first post in this series alluded to the problem – those needing the services have too little control over obtaining those services. If there is only a single provider with finite capacity and a finite budget, then there is the potential for consumers to be left out when demand exceeds supply. Governance processes set up to manage the allocation of services may be ineffective and can lead to dysfunctional behavior when business unit leaders attempt to game the system. Lastly, this situation creates a perverse incentive to gold-plate projects in order to compensate for the uncertainty over when the business unit might next get the attention of IT.

The DevOps approach I suggested in my previous post only partially addresses the issues. Having a dedicated team means little if the size, composition, processes, and priorities of that team are outside your control. Much is heard about the concept of “alignment”, but mere alignment is insufficient. As Brenda Michelson noted in “Beware the “Alignment Trap””:

I started to use the term ‘business-IT integration’, because I’m thinking beyond traditional business-IT alignment. Alignment refers to the review and reconciliation of independent activities, in this context the reconciliation of business strategy and plans with IT strategy, architecture and plans.

For business to reap the true value of IT, business and IT must collaborate on the development of strategy, architecture and plans. This collaboration, which should continue through delivery and operations, is business-IT integration.

To achieve the integration Michelson referred to, one should bear in mind Conway’s law: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”. The reason to do so is elegantly expressed in Ruth Malan’s paraphrase of it: “if the architecture of the system and the architecture of the organization are at odds, the architecture of the organization wins”. She goes on to say:

The organizational divides are going to drive the true seams in the system.

The architecture of the system gets cemented in the forms of the teams that develop it. The system decomposition drives team ownership. Then the organizational lines of communication become reflected in the interfaces, with cleaner, better preserved interfaces along the lines where organizational distance increases. In small, co-located teams, short-cuts can be taken to optimize within the team. But each short-cut that introduces a dependency is like rebar in concrete–structurally efficient, but rigid. If the environment changes, demanding new lines to be drawn, the cost becomes clear. The architecture is hard to adapt.

Enterprises are systems: social systems, but systems nonetheless. In the case of business units and IT, you have a social system creating, maintaining, and evolving digital systems. Assuming Conway’s law is true, simpler organizational structures should yield simpler systems (in both the technical and organizational realms). The opposite is also true: poor business architecture can actually reinforce dysfunction and prevent progress. Tom Graves’ post “Financial-architecture and enterprise-architecture” graphically illustrates just such a case, where the accounting structure and approval processes conspired to prevent cost savings:

In fact, the business-case was obvious – or at least, obvious to anyone who’d actually looked at what we’d done. The potential savings in licence-fees alone ran into many millions of dollars a year – let alone the more subtle savings from re-using idle servers or freeing-up unneeded real-estate. By comparison, the costs were tiny: in some cases not much more than a few hours of paperwork, and overall perhaps a few staff-FTE for a few months at most.

But the sting in the tail was that requirement for proof, in terms of the organisation’s standard company-accounts. What we discovered, to our horror, was that the accounts themselves prevented us from doing so: the way those accounts were structured actually made it impossible to make almost any business-case at all for ever shutting anything down.

It should be clear that business structures and processes ought to be designed to facilitate the execution of business strategy. Part of this is putting technology decision-making into the hands of the business. IT is well positioned to inform those decisions in terms of technical trade-offs, but hardly qualified to decide from a business standpoint. IT Governance: How Top Performers Manage IT Decision Rights for Superior Results by Peter Weill and Jeanne W. Ross (excerpted by David Consulting Group – see table 5) shows that business-driven governance optimizes for profit and growth.

When decisions about how to use technology in furtherance of business goals are coupled with the responsibility for funding those decisions, customers have a greater incentive to be prudent about their use of technology. Additionally, when those costs are borne by the business unit, both the unit and the enterprise have a clearer picture of its financial state than when those IT costs are buried in IT’s budget.

When IT is in the position of advising on and enabling, rather than controlling, use of technology, some benefits accrue. When there’s no need to hide “Shadow IT” from the IT department, business units can take advantage of advice re: security, what other units are doing, etc. The same applies to using outsourcing to augment in-house IT. By being aware of these external capabilities, IT is positioned as a partner to the business unit in attaining its goals, allowing IT to play to its strengths. IT departments that can support business goals as advisers and coordinators of technology in addition to being providers can extend their capabilities with external providers without fear of replacement.

Fixing IT – Taming the (Customer) Beast

Detente

In my previous post, I outlined the source of the dysfunctional nature of the relationship many IT departments have with their customers. This post will be about fixing that relationship. Having taken part in just such a venture, I can state that it is possible.

When I first became involved with corporate IT, I worked for a group that was developing a core line-of-business application for the company. Our process worked (well, “worked”) like this:

  1. The users would produce voluminous “specs” minutely detailing the new features they wanted in terms of behavior and presentation. These were delivered to product managers.
  2. The product managers would take the users’ “specs” and file them away for reference in their trash cans. The product managers would then generate their own “specs” minutely detailing the behavior and presentation of the features in accordance with the way they knew it really should be.
  3. The new “specs” were presented to the remainder of the team for estimation, and the final project plan was created.
  4. Product managers, developers, testers, technical writers, trainers, etc. would all gather with their management and everyone would sign a “contract” to deliver what was specified by the due date.
  5. Developers would then proceed to implement the features as best they could, substituting their own ideas for those parts that were ambiguous or unrealistic. When everything was completed (or when the “code freeze” date was reached, whichever came first) the features were delivered to the testers.
  6. Testers would then generate bug tickets for anything that did not work the way they thought it should (regardless of whether it matched the documentation or not). Bug tickets were sent to the developers.
  7. Developers would remedy the defects according to the instructions in the ticket and batches of bug fixes would be released to the testers.
  8. Testers would pass the tickets and forward them to the product managers.
  9. The product managers would re-open the tickets and direct that the features be coded according to the original specification (or not, if they had had a new idea in the interim). These tickets would then be returned to the developers.
  10. Developers would remedy the defects according to the new instructions in the ticket and batches of bug fixes would be released to the testers.
  11. Steps 6 through 10 would repeat until either the tester or product manager relented, typically due to exhaustion. When this point was reached for all features, user acceptance testing was begun by the user representatives who had composed the original “specs”. The original go-live date had, of course, long since passed.
  12. User acceptance testing consisted of repeated rounds of find-fix-push. The goal of the user representatives was to bend the meaning of “bug” such that something resembling the original requests would emerge. Really good ones could manage to get entirely new enhancements implemented as “fixes”. At the end of this process, a release to production was scheduled.
  13. After the deployment (and all the downtime due to remediation of the release management issues), the end users could then contemplate how they were going to work around the software to earn their paycheck.

Needless to say, this process led to our having a warm relationship with our customers. They frequently assured us that should we burst into flames, they wouldn’t hesitate to stomp the fire out. Several even routinely produced books of matches and offered to demonstrate.

In all seriousness, the process we worked under at that time gravely impaired our relationship with our customers, both on an organizational and an individual basis. It destroyed our customers’ trust in us, as it all but guaranteed that we would be lying to them about what they would be getting and when they would get it. The fact that they would not find this out until the very last second only added insult to injury. When the product was declared dead and the majority of the group laid off, it came as no surprise to anyone.

What did come as a surprise was that a small cross-functional group from the team was retained and tasked with teaching a new process to the remainder of the IT group. The nine months prior to the demise of the product had been spent re-engineering the process by which it was developed. While the effort wasn’t sufficient to save the product, it did get the attention of those looking to transform the IT function.

Reduced to its essence, that shift in process was nothing more than a recognition that our job was to make it easier for others to do their jobs. It wasn’t about technology or software development; it was about providing value to our customers and meeting their expectations. Providing value meant more than just taking orders: it meant actually working with customers to define their needs and jointly determine a solution that met those needs. Meeting expectations meant involving customers in the planning and keeping them in the loop as the plan evolved, not committing to a date with insufficient information and then hiding the changing circumstances until they could be denied no longer.

Service Cycle

The result of this communication and collaboration was, slowly, step by step, the building of trust. As Tom Graves noted in “Product, service and trust”, “Trust is at the centre of everything in any enterprise – every product, every service”. In that same post, Tom used the image above to illustrate the cyclical nature of trust built via mutual respect, attention to needs, and delivery of results. Each success enhances the trust of the customer and the reputation of the provider. These successes can also form a virtuous circle, where customer satisfaction increases the motivation of the team serving them.

It is important to understand that this is a process, rather than an event. As Seth Godin stated in “Gradually and then suddenly”:

It didn’t happen suddenly, you just noticed it suddenly.

The flipside works the same way. Trust is earned, value is delivered, concepts are learned. Day by day we improve and build an asset, but none of it seems to be paying off. Until one day, quite suddenly, we become the ten-year overnight success.

The reason this is so important should be clear. Business is dependent on technology more and more with each passing day. However, being dependent on technology and being dependent on a particular provider are two different things. When it is responsive and trustworthy, in-house IT can provide tremendous value to an enterprise. When it is not, the value proposition for outside providers becomes clearer.