The Hidden Cost of Cheap – UX and Internal Applications

Sisyphus by Titian

Why would anyone worry about user experience for anything that’s not customer-facing?

This question was the premise of Maurice Roach’s post in the Zühlke blog, “Empathise with your users or you won’t solve their problems”:

Bring up the subject of user empathy with some engineers or product owners and you’ll probably hear comments that fall into one of the following categories:

  • Why do we need to empathise when the requirements tell us all we need to know about the problem at hand?
  • Is this really going to improve anything?
  • Sounds like an expensive waste of time
  • They’ll have to use whatever they’re given

These aren’t unexpected responses, it’s easy to put empathy into the “touchy feely”, “let’s all hug and get along” box of product management.

Roach’s answers:

Empathy does a number of things, but mainly it increases the likelihood that the delivery team will think of a user and their pain points when delivering a feature.

If an engineer, UX designer or product owner has sat with a user and watched them interact with their current software or device, they will have an understanding of their frustrations, concerns and impediments to success. The team will be focused on creating features with the things they have witnessed in mind; they’re thinking about how their software will affect a human being, and no amount of requirement documentation will give them that emotional connection.

Empathy can also help to develop a shared trust in the application development process. The users see that the delivery team are interested in helping to solve their problems and the product delivery team see the real users behind the application.

All of these are valid reasons, but the list is incomplete; it answers the question solely from a software development point of view. To his credit, Roach pushes past the purely technical aspects into the world of the user. That expanded exploration of the context is, in my opinion, absolutely essential. What’s presented above is an IT-centric viewpoint that needs to be married with a business-centric viewpoint in order to get a fuller picture.

Nick Shackleton-Jones, in his post “The Future Is… Organisational Usability!”, outlined the problem:

Here’s how your organisation works: you hire people who are increasingly used to a world where they can do pretty much anything via an app on their iPhone, and you subject them to a blizzard of process, policy, antiquated systems and outdated ways of working which pretty much stop them in their tracks, leaving them unproductive and demoralised. Frankly, it’s a miracle they manage to accomplish anything at all.

As he notes, enterprises are putting a lot of effort into digital initiatives aimed at making it easier for customers to engage with them. However:

…if we are going to be successful in future we need to make it much easier for our people to do their jobs: because they are going to be spending less time with us, and because we want engagement and retention, and because if we require high levels of capability (to work our complex systems) then our resourcing costs will go through the roof. We have to simplify ‘getting stuff done’. To put it another way: in an ideal world, any job in your organisation should be do-able by a 12-yr old.

While I disagree that “any job in your organisation should be do-able by a 12-yr old”, Shackleton-Jones’ point is well taken: it is in the interests of the business to make it easier for people to do their jobs. All aspects of the system, whether organizational, procedural or technological, should be facilitating, not hindering, the mission. Self-inflicted, unnecessary impediments are morale-killers and degrade both effectiveness and efficiency. Morale, effectiveness and efficiency all directly impact the customer experience.

While this linkage between employee user experience and customer experience makes usability important for line of business systems (both technological and social), it has value for peripheral systems as well. Time people spend on ancillary tasks (filling out time sheets, requesting supplies, etc.) is time not spent on their primary duties. You may not be able to eliminate those tasks, but you can minimize their expense by making them quick and easy to complete. The further someone’s knowledge/skill/experience level gets from “do-able by a 12-yr old”, the bigger the savings from paying attention to this.

Rather than asking if you can afford to pay attention to user experience, you might want to ask whether you can afford not to.

Twitter, Timelines, and the Open/Closed Principle

Consider this Tweet for a moment. I’ll be coming back to it at the end.

In my last post, I brought up Twitter’s rumored changes to the timeline feature as a poor example of customer awareness in connection with an attempt to innovate. The initial rumor set off a storm of protest that brought out CEO Jack Dorsey to deny (sort of) that the timeline would change. Today, the other shoe dropped: the timeline will change (sort of):

Essentially, it will be a re-implementation of the “While You Were Away” feature with an opt-out:

In the “coming weeks,” Twitter will turn on the feature for users by default, and put a notification in the timeline when it does, Haq says. But even then, you’ll be able to turn it off again.

Of course, Twitter’s expectation is that most people will like the timeline tweak—or at least not hate it—once they’re exposed to it. “We have the opt-out because we also prioritize user control,” Haq says. “But we do encourage people to give it a chance.”

So, what does this have to do with the Open/Closed Principle? The Wikipedia article for it contains a quote from Bertrand Meyer’s Object-Oriented Software Construction (emphasis is mine):

A class is closed, since it may be compiled, stored in a library, baselined, and used by client classes. But it is also open, since any new class may use it as parent, adding new features. When a descendant class is defined, there is no need to change the original or to disturb its clients.

Just as a change to the code of a class may disturb its clients, a change to the user experience of a product may disturb its clientele. Sometimes extension won’t work and change must take place. As it turns out, the timeline has been extended with optional behavior rather than changed unconditionally, as was rumored.
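To make Meyer’s distinction concrete, here is a minimal sketch in Java. The names (ChronologicalTimeline, RankedTimeline, optedOutOfRanking, and so on) are invented for illustration and bear no relation to Twitter’s actual implementation; the point is only that the optional behavior arrives by extension, leaving the original class, and the clients who depend on it, undisturbed:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical domain types standing in for Twitter's -- purely illustrative.
record Tweet(String text, long postedAt, long relevanceScore) {}
record User(boolean optedOutOfRanking) {}

// Closed to modification: compiled, shipped, and relied upon by existing clients.
class ChronologicalTimeline {
    public List<Tweet> tweetsFor(User user, List<Tweet> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingLong(Tweet::postedAt).reversed())
                .toList();
    }
}

// Open to extension: the new, optional behavior lives in a descendant,
// so the original class and its existing clients are left undisturbed.
class RankedTimeline extends ChronologicalTimeline {
    @Override
    public List<Tweet> tweetsFor(User user, List<Tweet> candidates) {
        List<Tweet> chronological = super.tweetsFor(user, candidates);
        if (user.optedOutOfRanking()) {
            return chronological; // the opt-out preserves the familiar experience
        }
        return chronological.stream()
                .sorted(Comparator.comparingLong(Tweet::relevanceScore).reversed())
                .toList();
    }
}
```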

Some thoughts:

  • Twitter isn’t the only social media application out there with a timeline for updates. Perhaps that chronological timeline (among other features) provides some value to the user base?
  • Assuming that value and the risk of upsetting the user base if that value was taken away, wouldn’t it have been wise to communicate in advance? Wouldn’t it have been even wiser to communicate when the rumor hit?

Innovation will involve change, but not all change is necessarily innovative. Understanding customer wants and needs is a good first step to identifying risky changes to user experience (whether real or just perceived). I’d argue this is even more pronounced when you consider that Twitter’s user base is really its product. Twitter’s customers are advertisers and data consumers who want and need an engaged, growing user base to view promoted Tweets and generate usage data.

Returning to the Tweet at the beginning of this post: considering the accuracy of that recommendation, would it be reasonable to think that turning over your timeline to Twitter’s algorithms might degrade your user experience?

Hearts and Stars and Prison Riots (User Experience Matters)

So Twitter decided to make a change, and people have been reacting (and reacting to the reaction):

As Jeff Sussna noted, there’s a reason for the reaction:

In my old, pre-IT life, I saw that same cavalier attitude toward change cause a real-life riot (for the record, it was a jail riot rather than a prison riot, but whatever), complete with fires set, property damaged and tear gas deployed. All for the want of a little notice.

Sometimes people react negatively to this type of change. The reaction might be stupid, but how much more stupid is triggering that type of reaction when you didn’t have to?

#ShadowSocialMedia or Why Won’t People Use the Product the Way They’re Supposed to

Scott Berkun dislikes the way people are using images to bypass Twitter’s 140 character limit:

His point is very valid, but:

Which is the issue. Sometimes there’s a need to go beyond that limit. Sure, you can chunk your thoughts up across multiple tweets, but users find it burdensome to respect Twitter’s constraint on the amount of text per tweet. Constrained customers, assuming they stick with a product, tend to come up with “creative” solutions to that product’s shortcomings, solutions that reflect what they value. The customers’ values may well conflict with the developers’. When “conflict” and “customer” are in the same sentence, there’s generally a problem.

Berkun’s response to @honatwork‘s rebuttal nearly captures the issue:

I say “nearly”, because Twitter was built long before 2015. The problem is that it’s 2015 and Twitter has not evolved to meet a need that clearly exists.

In the IT world, it’s common to hear terms like “Shadow IT” or “Rogue IT”. Both refer to users (i.e. customers) going beyond the pale of approved tools and techniques to meet a need. This poses a problem for IT in that the customer’s solution may not incorporate things that IT values, and retrofitting those concerns later is far more difficult. Taking a “products, not projects” approach can minimize the need for customer “creativity”, whether for in-house IT or for external providers.

Trying to hold back the tide just won’t work, because the purpose of the system is to meet the customers’ needs, not respect the designers’ intent.

Design Communicates the Solution to a Problem

Frozen fog during extreme cold with tree in farmers field at daybreak

Making anything unambiguous means finding a way for others to understand which gets us to the knotty problem of how we communicate the method we have taken to create unambiguousness. (probably not even close to being a real word).

Thomas Cagley, commenting on “Hatin’ on Nulls”.

Tom Cagley’s comment referred to the last few sentences of the “Hatin’ on Nulls” post:

Coherence and consistency should be considered hallmarks of an API. As Erik Dietrich noted in “Notes on Writing Discoverable Framework Code”, a good API should “make screwing up impossible”. Ambiguity makes screwing up very possible.

Ambiguity and uncertainty are facts of life. Architects must work with less than perfect certainty to resolve ambiguous concerns and design an optimal solution for the problem(s) at hand. An important goal in achieving that optimal design must be to create a design that is comprehensible to the user. The design communicates how to achieve the solution to a problem. In doing so, we must deal with the ambiguity and uncertainty so that our clients don’t have to.

This, in my opinion, is a key defense of the phrase “form follows function”. As Tom Graves’ post “Form follows non-function” pointed out, perception of what is “good” design depends on quality of service attributes (AKA non-functional requirements). The functional attributes are necessary, but far from sufficient.

Interfaces, whether aimed at humans or machines, should be comprehensible (APIs are consumed by humans before they are consumed by applications). In both cases, a lack of clarity will make them hard to use. It doesn’t matter how technically clever the design is; if functionality is hard to discover, in many cases it may as well be absent. It’s certainly likely to be perceived that way.
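As a rough illustration of the difference, compare a signature that forces callers to guess with one that communicates the solution. The ReportService types below are hypothetical, invented purely for this sketch:

```java
import java.util.Optional;

// A hypothetical contrast between an ambiguous and a discoverable API --
// the names are invented for illustration, not taken from any real framework.

// Ambiguous: callers must guess what the booleans mean and what a null
// return signifies. Screwing up is easy.
interface ReportServiceV1 {
    byte[] run(int reportId, boolean flag1, boolean flag2);
}

// Discoverable: the signature itself communicates how to achieve the solution.
// Invalid combinations can't be expressed, and "no report" is explicit.
enum OutputFormat { PDF, CSV }
enum Scope { SUMMARY, DETAILED }
record ReportId(int value) {}
record Report(byte[] content) {}

interface ReportServiceV2 {
    Optional<Report> run(ReportId id, OutputFormat format, Scope scope);
}
```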

If the customer can’t accomplish a desired function, they’re going to feel constrained. Constrained customers tend to be unhappy customers. Unhappy customers tend to convert to ex-customers.

[“Frozen fog during extreme cold with tree in farmers field at daybreak” Image by Ian Furst via Wikimedia Commons.]

Hatin’ on Nulls

Dante's Inferno; Lucifer, King of Hell

When I first read Christian Neumanns’ “Why We Should Love ‘null'”, I found myself agreeing with his position. Yes, null references have “…led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage…” per Sir C. A. R. Hoare. Yes, many people heartily dislike null references and will go to great lengths to work around the problem. Finally, yes, these workarounds may be more detrimental than the problem they are intended to solve. While I agreed with the position that null references are a necessary inconvenience (the ill effects are ultimately the result of failure to check for null, not the null condition itself), I didn’t initially see the issue as being particularly “architectural”.

Further on in the article, however, Christian covered why null references, and the various workarounds, become architecturally significant. The concept of null, of nothing, is semantically important. A price of zero dollars is not intrinsically the same as a missing price. A date several millennia into the future does not universally convey “unknown” or “to be determined”. Using the Null Object pattern may eliminate errors due to unchecked references, but it’s far from “safe”. According to Wikipedia, “…a Null Object is very predictable and has no side effects: it does nothing“. That, however, is untrue. A Null Object masks a potential error condition and allows the user to continue on in ignorance. That, in my opinion, is very much doing something.
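A minimal sketch of that masking effect, using a hypothetical pricing domain along the lines of the zero-dollar example above (none of these types appear in Christian’s post):

```java
import java.util.Optional;

// Hypothetical pricing types, invented for illustration only.
interface Price {
    double amount();
}

class ListedPrice implements Price {
    private final double amount;
    ListedPrice(double amount) { this.amount = amount; }
    public double amount() { return amount; }
}

// The "safe" Null Object: it never throws, but it quietly reports a price of
// zero, masking the fact that no price exists at all.
class MissingPrice implements Price {
    public double amount() { return 0.0; }
}

class Checkout {
    // With the Null Object, a missing price flows through as $0.00 --
    // the order "succeeds" and the error surfaces later, or never.
    double totalWithNullObject(Price price, int quantity) {
        return price.amount() * quantity;
    }

    // Making the absence explicit forces the caller to decide what "no price" means.
    double totalWithExplicitAbsence(Optional<Price> price, int quantity) {
        return price
                .map(p -> p.amount() * quantity)
                .orElseThrow(() -> new IllegalStateException("no price listed for this item"));
    }
}
```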

A person commenting on Christian’s post stated that “…a crash is the worst kind of experience a user can have”, arguing that masking a null reference error may not be as bad for the user as a crash. There’s a kernel of truth there, but it’s a matter of risk. If an application continues on and the result is a misunderstanding of what’s been done or, worse, corrupted data, how bad is that? If the application in question is a game, there’s little real harm. What if the application in question is dealing with health information? I stand by the position that where there is an error, no news is bad news.

As more and more applications become platforms via being service enabled, semantic issues gain importance. Versioning strategies can ensure structural compatibility, but semantic issues can still break clients. Coherence and consistency should be considered hallmarks of an API. As Erik Dietrich noted in “Notes on Writing Discoverable Framework Code”, a good API should “make screwing up impossible”. Ambiguity makes screwing up very possible.

Design Follies – ‘Why can’t I do that?’

Man in handcuffs

It’s ironic that the traits we think of as making a good developer are also those that can get in the way of design and testing, but such is the case. Think of how many times you’ve heard (or perhaps, said) “no one would ever do that”. Yet, given the event-driven, non-linear nature of modern systems, if a given execution path can occur, it will occur. Our cognitive biases can blind us to potential issues that arise when our product is used in ways we did not intend. As Thomas Wendt observed in “The Broken Worldview of Experience Design”:

To a certain extent, the designer’s intent is irrelevant once the product launches. That is, intent can drive the design process, but that’s not the interesting part; the ways in which users adopt the product to their own needs is where the most insight comes from. Designer intent is a theoretical, speculative formulation even when based on the most rigorous research methods and valid interpretations. That is not to say intention and strategic positioning is not important, but simply that we need to consider more than idealized outcomes.

Abhi Rele, in “APIs and Data: Journey to the Center of the Customer Experience”, put it in more concrete terms:

If you think you’re in full control of your customers’ experience, you’re wrong.

Customers increasingly have taken charge—they know what they want, when they want it, and how they want it. They are using their mobile phones more often for an ever-growing list of tasks—be it searching for information, looking up directions, or buying products. According to Google, 34% of consumers turn to the device that’s closest to them. More often than not, they’re switching from one channel or device mid-transaction; Google found that 67% of consumers do just that. They might start their product research on the web, but complete the purchase on a smartphone.

Switch device in mid-transaction? No one would ever do that! Oops.

We could, of course, decide to block those paths that we don’t consider “reasonable” (as opposed to stopping actual error conditions). The problem with that approach is that our definition of “reasonable” may conflict with the customer’s definition. When “conflict” and “customer” are in the same sentence, there’s generally a problem.
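A small, hypothetical sketch of that distinction follows; the checkout domain and every name in it are invented for illustration. The first guard stops a genuine error condition, the second merely enforces the designers’ notion of “reasonable”:

```java
// Hypothetical example -- not drawn from any real system.
class CheckoutService {

    // Stopping an actual error condition: this protects the customer.
    void validatePayment(Payment payment) {
        if (payment.amount() <= 0) {
            throw new IllegalArgumentException("payment amount must be positive");
        }
    }

    // Blocking a path the designers judged "unreasonable": this constrains
    // the customer. Anyone switching devices mid-transaction hits this wall.
    void resumeCart(Session session, Cart cart) {
        if (!session.deviceId().equals(cart.startedOnDeviceId())) {
            throw new UnsupportedOperationException(
                    "carts can only be completed on the device where they were started");
        }
        // ...continue the checkout
    }
}

record Payment(double amount) {}
record Session(String deviceId) {}
record Cart(String startedOnDeviceId) {}
```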

These conflicts, in the right domain, can even have deadly results. In investigating the Asiana Airlines crash of July 2013, the National Transportation Safety Board (NTSB) found that the crew’s beliefs about what the autopilot system would do did not coincide with what it actually did (my emphasis):

The NTSB found that the pilots had “misconceptions” about the plane’s autopilot systems, specifically what the autothrottle would do in the event that the plane’s airspeed got too low.

In the setting that the autopilot was in at the time of the accident, the autothrottles that are used to maintain specific airspeeds, much like cruise control in a car, were not programmed to wake up and intervene by adding power if the plane got too slow. The pilots believed otherwise, in part because in other autopilot modes on the Boeing 777, the autothrottles would in fact do this.

“NTSB Blames Pilots in July 2013 Asiana Airlines Crash” on Mashable.com

Even if it doesn’t contribute to a tragedy, a poor user experience (inconsistent, unstable, or overly restrictive) can lead to unintended consequences, customer dissatisfaction, or both. Basing that user experience on assumptions instead of research and/or testing increases the risk. As I’ve stated previously, risky assumptions are an assumption of risk.